When HAProxy binds to the IPv6 socket, it can handle both IPv6 and IPv4
clients. IPv4 clients are handled as IPv4-mapped IPv6 addresses, which
some backends (e.g. Apache) cannot support. To avoid this, we configure
HAProxy to bind to the IPv4 and IPv6 sockets separately, so that IPv4
addresses are handled as IPv4 addresses.
Expose a virtual host on a separate TCP port that uses the PROXY
protocol. This way, HAProxy can pass the original client IP address to
Jellyfin without terminating the TLS connection.
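A minimal sketch of what this could look like, expressed as an Ansible
task installing an HAProxy drop-in (the ports, names, and paths here are
illustrative assumptions, not the role's actual values):

```yaml
- name: Configure the Jellyfin frontend
  copy:
    dest: /etc/haproxy/conf.d/jellyfin.cfg
    content: |
      frontend jellyfin
          mode tcp
          # Bind IPv4 and IPv6 separately so IPv4 clients are not
          # presented to backends as IPv4-mapped IPv6 addresses
          bind 0.0.0.0:8920
          bind :::8920 v6only
          default_backend jellyfin

      backend jellyfin
          mode tcp
          # send-proxy prepends a PROXY protocol header carrying the
          # original client address; TLS passes through unmodified
          server jellyfin jellyfin.pyrocufflink.blue:8920 send-proxy
```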
In order to enable authentication using LDAP over TLS in Jellyfin, we
need to expose the CA certificate that issues the LDAP server
certificates to the container.
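For example (the paths here are assumptions), the CA certificate can be
bind-mounted read-only into the container:

```yaml
- name: Run Jellyfin with the internal CA certificate exposed
  containers.podman.podman_container:
    name: jellyfin
    image: docker.io/jellyfin/jellyfin:latest
    volume:
      # CA certificate that issues the LDAP server certificates
      - /etc/pki/tls/certs/dch-ca.pem:/etc/ssl/certs/dch-ca.pem:ro
```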
The MinIO server for backups has special requirements for HTTPS. I want
to use subdomains for bucket names, so the certificate must have a
wildcard name, which requires using the DNS-01 challenge. Fortunately,
it is actually pretty easy to use `nsupdate` with GSS-TSIG
authentication to automate DNS record creation, and by default, all
domain-member machines can create any records. Thus, using the `manual`
auth plugin for `certbot` and a script to run `nsupdate`, obtaining the
wildcard certificate is fairly straightforward.
The biggest issue I encountered while developing this feature was
caching of NXDOMAIN responses. There doesn't seem to be a way to change
the TTL of the SOA record of the Active Directory DNS domain, which
defaults to 3600, meaning NXDOMAIN responses are always cached for an
hour. When adding a record using `nsupdate -g`, the tool always
performs a SOA lookup of the new name to find the target zone for it. Since
the name does not exist yet, the domain controller responds with
NXDOMAIN, which gets cached by the main DNS server. Thus, even after
adding the record, the ACME server will not be able to resolve the
name for up to an hour. We can avoid this by explicitly setting the
target zone. That would not work in a multi-domain forest, but
fortunately, we do not have to worry about that.
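A rough sketch of the hook (the path, principal, and zone here are
illustrative; certbot supplies the `CERTBOT_DOMAIN` and
`CERTBOT_VALIDATION` variables to manual auth hooks):

```yaml
- name: Install the DNS-01 authentication hook
  copy:
    dest: /usr/local/libexec/certbot-nsupdate
    mode: '0755'
    content: |
      #!/bin/sh
      # Authenticate to the domain controller with the machine keytab
      kinit -k "$(hostname -s | tr a-z A-Z)\$"
      nsupdate -g <<EOF
      zone pyrocufflink.blue.
      update add _acme-challenge.${CERTBOT_DOMAIN}. 300 IN TXT "${CERTBOT_VALIDATION}"
      send
      EOF
```

Naming the zone explicitly skips the SOA lookup of the not-yet-existing
name, so no NXDOMAIN response ever gets cached.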
This role borrows some logic from the *postgresql-cert* role.
Eventually, I probably want to combine some of the steps from both of
these roles, possibly replacing the old *certbot* role.
The *minio-nginx* role configures nginx to proxy for MinIO. It uses the
"subdomain" pattern, as described in [Configure NGINX Proxy for MinIO
Server][0]; the S3 API and the console UI are accessible through
different domain names.
[0]: https://min.io/docs/minio/linux/integrations/setup-nginx-proxy-with-minio.html
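In terms of role data, the layout might look something like this (the
variable name and values are hypothetical):

```yaml
minio_nginx_sites:
  # S3 API: wildcard server name so bucket subdomains resolve
  - server_name: 'backups.example.com *.backups.example.com'
    upstream: http://127.0.0.1:9000
  # Console UI on its own name
  - server_name: minio-console.example.com
    upstream: http://127.0.0.1:9001
```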
Modern versions of Podman use Netavark, which needs to write various
files on the host file system (even when the container uses the
host's network namespace).
If the `minio_address` variable is specified, it will be passed with the
`--address` argument to `minio server`. This allows controlling the
socket the server binds to and listens on.
The `minio_browser_redirect_url` variable can be specified to populate the
similarly-named environment variable, which configures how MinIO serves
the web UI.
The `minio_domain` variable sets the `MINIO_DOMAIN` environment
variable, which enables DNS names (subdomains) for buckets, i.e.
`{bucket_name}.{MINIO_DOMAIN}`.
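Putting the three together, a host might set (values illustrative):

```yaml
minio_address: 127.0.0.1:9000
minio_browser_redirect_url: https://minio-console.example.com
# buckets become {bucket_name}.backups.example.com
minio_domain: backups.example.com
```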
`wal-g` needs to connect to the PostgreSQL database system, so it should
run as the _postgres_ user, who has permission to connect, rather than
_root_, who does not.
Gitea needs SMTP configuration in order to send e-mail notifications
about e.g. pull requests. The `gitea_smtp` variable can be defined to
enable this feature.
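For example (the key names and the vault variable in this sketch are
assumptions about the variable's shape, not its documented structure):

```yaml
gitea_smtp:
  addr: smtp.example.com
  port: 465
  user: gitea@example.com
  password: '{{ vault_gitea_smtp_password }}'
```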
Gitea complains if the `WORK_DIR` setting is not set. It tries to set
it itself, but fails because the configuration is read-only. The value
it uses is incorrect anyway (`/usr/local/bin`, since that's where the
`gitea` executable is).
I've already made a couple of mistakes keeping the HTTP and HTTPS rules
in sync. Let's define the sites declaratively and derive the HAProxy
rules from the data, rather than typing the rules manually.
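Something like this (the variable name and fields are illustrative):
each site is declared once, and the template derives both the HTTP and
HTTPS rules from the same entry.

```yaml
haproxy_sites:
  - domain: nextcloud.pyrocufflink.net
    backend: nextcloud
  - domain: hatchlearningcenter.org
    backend: hlc
```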
The *dch-proxy* role has not been used for quite some time. The web
server has been handling the reverse proxy functionality, in addition to
hosting websites. The drawback to using Apache as the reverse proxy,
though, is that it operates in TLS-terminating mode, so it needs to have
the correct certificate for every site and application it proxies for.
This is becoming cumbersome, especially now that there are several sites
that do not use the _pyrocufflink.net_ wildcard certificate. Notably,
Tabitha's _hatchlearningcenter.org_ is problematic because although the
main site is hosted by the web server, the Invoice Ninja client portal
is hosted in Kubernetes.
Switching back to HAProxy to provide the reverse proxy functionality
will eliminate the need to have the server certificate both on the
backend and on the reverse proxy, as it can operate in TLS-passthrough
mode. The main reason I stopped using HAProxy in the first place was
because when using TLS-passthrough mode, the original source IP address
is lost. Fortunately, HAProxy and Apache can both be configured to use
the PROXY protocol, which provides a mechanism for communicating the
original IP address while still passing through the TLS connection
unmodified. This is particularly important for Nextcloud because of its
built-in intrusion prevention; without knowing the actual source IP
address, it blocks _everyone_, since all connections appear to come from
the reverse proxy's IP address.
Combining TLS-passthrough mode with the PROXY protocol resolves both the
certificate management issue and the source IP address issue.
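A sketch of what this looks like in HAProxy terms (names and addresses
are illustrative):

```yaml
- name: Configure the TLS-passthrough frontend
  copy:
    dest: /etc/haproxy/conf.d/https.cfg
    content: |
      frontend https
          mode tcp
          bind 0.0.0.0:443
          # Wait for the TLS ClientHello so the SNI can be inspected
          tcp-request inspect-delay 5s
          tcp-request content accept if { req.ssl_hello_type 1 }
          use_backend nextcloud if { req.ssl_sni -i nextcloud.pyrocufflink.net }

      backend nextcloud
          mode tcp
          # PROXY protocol conveys the original client address while
          # the TLS session passes through untouched
          server nextcloud 172.30.0.10:443 send-proxy
```

On the Apache side, mod_remoteip's `RemoteIPProxyProtocol On` directive
accepts the header and restores the real client address.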
I've cleaned up the _dch-proxy_ role quite a bit in this commit.
Notably, I consolidated all the backend and frontend definitions into a
single file; it didn't really make sense to have them all separate,
since they were managed by the same role and referred to each other. Of
course, I had to update the backends to match the currently-deployed
applications as well.
The *postfix* role will now generate configuration and a lookup table
for [canonical address mapping][0] of email recipients. To configure
the mapping, the `postfix_recipient_canonical_map` variable must be a
dictionary mapping source addresses to target addresses, e.g.:
of source-target addresses, e.g.:
```yaml
postfix_recipient_canonical_map:
  my.bad.email@fake.test: my.real.email@example.com
```
[0]: https://www.postfix.org/ADDRESS_REWRITING_README.html#canonical
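The role presumably renders the dictionary into a lookup table and
references it from `main.cf`; roughly (the table path is an assumption):

```yaml
# main.cf gains, via the role's template:
#   recipient_canonical_maps = hash:/etc/postfix/recipient_canonical
- name: Generate the recipient canonical lookup table
  template:
    src: recipient_canonical.j2
    dest: /etc/postfix/recipient_canonical
  notify: postmap recipient canonical   # rebuild the hash database
```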
If winbind is unable to communicate with any domain controller, the
`pam_winbind.so` module will time out. In _auth_ and _account_ context,
this was not an issue, at least for local users, because other modules
terminated the stack before `pam_winbind.so` was called. In _session_
context, though, nothing terminated the stack at all, so
`pam_winbind.so` was called unconditionally. This prevented even _root_
from logging in on the console. This made troubleshooting difficult,
especially for the VM hosts, when the domain controllers were down.
Deploying Caddy as a reverse proxy for Frigate enables HTTPS with a
certificate issued by the internal CA (via ACME) and authentication via
Authelia.
Separating the installation and base configuration of Caddy into its
own role will allow us to reuse that part for other applications that
use Caddy for similar reasons.
The *gasket-dkms* package provides the `gasket` and `apex` kernel
modules, which are needed for the Google Coral Edge TPU. Since these
are out-of-tree modules, they are not allowed in Fedora proper, so they
are provided in a COPR, and have to be rebuilt for every kernel version.
The DKMS framework handles automatically building the modules whenever
the kernel updates.
For systems using UEFI with SecureBoot enabled, kernel modules must be
signed by a key trusted by the platform. For locally-built modules, we
can use the Machine Owner Key (MOK). Unfortunately, enrolling a new MOK
requires rebooting and manual intervention during the boot process.
Therefore, the *gasket-dkms* role has a `pause` step to ensure someone
is paying attention and able to handle the key enrollment interactively.
Eventually, I'd like to have an RPM package with these modules
pre-built, so production servers do not need the kernel development
tools (`perl`, `gcc`, headers, etc.). It will be tricky, though, to
make sure the modules get rebuilt for every kernel version as Fedora
releases them.
* Switch to Quadlet-style `.container` for systemd unit (see sketch below)
* Update to new image tag naming scheme (not arch-specific)
* Use environment variables for secrets
* Allow the entire `frigate_config` variable to be overridden
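A rough sketch of the Quadlet unit (image reference, paths, and options
here are illustrative):

```yaml
- name: Install the Frigate container unit
  copy:
    dest: /etc/containers/systemd/frigate.container
    content: |
      [Container]
      Image=ghcr.io/blakeblackshear/frigate:stable
      Network=host
      # Secrets come in as environment variables rather than being
      # baked into the configuration file
      EnvironmentFile=/etc/frigate/secrets.env
      Volume=/etc/frigate/config.yml:/config/config.yml:ro

      [Install]
      WantedBy=multi-user.target
```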
The *useproxy* role configures the `http_proxy` et al. environment
variables for systemd services and interactive shells. Additionally, it
configures Yum repositories to use a single mirror via the `baseurl`
setting, rather than a list of mirrors via `metalink`, since a) the
proxy only allows access to _dl.fedoraproject.org_ and b) the proxy
caches RPM files, which is only effective if all clients use the same
mirror all the time.
The `useproxy.yml` playbook applies this role to servers in the
*needproxy* group.
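The systemd half might look roughly like this (the proxy address is
illustrative, and `DefaultEnvironment=` is one plausible mechanism):

```yaml
- name: Set proxy environment variables for all services
  copy:
    dest: /etc/systemd/system.conf.d/proxy.conf
    content: |
      [Manager]
      DefaultEnvironment=http_proxy=http://proxy.pyrocufflink.blue:3128 https_proxy=http://proxy.pyrocufflink.blue:3128
```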
Although it's rare, sometimes Samba crashes or fails to start. When
this happens, restarting it is almost always enough to get it working
again. Since all sorts of authentication problems can occur if one of
the domain controllers is down, it's probably best to just have systemd
automatically restart _samba.service_ if it ever stops for any reason.
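A unit extension along these lines would do it (a sketch, assuming a
drop-in is how the role implements it):

```yaml
- name: Restart Samba automatically whenever it stops
  copy:
    dest: /etc/systemd/system/samba.service.d/restart.conf
    content: |
      [Service]
      Restart=always
      RestartSec=5
```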
The [postgres-exporter][0] exposes PostgreSQL server statistics to
Prometheus. It connects to a specified PostgreSQL server (in this
case, a server on the local machine via UNIX socket) and collects data
from `pg_stat_activity` and related views. It needs the `pg_monitor`
role in order to be allowed to read the relevant metrics.
Since we're setting up the exporter to connect via UNIX socket, it needs
a dedicated OS user to match the PostgreSQL user in order to
authenticate via the _peer_ method.
[0]: https://github.com/prometheus-community/postgres_exporter/
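The user and grant could be set up roughly like this (names follow the
text above; the module choices are illustrative):

```yaml
- name: Create a dedicated OS user for the exporter
  user:
    name: postgres_exporter
    system: true

- name: Create the matching PostgreSQL user (for peer authentication)
  become_user: postgres
  community.postgresql.postgresql_user:
    name: postgres_exporter

- name: Grant the pg_monitor role to the exporter
  become_user: postgres
  community.postgresql.postgresql_query:
    query: GRANT pg_monitor TO postgres_exporter
```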
WAL archives are not much good without a base backup onto which they
can be applied. Thus, we need to schedule WAL-G to create and upload a
backup periodically.
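For instance, a weekly timer (unit names, paths, and schedule are
illustrative):

```yaml
- name: Install the base-backup service
  copy:
    dest: /etc/systemd/system/walg-backup.service
    content: |
      [Service]
      Type=oneshot
      # wal-g must run as postgres to connect to the server
      User=postgres
      EnvironmentFile=/etc/postgresql/walg.env
      ExecStart=/usr/bin/wal-g backup-push /var/lib/pgsql/data

- name: Schedule the base backup weekly
  copy:
    dest: /etc/systemd/system/walg-backup.timer
    content: |
      [Timer]
      OnCalendar=weekly
      Persistent=true

      [Install]
      WantedBy=timers.target
```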
This role installs `wal-g` from the DCH Yum repository, and creates a
configuration file for it in `/etc/postgresql`. Additionally, it
installs a custom SELinux policy module that allows `wal-g` to run in
the `postgresql_t` domain (i.e. when spawned by the PostgreSQL server).
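One plausible shape for that configuration file is an environment file
(the settings shown are real WAL-G options; the file name and values
are assumptions):

```yaml
- name: Configure WAL-G
  copy:
    dest: /etc/postgresql/walg.env
    content: |
      WALG_S3_PREFIX=s3://postgres-backups
      AWS_ENDPOINT=https://backups.pyrocufflink.blue
      PGHOST=/var/run/postgresql
```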
This role can be used to get a server certificate for PostgreSQL from an
ACME CA using `certbot`. It fetches the initial certificate and copies
it to the PostgreSQL configuration directory. It also sets up a
post-renewal hook script that copies updated certificates and reloads
the server.
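The hook might look roughly like this (destination paths are
assumptions; `RENEWED_LINEAGE` is set by certbot for deploy hooks):

```yaml
- name: Install the post-renewal hook
  copy:
    dest: /etc/letsencrypt/renewal-hooks/deploy/postgresql
    mode: '0755'
    content: |
      #!/bin/sh
      install -o postgres -g postgres -m 0600 \
          "${RENEWED_LINEAGE}/fullchain.pem" /etc/postgresql/server.crt
      install -o postgres -g postgres -m 0600 \
          "${RENEWED_LINEAGE}/privkey.pem" /etc/postgresql/server.key
      systemctl reload postgresql
```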
This rewrite brings a lot of improvements and new functionality to the
*postgresql-server* role. The most noticeable change is the
introduction of the `postgresql_config_dir` variable, which can be used
to specify a different location for the PostgreSQL server configuration
files, separate from the data storage directory. By default, the
variable is set to `/etc/postgresql`. For some situations, it may be
necessary to disable this functionality, which can be accomplished by
setting the value of `postgresql_config_dir` to the same path as
`pgdata_dir`. Note also that the `postgresql-setup` tool, and the
corresponding `postgresql-check-db-dir` script, which are included in
the Fedora/Red Hat distribution of PostgreSQL, do not support having
separate configuration and data directories, so their use has to be
disabled.
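For example, to disable the split and keep the configuration in the
data directory:

```yaml
postgresql_config_dir: '{{ pgdata_dir }}'
```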
Another significant improvement is to how the `postgresql.conf` file
is generated. Any setting can now be set using the `postgresql_config`
variable; any key in this dictionary will be written to the
configuration file. Note that configuration file syntax requires
single quotes around string values, so these have to be included in the
YAML value.
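For instance:

```yaml
postgresql_config:
  # string values keep their single quotes, per postgresql.conf syntax
  listen_addresses: "'*'"
  max_connections: 200
  shared_buffers: 512MB
```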
To support deploying standby servers, the role now supports running a
command to restore from a backup instead of running `initdb`.
Additionally, the `postgresql_standby` variable can be set to `true`
to create the `recovery.signal` file, configuring the server as a
standby.
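A standby deployment might then look like this (the restore-command
variable name here is hypothetical; `wal-g backup-fetch` is the real
command):

```yaml
postgresql_standby: true
postgresql_restore_command: wal-g backup-fetch /var/lib/pgsql/data LATEST
```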
Sending SIGHUP to the main PID (i.e. conmon) ends up stopping the
service. What we really want is to send the signal to the main PID _inside_
the container. We can achieve this by using `podman kill` instead of
`kill`.
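That is, roughly (the container name is illustrative):

```yaml
- name: Reload the service inside the container
  command: podman kill --signal HUP frigate
```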
Without making the firewall changes permanent, when a server tries to
renew its certificate after rebooting, it will fail as the ACME server
cannot connect to the HTTP port.
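With firewalld, for instance, the fix is to set both `permanent` and
`immediate`:

```yaml
- name: Open the HTTP port for ACME challenges
  ansible.posix.firewalld:
    service: http
    state: enabled
    immediate: true   # apply to the running firewall now
    permanent: true   # and keep the rule after a reboot
```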
Sometimes, the `collectd-version` script crashes or fails to start at
boot. Configuring systemd to automatically restart it will ensure that
it's always running, so machines' versions are consistently inventoried.
The `squid.service` systemd unit now correctly initializes the
configured cache directories, so we do not need to do it explicitly
before starting the server.
The *samba-cert* role configures `lego` and HAProxy to obtain an X.509
certificate via the ACME HTTP-01 challenge. HAProxy is necessary
because LDAP server certificates need to have the apex domain in their
SAN field, and the ACME server may contact *any* domain controller
with an A record for that name. HAProxy will forward the
challenge request on to the first available host on port 5000, where
`lego` is listening to provide validation.
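Roughly (host names illustrative; the port is from the description
above):

```yaml
- name: Forward ACME HTTP-01 challenges to the lego solver
  copy:
    dest: /etc/haproxy/conf.d/acme.cfg
    content: |
      frontend acme
          mode http
          bind 0.0.0.0:80
          default_backend acme

      backend acme
          mode http
          # route to the first available solver
          balance first
          server dc0 dc0.pyrocufflink.blue:5000 check
          server dc1 dc1.pyrocufflink.blue:5000 check
```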
Issuing certificates this way has a couple of advantages:
1. No need for the wildcard certificate for the *pyrocufflink.blue*
domain any more
2. Renewals are automatic and handled by the server itself rather than
by Ansible via a scheduled Jenkins job
Item (2) is particularly interesting because it avoids the bi-monthly
issue where replacing the LDAP server certificate and restarting Samba
causes the Jenkins job to fail.
Naturally, for this to work correctly, all LDAP client applications
need to trust the certificates issued by the ACME server, in this case
*DCH Root CA R2*.
HAProxy uses a special configuration block, `resolvers`, to specify
how it should look up names in DNS. This configuration is used for
e.g. dynamically discovering backend servers via DNS A or SRV records.
Since resolvers are global, they need to be specified in the global
configuration file, rather than a per-application drop-in.
We will use this functionality for the ACME HTTP-01 challenge solver
for Samba AD domain controllers.
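For example (the nameserver address and file name are illustrative):

```yaml
- name: Define a global resolvers section
  copy:
    dest: /etc/haproxy/conf.d/00-global.cfg
    content: |
      resolvers dns
          nameserver internal 172.30.0.1:53
          accepted_payload_size 8192
```

Backends can then reference it, e.g.
`server dc0 dc0.pyrocufflink.blue:5000 check resolvers dns`.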
The current version of *haproxy* packaged in Fedora already enables
configuration via fragments in a drop-in directory, though it uses
a different path by default. I still like separating the global
configuration from the defaults, though, and keeping the main
`haproxy.cfg` file empty.
*dnf-automatic* is an add-on for `dnf` that performs scheduled,
automatic updates. It works pretty much how I would want it to:
triggered by a systemd timer, sends email reports upon completion, and
only reboots for kernel et al. updates.
In its default configuration, `dnf-automatic.timer` fires every day. I
want machines to update weekly, but I want them to update on different
days (so as to avoid issues if all the machines reboot at once). Thus,
the _dnf-automatic_ role uses a systemd unit extension to change the
schedule. The day of the week is chosen pseudo-randomly based on the
host name of the managed system.
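The extension could be implemented along these lines (a sketch; the
hashing expression and the time of day are illustrative):

```yaml
- name: Override the dnf-automatic schedule
  copy:
    dest: /etc/systemd/system/dnf-automatic.timer.d/schedule.conf
    content: |
      [Timer]
      # Clear the packaged daily schedule, then run weekly on a day
      # derived from a hash of the host name
      OnCalendar=
      OnCalendar={{ ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
          [(inventory_hostname | hash('sha1'))[:8] | int(base=16) % 7] }} 03:00
```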
Even with `Network=host`, Podman tries to write to
`/etc/containers/network` for some reason. Fortunately, it doesn't
actually need to, so we can trick it into working by mounting an empty
*tmpfs* filesystem there.
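In a Quadlet unit, the workaround might look like this (a sketch; the
image is illustrative, and `TemporaryFileSystem=` is one way systemd
can provide the empty mount):

```yaml
- name: Work around Netavark wanting to write /etc/containers/network
  copy:
    dest: /etc/containers/systemd/example.container
    content: |
      [Container]
      Image=ghcr.io/example/example:latest
      Network=host

      [Service]
      # Give Podman a throwaway tmpfs over the path it insists on
      # writing to, leaving the real file system untouched
      TemporaryFileSystem=/etc/containers/network
```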
The summer 2024 enrollment form is more complicated than the other
forms on the HLC site, as it integrates directly with Invoice Ninja. As
such, it's handled by a different backend, which runs in Kubernetes.
The *promtail* service runs as an unprivileged user by default, which is
fine in most cases (i.e. when scraping only the Journal), but may not
always be sufficient to read logs from other files. Rather than run
Promtail as root in these cases, we can assign it the
CAP_DAC_READ_SEARCH capability, which will allow it to read any file,
but does not grant it any of root's other privileges.
To enable this functionality, the `promtail_dac_read_search` Ansible
variable can be set to `true` for a host or group. This will create a
systemd unit configuration extension that configures the service to have
the CAP_DAC_READ_SEARCH capability in its ambient set.
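The generated extension amounts to this (the drop-in file name is
illustrative):

```yaml
- name: Grant Promtail CAP_DAC_READ_SEARCH
  copy:
    dest: /etc/systemd/system/promtail.service.d/capabilities.conf
    content: |
      [Service]
      AmbientCapabilities=CAP_DAC_READ_SEARCH
```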
*unifi1.pyrocufflink.blue* requires a proxy to access Yum repositories
on the Internet, so it has the `proxy` setting configured globally. The
proxy does NOT allow access to internal resources, however. The
internal repository is directly accessible by that machine, so its
repository definition must be configured to bypass the proxy.
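With dnf, that is the per-repo `proxy=_none_` setting, e.g. (repo file
and section names illustrative):

```yaml
- name: Bypass the proxy for the internal repository
  community.general.ini_file:
    path: /etc/yum.repos.d/dch.repo
    section: dch
    option: proxy
    value: _none_
```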
The Squid "cache log" is where it writes general debug and error
messages. It is distinct from the "access log," which is where it
writes the status of every proxy request. We already had the latter
configured to go to syslog by default (so it would be captured in the
journal), but missed the former.
Promtail is the log sending client for Grafana Loki. For traditional
Linux systems, an RPM package is available from upstream, making
installation fairly simple. Configuration is stored in a YAML file, so
again, it's straightforward to configure via Ansible variables. Really,
the only interesting step is adding the _promtail_ user, which is
created by the RPM package, to the _systemd-journal_ group, so that
Promtail can read the systemd journal files.
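That step is a one-liner (the group membership is exactly as described
above):

```yaml
- name: Allow Promtail to read the journal
  user:
    name: promtail
    groups: systemd-journal
    append: true   # keep the package-created primary group
```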