_nginx_ access logs are typically either very small or very large. Small
log files are fast enough to decompress on the fly when necessary.
Large files may take up so much space in uncompressed form that the log
volume fills too quickly. In either case, compressing the files as soon
as they are rotated is a good option, especially since their contents
should already have been shipped to Loki.
The default `logrotate` configuration for _nginx_ may not be appropriate
for high-volume servers. The `nginx_keep_num_logs` variable is now
available to control how many days of logs are kept.
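For a high-volume server, for example, a week of compressed logs might
be plenty (the value here is illustrative):

```yaml
nginx_keep_num_logs: 7
```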
Since `restic` needs to run as root in order to back up files regardless
of their permissions, we need to restrict it to doing only that. Using
systemd sandbox features, especially the capability bounding set, we can
remove all of _root_'s powers except the ability to read all files.
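A minimal sketch of the relevant directives in the service unit (the
exact sandboxing options the role applies may differ):

```ini
[Service]
# root keeps only the ability to bypass file permission checks;
# all other capabilities are removed
CapabilityBoundingSet=CAP_DAC_READ_SEARCH
NoNewPrivileges=yes
```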
The `restic.yml` playbook applies the _restic_ role to hosts in the
_restic_ group. The _restic_ role installs `restic` and creates a
systemd timer and service unit to run `restic backup` every day.
Restic doesn't really have a configuration file; all its settings are
controlled either by environment variables or command-line options. Some
options, such as the list of files to include in or exclude from
backups, take paths to files containing the values. We can make use of
these to provide some configurability via Ansible variables. The
`restic_env` variable is a map of environment variables and values to
set for `restic`. The `restic_include` and `restic_exclude` variables
are lists of paths/patterns to include and exclude, respectively.
Finally, the `restic_password` variable contains the password to decrypt
the repository contents. The password is written to a file and exposed
to the _restic-backup.service_ unit using [systemd credentials][0].
When using S3 or a compatible service for repository storage, Restic of
course needs authentication credentials. These can be set using the
`restic_aws_credentials` variable. If this variable is defined, it
should be a map containing the `aws_access_key_id` and
`aws_secret_access_key` keys, which will be written to an AWS shared
credentials file. This file is then exposed to the
_restic-backup.service_ unit using [systemd credentials][0].
[0]: https://systemd.io/CREDENTIALS/
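Putting these together, a host's variables might look something like
this (the values, including the vaulted secrets, are illustrative):

```yaml
restic_env:
  RESTIC_REPOSITORY: s3:https://backup.example.com/myhost
  RESTIC_CACHE_DIR: /var/cache/restic
restic_include:
  - /etc
  - /srv
restic_exclude:
  - '*.tmp'
restic_password: '{{ vault_restic_password }}'
restic_aws_credentials:
  aws_access_key_id: AKIAEXAMPLE
  aws_secret_access_key: '{{ vault_restic_s3_secret_key }}'
```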
It turns out that having the exporter connect to the _template1_ database is
not a great idea. PostgreSQL does not allow creating a new database if
the template database is currently being accessed by any clients. Since
_template1_ is the default choice, the `createdb` command will probably
fail.
It doesn't specifically matter which database the exporter connects to,
since it reads most (all?) of its data from the PostgreSQL catalog,
which isn't database-specific.
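For example, the exporter can point at the built-in _postgres_ database
via its standard `DATA_SOURCE_NAME` environment variable (the
surrounding variable name here is hypothetical):

```yaml
postgres_exporter_env:
  DATA_SOURCE_NAME: postgresql:///postgres?host=/var/run/postgresql
```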
Currently, the certificate authority that issues certificates for
PostgreSQL clients is hosted in Kubernetes and managed by
_cert-manager_. Certificates it issues are stored in Kubernetes Secret
resources, which makes them easy for applications running in the
cluster to consume, but not for anything outside it. Since Nextcloud
runs on its own
VM, we need a way to get the certificate out of the Secret and into a
file on that machine. To that end, I've written the
`nextcloud-fetch-cert.py` script. This script uses a Kubernetes Service
Account token to authenticate to the Kubernetes API and download the
contents of the Secret. It runs periodically, triggered by a systemd
timer unit, to ensure the certificate is always up-to-date.
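At its core, the script just makes one authenticated request to the
Secrets API and decodes the result; a rough sketch (the URL, paths, and
names are assumptions, not the actual script):

```python
import base64
import json
import urllib.request

API = 'https://k8s.example.com:6443'  # assumed API server URL
with open('/etc/nextcloud/k8s-token') as f:  # assumed token path
    token = f.read().strip()

req = urllib.request.Request(
    f'{API}/api/v1/namespaces/nextcloud/secrets/postgresql-client-cert',
    headers={'Authorization': f'Bearer {token}'},
)
# (TLS verification against the cluster CA omitted for brevity)
secret = json.load(urllib.request.urlopen(req))
# cert-manager stores PEM data base64-encoded under tls.crt/tls.key
with open('/etc/nextcloud/postgresql-client.crt', 'wb') as out:
    out.write(base64.b64decode(secret['data']['tls.crt']))
```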
The obvious drawback to this approach is the requirement for a static
token. Since there's not really a way to "renew" Service Account
tokens, the token has to be issued with a fairly long duration, to
mitigate the risk of being unable to fetch a new certificate because
the token itself has expired. This somewhat negates the advantage
of using certificates for authentication, since now the machine needs a
static, pre-defined secret.
At some point, I may deploy another instance of _step-ca_ to manage the
PostgreSQL client CA. Clients can then use e.g. `certbot` or `step ca
certificate` to obtain their certificates. I chose not to implement
this yet, though, for a couple of reasons. First, I need to move the
Nextcloud database very soon, so that we can switch to using `restic`
for backups without having to deal with the database. Second, I am still
considering moving Nextcloud into Kubernetes eventually, where it will
be able to get the Secret directly; since Nextcloud is the only client
outside the cluster, it may not be worth setting up _step-ca_ in that
case.
The _nextcloud_ role originally handled setting up the PostgreSQL
database and assumed that it was running on the same server as Nextcloud
itself. I have factored out those tasks into their own role,
_nextcloud-db_, which can be applied to a separate host.
I have also introduced some new variables (`nextcloud_db_host`,
`nextcloud_db_name`, `nextcloud_db_user`, and `nextcloud_db_password`),
which can be used to specify how to connect to the database, if it is
hosted remotely. Since these variables are used by both the _nextcloud_
and _nextcloud-db_ roles, they are actually defined in a separate role,
_nextcloud-base_, upon which both depend.
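For a remote database, the variables might look like this (values are
illustrative):

```yaml
nextcloud_db_host: db0.example.com
nextcloud_db_name: nextcloud
nextcloud_db_user: nextcloud
nextcloud_db_password: '{{ vault_nextcloud_db_password }}'
```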
When HAProxy binds to the IPv6 socket, it can handle both IPv6 and IPv4
clients. IPv4 clients are represented as IPv4-mapped IPv6 addresses,
which some backends (e.g. Apache) cannot handle. To avoid this, we
configure HAProxy to bind to the IPv4 and IPv6 sockets separately, so
that IPv4 addresses are handled as IPv4 addresses.
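The corresponding `bind` lines look something like this:

```
frontend https
    # separate sockets, so IPv4 clients keep plain IPv4 addresses
    bind 0.0.0.0:443
    bind :::443 v6only
```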
Expose a virtual host on a separate TCP port that uses the PROXY
protocol. This way, HAProxy can pass the original client IP address to
Jellyfin without terminating the TLS connection.
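On the web server side, that virtual host accepts the PROXY protocol on
its own listener; with nginx, for example (the port and addresses are
illustrative):

```
server {
    # dedicated port for connections arriving from HAProxy
    listen 8443 ssl proxy_protocol;
    server_name jellyfin.example.com;
    # trust the PROXY protocol header only from the reverse proxy
    set_real_ip_from 172.30.0.1;
    real_ip_header proxy_protocol;
}
```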
In order to enable authentication using LDAP over TLS in Jellyfin, we
need to expose the CA certificate that issues the LDAP server
certificates to the container.
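If the container is managed by a Quadlet unit, a read-only bind mount is
enough (paths are illustrative):

```ini
[Container]
Volume=/etc/pki/tls/certs/ldap-ca.crt:/etc/ssl/certs/ldap-ca.crt:ro
```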
The MinIO server for backups has special requirements for HTTPS. I want
to use subdomains for bucket names, so the certificate must have a
wildcard name, which requires using the DNS-01 challenge. Fortunately,
it is actually pretty easy to use `nsupdate` with GSS-TSIG
authentication to automate DNS record creation, and by default, all
domain-member machines can create any records. Thus, using the `manual`
auth plugin for `certbot` and a script to run `nsupdate`, obtaining the
wildcard certificate is fairly straightforward.
The biggest issue I encountered while developing this feature was
caching of NXDOMAIN responses. There doesn't seem to be a way to change
the TTL of the SOA record of the Active Directory DNS domain, which
defaults to 3600, meaning NXDOMAIN responses are always cached for an
hour. When adding a record using `nsupdate -g`, the tool always
performs a SOA lookup of the new name to find its target zone. Since
the name does not exist yet, the domain controller responds with
NXDOMAIN, which gets cached by the main DNS server. Thus, even after
adding the record, the ACME server will not be able to resolve the
name for up to an hour. We can avoid this by explicitly setting the
target zone. That would not work in a multi-domain forest, but
fortunately, we do not have to worry about that.
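The `certbot` hook then boils down to something like this (the realm
and zone are illustrative; `CERTBOT_DOMAIN` and `CERTBOT_VALIDATION`
are set by `certbot`):

```sh
# authenticate with the machine account's Kerberos credentials
kinit -k 'MYHOST$@EXAMPLE.COM'
nsupdate -g <<EOF
zone example.com.
update add _acme-challenge.${CERTBOT_DOMAIN}. 60 IN TXT "${CERTBOT_VALIDATION}"
send
EOF
```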
This role borrows some logic from the *postgresql-cert* role.
Eventually, I probably want to combine some of the steps from both of
these roles, possibly replacing the old *certbot* role.
The *minio-nginx* role configures nginx to proxy for MinIO. It uses the
"subdomain" pattern, as described in [Configure NGINX Proxy for MinIO
Server][0]; the S3 API and the console UI are accessible through
different domain names.
[0]: https://min.io/docs/minio/linux/integrations/setup-nginx-proxy-with-minio.html
Modern versions of Podman use Netavark, which needs to write various
files on the host file system (even when the container uses the
host's network namespace).
If the `minio_address` variable is specified, it will be passed with the
`--address` argument to `minio server`. This allows controlling the
socket the server binds to and listens on.
The `minio_browser_redirect_url` variable can be specified to populate the
similarly-named environment variable, which configures how MinIO serves
the web UI.
The `minio_domain` variable sets the `MINIO_DOMAIN` environment
variable, which enables DNS names (subdomains) for buckets, i.e.
`{bucket_name}.{MINIO_DOMAIN}`.
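For example (illustrative values):

```yaml
minio_address: 127.0.0.1:9000
minio_browser_redirect_url: https://console.backup.example.com
minio_domain: backup.example.com
```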
`wal-g` needs to connect to the PostgreSQL database system, so it should
run as the _postgres_ user, who has permission to connect, rather than
_root_, who does not.
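In the service unit, that just means running as that user, e.g.:

```ini
[Service]
User=postgres
```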
Gitea needs SMTP configuration in order to send e-mail notifications
about e.g. pull requests. The `gitea_smtp` variable can be defined to
enable this feature.
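The exact keys depend on the role's template; the shape is roughly like
this (the key names here are assumptions):

```yaml
gitea_smtp:
  host: smtp.example.com
  port: 587
  from: gitea@example.com
  user: gitea
  password: '{{ vault_gitea_smtp_password }}'
```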
Gitea complains if the `WORK_DIR` setting is not set. It tries to set
it itself, but fails because the configuration is read-only. The value
it uses is incorrect anyway (`/usr/local/bin`, since that's where the
`gitea` executable is).
I've already made a couple of mistakes keeping the HTTP and HTTPS rules
in sync. Let's define the sites declaratively and derive the HAProxy
rules from the data, rather than typing the rules manually.
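The data might take a shape like this (a hypothetical structure):

```yaml
haproxy_sites:
  - name: nextcloud
    hostname: cloud.example.com
    backend: nextcloud.example.com:443
```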
The *dch-proxy* role has not been used for quite some time. The web
server has been handling the reverse proxy functionality, in addition
to hosting websites. The drawback to using Apache as the reverse proxy,
though, is that it operates in TLS-terminating mode, so it needs to have
the correct certificate for every site and application it proxies for.
This is becoming cumbersome, especially now that there are several sites
that do not use the _pyrocufflink.net_ wildcard certificate. Notably,
Tabitha's _hatchlearningcenter.org_ is problematic because although the
main site is hosted by the web server, the Invoice Ninja client portal
is hosted in Kubernetes.
Switching back to HAProxy to provide the reverse proxy functionality
will eliminate the need to have the server certificate both on the
backend and on the reverse proxy, as it can operate in TLS-passthrough
mode. The main reason I stopped using HAProxy in the first place was
that, in TLS-passthrough mode, the original source IP address is lost.
Fortunately, HAProxy and Apache can both be configured to use
the PROXY protocol, which provides a mechanism for communicating the
original IP address while still passing through the TLS connection
unmodified. This is particularly important for Nextcloud because of its
built-in intrusion prevention; without knowing the actual source IP
address, it blocks _everyone_, since all connections appear to come from
the reverse proxy's IP address.
Combining TLS-passthrough mode with the PROXY protocol resolves both the
certificate management issue and the source IP address issue.
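Conceptually, the HAProxy side looks like this (names and addresses are
illustrative):

```
frontend https
    mode tcp
    bind :::443 v6only
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend nextcloud if { req.ssl_sni -i cloud.example.com }

backend nextcloud
    mode tcp
    # send-proxy-v2 prepends the PROXY protocol header, so Apache
    # (with RemoteIPProxyProtocol On) sees the real client address
    server nextcloud0 172.30.0.10:443 send-proxy-v2
```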
I've cleaned up the _dch-proxy_ role quite a bit in this commit.
Notably, I consolidated all the backend and frontend definitions into a
single file; it didn't really make sense to have them all separate,
since they were managed by the same role and referred to each other. Of
course, I had to update the backends to match the currently-deployed
applications as well.
The *postfix* role will now generate configuration and a lookup table
for [canonical address mapping][0] of email recipients. To configure
the mapping, the `postfix_recipient_canonical_map` variable must be a
dictionary of source-target addresses, e.g.:

```yaml
postfix_recipient_canonical_map:
  my.bad.email@fake.test: my.real.email@example.com
```
[0]: https://www.postfix.org/ADDRESS_REWRITING_README.html#canonical
If winbind is unable to communicate with any domain controller, the
`pam_winbind.so` module will time out. In _auth_ and _account_ context,
this was not an issue, at least for local users, because other modules
terminated the stack before `pam_winbind.so` was called. In _session_
context, though, nothing terminated the stack at all, so
`pam_winbind.so` was called unconditionally. This prevented even _root_
from logging in on the console. This made troubleshooting difficult,
especially for the VM hosts, when the domain controllers were down.
Deploying Caddy as a reverse proxy for Frigate enables HTTPS with a
certificate issued by the internal CA (via ACME) and authentication via
Authelia.
Separating the installation and base configuration of Caddy into its
own role will allow us to reuse that part for other applications that
use Caddy for similar reasons.
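A sketch of the kind of site definition this produces (the hostnames,
ports, and Authelia endpoint are assumptions):

```
frigate.example.com {
    tls {
        # request a certificate from the internal ACME CA
        ca https://ca.example.com/acme/acme/directory
    }
    forward_auth authelia.example.com {
        uri /api/verify?rd=https://auth.example.com/
    }
    reverse_proxy 127.0.0.1:5000
}
```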
The *gasket-dkms* package provides the `gasket` and `apex` kernel
modules, which are needed for the Google Coral Edge TPU. Since these
are out-of-tree modules, they are not allowed in Fedora proper, so they
are provided in a COPR and have to be rebuilt for every kernel version.
The DKMS framework handles automatically building the modules whenever
the kernel updates.
For systems using UEFI with Secure Boot enabled, kernel modules must be
signed by a key trusted by the platform. For locally-built modules, we
can use the Machine Owner Key (MOK). Unfortunately, enrolling a new MOK
requires rebooting and manual intervention during the boot process.
Therefore, the *gasket-dkms* role has a `pause` step to ensure someone
is paying attention and able to handle the key enrollment interactively.
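The enrollment amounts to staging the signing key and confirming it in
MokManager on the next boot; roughly (the key path is an assumption):

```sh
# stage the locally-generated signing key for enrollment
mokutil --import /var/lib/dkms/mok.pub
# MokManager prompts for confirmation during the next boot
systemctl reboot
```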
Eventually, I'd like to have an RPM package with these modules
pre-built, so production servers do not need the kernel development
tools (`perl`, `gcc`, headers, etc.). It will be tricky, though, to
make sure the modules get rebuilt for every kernel version as Fedora
releases them.
* Switch to Quadlet-style `.container` for systemd unit
* Update to new image tag naming scheme (not arch-specific)
* Use environment variables for secrets
* Allow the entire `frigate_config` variable to be overridden
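A Quadlet unit for Frigate might look roughly like this (the image tag
and paths are illustrative):

```ini
[Container]
Image=ghcr.io/blakeblackshear/frigate:stable
Network=host
# secrets are passed as environment variables
EnvironmentFile=/etc/frigate/frigate.env
Volume=/etc/frigate/config.yml:/config/config.yml:Z

[Install]
WantedBy=multi-user.target
```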
The *useproxy* role configures the `http_proxy` et al. environment
variables for systemd services and interactive shells. Additionally, it
configures Yum repositories to use a single mirror via the `baseurl`
setting, rather than a list of mirrors via `metalink`, since a) the
proxy only allows access to _dl.fedoraproject.org_ and b) the proxy
caches RPM files, which is only effective if all clients use the same
mirror all the time.
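The repository change amounts to swapping `metalink` for a fixed
`baseurl`, e.g.:

```ini
[fedora]
name=Fedora $releasever - $basearch
baseurl=https://dl.fedoraproject.org/pub/fedora/linux/releases/$releasever/Everything/$basearch/os/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-$releasever&arch=$basearch
```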
The `useproxy.yml` playbook applies this role to servers in the
*needproxy* group.
Although it's rare, sometimes Samba crashes or fails to start. When
this happens, restarting it is almost always enough to get it working
again. Since all sorts of authentication problems can occur if one of
the domain controllers is down, it's probably best to just have systemd
automatically restart _samba.service_ if it ever stops for any reason.
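A drop-in like this is all it takes (the restart delay is illustrative):

```ini
# /etc/systemd/system/samba.service.d/restart.conf
[Service]
Restart=always
RestartSec=5
```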
The [postgres-exporter][0] exposes PostgreSQL server statistics to
Prometheus. It connects to a specified PostgreSQL server (in this
case, a server on the local machine via UNIX socket) and collects data
from the `pg_stat_activity`, et al. views. It needs the `pg_monitor`
role in order to be allowed to read the relevant metrics.
Since we're setting up the exporter to connect via UNIX socket, it needs
a dedicated OS user to match the PostgreSQL user in order to
authenticate via the _peer_ method.
[0]: https://github.com/prometheus-community/postgres_exporter/
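Granting the built-in role to the exporter's PostgreSQL user is a
one-liner (the user name is illustrative):

```sql
GRANT pg_monitor TO "postgres-exporter";
```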
WAL archives are not much good without a base backup onto which they
can be applied. Thus, we need to schedule WAL-G to create and upload a
backup periodically.
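The scheduled service essentially just runs the base backup command:

```sh
wal-g backup-push "$PGDATA"
```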
This role installs `wal-g` from the DCH Yum repository, and creates a
configuration file for it in `/etc/postgresql`. Additionally, it
installs a custom SELinux policy module that allows `wal-g` to run in
the `postgresql_t` domain (i.e. when spawned by the PostgreSQL server).
This role can be used to get a server certificate for PostgreSQL from an
ACME CA using `certbot`. It fetches the initial certificate and copies
it to the PostgreSQL configuration directory. It also sets up a
post-renewal hook script that copies updated certificates and reloads
the server.
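The hook is essentially this (paths are illustrative; `RENEWED_LINEAGE`
is set by `certbot`):

```sh
#!/bin/sh
# copy the renewed certificate and key where PostgreSQL expects them
install -m 0600 -o postgres -g postgres \
    "${RENEWED_LINEAGE}/fullchain.pem" /etc/postgresql/server.crt
install -m 0600 -o postgres -g postgres \
    "${RENEWED_LINEAGE}/privkey.pem" /etc/postgresql/server.key
systemctl reload postgresql
```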
This rewrite brings a lot of improvements and new functionality to the
*postgresql-server* role. The most noticeable change is the
introduction of the `postgresql_config_dir` variable, which can be used
to specify a different location for the PostgreSQL server configuration
files, separate from the data storage directory. By default, the
variable is set to `/etc/postgresql`. For some situations, it may be
necessary to disable this functionality, which can be accomplished by
setting the value of `postgresql_config_dir` to the same path as
`pgdata_dir`. Note also that the `postgresql-setup` tool, and the
corresponding `postgresql-check-db-dir` script, which are included in
the Fedora/Red Hat distribution of PostgreSQL, do not support having
separate configuration and data directories, so their use has to be
disabled.
Another significant improvement is to how the `postgresql.conf` file
is generated. Any setting can be set now, using the `postgresql_config`
variable; any key in this dictionary will be written to the
configuration file. Note that configuration file syntax requires
single quotes around string values, so these have to be included in the
YAML value.
To support deploying standby servers, the role now supports running a
command to restore from a backup instead of running `initdb`.
Additionally, the `postgresql_standby` variable can be set to `true`
to create the `recovery.signal` file, configuring the server as a
standby.
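For example, a standby server's variables might look like this; note
the single quotes embedded in the YAML string values:

```yaml
postgresql_config_dir: /etc/postgresql
postgresql_config:
  listen_addresses: "'*'"
  log_destination: "'syslog'"
  max_connections: 200
postgresql_standby: true
```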
Sending SIGHUP to the main PID (i.e. conmon) ends up stopping the
service. What we really want is to send the signal to the main PID
_inside_ the container. We can achieve this by using `podman kill`
instead of
`kill`.
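In the unit, that means something like this (the container name is
illustrative):

```ini
[Service]
# signal the main process inside the container, not conmon
ExecReload=/usr/bin/podman kill --signal HUP frigate
```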
Without making the firewall changes permanent, when a server tries to
renew its certificate after rebooting, it will fail as the ACME server
cannot connect to the HTTP port.
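If the role uses the `ansible.posix.firewalld` module, the fix is just
setting `permanent` alongside `immediate`:

```yaml
- name: Open the HTTP port for ACME challenges
  ansible.posix.firewalld:
    service: http
    state: enabled
    immediate: true
    permanent: true
```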
Sometimes, the `collectd-version` script crashes or fails to start at
boot. Configuring systemd to automatically restart it will ensure that
it's always running, so machines' versions are consistently inventoried.
The `squid.service` systemd unit now correctly initializes the
configured cache directories, so we do not need to do it explicitly
before starting the server.
The *samba-cert* role configures `lego` and HAProxy to obtain an X.509
certificate via the ACME HTTP-01 challenge. HAProxy is necessary
because LDAP server certificates need to have the apex domain in their
SAN field, and the ACME server may contact *any* domain controller
with an A record for that name. HAProxy will forward the
challenge request on to the first available host on port 5000, where
`lego` is listening to provide validation.
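The forwarding rule is conceptually this (host names are illustrative):

```
frontend http
    bind :::80 v4v6
    use_backend acme if { path_beg /.well-known/acme-challenge/ }

backend acme
    # the first available DC running the lego HTTP-01 solver
    server dc0 dc0.pyrocufflink.blue:5000 check
    server dc1 dc1.pyrocufflink.blue:5000 check backup
```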
Issuing certificates this way has a couple of advantages:
1. No need for the wildcard certificate for the *pyrocufflink.blue*
domain any more
2. Renewals are automatic and handled by the server itself rather than
Ansible via scheduled Jenkins job
Item (2) is particularly interesting because it avoids the bi-monthly
issue where replacing the LDAP server certificate and restarting Samba
causes the Jenkins job to fail.
Naturally, for this to work correctly, all LDAP client applications
need to trust the certificates issued by the ACME server, in this case
*DCH Root CA R2*.
HAProxy uses a special configuration block, `resolvers`, to specify
how it should look up names in DNS. This configuration is used for
e.g. dynamically discovering backend servers via DNS A or SRV records.
Since resolvers are global, they need to be specified in the global
configuration file, rather than a per-application drop-in.
We will use this functionality for the ACME HTTP-01 challenge solver
for Samba AD domain controllers.
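For example (the nameserver address and host names are illustrative):

```
resolvers dns
    nameserver dc0 172.30.0.1:53
    accepted_payload_size 8192

backend acme
    # re-resolve the server's address at run time
    server dc0 dc0.pyrocufflink.blue:5000 check resolvers dns
```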