The linuxserver.io Unifi container stored Unifi server and device logs
under `/var/lib/unifi/logs`, while the new container stores them under
`/var/log/unifi`.
I continually struggle with the clocks on machines (physical and
virtual, even the Roku devices!) drifting out of sync. I have been
putting off
fixing this because I wanted to set up a Windows-compatible NTP server
(i.e. on the domain controllers, with Kerberos signing), but there's
really no reason to wait for that to fix the clocks on all the
non-Windows machines, especially since there are exactly 0 Windows
machines on the network right now.
The *chrony* role and corresponding `chrony.yml` playbook are generic,
configured via the `chrony_pools`, `chrony_servers`, and `chrony_allow`
variables. As configured here, the firewall acts as an NTP server,
synchronizing with the NTP pool on the Internet, while all other
machines synchronize with the firewall. This
allows machines on networks without Internet access to keep their clocks
in sync.
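For illustration, the variables might be set something like this (only
the variable names come from the role; the values, including the
firewall's hostname and subnet, are placeholders):

```yaml
# host_vars for the firewall: sync with the public pool, serve the LAN
chrony_pools:
  - pool.ntp.org
chrony_allow:
  - 172.30.0.0/16                  # hypothetical LAN subnet

# group_vars/all: everything else syncs with the firewall
chrony_servers:
  - firewall.pyrocufflink.blue     # hypothetical hostname
```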
The `root_authorized_keys` variable was originally defined only for the
*pyrocufflink* group. This used to effectively be "all" machines, since
everything was a member of the AD domain. Now that we're moving away
from that deployment model, we still want to have the break-glass
option, so we need to define the authorized keys for the _all_ group.
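A sketch of the variable in `group_vars/all`, assuming it is a plain
list of public keys (the key below is a placeholder):

```yaml
# Break-glass SSH keys for root on every machine (placeholder key)
root_authorized_keys:
  - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA... dustin@pyrocufflink.blue
```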
This was the last group that had an entire file encrypted with Ansible
Vault. Now that the Synapse server is long gone, rather than convert it
to having individually-encrypted values, we can get rid of it entirely.
While having a password set for _root_ provides a convenient way of
accessing a machine even if it is not available via SSH, using a static
password in this way is quite insecure and not worth the risk. I may
try to come up with a better way to set a unique password for each
machine eventually, but for now, having this password here is too
dangerous to keep.
This role can ensure PostgreSQL users and databases are created for
applications that are not themselves managed by Ansible. Notably, we
need to do this for anything deployed in Kubernetes that uses the
central database server.
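The commit doesn't show the role's interface, but the variable shape is
presumably something along these lines (the names here are hypothetical,
using a Kubernetes-hosted application mentioned elsewhere in this log as
an example):

```yaml
# Hypothetical variable shape for the role
postgresql_users:
  - name: nextcloud
postgresql_databases:
  - name: nextcloud
    owner: nextcloud
```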
The *k8s-iot-net-ctrl* group is for the Raspberry Pi that has the Zigbee
and Z-Wave controllers connected to it. This node runs the Zigbee2MQTT
and ZWaveJS2MQTT servers as Kubernetes pods.
We need to expose Victoria Metrics to the Internet so the `vmagent`
process on the VPS can push the metrics it has scraped from its Blackbox
exporter. Authelia needs to allow access to the `/insert/` paths, of
course.
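In Authelia's access-control rules, that could look something like the
sketch below; the hostname is a placeholder, and `bypass` assumes
`vmagent` authenticates by some other means:

```yaml
access_control:
  rules:
    - domain: metrics.pyrocufflink.blue   # placeholder hostname
      resources:
        - '^/insert/.*$'
      policy: bypass
```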
The _remote-blackbox_ group defines a system that runs
_blackbox-exporter_ and _vmagent_ in a remote (cloud) location. This
system will monitor our public web sites. This will give a better idea
of their availability from the perspective of a user on the Internet,
which can be affected by factors that are not necessarily visible from
within the network.
In the spirit of replacing bloated tools with unnecessary functionality
with smaller, more focused alternatives, we can use `doas` instead of
`sudo`. Originally, it was a BSD tool, but the Linux port supports PAM,
so we can still use `pam_ssh_agent_auth` for passwordless
authentication.
The Samba AD domain performs two important functions: centralized user
identity mapping via LDAP, and centralized authentication via
Kerberos/GSSAPI. Unfortunately, Samba, on both domain controllers and
members, is quite frustrating. The client, _winbind_, frequently just
stops working and needs to have its cache flushed in order to resolve
user IDs again. It also takes quite a lot of memory, something rather
precious on Raspberry Pis. The DC is also somewhat flaky at times, and
cumbersome to upgrade. In short, I really would like to get rid of as
much of it as possible.
For most use cases, OIDC can replace Kerberos. For SSH specifically, we
can use SSH certificates (which are issued in exchange for OIDC tokens).
Unfortunately, user and group accounts still need ID numbers assigned,
which is what _winbind_ does. In reality, there's only one user that's
necessary: _dustin_. It doesn't make sense to bring along all the
baggage of Samba just to map that one account. Instead, it's a lot
simpler and more robust to create it statically.
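A minimal sketch of the static account as an Ansible task; the UID is a
placeholder, though in practice it would need to match the ID _winbind_
used to assign so existing file ownership stays correct:

```yaml
- name: Create the dustin account statically
  ansible.builtin.user:
    name: dustin
    uid: 10000        # placeholder; must match winbind's old mapping
    shell: /bin/bash
```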
So far, I have been managing Kubernetes worker nodes with Fedora CoreOS
Ignition, but I have decided to move everything back to Fedora and
Ansible. I like the idea of an immutable operating system, but the FCOS
implementation is not really what I want. I like the automated updates,
but that can be accomplished with _dnf-automatic_. I do _not_ like
giving up control of when to upgrade to the next Fedora release.
Mostly, I never did come up with a good way to manage application-level
configuration on FCOS machines. None of my experiments (Cue+tmpl,
KCL+etcd+Luci) panned out, so I mostly ended up managing configuration
on each node manually. Managing OS-level
configuration is also rather cumbersome, since it requires redeploying
the machine entirely. Altogether, I just don't think FCOS fits with my
model of managing systems.
This commit introduces a new playbook, `kubernetes.yml`, and a handful of
new roles to manage Kubernetes worker nodes running Fedora Linux. It
also adds two new deploy scripts, `k8s-worker.sh` and `k8s-longhorn.sh`,
which fully automate the process of bringing up worker nodes.
The target location for WAL archives and backups saved by WAL-G should
be separated based on the major version of PostgreSQL with which they
are compatible. This will make it easier to restore those backups,
since they can only be restored into a cluster of the same version.
Unfortunately, WAL-G does not natively handle this. In fact, it doesn't
really have any way of knowing the version of the PostgreSQL server it
is backing up, at least when it is uploading WAL archives. Thus, we
have to include the version number in the target path (S3 prefix)
manually. We can't rely on Ansible to do this, because there is no way
to ensure Ansible runs at the appropriate point during the upgrade
process. As such, we need to be able to modify the target location as
part of the upgrade, without causing a conflict with Ansible the next
time it runs.
To that end, I've changed how the _wal-g-pg_ role creates the
configuration file for WAL-G. Instead of rendering directly to
`wal-g.yml`, the role renders a template, `wal-g.yml.in`. This template
can include a `@PGVERSION@` specifier. The `wal-g-config` script will
then use `sed` to replace that specifier with the version of PostgreSQL
installed on the server, rendering the final `wal-g.yml`. This script
is called both by an Ansible handler, after generating the template
configuration, and as a post-upgrade action by the
`postgresql-upgrade` script.
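A sketch of the relevant part of the template, assuming an S3 prefix
layout like the one below (`WALG_S3_PREFIX` is a real WAL-G setting; the
bucket and path are illustrative):

```yaml
# wal-g.yml.in; wal-g-config replaces @PGVERSION@ with the installed
# server version (e.g. via sed) to render the final wal-g.yml
WALG_S3_PREFIX: s3://postgres-backups/pg@PGVERSION@
```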
I originally wanted the `wal-g-config` script to use the version of
PostgreSQL specified in the `PG_VERSION` file within the data directory.
This would ensure that WAL-G always uploads/downloads files for the
matching version. Unfortunately, this introduced a dependency conflict:
the WAL-G configuration needs to be present before a backup can be
restored, but the data directory is empty until after the backup has
been restored. Thus, we have to use the installed server version,
rather than the data directory version. This leaves a small window
where WAL-G may be configured to point to the wrong target if the
`postgresql-upgrade` script fails and thus does not trigger regenerating
the configuration file. This could result in new WAL archives/backups
being uploaded to the old target location. These files would be
incompatible with the other files in that location, and could
potentially overwrite existing files. This is rather unlikely, since
the PostgreSQL server will not start if the _postgresql-upgrade.service_
failed. The only time it should be possible is if the upgrade fails in
such a way that it leaves an empty but valid data directory, and then
the machine is rebooted.
_loki1.pyrocufflink.blue_ replaces _loki0.pyrocufflink.blue_. The
former runs Fedora Linux and is managed by Ansible, while the latter ran
Fedora CoreOS and was managed by Ignition and _cfg_.
I want to publish the _20125_ Status application to an F-Droid
repository to make it easy for Tabitha to install and update. F-Droid
repositories are similar to other package repositories: a collection of
packages and some metadata files. Although there is a fully-fledged
server-side software package that can manage F-Droid repositories, it's
not required: the metadata files can be pre-generated and then hosted by
a static web server just fine.
This commit adds configuration for the web server and reverse proxy to
host the F-Droid repository at _apps.du5t1n.xyz_.
Bitwarden has not worked correctly for clients using the non-canonical
domain name (i.e. _bitwarden.pyrocufflink.blue_) for quite some time.
This still trips me up occasionally, though, so hopefully adding a
server-side redirect will help. Eventually, I'll probably remove the
non-canonical name entirely.
The current Grafana Loki server, *loki0.pyrocufflink.blue*, runs Fedora
CoreOS and is managed by Ignition and *cfg*. Since I have declared
*cfg* a failed experiment, I'm going to re-deploy Loki on a new VM
running Fedora Linux and managed by Ansible.
The *loki* role installs Podman and defines a systemd-managed container
to run Grafana Loki.
MinIO/S3 clients generate a _lot_ of requests. It's also not
particularly useful to have these stored in Loki anyway. As such, we'll
stop routing them to syslog/journal.
Having access logs is somewhat useful for troubleshooting, but really
only for live requests (i.e. what's happening right now). We therefore
keep the access logs around in a file, but only for one day, so as not
to fill up the filesystem with logs we'll never see.
_wal-g_ can send StatsD metrics when it completes an upload/backup/etc.
task. Using the `statsd_exporter`, we can capture these metrics and
make them available to Victoria Metrics.
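A minimal `statsd_exporter` mapping sketch; the WAL-G metric name
pattern here is an assumption, not taken from its documentation:

```yaml
# statsd_exporter mapping config (metric name pattern is assumed)
mappings:
  - match: "walg.*"
    name: "walg_${1}"
```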
Nextcloud writes JSON-structured logs to
`/var/lib/nextcloud/data/nextcloud.log`. These logs contain errors,
etc. from the Nextcloud server, which are useful for troubleshooting.
Having them in Loki will allow us to view them in Grafana as well as
generate alerts for certain events.
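The commit doesn't say which agent ships the file; assuming Promtail, a
scrape config fragment might look like:

```yaml
# Promtail config fragment: tail Nextcloud's JSON log
scrape_configs:
  - job_name: nextcloud
    static_configs:
      - targets: [localhost]
        labels:
          job: nextcloud
          __path__: /var/lib/nextcloud/data/nextcloud.log
```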
_WAL-G_ and _restic_ both generate a lot of HTTP traffic, which fills up
the log volume pretty quickly. Let's reduce the number of days logs are
kept on the file system. Logs are shipped to Loki anyway, so there's
not much need to keep them locally for very long.
Invoice Ninja needs to be accessible from the Internet in order to
receive webhooks from Stripe. Additionally, Apple Pay requires
contacting Invoice Ninja for domain verification.
Gitea and Vaultwarden both have SQLite databases. We'll need to add
some logic to ensure these are in a consistent state before beginning
the backup. Fortunately, neither of them are very busy databases, so
the likelihood of an issue is pretty low. It's definitely more
important to get backups going again sooner, and we can deal with that
later.
The `restic.yml` playbook applies the _restic_ role to hosts in the
_restic_ group. The _restic_ role installs `restic` and creates a
systemd timer and service unit to run `restic backup` every day.
Restic doesn't really have a configuration file; all its settings are
controlled either by environment variables or command-line options. Some
options, such as the list of files to include in or exclude from
backups, take paths to files containing the values. We can make use of
these to provide some configurability via Ansible variables. The
`restic_env` variable is a map of environment variables and values to
set for `restic`. The `restic_include` and `restic_exclude` variables
are lists of paths/patterns to include and exclude, respectively.
Finally, the `restic_password` variable contains the password to decrypt
the repository contents. The password is written to a file and exposed
to the _restic-backup.service_ unit using [systemd credentials][0].
When using S3 or a compatible service for repository storage, Restic of
course needs authentication credentials. These can be set using the
`restic_aws_credentials` variable. If this variable is defined, it
should be a map containing the `aws_access_key_id` and
`aws_secret_access_key` keys, which will be written to an AWS shared
credentials file. This file is then exposed to the
_restic-backup.service_ unit using [systemd credentials][0].
[0]: https://systemd.io/CREDENTIALS/
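Putting the variables together, a host's configuration might look like
this sketch (all values are illustrative; the repository URL borrows the
MinIO endpoint described elsewhere in this log):

```yaml
restic_env:
  RESTIC_REPOSITORY: s3:https://s3.backups.pyrocufflink.blue/host0  # illustrative
restic_include:
  - /etc
  - /var/lib
restic_exclude:
  - /var/lib/*/.cache
restic_password: '{{ vault_restic_password }}'       # individually-encrypted value
restic_aws_credentials:
  aws_access_key_id: AKIAEXAMPLE                     # placeholder
  aws_secret_access_key: '{{ vault_restic_s3_key }}'
```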
Since LAN clients have IPv6 addresses now, some may try to connect to
the database over IPv6, so we need to allow this in the host-based
authentication rules.
Moving the Nextcloud database to the central PostgreSQL server will
allow it to take advantage of the monitoring and backups in place there.
For backups specifically, this will make it easier to switch from BURP
to Restic, since now only the contents of the filesystem need to be
backed up.
The PostgreSQL server on _db0_ requires certificate authentication for
all clients. The certificate for Nextcloud is stored in a Secret in
Kubernetes, so we need to use the _nextcloud-db-cert_ role to install
the script to fetch it. Nextcloud configuration doesn't expose the
parameters for selecting the certificate and private key files, but
fortunately, they can be encoded in the value provided to the `host`
parameter, though it makes for a rather cumbersome value.
*chromie.pyrocufflink.blue* will replace *burp1.pyrocufflink.blue* as
the backup server. It is running on the hardware that was originally
*nvr1.pyrocufflink.blue*: a 1U Jetway server with an Intel Celeron N3160
CPU and 4 GB of RAM.
This playbook uses the *minio-nginx* and *minio-backups-cert* roles to
deploy MinIO with nginx.
The S3 API server is *s3.backups.pyrocufflink.blue*, and buckets can be
accessed as subdomains of this name.
The Admin Console is *minio.backups.pyrocufflink.blue*.
Certificates are issued by DCH CA via ACME using `certbot`.
Gitea needs SMTP configuration in order to send e-mail notifications
about e.g. pull requests. The `gitea_smtp` variable can be defined to
enable this feature.
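The commit doesn't spell out the variable's shape; presumably it maps
onto Gitea's mailer settings, something like:

```yaml
# Hypothetical shape, mirroring Gitea's [mailer] settings
gitea_smtp:
  host: mail.pyrocufflink.blue          # placeholder hostname
  port: 465
  from: gitea@pyrocufflink.net
  user: gitea
  password: '{{ vault_gitea_smtp_password }}'
```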
I've already made a couple of mistakes keeping the HTTP and HTTPS rules
in sync. Let's define the sites declaratively and derive the HAProxy
rules from the data, rather than manually typing the rules.
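The data model could be as simple as the sketch below (the structure and
names are my guess, not the actual variable); the HTTP and HTTPS
frontend rules are then both templated from this one list:

```yaml
# Hypothetical declarative site list
haproxy_sites:
  - domain: hatchlearningcenter.org
    backend: web
  - domain: bitwarden.pyrocufflink.net
    backend: kubernetes
```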
The *dch-proxy* role has not been used for quite some time. The web
server has been handling the reverse proxy functionality, in addition to
hosting websites. The drawback to using Apache as the reverse proxy,
though, is that it operates in TLS-terminating mode, so it needs to have
the correct certificate for every site and application it proxies for.
This is becoming cumbersome, especially now that there are several sites
that do not use the _pyrocufflink.net_ wildcard certificate. Notably,
Tabitha's _hatchlearningcenter.org_ is problematic because although the
main site is hosted by the web server, the Invoice Ninja client portal
is hosted in Kubernetes.
Switching back to HAProxy to provide the reverse proxy functionality
will eliminate the need to have the server certificate both on the
backend and on the reverse proxy, as it can operate in TLS-passthrough
mode. The main reason I stopped using HAProxy in the first place was
that, in TLS-passthrough mode, the original source IP address is
lost. Fortunately, HAProxy and Apache can both be configured to use
the PROXY protocol, which provides a mechanism for communicating the
original IP address while still passing through the TLS connection
unmodified. This is particularly important for Nextcloud because of its
built-in intrusion prevention; without knowing the actual source IP
address, it blocks _everyone_, since all connections appear to come from
the reverse proxy's IP address.
Combining TLS-passthrough mode with the PROXY protocol resolves both the
certificate management issue and the source IP address issue.
I've cleaned up the _dch-proxy_ role quite a bit in this commit.
Notably, I consolidated all the backend and frontend definitions into a
single file; it didn't really make sense to have them all separate,
since they were managed by the same role and referred to each other. Of
course, I had to update the backends to match the currently-deployed
applications as well.