`/etc/containers/registries.conf.d` is distinct from
`/etc/containers/registries.d`. The latter contains YAML files relating
to image signatures, while the former contains TOML files relating to
registry locations.
It turns out _nginx_ has a built-in default value for `access_log` and
`error_log`, even if they are omitted from the configuration file. To
actually disable writing logs to a file, we need to explicitly specify
`off`.
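As a minimal sketch (the enclosing context is assumed, not copied from the actual configuration):

```nginx
# nginx logs to compiled-in defaults unless told otherwise, so logging
# has to be disabled explicitly.
access_log off;
# In this sketch the error log is simply discarded.
error_log /dev/null;
```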
Using files for certificates and private keys is less than ideal.
The only way to "share" a certificate between multiple hosts is with
symbolic links, which means the configuration policy has to be prepared
for each managed system. As we're moving toward a much more dynamic
environment, this becomes problematic; the host-provisioner will never
be able to copy a certificate to a new host that was just created.
Further, I have never really liked the idea of storing certificates and
private keys in Git anyway, even if it is in a submodule with limited
access.
Now that we're serving kickstart files from the PXE server, we need to
have a correctly-configured HTTPD server, with valid HTTPS certificates,
running there.
The _containers-image_ role configures _containers-registries.conf(5)_ and
_containers-cert.d(5)_, which are used by CRI-O (and `podman`).
Specifically, we'll use these to redirect requests for images on Docker
Hub (docker.io) to the internal caching proxy.
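Roughly, the drop-in looks something like this (the file name and mirror hostname are placeholders, not the actual values):

```toml
# /etc/containers/registries.conf.d/docker-io.conf (hypothetical)
[[registry]]
prefix = "docker.io"
location = "docker.io"

[[registry.mirror]]
location = "registry-proxy.example.internal"
```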
Docker Hub's rate limits are so low now that they've started to affect
my home lab. Deploying a caching proxy and directing all pull requests
through it should prevent exceeding the limit. It will also help
prevent containers from starting if access to the Internet is down, as
long as their images have been cached recently.
The *lego-nginx* role automates obtaining certificates for *nginx* via
ACME using `lego`. It generates a shell script with the appropriate
arguments for `lego run`, runs it once to obtain a certificate
initially, then schedules it to run periodically via a systemd timer
unit. Using `lego`'s "hook" capability, the script signals the `nginx`
server process to reload. This uses `doas` for now, but could be
adapted easily to use `sudo`, if the need ever arises.
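A rough sketch of the kind of script the role generates (the paths, e-mail address, domain, and hook command are placeholders, not the role's actual values):

```sh
#!/bin/sh
exec lego --accept-tos \
    --path /var/lib/lego \
    --email hostmaster@example.com \
    --domains www.example.com \
    --http \
    run --run-hook 'doas systemctl reload nginx'
```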
Now that kickstart scripts are generated from templates by a Jenkins
job, they need to be stored somewhere besides Gitea. It makes sense to
serve them from the PXE server, since it's involved in the installation
process anyway (at least for physical machines). Thus, we need a path
where the generated files can be uploaded by Jenkins and served by
Apache.
The version of Samba in Fedora 42 has got some really weird bugs. In
this case, it seems `net ads kerberos kinit -P` no longer works. It
prints a vague `NT_STATUS_INTERNAL_ERROR` message, with no other
indication of what went wrong. Fortunately, it's still possible to get
a ticket-granting ticket for the machine account using the host keytab.
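The workaround looks roughly like this (the principal name is illustrative); `kinit -k` reads the default host keytab, `/etc/krb5.keytab`:

```sh
kinit -k 'MYHOST$@AD.EXAMPLE.COM'
```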
We don't want `podman` pulling a new container image and updating
without our consent. The image will already be there on the first

start, since we pulled it in an Ansible task.
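One way to express this (illustrative only; the image name is a placeholder) is the `never` pull policy:

```sh
# --pull=never makes podman refuse to fetch the image itself; it will
# only use the copy the Ansible task already pulled.
podman run --pull=never --name myapp registry.example.com/myapp:latest
```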
The `:Z` flag tells the container runtime to run `chcon` recursively on
the specified path, in order to ensure that the files are accessible
inside the container. For a very large volume like the MinIO storage
directory, this can take an extremely long time. It's really only
necessary on the first startup anyway, because the context won't change
after that. To avoid spending a bunch of time, we can set the context
correctly when we create the directory, and then not worry about it
after that.
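A sketch of the idea, with placeholder path and ownership:

```yaml
# Labeling the directory at creation time makes the :Z relabel flag (and
# the recursive chcon it triggers) unnecessary on the volume mount.
- name: Create MinIO data directory with the container file context
  ansible.builtin.file:
    path: /srv/minio
    state: directory
    owner: minio
    group: minio
    mode: "0750"
    setype: container_file_t
```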
Using the Kubernetes API to create bootstrap tokens makes it possible
for the host-provisioner to automatically add new machines to the
Kubernetes cluster. The host provisioner cannot connect to existing
machines, and thus cannot run the `kubeadm token create` command on
a control plane node. With the appropriate permissions assigned to the
service account associated with the pod it runs in, though, it can
directly create the secret via the API.
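Bootstrap tokens are ordinary Secrets in the _kube-system_ namespace, so creating one via the API looks roughly like this (the token ID, secret, and expiration are placeholder values):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-abcdef
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: abcdef
  token-secret: 0123456789abcdef
  expiration: "2030-01-01T00:00:00Z"
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
```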
There are actually two pieces of information required for a node to
join a cluster, though: a bootstrap token and the CA certificate. When
using the `kubeadm token create` command to issue a bootstrap token, it
also provides (a hash of) the CA certificate with the command it prints.
When creating the token manually, we need an alternative method for
obtaining and distributing the CA certificate, so we use the
`cluster-info` ConfigMap. This contains a stub `kubeconfig` file that
includes the CA certificate and can be used by the `kubeadm join`
command with a join configuration file. Generating both of these files
may be a bit more involved than computing the CA certificate hash and
passing that on the command line, but there are a couple of advantages.
First, it's more extensible, as the join configuration file can specify
additional configuration for the node (which we may want to use later).
It's also somewhat more secure, since the token is not passed as a
command-line argument.
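A hedged sketch of the join configuration that ties the two files together (paths and token are placeholders); it would be passed to `kubeadm join --config`:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  file:
    # Stub kubeconfig extracted from the cluster-info ConfigMap; it
    # carries the CA certificate.
    kubeConfigPath: /etc/kubernetes/cluster-info.yaml
  tlsBootstrapToken: abcdef.0123456789abcdef
```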
Interestingly, the most difficult part of this implementation was
getting the expiration timestamp. Ansible exposes very little date math
capability; notably lacking is the ability to construct a `timedelta`
object, so the only way to get a timestamp in the future is to convert
the `datetime` object returned by `now` to a Unix timestamp and add some
number of seconds to it. Further, there is no direct way to get a
`datetime` object from the computed Unix timestamp value, but we can
rely on the fact that Python class methods can be called on instances,
too, so `now().fromtimestamp()` works the same as
`datetime.fromtimestamp()`.
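A minimal sketch of that trick (the 86400-second lifetime is an arbitrary example value):

```yaml
- name: Compute the bootstrap token expiration timestamp
  ansible.builtin.set_fact:
    token_expiration: >-
      {{ now(utc=true)
           .fromtimestamp(now(utc=true).timestamp() + 86400,
                          tz=now(utc=true).tzinfo)
           .strftime('%Y-%m-%dT%H:%M:%SZ') }}
```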
I've become rather frustrated with Grafana Loki lately. It has several
bugs that affect my usage, including issues with counting and
aggregation, completely broken retention and cleanup, spamming itself
with bogus error log messages, and more. Now that VictoriaLogs has
first-class support in Grafana and support for alerts, it seems like a
good time to try it out. It's under very active development, with bugs
getting fixed extremely quickly, and new features added constantly.
Indeed, as I was experimenting with it, I thought, "it would be nice if
the web UI could decode ANSI escapes for terminal colors," and just a
few days later, that feature was added! Native support for syslog is
also a huge benefit, as it will allow me to collect logs directly from
network devices, without first collecting them into a file on the Unifi
controller.
This new role deploys VictoriaLogs in a manner very similar to how I
have Loki set up, as a systemd-managed Podman container. As it has no
built-in authentication or authorization, we rely on Caddy to handle
that. As with Loki, mTLS is used to prevent anonymous access for
querying the logs; authentication via Authelia is also an option for
human+browser usage. I'm re-using the same certificate
authority as with Loki to simplify Grafana configuration. Eventually, I
would like to have a more robust PKI, probably using OpenBao, at which
point I will (hopefully) have decided which log database I will be
using, and can use a proper CA for it.
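A hedged Caddyfile sketch of that arrangement (the hostname, CA path, and backend port are placeholders):

```
logs.example.com {
    tls {
        client_auth {
            mode require_and_verify
            trusted_ca_cert_file /etc/caddy/log-clients-ca.pem
        }
    }
    # VictoriaLogs listens on 9428 by default
    reverse_proxy 127.0.0.1:9428
}
```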
HTTP 301 is "moved permanently." Browsers will cache this response and
never send the request to the real server again. We need to use a
temporary redirect, such as "see other" to avoid getting stuck in a
login loop.
Frigate has evolved a lot over the past year or so since v0.13.
Notably, some of the configuration options have been renamed, and
_events_ have become _alerts_ and _detections_. There's also now
support for authentication, though we don't need it because we're using
Authelia.
Although running `dnf` from the command line works without explicitly
configuring the proxy, because it inherits the environment variables set
by PAM on login from the user's shell, the `dnf` Ansible module does
not, as it does not inherit those variables. Thus, we need to
explicitly configure the `proxy` setting in `dnf.conf` in order to be
able to install packages via Ansible.
Since `dnf` does not have separate settings for different protocols
(e.g. HTTP, HTTPS, FTP), we need a way to specify which of the
configured proxies to use if there are multiple. As such, the
*useproxy* role will attempt to use the value of the `dnf_proxy`
variable, if it is set, falling back to `yum_proxy` and finally
`http_proxy`. This should cover most situations without any explicit
configuration, but allows flexibility for other cases.
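A sketch of how that fallback chain can be expressed (the task layout is illustrative; the variable names match the ones above):

```yaml
- name: Configure the proxy for dnf
  community.general.ini_file:
    path: /etc/dnf/dnf.conf
    section: main
    option: proxy
    value: "{{ dnf_proxy | default(yum_proxy) | default(http_proxy) }}"
  when: dnf_proxy is defined or yum_proxy is defined or http_proxy is defined
```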
The Unifi Network controller runs a syslog server (listening on UDP port
5514) where Unifi devices can send their logs. We need to open the port
in the firewall in order for it to receive log messages and write them
to disk.
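An illustrative task for this (the zone is left at the default):

```yaml
- name: Allow UniFi devices to send syslog to the controller
  ansible.posix.firewalld:
    port: 5514/udp
    permanent: true
    immediate: true
    state: enabled
```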
Some time ago, _libvirt_ was refactored to use separate daemons and
sockets for each of its responsibilities, and the original "monolithic"
`libvirtd` was made obsolete. The Fedora packages have more recently
been adjusted to favor this new approach, and now default to omitting
the monolithic daemon entirely (when `install_weak_deps` is disabled).
One interesting packaging snafu, though, is that without the weak
dependencies, there is _no_ way for clients to connect by default.
Clients run `which virt-ssh-helper` to see if it is installed, which it
is, but `which` is not. They then fall back to running `nc`, which is
_also_ not installed. So even though the tools they actually need are
present, their logic for detecting this is broken. As such, we need to
explicitly install `which` to satisfy them.
The _linuxserver.io_ image for UniFi Network is deprecated. It sucked
anyway. I've created a simple image based on Debian that installs the
_unifi_ package from the upstream apt repository. This image doesn't
require running anything as _root_, so it doesn't need a user namespace.
I continually struggle with machines' (physical and virtual, even the
Roku devices!) clocks getting out of sync. I have been putting off
fixing this because I wanted to set up a Windows-compatible NTP server
(i.e. on the domain controllers, with Kerberos signing), but there's
really no reason to wait for that to fix the clocks on all the
non-Windows machines, especially since there are exactly 0 Windows
machines on the network right now.
The *chrony* role and corresponding `chrony.yml` playbook are generic,
configured via the `chrony_pools`, `chrony_servers`, and `chrony_allow`
variables. The values for these variables will configure the firewall
to act as an NTP server, synchronizing with the NTP pool on the
Internet, while all other machines will synchronize with it. This
allows machines on networks without Internet access to keep their clocks
in sync.
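For example (pool name and subnet are placeholders), the variables might be set along these lines:

```yaml
# On the firewall, which acts as the NTP server for the site:
chrony_pools:
  - pool.ntp.org
chrony_allow:
  - 172.30.0.0/16

# On every other machine:
chrony_servers:
  - firewall.example.internal
```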
This role can ensure PostgreSQL users and databases are created for
applications that are not themselves managed by Ansible. Notably, we
need to do this for anything deployed in Kubernetes that uses the
central database server.
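A hedged sketch of the kind of tasks involved, using the _community.postgresql_ collection (the application name and password variable are placeholders):

```yaml
- name: Ensure the application role exists
  community.postgresql.postgresql_user:
    name: someapp
    password: "{{ someapp_db_password }}"

- name: Ensure the application database exists
  community.postgresql.postgresql_db:
    name: someapp
    owner: someapp
```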
The _iscsi.socket_ unit gets enabled by default when the
_iscsi-initiator-utils_ package is installed, but it won't start
automatically until the next boot. Without this service running,
Longhorn volumes will not be able to attach to the node, so we need to
explicitly ensure it is running before any workloads are assigned to the
node.
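Something along these lines handles it (the unit name follows the description above):

```yaml
- name: Ensure the iSCSI socket is active
  ansible.builtin.systemd:
    name: iscsi.socket
    state: started
    enabled: true
```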
Fedora 41 introduced versioned package names for Kubernetes components,
including CRI-O. The intent is to allow multiple versions of Kubernetes
to be available (but not necessarily installed) within a given Fedora
release. In order to use these packages, we need to set the desired
Kubernetes version, via the new `kubernetes_version` Ansible variable.
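For illustration, with `kubernetes_version` set to e.g. `"1.32"` (the package names assume Fedora's versioned naming scheme):

```yaml
- name: Install versioned Kubernetes and CRI-O packages
  ansible.builtin.dnf:
    name:
      # e.g. cri-o1.32 and kubernetes1.32-kubeadm
      - "cri-o{{ kubernetes_version }}"
      - "kubernetes{{ kubernetes_version }}-kubeadm"
    state: present
```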
At some point, the _qemu-kvm_ package became a meta-package that
installs _everything_ QEMU-related. All drivers, backends, frontends,
etc. get pulled in, which results in a huge amount of wasted space.
Recently, the VM hosts started getting alerts about their `/` filesystem
getting too full, which is how I discovered this.
We can dramatically reduce the disk space footprint by installing only
the "core" package and the drivers we need for our servers.
After making and applying this change, which marks the listed packages
as "leaf" installs, I then manually uninstalled the _qemu-kvm_ package.
This uninstalled everything else that is not specifically listed.
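A rough sketch of the narrower package set (the driver packages listed are examples, not the exact set used on the VM hosts):

```yaml
- name: Install only the QEMU components we need
  ansible.builtin.dnf:
    name:
      - qemu-kvm-core
      - qemu-device-display-virtio-gpu
      - qemu-device-usb-host
    state: present
```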
Like the _blackbox-exporter_ role, the _vmagent_ role now deploys
`vmagent` as a container. This simplifies the process considerably,
eliminating the download/transfer step.
While refactoring this role, I also changed how the trusted CA
certificates are handled. Rather than copy files, the role now expects
a `vmagent_ca_certs` variable. This variable is a mapping of
certificate name (file name without extension) to PEM contents. This
allows certificates to be defined using normal host/group variables.
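For example (the certificate name and contents are placeholders):

```yaml
vmagent_ca_certs:
  internal-ca: |
    -----BEGIN CERTIFICATE-----
    MIIB...
    -----END CERTIFICATE-----
```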
Instead of downloading the `blackbox_exporter` binary from GitHub and
copying it to the managed node, the _blackbox-exporter_ role now
installs _podman_ and configures a systemd container unit (Quadlet) to
run it in a container. This simplifies the deployment considerably, and
will make updating easier (just run the playbook with `-e
blackbox_exporter_pull_image=true`).
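A rough sketch of such a Quadlet unit (the image tag, port mapping, and volume path are illustrative):

```ini
# blackbox-exporter.container (hypothetical)
[Unit]
Description=Prometheus blackbox exporter

[Container]
Image=quay.io/prometheus/blackbox-exporter:latest
PublishPort=9115:9115
Volume=/etc/blackbox_exporter:/etc/blackbox_exporter:Z

[Install]
WantedBy=multi-user.target
```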
I want to use Gitea as the canonical source for Anaconda kickstart
scripts. There are certain situations, however, where they cannot be
accessed via HTTPS, such as on a Raspberry Pi without an RTC, since it
cannot validate the certificate without the correct time. Thus, the
web server must not force an HTTPS redirect for these, but serve them
directly.
Jellyfin is one of those stupid programs that thinks it needs to mutate
its own config. At startup, it apparently reads `system.xml` and then
writes it back out. When it does this, it trims the final newline from
the file. Then, the next time Ansible runs, the template rewrites the
file with the trailing newline, and thus determines that the file has
changed and restarts the service. This cycle has been going on for a
while and is rather annoying.
The systemd unit configuration installed by Fedora's _kubeadm_ package
does not pass the `--config` argument to the kubelet service. Without
this argument, the kubelet will not read the configuration file
generated by `kubeadm` from the `kubelet-config` ConfigMap. Thus,
various features will not work correctly, including server TLS
bootstrap.
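A hedged sketch of a drop-in that supplies the missing argument (the file name and the other arguments the real unit passes are assumptions):

```ini
# /etc/systemd/system/kubelet.service.d/20-kubeadm.conf (hypothetical)
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet \
    --config=/var/lib/kubelet/config.yaml \
    --kubeconfig=/etc/kubernetes/kubelet.conf
```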
In the spirit of replacing bloated tools that have unnecessary functionality
with smaller, more focused alternatives, we can use `doas` instead of
`sudo`. Originally, it was a BSD tool, but the Linux port supports PAM,
so we can still use `pam_ssh_agent_auth` for passwordless
authentication.
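Two hedged fragments showing the shape of the configuration (the group name and key file path are assumptions):

```
# /etc/doas.conf: allow the admin group to run commands as root
permit :wheel

# /etc/pam.d/doas: accept a key from a forwarded SSH agent instead of a
# password
auth  sufficient  pam_ssh_agent_auth.so file=/etc/security/authorized_keys
```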
So far, I have been managing Kubernetes worker nodes with Fedora CoreOS
Ignition, but I have decided to move everything back to Fedora and
Ansible. I like the idea of an immutable operating system, but the FCOS
implementation is not really what I want. I like the automated updates,
but that can be accomplished with _dnf-automatic_. I do _not_ like
giving up control of when to upgrade to the next Fedora release.
Mostly, I never did come up with a good way to manage application-level
configuration on FCOS machines. None of my experiments (Cue+tmpl,
KCL+etcd+Luci) were successful, which mostly resulted in my manually
managing configuration on nodes individually. Managing OS-level
configuration is also rather cumbersome, since it requires redeploying
the machine entirely. Altogether, I just don't think FCOS fits with my
model of managing systems.
This commit introduces a new playbook, `kubernetes.yml`, and a handful of
new roles to manage Kubernetes worker nodes running Fedora Linux. It
also adds two new deploy scripts, `k8s-worker.sh` and `k8s-longhorn.sh`,
which fully automate the process of bringing up worker nodes.
The target location for WAL archives and backups saved by WAL-G should
be separated based on the major version of PostgreSQL with which they
are compatible. This will make it easier to restore those backups,
since they can only be restored into a cluster of the same version.
Unfortunately, WAL-G does not natively handle this. In fact, it doesn't
really have any way of knowing the version of the PostgreSQL server it
is backing up, at least when it is uploading WAL archives. Thus, we
have to include the version number in the target path (S3 prefix)
manually. We can't rely on Ansible to do this, because there is no way
to ensure Ansible runs at the appropriate point during the upgrade
process. As such, we need to be able to modify the target location as
part of the upgrade, without causing a conflict with Ansible the next
time it runs.
To that end, I've changed how the _wal-g-pg_ role creates the
configuration file for WAL-G. Instead of rendering directly to
`wal-g.yml`, the role renders a template, `wal-g.yml.in`. This template
can include a `@PGVERSION@` specifier. The `wal-g-config` script will
then use `sed` to replace that specifier with the version of PostgreSQL
installed on the server, rendering the final `wal-g.yml`. This script
is called both by Ansible in a handler after generating the template
configuration, and also as a post-upgrade action by the
`postgresql-upgrade` script.
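A minimal sketch of the mechanism (file locations, the bucket name, and the way the server version is detected are assumptions):

```sh
# wal-g.yml.in might contain a line like:
#   WALG_S3_PREFIX: s3://postgres-backups/@PGVERSION@
#
# wal-g-config then substitutes the installed server version, roughly:
set -eu
pgversion=$(rpm -q --qf '%{VERSION}' postgresql-server | cut -d. -f1)
sed "s/@PGVERSION@/${pgversion}/g" \
    /etc/wal-g/wal-g.yml.in > /etc/wal-g/wal-g.yml
```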
I originally wanted the `wal-g-config` script to use the version of
PostgreSQL specified in the `PG_VERSION` file within the data directory.
This would ensure that WAL-G always uploads/downloads files for the
matching version. Unfortunately, this introduced a dependency conflict:
the WAL-G configuration needs to be present before a backup can be
restored, but the data directory is empty until after the backup has
been restored. Thus, we have to use the installed server version,
rather than the data directory version. This leaves a small window
where WAL-G may be configured to point to the wrong target if the
`postgresql-upgrade` script fails and thus does not trigger regenerating
the configuration file. This could result in new WAL archives/backups
being uploaded to the old target location. These files would be
incompatible with the other files in that location, and could
potentially overwrite existing files. This is rather unlikely, since
the PostgreSQL server will not start if the _postgresql-upgrade.service_
failed. The only time it should be possible is if the upgrade fails in
such a way that it leaves an empty but valid data directory, and then
the machine is rebooted.
The `postgresql-upgrade` script will now run any executables located in
the `/etc/postgresql/post-upgrade.d` directory. This will allow making
arbitrary changes to the system after a PostgreSQL major version
upgrade. Notably, we will use this capability to change the WAL-G
configuration to upload WAL archives and backups to the correct
version-specific location.
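The hook execution amounts to something like this (a sketch; the real script may differ, e.g. by using `run-parts`):

```sh
for hook in /etc/postgresql/post-upgrade.d/*; do
    [ -x "$hook" ] && "$hook"
done
```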
There's a bit of a dependency loop between the _postgresql-server_ role
and other roles that supplement it, like _wal-g-pg_ and
_postgresql-cert_. The latter roles need PostgreSQL installed, but when
those roles are used, the server cannot be started until they have been
applied. To resolve this situation, I've broken out the initial
installation steps from the _postgresql-server_ role into
_postgresql-server-base_. Roles that need PostgreSQL installed, but
need to be applied before the server can start, can depend on this role.
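For example, a supplementary role can declare the dependency in its metadata (illustrative):

```yaml
# roles/wal-g-pg/meta/main.yml
dependencies:
  - role: postgresql-server-base
```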
The `postgresql-upgrade.sh` script arranges to run `pg_upgrade` after a
major PostgreSQL version update. It's scheduled by a systemd unit,
_postgresql-upgrade.service_, which runs only after an OS update.
Tasks that must run as the _postgres_ user need to explicitly enable
`become`, in case it is not already enabled at the playbook level. This
can happen, for example, when the playbook is running directly as root.
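For illustration (the task itself is a placeholder):

```yaml
- name: Run a query as the postgres user
  community.postgresql.postgresql_query:
    db: postgres
    query: SELECT 1
  # become must be set here so become_user takes effect even when the
  # play is already running as root
  become: true
  become_user: postgres
```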
Using `tmux`, we can spawn a bunch of `picocom` processes for the serial
ports connected to other servers' console ports. The
_serial-terminal-server_ service manages the `tmux` server process,
while the individual _serial-terminal-server-window@.service_ units
create a window in the `tmux` session.
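Roughly, each instance of the template unit does something like this (the session name, baud rate, and unit options are illustrative):

```ini
[Service]
Type=oneshot
ExecStart=/usr/bin/tmux new-window -t console -n %i picocom -b 115200 /dev/%i
```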
The serial terminal server runs as a dedicated user. The SSH server is
configured to force this user to connect to the `tmux` session. This
should help ensure the serial consoles are accessible, even if the
Active Directory server is unavailable.
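The forced command in _sshd_config_ looks roughly like this (the user and session names are placeholders):

```
Match User serial-console
    ForceCommand /usr/bin/tmux attach-session -t console
    PasswordAuthentication no
```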
WAL-G slows down significantly when too many backups are kept. We need
to periodically clean up old backups to maintain a reasonable level of
performance, and also keep from wasting space with useless old backups.
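For example, a retention command along these lines can be scheduled (keeping seven full backups is an arbitrary example value):

```sh
wal-g delete retain FULL 7 --confirm
```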
I want to publish the _20125_ Status application to an F-Droid
repository to make it easy for Tabitha to install and update. F-Droid
repositories are similar to other package repositories: a collection of
packages and some metadata files. Although there is a fully-fledged
server-side software package that can manage F-Droid repositories, it's
not required: the metadata files can be pre-generated and then hosted by
a static web server just fine.
This commit adds configuration for the web server and reverse proxy to
host the F-Droid repository at _apps.du5t1n.xyz_.
Bitwarden has not worked correctly for clients using the non-canonical
domain name (i.e. _bitwarden.pyrocufflink.blue_) for quite some time.
This still trips me up occasionally, though, so hopefully adding a
server-side redirect will help. Eventually, I'll probably remove the
non-canonical name entirely.
Although listening on only an IPv6 socket works fine for the HTTP
front-end, it results in HAProxy logging client requests as IPv4-mapped
IPv6 addresses. For visual processing, this is ok, but it breaks Loki's
`ip` filter.
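One way to address this (a sketch, not the actual front-end configuration) is to bind separate IPv4 and IPv6 sockets so client addresses are logged in their native form:

```
frontend http-in
    bind 0.0.0.0:80
    bind [::]:80 v6only
```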