The _pyrocufflink.net_ site now obtains its certificate from Let's
Encrypt using the Apache _mod_md_ (managed domain) module. This
dramatically simplifies the deployment of this certificate, eliminating
the need for _cert-manager_ to obtain it, _cert-exporter_ to add it to
_certs.git_, and Jenkins to push it out to the web server.
To avoid having separate certificates for the canonical
_www.hatchlearningcenter.org_ site and all the redirects, we'll combine
these virtual hosts into one. We can use a `RewriteCond` to avoid the
redirect for the canonical name itself.
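A sketch of the combined virtual host, assuming the redirect is done with _mod_rewrite_; the alias list here is illustrative:

```
<VirtualHost *:443>
    ServerName www.hatchlearningcenter.org
    ServerAlias hatchlearningcenter.org

    RewriteEngine On
    # Only redirect when the request is not already for the canonical name
    RewriteCond %{HTTP_HOST} !^www\.hatchlearningcenter\.org$ [NC]
    RewriteRule ^ https://www.hatchlearningcenter.org%{REQUEST_URI} [R=permanent,L]
</VirtualHost>
```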
The Nextcloud administration overview page listed a bunch of deployment
configuration warnings that needed to be addressed (settings sketched
after the list):
* Set the default phone region
* Define a maintenance window starting at 0600 UTC
* Increase the PHP memory limit to 1GiB
* Increase the PHP OPCache interned strings buffer size
* Increase the allowed PHP OPcache memory limit
* Fix Apache rewrite rules for /.well-known paths
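A sketch of the settings these map to; the phone region, maintenance window hour, cache sizes, and paths shown here are illustrative values, not necessarily what was deployed:

```
# From the Nextcloud installation directory, as the web server user:
sudo -u apache php occ config:system:set default_phone_region --value=US
sudo -u apache php occ config:system:set maintenance_window_start --type=integer --value=6

# PHP / OPcache overrides (php.ini or the FPM pool):
#   memory_limit = 1G
#   opcache.interned_strings_buffer = 32
#   opcache.memory_consumption = 256

# Apache rewrites recommended by the Nextcloud docs:
#   Redirect 301 /.well-known/carddav /remote.php/dav
#   Redirect 301 /.well-known/caldav  /remote.php/dav
```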
Because the reverse proxy does TLS pass-through instead of termination,
the original source address is lost. Since the source address is
important for logging, rate limiting, and access control, we need to use
the HAProxy PROXY protocol to pass it along to the web server.
Since the PROXY protocol works at the TCP layer, _all_ connections must
use it. Fortunately, all of the sites hosted by the public web server
are in fact public and only accessed through HAProxy. Similarly,
enabling it for one named virtual host enables it for all virtual hosts
on that port. Thus, we only have to explicitly set it for one site, and
all the rest will use it as well.
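Concretely, that means `send-proxy-v2` (or `send-proxy`) on the HAProxy server lines and `RemoteIPProxyProtocol` from _mod_remoteip_ on the Apache side. A rough sketch, with the backend name and address as placeholders:

```
backend public_web
    mode tcp
    server web0 192.0.2.10:443 send-proxy-v2 check
```

```
# In the Apache config; once enabled, it applies to every virtual host
# listening on the same address and port.
RemoteIPProxyProtocol On
```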
Instead of waking every 30 seconds, the queue loop in
`repohost-createrepo.sh` now only wakes when it receives an inotify
event indicating the queue file has been modified. To avoid missing
events that occurred while a `createrepo` process was running, there's
now an inner loop that runs until the queue is completely empty, before
returning to blocking on `inotifywait`.
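A minimal sketch of the resulting loop structure; the queue file path and the exact `createrepo` invocation are placeholders, not the script's actual contents:

```
QUEUE=/var/lib/repohost/queue

while inotifywait -qq -e modify -e close_write "$QUEUE"; do
    # Inner loop: drain the queue completely before blocking again, so
    # entries added while createrepo was running are not missed.
    while [ -s "$QUEUE" ]; do
        repo=$(head -n 1 "$QUEUE")
        sed -i 1d "$QUEUE"
        createrepo_c --update "$repo"
    done
done
```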
We don't really use this site for screenshot sharing any more. It's
still fun to look back at the old screenshots, though, so I've saved a
static snapshot of the site that plain ol' Apache can host.
Since Nextcloud uses the _pyrocufflink.net_ wildcard certificate, we can
load it directly from the Kubernetes Secret, rather than from the file
in the _certs_ submodule, just like Gitea et al.
The _dustinandtabitha.com_ site now obtains its certificate from Let's
Encrypt using the Apache _mod_md_ (managed domain) module. This
dramatically simplifies the deployment of this certificate, eliminating
the need for _cert-manager_ to obtain it, _cert-exporter_ to add it to
_certs.git_, and Jenkins to push it out to the web server.
Since the _frigate.service_ unit depends on _dev-apex_0.device_,
`/dev/apex_0` needs to have the `systemd` "tag" on its udev device info.
Without this tag, systemd will not "see" the device and thus will not
mark the `.device` unit as active.
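A single udev rule is enough to add the tag; this assumes the Coral TPU devices show up under the `apex` subsystem:

```
# e.g. /etc/udev/rules.d/65-apex.rules
SUBSYSTEM=="apex", TAG+="systemd"
```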
[fluent-bit][0] is a generic, highly-configurable log collector. It was
apparently initially developed for fluentd, but it has so many output
capabilities that it works with many different log aggregation systems,
including Victoria Logs.
Although Victoria Logs supports the Loki input format, and therefore
_Promtail_ would work, I want to try to avoid depending on third-party
repositories. _fluent-bit_ is packaged by Fedora, so there shouldn't be
any dependency issues, etc.
[0]: https://fluentbit.io
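The output side of the _fluent-bit_ configuration can point at VictoriaLogs' JSON-lines ingestion endpoint; a sketch, with the host name as a placeholder:

```
[OUTPUT]
    name              http
    match             *
    host              victorialogs.example.net
    port              9428
    uri               /insert/jsonline
    format            json_lines
    json_date_format  iso8601
```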
Logging to syslog will allow messages to be aggregated in the central
server (Loki now, Victoria Logs eventually), so I don't have to SSH into
the web server to check for errors.
The _dustin.hatch.name_ site now obtains its certificate from Let's
Encrypt using the Apache _mod_md_ (managed domain) module. This
dramatically simplifies the deployment of this certificate, eliminating
the need for _cert-manager_ to obtain it, _cert-exporter_ to add it to
_certs.git_, and Jenkins to push it out to the web server.
The _chmod777.sh_ site now obtains its certificate from Let's
Encrypt using the Apache _mod_md_ (managed domain) module. This
dramatically simplifies the deployment of this certificate, eliminating
the need for _cert-manager_ to obtain it, _cert-exporter_ to add it to
_certs.git_, and Jenkins to push it out to the web server.
With the transition to modular _libvirt_ daemons, the SELinux policy is
a bit more granular. Unfortunately, the new policy has a funny [bug]: it
assumes directories named `storage` under `/run/libvirt` must be for
_virtstoraged_ and labels them as such, which prevents _virtnetworkd_
from managing a virtual network named `storage`.
To work around this, we need to give `/run/libvirt/network` a special
label so that its children do not match the file transition pattern for
_virtstoraged_ and thus keep their `virtnetworkd_var_run_t` label.
[bug]: https://bugzilla.redhat.com/show_bug.cgi?id=2362040
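One way to express the workaround (a sketch; the exact type used may differ from what the role actually applies):

```
semanage fcontext -a -t virtnetworkd_var_run_t '/run/libvirt/network(/.*)?'
restorecon -Rv /run/libvirt/network
```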
If the _libvirt_ daemon has not fully started by the time `vm-autostart`
runs, we want it to fail and try again shortly. To allow this, we first
attempt to connect to the _libvirt_ socket, and if that fails, stop
immediately and try again in a second. This way, the first few VMs
don't get skipped with the assumption that they're missing, just because
the daemon wasn't ready yet.
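A sketch of the readiness check, assuming the unit is configured to retry on failure (e.g. `Restart=on-failure` with `RestartSec=1`):

```
# At the top of vm-autostart: bail out, and let systemd restart us, if the
# libvirt daemon is not answering on its socket yet.
if ! virsh --connect qemu:///system --readonly uri > /dev/null 2>&1; then
    echo 'libvirt is not ready yet; will retry' >&2
    exit 1
fi
```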
_libvirt_ has gone full Polkit, which doesn't work with systemd dynamic
users. So, we have to run `vm-autostart` as root (with no special
OS-level privileges) in order for Polkit to authorize the connection to
the daemon socket.
I don't know what the deal is, but restarting the _victoria-logs_
container makes it lose inbound network connectivity. The firewall
rules that forward the ports into the container's network namespace
seem to get lost, but I can't figure out why. To fix it, I have to
flush the netfilter rules (`nft flush ruleset`) and then restart
_firewalld_ and _victoria-logs_ to recreate them. This is rather
cumbersome, and since Victoria Logs runs on a dedicated VM, there's
really not much advantage to isolating the container's network.
The _apps.du5t1n.xyz_ site now obtains its certificate from Let's
Encrypt using the Apache _mod_md_ (managed domain) module. This
dramatically simplifies the deployment of this certificate, eliminating
the need for _cert-manager_ to obtain it, _cert-exporter_ to add it to
_certs.git_, and Jenkins to push it out to the web server.
Apache supports fetching server certificates via ACME (e.g. from Let's
Encrypt) using a new module called _mod_md_. Configuring the module is
fairly straightforward, mostly consisting of `MDomain` directives that
indicate what certificates to request. Unfortunately, there is one
rather annoying quirk: the certificates it obtains are not immediately
available to use, and the server must be reloaded in order to start
using them. Fortunately, the module provides a notification mechanism
via the `MDNotifyCmd` directive, which will run the specified command
after obtaining a certificate. The command is executed with the
privileges of the web server, which does not have permission to reload
itself, so we have to build in some indirection to trigger the reload:
the notification command runs a script that creates an empty file in the
server's state directory; a systemd path unit watches for that file,
starts another service unit that performs the actual reload, and then
removes the trigger file.
Website roles, etc. that want to switch to using _mod_md_ to manage
their certificates should depend on this role and add an `MDomain`
directive to their Apache configuration file fragments.
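A sketch of how the pieces fit together; the contact address, script path, trigger file location, and unit names below are placeholders:

```
# Base role: global mod_md configuration
MDCertificateAgreement accepted
MDContactEmail hostmaster@example.com
MDNotifyCmd /usr/local/libexec/httpd-md-notify

# Website role fragment: declare the managed domain and enable TLS;
# mod_md supplies the certificate, so no SSLCertificateFile is needed.
MDomain example.com www.example.com
<VirtualHost *:443>
    ServerName example.com
    SSLEngine on
</VirtualHost>
```

The reload indirection can then be a pair of small systemd units watching for the trigger file:

```
# httpd-md-reload.path
[Path]
PathExists=/run/httpd/md-reload-needed

[Install]
WantedBy=multi-user.target

# httpd-md-reload.service
[Service]
Type=oneshot
ExecStart=/usr/bin/systemctl reload httpd.service
ExecStartPost=/usr/bin/rm -f /run/httpd/md-reload-needed
```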
We don't want the iSCSI and NFS client tools to be installed on control
plane nodes. Let's move this task to the _k8s-worker_ role so it will
only apply to worker nodes.
Since the _haproxy_ role relies on other roles to provide drop-in
configuration files for actual proxy configuration, we cannot start the
service in the base role. If there are any issues with the drop-in
files that are added later, the service will not be able to start,
causing the playbook to fail and thus never be able to update the broken
configuration. The dependent roles need to be responsible for starting
the service once they have put their configuration files in place.
The _haproxy_ role only installs HAProxy and provides some basic global
configuration; it expects another role to depend on it and provide
concrete proxy configuration with drop-in configuration files. Thus, we
need a role specifically for the Kubernetes control plane nodes to
provide the configuration to proxy for the API server.
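A sketch of the kind of drop-in this role provides; the bind port and node addresses are placeholders:

```
frontend k8s-api
    mode tcp
    bind :::8443 v4v6
    default_backend k8s-api

backend k8s-api
    mode tcp
    option tcp-check
    default-server inter 5s
    server cp01 192.0.2.11:6443 check
    server cp02 192.0.2.12:6443 check
    server cp03 192.0.2.13:6443 check
```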
[keepalived][0] is a free implementation of the Virtual Router
Redundancy Protocol (VRRP), which is a simple method for automatically
assigning an IP address to one of several potential hosts based on
certain criteria. It is particularly useful in conjunction with a load
balancer like HAProxy, to provide layer 3 redundancy in addition to
layer 7. We will use it for both the reverse proxy for the public
websites and the Kubernetes API server.
[0]: https://www.keepalived.org/
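A minimal `keepalived.conf` sketch; the instance name, interface, router ID, and virtual address are placeholders:

```
vrrp_instance public_web {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.0.2.80/24
    }
}
```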
`/etc/containers/registries.conf.d` is distinct from
`/etc/containers/registries.d`. The latter contains YAML files relating
to image signatures, while the former contains TOML files relating to
registry locations.
It turns out _nginx_ has a built-in default value for `access_log` and
`error_log`, even if they are omitted from the configuration file. To
actually disable writing logs to a file, we need to explicitly specify
`off`.
Using files for certificates and private keys is less than ideal.
The only way to "share" a certificate between multiple hosts is with
symbolic links, which means the configuration policy has to be prepared
for each managed system. As we're moving toward a much more dynamic
environment, this becomes problematic; the host-provisioner will never
be able to copy a certificate to a new host that was just created.
Further, I have never really liked the idea of storing certificates and
private keys in Git anyway, even if it is in a submodule with limited
access.
Now that we're serving kickstart files from the PXE server, we need to
have a correctly-configured HTTPD server, with valid HTTPS certificates,
running there.
The _containers-image_ role configures _containers-registries.conf(5)_ and
_containers-certs.d(5)_, which are used by CRI-O (and `podman`).
Specifically, we'll use these to redirect requests for images on Docker
Hub (docker.io) to the internal caching proxy.
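The drop-in ends up looking something like this (the mirror host name is a placeholder):

```
# /etc/containers/registries.conf.d/docker-io-mirror.conf
[[registry]]
prefix = "docker.io"
location = "docker.io"

[[registry.mirror]]
location = "registry-mirror.example.net"
```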
Docker Hub's rate limits are so low now that they've started to affect
my home lab. Deploying a caching proxy and directing all pull requests
through it should prevent exceeding the limit. It will also let
containers keep starting when access to the Internet is down, as long as
their images have been cached recently.
The *lego-nginx* role automates obtaining certificates for *nginx* via
ACME using `lego`. It generates a shell script with the appropriate
arguments for `lego run`, runs it once to obtain a certificate
initially, then schedules it to run periodically via a systemd timer
unit. Using `lego`'s "hook" capability, the script signals the `nginx`
server process to reload. This uses `doas` for now, but could be
adapted easily to use `sudo`, if the need ever arises.
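A sketch of what the generated script might look like; the email address, webroot, domain, and storage path are placeholders, and the initial run simply substitutes `run` for the `renew --renew-hook` invocation:

```
#!/bin/sh
set -eu

exec /usr/bin/lego \
    --accept-tos \
    --email hostmaster@example.com \
    --path /etc/lego \
    --http --http.webroot /var/www/acme \
    --domains www.example.com \
    renew --renew-hook 'doas nginx -s reload'
```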
Now that kickstart scripts are generated from templates by a Jenkins
job, they need to be stored somewhere besides Gitea. It makes sense to
serve them from the PXE server, since it's involved in the installation
process anyway (at least for physical machines). Thus, we need a path
where the generated files can be uploaded by Jenkins and served by
Apache.
The version of Samba in Fedora 42 has got some really weird bugs. In
this case, it seems `net ads kerberos kinit -P` no longer works. It
prints a vague `NT_STATUS_INTERNAL_ERROR` message, with no other
indication of what went wrong. Fortunately, it's still possible to get
a ticket-granting ticket for the machine account using the host keytab.
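The fallback amounts to something like this, assuming the machine account principal is present in the default host keytab:

```
# Instead of `net ads kerberos kinit -P`, get a TGT for the machine
# account straight from /etc/krb5.keytab:
kinit -k "$(hostname -s | tr '[:lower:]' '[:upper:]')\$"
```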
We don't want `podman` pulling a new container image and updating
without our consent. The image will already be there on the first
start, since we pulled it in an Ansible task.
The `:Z` flag tells the container runtime to run `chcon` recursively on
the specified path, in order to ensure that the files are accessible
inside the container. For a very large volume like the MinIO storage
directory, this can take an extremely long time. It's really only
necessary on the first startup anyway, because the context won't change
after that. To avoid spending a bunch of time, we can set the context
correctly when we create the directory, and then not worry about it
after that.
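In Ansible terms, that can be as simple as creating the directory with the `container_file_t` type up front; the path and ownership here are placeholders:

```
- name: Create MinIO storage directory with the container file context
  ansible.builtin.file:
    path: /srv/minio
    state: directory
    owner: minio
    group: minio
    mode: "0750"
    setype: container_file_t   # what the :Z flag would have applied via chcon
```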
Using the Kubernetes API to create bootstrap tokens makes it possible
for the host-provisioner to automatically add new machines to the
Kubernetes cluster. The host provisioner cannot connect to existing
machines, and thus cannot run the `kubeadm token create` command on
a control plane node. With the appropriate permissions assigned to the
service account associated with the pod it runs in, though, it can
directly create the secret via the API.
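A bootstrap token is just a specially-formed Secret in `kube-system`, so the host-provisioner can create something like this directly (the token value and expiration are placeholders):

```
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-abcdef
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: abcdef
  token-secret: 0123456789abcdef
  expiration: "2025-06-01T06:00:00Z"
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:kubeadm:default-node-token
```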
There are actually two pieces of information required for a node to
join a cluster, though: a bootstrap token and the CA certificate. When
using the `kubeadm token create` command to issue a bootstrap token, it
also provides (a hash of) the CA certificate with the command it prints.
When creating the token manually, we need an alternative method for
obtaining and distributing the CA certificate, so we use the
`cluster-info` ConfigMap. This contains a stub `kubeconfig` file that
includes the CA certificate and can be passed to the `kubeadm join`
command along with a join configuration file. Generating both of these files
may be a bit more involved than computing the CA certificate hash and
passing that on the command line, but there are a couple of advantages.
First, it's more extensible, as the join configuration file can specify
additional configuration for the node (which we may want to use later).
It's also somewhat more secure, since the token is not passed as a
command-line argument.
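The join configuration handed to `kubeadm join` ends up looking roughly like this; the paths and token value are placeholders, and the stub kubeconfig is taken verbatim from the `data.kubeconfig` key of the `kube-public/cluster-info` ConfigMap:

```
# join.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  file:
    # The stub kubeconfig (with the CA certificate) from the cluster-info ConfigMap
    kubeConfigPath: /etc/kubernetes/cluster-info.yaml
  tlsBootstrapToken: abcdef.0123456789abcdef
```

The node then joins with `kubeadm join --config join.yaml`, so the token never appears on the command line.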
Interestingly, the most difficult part of this implementation was
getting the expiration timestamp. Ansible exposes very little date math
capability; notably lacking is the ability to construct a `timedelta`
object, so the only way to get a timestamp in the future is to convert
the `datetime` object returned by `now` to a Unix timestamp and add some
number of seconds to it. Further, there is no direct way to get a
`datetime` object from the computed Unix timestamp value, but we can
rely on the fact that Python class methods can be called on instances,
too, so `now().fromtimestamp()` works the same as
`datetime.fromtimestamp()`.
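Concretely, the expression ends up looking something like this; the 24-hour lifetime and the variable name are placeholders:

```
token_expiration: "{{ now(utc=true).fromtimestamp(now(utc=true).timestamp() + 86400).strftime('%Y-%m-%dT%H:%M:%SZ') }}"
```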
I've become rather frustrated with Grafana Loki lately. It has several
bugs that affect my usage, including issues with counting and
aggregation, completely broken retention and cleanup, spamming itself
with bogus error log messages, and more. Now that VictoriaLogs has
first-class support in Grafana and support for alerts, it seems like a
good time to try it out. It's under very active development, with bugs
getting fixed extremely quickly, and new features added constantly.
Indeed, as I was experimenting with it, I thought, "it would be nice if
the web UI could decode ANSI escapes for terminal colors," and just a
few days later, that feature was added! Native support for syslog is
also a huge benefit, as it will allow me to collect logs directly from
network devices, without first collecting them into a file on the Unifi
controller.
This new role deploys VictoriaLogs in a manner very similar to how I
have Loki set up, as a systemd-managed Podman container. As it has no
built-in authentication or authorization, we rely on Caddy to handle
that. As with Loki, mTLS is used to prevent anonymous access to log
queries; authentication via Authelia is also an option for
human+browser usage. I'm re-using the same certificate
authority as with Loki to simplify Grafana configuration. Eventually, I
would like to have a more robust PKI, probably using OpenBao, at which
point I will (hopefully) have decided which log database I will be
using, and can use a proper CA for it.
HTTP 301 is "moved permanently." Browsers will cache this response and
never send the request to the real server again. We need to use a
temporary redirect, such as HTTP 303 ("see other"), to avoid getting
stuck in a login loop.
Frigate has evolved a lot over the past year or so since v0.13.
Notably, some of the configuration options have been renamed, and
_events_ have become _alerts_ and _detections_. There's also now
support for authentication, though we don't need it because we're using
Authelia.
Although running `dnf` from the command line works without explicitly
configuring the proxy, because it inherits the proxy environment
variables that PAM sets at login, the `dnf` Ansible module does not
inherit those variables. Thus, we need to
explicitly configure the `proxy` setting in `dnf.conf` in order to be
able to install packages via Ansible.
Since `dnf` does not have separate settings for different protocols
(e.g. HTTP, HTTPS, FTP), we need a way to specify which of the
configured proxies to use if there are multiple. As such, the
*useproxy* role will attempt to use the value of the `dnf_proxy`
variable, if it is set, falling back to `yum_proxy` and finally
`http_proxy`. This should cover most situations without any explicit
configuration, but allows flexibility for other cases.
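The fallback chain can be expressed directly in the template that renders `dnf.conf`, assuming at least one of the variables is defined:

```
[main]
proxy={{ dnf_proxy | default(yum_proxy) | default(http_proxy) }}
```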
The Unifi Network controller runs a syslog server (listening on UDP port
5514) where Unifi devices can send their logs. We need to open the port
in the firewall in order for it to receive log messages and write them
to disk.
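A sketch of the corresponding task, assuming the default firewalld zone:

```
- name: Allow Unifi devices to send syslog to the controller
  ansible.posix.firewalld:
    port: 5514/udp
    permanent: true
    immediate: true
    state: enabled
```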