Zigbee2MQTT now has a web GUI, which makes it *way* easier to manage the
Zigbee network. Now that I've got all the Philips Hue bulbs controlled
by Zigbee2MQTT instead of the Hue Hub, having access to the GUI is
awesome.
Gitea package names (e.g. OCI images) can contain `/` characters.
These are encoded as `%2F` in request paths. Apache needs to forward
these sequences to the Gitea server without decoding them.
Unfortunately, the `AllowEncodedSlashes` setting, which controls this
behavior, is a per-virtualhost setting that is *not* inherited from the
main server configuration, and therefore must be explicitly set inside
the `VirtualHost` block. This means Gitea needs its own virtual host
definition, and cannot rely on the default virtual host.
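A minimal sketch of what that virtual host might look like, assuming
Gitea listens on `localhost:3000` and is served as `git.example.com`
(both hypothetical):

```apache
<VirtualHost *:443>
    ServerName git.example.com

    # Pass %2F sequences through to Gitea without decoding them
    AllowEncodedSlashes NoDecode

    # "nocanon" keeps mod_proxy from re-encoding/canonicalizing the path
    ProxyPass / http://127.0.0.1:3000/ nocanon
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>
```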
I moved the metrics Pi from the red network to the blue network. I
started to get uncomfortable with the firewall changes that were
required to host a service on the red network. I think it makes the
most sense to define the red network as egress only.
The only major change that affects the configuration policy is the
introduction of the `webhook.ALLOWED_HOST_LIST` setting. For some dumb
reason, the default value of this setting *denies* access to machines on
the local network. This makes no sense; why do they expect you to host
your CI or whatever on a *public* network? Of course, the only reason
given is "for security reasons."
This work-around is no longer necessary as the default Fedora policy now
covers the Samba DC daemon. It never really worked correctly anyway:
Samba doesn't start `winbindd` fast enough for the
`/run/samba/winbindd` directory to exist before systemd spawns the
`restorecon` process, so the service would usually fail to start the
first time after a reboot.
Sometimes, Frigate crashes in situations that should be recoverable or
temporary. For example, it will fail to start if the MQTT server is
unreachable initially, and does not attempt to connect more than once.
To avoid having to manually restart the service once the MQTT server is
ready, we can configure the systemd unit to enable automatic restarts.
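Something like the following drop-in would do it (the file name and
timing values are assumptions):

```ini
# /etc/systemd/system/frigate.service.d/restart.conf
[Unit]
# Don't give up after systemd's default burst of failed restarts
StartLimitIntervalSec=0

[Service]
Restart=on-failure
RestartSec=30
```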
If the *vaultwarden* service terminates unexpectedly, e.g. due to a
power loss, `podman` may not successfully remove the container. We
therefore need to try to delete it before starting it again, or `podman`
will exit with an error because the container already exists.
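A sketch of the relevant part of the unit; the container name matches
the service, everything else is illustrative:

```ini
[Service]
# "-" prefix: don't fail the unit if there is nothing to remove;
# "--ignore" makes podman itself tolerate a missing container
ExecStartPre=-/usr/bin/podman rm --force --ignore vaultwarden
# ExecStart= then runs the container as usual
```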
Both *zwavejs2mqtt* and *zigbee2mqtt* have various bugs that can cause
them to crash in the face of errors that should be recoverable.
Specifically, they do not always handle network errors well; during
initial startup in particular, they tend to crash instead of retrying.
Thus, we'll move the retry logic into systemd.
The *zwavejs2mqtt* and *zigbee2mqtt* services need to wait until the
system clock is fully synchronized before starting. If the system clock
is wrong, they may fail to validate the MQTT server certificate.
The *time-sync.target* unit is not reached until after the services
that synchronize the clock, e.g. using NTP, have started. Notably, the
*chrony-wait.service* unit delays *time-sync.target* until `chronyc
waitsync` returns.
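Ordering the services after that target is enough; a drop-in sketch
(the unit names are assumptions):

```ini
# zigbee2mqtt.service.d/time-sync.conf (and likewise for zwavejs2mqtt)
[Unit]
Wants=time-sync.target
After=time-sync.target
```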
The *vlan99* interface needs to be created and activated by
`systemd-networkd` before `dnsmasq` can start and bind to it. Ordering
the *dnsmasq.service* unit after *network.target* and
*network-online.target* should ensure that this is the case.
*libvirt*'s native autostart functionality does not work well for
machines that migrate between hosts. Machines lose their auto-start
flag when they are migrated, and the flag is not restored if they are
migrated back. This makes the feature pretty useless for us.
To work around this limitation, I've added a script, run at boot, that
starts the machines listed in `/etc/vm-autostart`, if they exist.
Entries in that file can also insert a delay between starting two
machines,
which may be useful to allow services to fully start on one machine
before starting another that may depend on them.
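Roughly, the script could work something like this; the file format
(one domain name or `sleep <seconds>` directive per line) and the use
of `virsh` are assumptions, not the exact implementation:

```sh
#!/bin/sh
# Start the virtual machines listed in /etc/vm-autostart, skipping any
# that are not defined on this host
while read -r name arg; do
    case "$name" in
        ''|'#'*) continue ;;        # skip blank lines and comments
        sleep)   sleep "$arg" ;;    # delay before starting the next VM
        *)
            if virsh dominfo "$name" >/dev/null 2>&1; then
                virsh start "$name"
            fi
            ;;
    esac
done < /etc/vm-autostart
```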
If `/` is mounted read-only, as is usually the case, the Proton VPN
watchdog cannot update the `remote_addrs` configuration file. It needs
to be stored in a directory that is guaranteed to be writable.
The *netboot/basementhud* Ansible role configures two network block
devices for the basement HUD machine:
* The immutable root filesystem
* An ephemeral swap device
The *netboot/jenkins-agent* Ansible role configures three NBD exports:
* A single, shared, read-only export containing the Jenkins agent root
filesystem, as a SquashFS filesystem
* For each defined agent host, a writable data volume for Jenkins
workspaces
* For each defined agent host, a writable data volume for Docker
Agent hosts must have some kind of unique value to identify their
persistent data volumes. Raspberry Pi devices, for example, can use the
SoC serial number.
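For illustration, the serial can be read from `/proc/cpuinfo` on a
Raspberry Pi (how the role actually consumes the value is not shown
here):

```sh
# SoC serial number, usable as a unique suffix for per-host exports
serial=$(awk '/^Serial/ {print $3}' /proc/cpuinfo)
```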
The *pxe* role configures the TFTP and NBD stages of PXE network
booting. The TFTP server provides the files used for the boot stage,
which may either be a kernel and initramfs, or another bootloader like
SYSLINUX/PXELINUX or GRUB. The NBD server provides the root filesystem,
typically mounted by code in early userspace/initramfs.
The *pxe* role also creates a user group called *pxeadmins*. Users in
this group can publish content via TFTP; they have write-access to the
`/var/lib/tftpboot` directory.
The *tftp* role installs the *tftp-server* package. There is
practically no configuration for the TFTP server. It "just works" out
of the box, as long as its target directory exists.
The *nbd-server* role configures a machine as a Network Block Device
(NBD) server, using the reference `nbd-server` implementation. It
configures a systemd socket unit to listen on the NBD port and accept
incoming connections, and a template service unit that systemd
instantiates for each accepted connection, passing it the connection
socket.
The reference `nbd-server` is actually not very good. It does not clean
up closed connections reliably, especially if the client disconnects
unexpectedly. Fortunately, systemd provides the necessary tools to work
around these bugs. Specifically, spawning one process per connection
allows processes to be killed externally. Further, since systemd
creates the listening socket, it can control the keep-alive interval.
By setting this to a rather low value, we can clean up server processes
for disconnected clients more quickly.
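A sketch of the socket unit; the keep-alive values are illustrative,
and the template service name (`nbd@.service`) is an assumption:

```ini
# nbd.socket — each accepted connection spawns an instance of nbd@.service
[Socket]
ListenStream=10809
Accept=yes
# Aggressive keep-alives so dead client connections are noticed quickly
KeepAlive=yes
KeepAliveTimeSec=60
KeepAliveIntervalSec=10
KeepAliveProbes=3

[Install]
WantedBy=sockets.target
```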
Configuration of the server itself is minimal; most of the configuration
is done on a per-export basis using drop-in configuration files. Other
Ansible roles should create these configuration files to configure
application-specific exports. Nothing needs to be reloaded or restarted
for changes to take effect; the next incoming connection will spawn a
new process, which will use the latest configuration file automatically.
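As an illustration, another role might drop a file like this into the
include directory (the path, export name, and backing file are all
hypothetical):

```ini
# /etc/nbd-server/conf.d/jenkins-agent-root.conf
[jenkins-agent-root]
exportname = /srv/nbd/jenkins-agent-root.squashfs
readonly = true
```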
Frigate needs to be able to connect to the MQTT server immediately at
startup or it will crash. Ordering the *frigate.service* unit after
*network-online.target* will help ensure Frigate starts when the system
boots.
The *systemd-resolved* role/playbook ensures the *systemd-resolved*
service is enabled and running, and ensures that the `/etc/resolv.conf`
file is a symlink to the appropriate managed configuration file.
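A minimal sketch of the symlink task; whether the role points at the
stub resolver file or the full one is an assumption here:

```yaml
- name: Ensure /etc/resolv.conf points at systemd-resolved
  file:
    src: /run/systemd/resolve/stub-resolv.conf
    dest: /etc/resolv.conf
    state: link
    force: true
```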
The `-external.url` and `-external.alert.source` command line arguments
and their corresponding environment variables can be used to configure
the "Source" links associated with alerts created by `vmalert`.
The *metricspi* machine hosts several Victoria Metrics-adjacent
applications.
These each expose their own HTTP interface that can be used for
debugging or introspecting state. To make these accessible on the
network, the *victoria-metrics-nginx* role now configures `proxy_pass`
directives for them in its nginx configuration.
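For illustration, the added location blocks look roughly like this; the
URL paths and upstream ports (vmalert and Alertmanager defaults) are
assumptions:

```nginx
location /vmalert/ {
    proxy_pass http://127.0.0.1:8880;
}

location /alertmanager/ {
    proxy_pass http://127.0.0.1:9093;
}
```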
The *scrape-collectd* role generates the
`/etc/prometheus/scrape-collectd.yml` file. This file can be read by
Prometheus/Victoria Metrics/vmagent to identify the hosts running
*collectd* with the *write_prometheus* plugin, using the
`file_sd_configs` scrape configuration option.
All hosts in the *collectd-prometheus* group are listed as scrape
targets.
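A hypothetical scrape job consuming the generated file might look like
this (collectd's *write_prometheus* plugin listens on port 9103 by
default):

```yaml
scrape_configs:
  - job_name: collectd
    file_sd_configs:
      - files:
          - /etc/prometheus/scrape-collectd.yml
```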
*mtrcs0.pyrocufflink.red* is a Raspberry Pi CM4 on a Waveshare
CM4-IO-BASE-B carrier board with an NVMe SSD. It runs a custom OS built
using Buildroot, and is not a member of the *pyrocufflink.blue* AD
domain.
*mtrcs0.p.r* hosts Victoria Metrics/`vmagent`, `vmalert`, AlertManager,
and Grafana. I've created a unique group and playbook for it,
*metricspi*, to manage all these applications together.
The `grafana_ldap_root_ca_cert` variable can be used to set the path to
the root CA certificate (bundle) Grafana uses to validate the certificate
presented by the configured LDAP server. By default, Grafana uses the
system root CA trust store, but this variable can be used in situations
where this is not suitable.
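The value presumably ends up in Grafana's `ldap.toml`, which has a
`root_ca_cert` option per server; the host and path here are
hypothetical:

```toml
[[servers]]
host = "ldap.example.com"
# Root CA bundle used to verify the LDAP server's certificate
root_ca_cert = "/etc/pki/tls/certs/internal-ca.pem"
```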
`vmalert` is a component of Victoria Metrics. It handles alerting and
recording rules, periodically executing queries and dispatching alerts
or writing aggregated data back to the TSDB.
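Rules use the standard Prometheus format; a trivial example with one
alerting and one recording rule (names and expressions are
illustrative):

```yaml
groups:
  - name: example
    rules:
      # Alerting rule: fire when a scrape target has been down for 5 minutes
      - alert: TargetDown
        expr: up == 0
        for: 5m
      # Recording rule: pre-compute a frequently-used aggregation
      - record: job:up:sum
        expr: sum by (job) (up)
```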
The Prometheus *blackbox_exporter* is a tool that can perform arbitrary,
generic ICMP, TCP, or HTTP "probes" against external services. This is
useful for applications that do not export their own metrics, and for
evaluating the health of protocol-level operations (e.g. TLS
certificate expiration).
The *blackbox-exporter* Ansible role installs and configures the
Blackbox Exporter on the target system. It fetches the specified binary
release from GitHub and copies it to the remote machine. It also
creates a systemd unit and configures the Blackbox exporter's "modules"
from the `blackbox_modules` Ansible variable.
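The `blackbox_modules` variable presumably maps directly onto the
exporter's `modules:` configuration; for example (module names and
options are illustrative):

```yaml
modules:
  http_2xx:
    prober: http
    timeout: 10s
  icmp:
    prober: icmp
```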
Some hosts may not need this plugin, or may not have it installed.
Notably, it is not needed or used on my systems based on Buildroot,
since the only current use case for it is to keep track of the Fedora
version.
There are a few minor differences between the way Fedora and Buildroot
package *nginx*:
* Fedora uses a user named *nginx* while Buildroot uses *www-data*
* Buildroot uses a Debian-like configuration layout (with
`sites-enabled` and `modules-enabled` directories)
This commit adjusts the *nginx* Ansible role to compensate for these
differences, eschewing Buildroot's configuration layout for the one used
by Fedora/Red Hat.
The *victoria-metrics* role deploys a single-server instance of the
Victoria Metrics time series database server. It installs the selected
version by downloading the binary release from GitHub and copying it to
`/usr/local/sbin` on the managed node. Scrape configuration is optional
and can be specified with the `scrape_configs` variable.
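The release ships a single executable; a sketch of how it might be
launched, with the scrape configuration passed via `-promscrape.config`
(the paths and extra flags are illustrative):

```sh
/usr/local/sbin/victoria-metrics-prod \
  -storageDataPath=/var/lib/victoria-metrics \
  -retentionPeriod=12 \
  -promscrape.config=/etc/victoria-metrics/scrape.yml
```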
Tasks that configure the SELinux policy obviously only make sense if the
host uses SELinux. Similarly, if the host does not use FirewallD,
configuring firewall rules doesn't work.
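A sketch of how such tasks can be guarded using the `ansible_selinux`
fact (the task itself is just an example):

```yaml
- name: Set the SELinux file context for the TFTP root
  sefcontext:
    target: '/var/lib/tftpboot(/.*)?'
    setype: tftpdir_t
    state: present
  when: ansible_selinux.status == "enabled"
```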
The `/etc/collectd.d` directory is created by the RPM package on
machines running a Red Hat-based Linux distribution, but it may not
always be present on other machines.
In addition to ignoring particular types of filesystems, e.g. OverlayFS,
we can also ignore filesystems by their mount point. This could be
useful, for example, for bind-mounted directories, such as those used on
Kubernetes nodes.
By default, the *df* plugin for collectd, which monitors filesystem
usage, collects data about all mounted filesystems. It can be
configured to ignore some filesystems, either by mount point, device, or
filesystem type. We will use this capability to avoid collecting data
about OverlayFS mounts, because by definition, they do not represent a
real filesystem, but one or more other mounted filesystems. Collecting
data about these just creates useless metrics, especially on machines
that run containers.
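For illustration, the resulting plugin block might look like this; the
OverlayFS type is the case described above, while the mount point is a
hypothetical example:

```
<Plugin df>
  # Everything matching the selectors below is ignored
  FSType "overlay"
  MountPoint "/var/lib/kubelet/pods"
  IgnoreSelected true
</Plugin>
```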
Some machines, such as the nodes in the Kubernetes cluster, do not use
*firewalld*. For these machines, we need to skip the `firewalld` tasks,
as they will fail. The `host_uses_firewalld` variable can be set to
`False` for these machines to do so.
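A sketch of guarding a firewall task with that variable (the port is
illustrative):

```yaml
- name: Open the collectd write_prometheus port
  firewalld:
    port: 9103/tcp
    permanent: true
    immediate: true
    state: enabled
  when: host_uses_firewalld | bool
```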