When the Home Assistant container restarts, Podman relabels the entire
`/var/lib/homeassistant` directory as `container_file_t`. Since the
*homeassistant* user's home directory is `/var/lib/homeassistant`, its
`~/.ssh` directory is thus also relabeled, preventing the SSH daemon
from accessing it. Since Home Assistant itself does not need access to
this path, we can tell systemd to mount an empty tmpfs filesystem there
in the service unit's mount namespace. This way, when Podman relabels
the directory, it will change the label of the tmpfs mount point instead
of the actual directory.
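A drop-in like the following should accomplish this; `TemporaryFileSystem=`
mounts an empty tmpfs over the given path in the unit's mount namespace
(the unit and drop-in names here are assumptions):
```ini
# /etc/systemd/system/homeassistant.service.d/hide-ssh.conf (assumed name)
[Service]
# Hide ~homeassistant/.ssh behind an empty tmpfs so Podman relabels the
# tmpfs mount point instead of the real directory used by sshd
TemporaryFileSystem=/var/lib/homeassistant/.ssh
```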
Most of the Synapse server's state is in its SQLite database. It also
has a `media_store` directory that needs to be backed up, though.
In order to back up the SQLite database while the server is running, the
database must be in "WAL mode." Synapse leaves the database in SQLite's
default "rollback journal" mode, which disallows multiple processes from
accessing the database, even for read-only operations.
To change the journal mode:
```sh
sudo systemctl stop synapse
sudo -u synapse sqlite3 /var/lib/synapse/homeserver.db 'PRAGMA journal_mode=WAL;'
sudo systemctl start synapse
```
The Samba KDC log file seems to grow rather quickly sometimes, outpacing
the monthly rotation policy. Let's rotate it weekly and keep 4
historical versions.
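A logrotate snippet along these lines should do it (the log file path is
an assumption):
```
/var/log/samba/samba-kdc.log {
    weekly
    rotate 4
    missingok
    notifempty
    copytruncate
}
```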
Kubernetes exports a *lot* of metrics in Prometheus format. I am not
sure what all is there, yet, but apparently several thousand time series
were added.
To allow anonymous access to the metrics, I added this ClusterRole:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- nonResourceURLs:
  - /metrics
  verbs:
  - get
```
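For reference, a ClusterRole by itself grants nothing; it has to be
bound to the anonymous user with something like this ClusterRoleBinding
(a sketch, not the actual manifest):
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:anonymous
```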
MinIO exposes metrics in Prometheus exposition format. By default, it
requires an authentication token to access the metrics, but I was unable
to get this to work. Fortunately, it can be configured to allow
anonymous access to the metrics, which is fine, in my opinion.
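For reference, this is controlled by an environment variable along these
lines (worth double-checking against the MinIO docs for the deployed
version):
```sh
# Serve the Prometheus metrics endpoints without requiring a bearer token
MINIO_PROMETHEUS_AUTH_TYPE=public
```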
The `journal2ntfy.py` script follows the systemd journal by spawning
`journalctl` as a child process and reading from its standard output
stream. Any command-line arguments passed to `journal2ntfy` are passed
to `journalctl`, which allows the caller to specify message filters.
For any matching journal message, `journal2ntfy` sends a message via
the *ntfy* web service.
For the BURP server, we're going to use `journal2ntfy` to generate
alerts about the RAID array. When I reconnect the disk that was in the
fireproof safe, the kernel will log a message from the *md* subsystem
indicating that the resynchronization process has begun. Then, when
the disks are again in sync, it will log another message, which will
let me know it is safe to archive the other disk.
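An invocation along these lines (hypothetical; the exact match arguments
will need testing) would watch kernel messages from the *md* subsystem:
```sh
# Everything after the script name is passed straight through to journalctl
./journal2ntfy.py --dmesg --grep 'md[0-9]*:.*(resync|recovery)'
```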
The `tls cafile` setting in `smb.conf` is not necessary. It is used for
verifying peer certificates for mutual TLS authentication, not to
specify the intermediate certificate authority chain like I thought.
The setting cannot simply be left out, though. If it is not specified,
Samba will attempt to load a file from a built-in default path, which
will fail, causing the server to crash. This is avoided by setting the
value to the empty string.
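In other words, the relevant `smb.conf` line ends up looking like this:
```
[global]
    tls cafile =
```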
[MinIO][0] is an S3-compatible object storage server. It is designed to
provide storage for cloud-native applications in on-premises
deployments.
MinIO has not been packaged for Fedora (yet?). As such, the best way to
deploy it is using its official container image. Here, we are using
`podman-systemd-generator` (Quadlet) to generate a systemd service
unit to manage the container process.
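The Quadlet unit ends up being something along these lines (the image
tag, port, and volume path here are assumptions, not the actual
configuration):
```ini
# /etc/containers/systemd/minio.container
[Unit]
Description=MinIO object storage server

[Container]
Image=quay.io/minio/minio:latest
Exec=server /data --console-address :9001
Volume=/var/lib/minio:/data:Z
PublishPort=9000:9000

[Install]
WantedBy=multi-user.target
```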
The HTTP->HTTPS redirect for chmod777.sh was only working by
coincidence. It needs its own virtual host to ensure it works
irrespective of how other websites are configured.
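A dedicated vhost along these lines does the job (sketch; the TLS vhost
for the site itself is omitted):
```apache
<VirtualHost *:80>
    ServerName chmod777.sh
    Redirect permanent / https://chmod777.sh/
</VirtualHost>
```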
Tabitha's Hatch Learning Center site has two user submission forms: one
for signing in/out students for class, and another for parents to
register new students for the program. These are handled by
*formsubmit* and store data in CSV spreadsheets.
If the Python bindings for SELinux policy management are not installed
when Ansible gathers host facts, no SELinux-related facts will be set.
Thus, any tasks that are conditional based on these facts will not run.
Typically, such tasks are required for SELinux-enabled hosts, but must
not be performed for non-SELinux hosts. If they are not run when they
should, the deployment may fail or applications may experience issues at
runtime.
To avoid these potential issues, the *base* role now forces Ansible to
gather facts again if it installed the Python SELinux bindings.
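The relevant tasks look roughly like this (the package and variable
names are illustrative, not the actual role code):
```yaml
- name: Ensure the SELinux Python bindings are installed
  package:
    name: python3-libselinux
    state: present
  register: selinux_bindings

- name: Gather facts again so the SELinux facts get populated
  setup:
  when: selinux_bindings is changed
```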
Note: one might suggest using `meta: clear_facts` instead of `setup` and
letting Ansible decide if and when to gather facts again. Unfortunately,
for some reason this doesn't work; the `clear_facts` meta task just
causes Ansible to crash with a "shared connection to {host} closed"
error.
The *dch-selinux* package contains customized SELinux policy modules.
I haven't worked out exactly how to build and publish it through a
continuous integration pipeline yet, so for now it's just hosted in my
user `public_html` folder on the main file server.
Samba AD DC does not implement [DFS-R for replication of the SYSVOL][0]
contents. This does not make much of a difference to me, since
the SYSVOL is really only used for Group Policy. Windows machines may
log an error if they cannot access the (basically empty) GPO files, but
that's pretty much the only effect of the SYSVOL being out of sync
between domain controllers.
Unfortunately, there is one side-effect of the missing DFS-R
functionality that does matter. On domain controllers, all user,
computer, and group accounts need to have Unix UID/GID numbers mapped.
This is different from regular member machines, which only need UID/GID
numbers for users that are allowed to log into them. LDAP entries
only have ID numbers mapped for the latter class of users, which does
not include machine accounts. As a result, Samba falls back to
generating local ID numbers for the rest of the accounts. Those ID
numbers are stored in a local database file,
`/var/lib/samba/private/idmap.ldb`. It would seem that it wouldn't
actually matter if accounts have different ID numbers on different
domain controllers, but there are evidently [situations][1] where DCs
refuse to allocate ID numbers at all, which can cause authentication to
fail. As such, the `idmap.ldb` file needs to be kept in sync.
If we're going to go through the effort of synchronizing `idmap.ldb`, we
might as well keep the SYSVOL in sync as well. To that end, I've
written a script to synchronize both the SYSVOL contents and the
`idmap.ldb` file. It performs a simple one-way synchronization using
`rsync` from the DC with the PDC emulator role, as discovered using DNS
SRV records. To ensure the `idmap.ldb` file is in a consistent state,
it only copies the most recent backup file. If the copied file differs
from the local one, the script stops Samba and restores the local
database from the backup. It then flushes Samba's caches and restarts
the service. Finally, it fixes the NT ACLs on the contents of the
SYSVOL.
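The gist of the script is roughly the following (the realm, paths, and
backup file name are assumptions; the real script has more error
handling):
```sh
#!/bin/sh
# Find the DC currently holding the PDC emulator role via its DNS SRV record
pdc=$(host -t SRV "_ldap._tcp.pdc._msdcs.${REALM}" | awk '{print $NF}' | sed 's/\.$//')

# One-way sync of the SYSVOL contents and the latest idmap.ldb backup
rsync -a --delete "${pdc}:/var/lib/samba/sysvol/" /var/lib/samba/sysvol/
rsync -a "${pdc}:/var/lib/samba/private/idmap.ldb.bak" /var/lib/samba/private/

# If the copied backup differs from the local database, restore it
if ! cmp -s /var/lib/samba/private/idmap.ldb.bak /var/lib/samba/private/idmap.ldb
then
    systemctl stop samba
    cp /var/lib/samba/private/idmap.ldb.bak /var/lib/samba/private/idmap.ldb
    net cache flush
    systemctl start samba
fi

# Fix the NT ACLs on the SYSVOL contents
samba-tool ntacl sysvolreset
```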
Since the contents of the SYSVOL are owned by root, naturally the
synchronization process has to run as root as well. To attempt to limit
the scope of control this would give the process, we use as many of the
systemd sandboxing features as possible. Further, the SSH key pairs
the DCs use to authenticate to one another are restricted to only
running rsync. As such, the `sysvolsync` script itself cannot invoke
`tdbbackup` on the remote DC to back up `idmap.ldb`. To handle that,
I've created a systemd service and corresponding timer unit to run
`tdbbackup` periodically.
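A sketch of those units (names and schedule are assumptions):
```ini
# idmap-backup.service
[Service]
Type=oneshot
ExecStart=/usr/bin/tdbbackup /var/lib/samba/private/idmap.ldb

# idmap-backup.timer
[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target
```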
I considered for a long time how to best implement this process, and
although I chose this naïve implementation, I am not exactly happy with
it. Since I do not fully understand *why* keeping
the `idmap.ldb` file in sync is necessary, there are undoubtedly cases
where blindly copying it from the PDC emulator is not correct. There
are definitely cases where the contents of the SYSVOL can be updated on
a DC besides the PDC emulator, but again, we should not run into them
because we don't really use the SYSVOL at all. In the end, I think this
solution is good enough for our needs, without being overly complicated.
[0]: https://wiki.samba.org/index.php?title=SysVol_replication_(DFS-R)&oldid=18120
[1]: https://lists.samba.org/archive/samba/2021-November/238370.html
Zigbee2MQTT now has a web GUI, which makes it *way* easier to manage the
Zigbee network. Now that I've got all the Philips Hue bulbs controlled
by Zigbee2MQTT instead of the Hue Hub, having access to the GUI is
awesome.
Gitea package names (e.g. OCI images, etc.) can contain `/` characters.
These are encoded as %2F in request paths. Apache needs to forward
these sequences to the Gitea server without decoding them.
Unfortunately, the `AllowEncodedSlashes` setting, which controls this
behavior, is a per-virtualhost setting that is *not* inherited from the
main server configuration, and therefore must be explicitly set inside
the `VirtualHost` block. This means Gitea needs its own virtual host
definition, and cannot rely on the default virtual host.
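The vhost ends up looking roughly like this (hostname and backend port
are assumptions; TLS directives omitted):
```apache
<VirtualHost *:443>
    ServerName git.example.com
    # Must be set here; it is not inherited from the main server config
    AllowEncodedSlashes NoDecode
    ProxyPreserveHost On
    # nocanon keeps Apache from re-encoding the proxied request path
    ProxyPass / http://127.0.0.1:3000/ nocanon
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>
```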
I moved the metrics Pi from the red network to the blue network. I
started to get uncomfortable with the firewall changes that were
required to host a service on the red network. I think it makes the
most sense to define the red network as egress only.
The only major change that affects the configuration policy is the
introduction of the `webhook.ALLOWED_HOST_LIST` setting. For some dumb
reason, the default value of this setting *denies* access to machines on
the local network. This makes no sense; why do they expect you to host
your CI or whatever on a *public* network? Of course, the only reason
given is "for security reasons."
This work-around is no longer necessary as the default Fedora policy now
covers the Samba DC daemon. It never really worked correctly, anyway,
because Samba doesn't start `winbindd` fast enough for the
`/run/samba/winbindd` directory to be created before systemd spawns the
`restorecon` process, so it would usually fail to start the service the
first time after a reboot.
Sometimes, Frigate crashes in situations that should be recoverable or
temporary. For example, it will fail to start if the MQTT server is
unreachable initially, and does not attempt to connect more than once.
To avoid having to manually restart the service once the MQTT server is
ready, we can configure the systemd unit to enable automatic restarts.
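For example, with a drop-in like this (the timing value is arbitrary):
```ini
[Service]
Restart=on-failure
RestartSec=30
```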
If the *vaultwarden* service terminates unexpectedly, e.g. due to a
power loss, `podman` may not successfully remove the container. We
therefore need to try to delete it before starting it again, or `podman`
will exit with an error because the container already exists.
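Something like this in the service unit handles it (the container name
is an assumption; the leading `-` makes a failure non-fatal):
```ini
[Service]
ExecStartPre=-/usr/bin/podman rm --ignore vaultwarden
```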
Both *zwavejs2mqtt* and *zigbee2mqtt* have various bugs that can cause
them to crash in the face of errors that should be recoverable.
Specifically, the processes do not always handle network errors well;
especially during initial startup, they tend to crash instead of
retrying. Thus, we'll move the retry logic into systemd.
The *zwavejs2mqtt* and *zigbee2mqtt* services need to wait until the
system clock is fully synchronized before starting. If the system clock
is wrong, they may fail to validate the MQTT server certificate.
The *time-sync.target* unit is not started until after services that
sync the clock, e.g. using NTP. Notably, the *chrony-wait.service* unit
delays *time-sync.target* until `chronyc waitsync` returns.
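A drop-in along these lines for both services does the trick:
```ini
[Unit]
Wants=time-sync.target
After=time-sync.target
```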
The *vlan99* interface needs to be created and activated by
`systemd-networkd` before `dnsmasq` can start and bind to it. Ordering
the *dnsmasq.service* unit after *network.target* and
*network-online.target* should ensure that this is the case.
*libvirt*'s native autostart functionality does not work well for
machines that migrate between hosts. Machines lose their auto-start
flag when they are migrated, and the flag is not restored if they are
migrated back. This makes the feature pretty useless for us.
To work around this limitation, I've added a script, run at boot, that
starts the machines listed in `/etc/vm-autostart`, if they exist. The
file can also specify a delay between starting two machines,
which may be useful to allow services to fully start on one machine
before starting another that may depend on them.
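A rough sketch of the idea (the actual file format and script may
differ):
```sh
#!/bin/sh
# Start the VMs listed in /etc/vm-autostart; a line like "sleep 30"
# inserts a delay before starting the next machine (format assumed here
# for illustration).
while read -r word arg; do
    case "${word}" in
        ''|\#*) continue ;;              # skip blank lines and comments
        sleep)  sleep "${arg}" ;;        # pause between starts
        *)      virsh start "${word}" ;; # start the VM if defined on this host
    esac
done < /etc/vm-autostart
```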
If `/` is mounted read-only, as is usually the case, the Proton VPN
watchdog cannot update the `remote_addrs` configuration file. It needs
to be stored in a directory that is guaranteed to be writable.
The *netboot/basementhud* Ansible role configures two network block
devices for the basement HUD machine:
* The immutable root filesystem
* An ephemeral swap device
The *netboot/jenkins-agent* Ansible role configures three NBD exports:
* A single, shared, read-only export containing the Jenkins agent root
filesystem, as a SquashFS filesystem
* For each defined agent host, a writable data volume for Jenkins
workspaces
* For each defined agent host, a writable data volume for Docker
Agent hosts must have some kind of unique value to identify their
persistent data volumes. Raspberry Pi devices, for example, can use the
SoC serial number.
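For example (one possible way to obtain it):
```sh
# Read the SoC serial number from the kernel's CPU info
awk '/^Serial/ {print $3}' /proc/cpuinfo
```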
The *pxe* role configures the TFTP and NBD stages of PXE network
booting. The TFTP server provides the files used for the boot stage,
which may either be a kernel and initramfs, or another bootloader like
SYSLINUX/PXELINUX or GRUB. The NBD server provides the root filesystem,
typically mounted by code in early userspace/initramfs.
The *pxe* role also creates a user group called *pxeadmins*. Users in
this group can publish content via TFTP; they have write-access to the
`/var/lib/tftpboot` directory.
The *tftp* role installs the *tftp-server* package. There is
practically no configuration for the TFTP server. It "just works" out
of the box, as long as its target directory exists.
The *nbd-server* role configures a machine as a Network Block Device
(NBD) server, using the reference `nbd-server` implementation. It
configures a systemd socket unit to listen on the port and accept
incoming connections, and a template service unit that systemd
instantiates for each incoming connection, passing it the accepted
socket.
The reference `nbd-server` is actually not very good. It does not clean
up closed connections reliably, especially if the client disconnects
unexpectedly. Fortunately, systemd provides the necessary tools to work
around these bugs. Specifically, spawning one process per connection
allows processes to be killed externally. Further, since systemd
creates the listening socket, it can control the keep-alive interval.
By setting this to a rather low value, we can clean up server processes
for disconnected clients more quickly.
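The socket unit looks roughly like this (the timing values are
illustrative):
```ini
# nbd.socket
[Socket]
ListenStream=10809
Accept=yes
# Aggressive keep-alives so dead client connections are noticed quickly
KeepAlive=yes
KeepAliveTimeSec=60
KeepAliveIntervalSec=10
KeepAliveProbes=3

[Install]
WantedBy=sockets.target
```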
Configuration of the server itself is minimal; most of the configuration
is done on a per-export basis using drop-in configuration files. Other
Ansible roles should create these configuration files to configure
application-specific exports. Nothing needs to be reloaded or restarted
for changes to take effect; the next incoming connection will spawn a
new process, which will use the latest configuration file automatically.
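An application role would then drop in a file along these lines (the
path and export name are examples):
```ini
# /etc/nbd-server/conf.d/example.conf
[example]
exportname = /srv/nbd/example.img
readonly = true
```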
Frigate needs to be able to connect to the MQTT server immediately upon
startup or it will crash. Ordering the *frigate.service* unit after
*network-online.target* will help ensure Frigate starts when the system
boots.
The *systemd-resolved* role/playbook ensures the *systemd-resolved*
service is enabled and running, and ensures that the `/etc/resolv.conf`
file is a symlink to the appropriate managed configuration file.
The `-external.url` and `-external.alert.source` command line arguments
and their corresponding environment variables can be used to configure
the "Source" links associated with alerts created by `vmalert`.
The *metricspi* hosts several Victoria Metrics-adjacent applications.
These each expose their own HTTP interface that can be used for
debugging or introspecting state. To make these accessible on the
network, the *victoria-metrics-nginx* role now configures `proxy_pass`
directives for them in its nginx configuration.