When HAProxy binds to the IPv6 socket, it can handle both IPv6 and IPv4
clients. IPv4 clients are handled as IPv4-mapped IPv6 addresses, which
some backends (e.g. Apache) cannot support. To avoid this, we configure
HAProxy to bind to the IPv4 and IPv6 sockets separately, so that IPv4
addresses are handled as IPv4 addresses.
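A minimal sketch of the separate binds (frontend name and port are
illustrative):

```
frontend https
    mode tcp
    # Bind IPv4 and IPv6 separately; v6only keeps the IPv6 socket from
    # also accepting IPv4-mapped connections
    bind 0.0.0.0:443
    bind [::]:443 v6only
```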
Expose a virtual host on a separate TCP port that uses the PROXY
protocol. This way, HAProxy can pass the original client IP address to
Jellyfin without terminating the TLS connection.
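Roughly, the backend might look like this (host and port are
assumptions; on the other side, the extra port's virtual host would
enable `RemoteIPProxyProtocol On`):

```
backend jellyfin
    mode tcp
    # send-proxy-v2 prepends a PROXY protocol header carrying the
    # original client address; the TLS stream passes through unmodified
    server jellyfin file0.pyrocufflink.blue:8443 send-proxy-v2
```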
In order to enable authentication using LDAP over TLS in Jellyfin, we
need to expose the CA certificate that issues the LDAP server
certificates to the container.
The MinIO server for backups has special requirements for HTTPS. I want
to use subdomains for bucket names, so the certificate must have a
wildcard name, which requires using the DNS-01 challenge. Fortunately,
it is actually pretty easy to use `nsupdate` with GSS-TSIG
authentication to automate DNS record creation, and by default, all
domain-member machines can create any records. Thus, using the `manual`
auth plugin for `certbot` and a script to run `nsupdate`, obtaining the
wildcard certificate is fairly straightforward.
The biggest issue I encountered while developing this feature was
caching of NXDOMAIN responses. There doesn't seem to be a way to change
the TTL of the SOA record of the Active Directory DNS domain, which
defaults to 3600, meaning NXDOMAIN responses are always cached for an
hour. When adding a record using `nsupdate -g`, the tool always
performs a SOA lookup of the new name to find the target zone for it.
the name does not exist yet, the domain controller responds with
NXDOMAIN, which gets cached by the main DNS server. Thus, even after
adding the record, the ACME server will not be able to resolve the
name for up to an hour. We can avoid this by explicitly setting the
target zone. That would not work in a multi-domain forest, but
fortunately, we do not have to worry about that.
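As a sketch, the `nsupdate` part of the auth hook might look like this
(zone and record names are illustrative; `CERTBOT_VALIDATION` is
provided by certbot to manual auth hooks):

```sh
# Naming the zone explicitly skips the SOA lookup, so no NXDOMAIN
# response gets cached by the main DNS server
nsupdate -g <<EOF
zone pyrocufflink.blue.
update add _acme-challenge.backup0.pyrocufflink.blue. 60 TXT "$CERTBOT_VALIDATION"
send
EOF
```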
This role borrows some logic from the *postgresql-cert* role.
Eventually, I probably want to combine some of the steps from both of
these roles, possibly replacing the old *certbot* role.
The *minio-nginx* role configures nginx to proxy for MinIO. It uses the
"subdomain" pattern, as described in [Configure NGINX Proxy for MinIO
Server][0]; the S3 API and the console UI are accessible through
different domain names.
[0]: https://min.io/docs/minio/linux/integrations/setup-nginx-proxy-with-minio.html
Modern versions of Podman use Netavark, which needs to write various
files on the host file system (even when the container uses the
host's network namespace).
If the `minio_address` variable is specified, it will be passed with the
`--address` argument to `minio server`. This allows controlling the
socket the server binds to and listens on.
The `minio_browser_redirect_url` can be specified to populate the
similarly-named environment variable, which configures how MinIO serves
the web UI.
The `minio_domain` variable sets the `MINIO_DOMAIN` environment
variable, which enables DNS names (subdomains) for buckets, i.e.
`{bucket_name}.{MINIO_DOMAIN}`.
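For illustration, the variables might be set like this (values are
examples):

```yaml
minio_address: '127.0.0.1:9000'
minio_browser_redirect_url: https://minio-console.pyrocufflink.blue
minio_domain: minio.pyrocufflink.blue
```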
`wal-g` needs to connect to the PostgreSQL database system, so it should
run as the _postgres_ user, who has permission to connect, rather than
_root_, who does not.
Gitea needs SMTP configuration in order to send e-mail notifications
about e.g. pull requests. The `gitea_smtp` variable can be defined to
enable this feature.
Gitea complains if the `WORK_DIR` setting is not set. It tries to set
it itself, but fails because the configuration is read-only. The value
it uses is incorrect anyway (`/usr/local/bin`, since that's where the
`gitea` executable is).
I've already made a couple of mistakes keeping the HTTP and HTTPS rules
in sync. Let's define the sites declaratively and derive the HAProxy
rules from the data, rather than typing the rules manually.
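A sketch of the idea, with a hypothetical variable and template
fragment:

```yaml
proxy_sites:
  - name: nextcloud
    domain: cloud.pyrocufflink.net
    backend: cloud0.pyrocufflink.blue:443
```

```
# haproxy.cfg.j2 (fragment)
frontend https
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
{% for site in proxy_sites %}
    use_backend {{ site.name }} if { req.ssl_sni -i {{ site.domain }} }
{% endfor %}
```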
The *dch-proxy* role has not been used for quite some time. The web
server has been handling the reverse proxy functionality, in addition to
hosting websites. The drawback to using Apache as the reverse proxy,
though, is that it operates in TLS-terminating mode, so it needs to have
the correct certificate for every site and application it proxies for.
This is becoming cumbersome, especially now that there are several sites
that do not use the _pyrocufflink.net_ wildcard certificate. Notably,
Tabitha's _hatchlearningcenter.org_ is problematic because although the
main site is hosted by the web server, the Invoice Ninja client portal
is hosted in Kubernetes.
Switching back to HAProxy to provide the reverse proxy functionality
will eliminate the need to have the server certificate both on the
backend and on the reverse proxy, as it can operate in TLS-passthrough
mode. The main reason I stopped using HAProxy in the first place was
because when using TLS-passthrough mode, the original source IP address
is lost. Fortunately, HAProxy and Apache can both be configured to use
the PROXY protocol, which provides a mechanism for communicating the
original IP address while still passing through the TLS connection
unmodified. This is particularly important for Nextcloud because of its
built-in intrusion prevention; without knowing the actual source IP
address, it blocks _everyone_, since all connections appear to come from
the reverse proxy's IP address.
Combining TLS-passthrough mode with the PROXY protocol resolves both the
certificate management issue and the source IP address issue.
I've cleaned up the _dch-proxy_ role quite a bit in this commit.
Notably, I consolidated all the backend and frontend definitions into a
single file; it didn't really make sense to have them all separate,
since they were managed by the same role and referred to each other. Of
course, I had to update the backends to match the currently-deployed
applications as well.
The *postfix* role will now generate configuration and a lookup table
for [canonical address mapping][0] of email recipients. To configure
the mapping, the `postfix_recipient_canonical_map` must be a dictionary
of source-target addresses, e.g.:
```yaml
postfix_recipient_canonical_map:
  my.bad.email@fake.test: my.real.email@example.com
```
[0]: https://www.postfix.org/ADDRESS_REWRITING_README.html#canonical
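Presumably, the role renders something like this into `main.cf` (the
lookup table path is an assumption), with `postmap` compiling the
table:

```
recipient_canonical_maps = hash:/etc/postfix/recipient_canonical
```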
If winbind is unable to communicate with any domain controller, the
`pam_winbind.so` module will time out. In _auth_ and _account_ context,
this was not an issue, at least for local users, because other modules
terminated the stack before `pam_winbind.so` was called. In _session_
context, though, nothing terminated the stack at all, so
`pam_winbind.so` was called unconditionally. This prevented even _root_
from logging in on the console. This made troubleshooting difficult,
especially for the VM hosts, when the domain controllers were down.
Deploying Caddy as a reverse proxy for Frigate enables HTTPS with a
certificate issued by the internal CA (via ACME) and authentication via
Authelia.
Separating the installation and base configuration of Caddy into its
own role will allow us to reuse that part for other applications that
use Caddy for similar reasons.
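A Caddyfile for such an application might look roughly like this (host
names, ports, and the ACME directory URL are all assumptions):

```
frigate.pyrocufflink.blue {
    tls {
        issuer acme {
            dir https://ca.pyrocufflink.blue/acme/acme/directory
        }
    }
    forward_auth authelia.pyrocufflink.blue:9091 {
        uri /api/verify?rd=https://auth.pyrocufflink.blue/
    }
    reverse_proxy 127.0.0.1:5000
}
```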
The *gasket-dkms* package provides the `gasket` and `apex` kernel
modules, which are needed for the Google Coral Edge TPU. Since these
are out-of-tree modules, they are not allowed in Fedora proper, so they
are provided in a COPR, and have to be rebuilt for every kernel version.
The DKMS framework handles automatically building the modules whenever
the kernel updates.
For systems using UEFI with SecureBoot enabled, kernel modules must be
signed by a key trusted by the platform. For locally-built modules, we
can use the Machine Owner Key (MOK). Unfortunately, enrolling a new MOK
requires rebooting and manual intervention during the boot process.
Therefore, the *gasket-dkms* role has a `pause` step to ensure someone
is paying attention and able to handle the key enrollment interactively.
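The enrollment itself is the standard MOK procedure, something like
this (the key path varies by DKMS version and configuration):

```sh
# mokutil prompts for a one-time password that must be re-entered in
# the MokManager screen during the next boot
mokutil --import /var/lib/dkms/mok.pub
reboot
```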
Eventually, I'd like to have an RPM package with these modules
pre-built, so production servers do not need the kernel development
tools (`perl`, `gcc`, headers, etc.). It will be tricky, though, to
make sure the modules get rebuilt for every kernel version as Fedora
releases them.
* Switch to Quadlet-style `.container` for systemd unit
* Update to new image tag naming scheme (not arch-specific)
* Use environment variables for secrets
* Allow the entire `frigate_config` variable to be overridden
The *useproxy* role configures the `http_proxy` et al. environment
variables for systemd services and interactive shells. Additionally, it
configures Yum repositories to use a single mirror via the `baseurl`
setting, rather than a list of mirrors via `metalink`, because a) the
proxy only allows access to _dl.fedoraproject.org_ and b) the proxy
caches RPM files, which is only effective if all clients use
the same mirror all the time.
The `useproxy.yml` playbook applies this role to servers in the
*needproxy* group.
Although it's rare, sometimes Samba crashes or fails to start. When
this happens, restarting it is almost always enough to get it working
again. Since all sorts of authentication problems can occur if one of
the domain controllers is down, it's probably best to just have systemd
automatically restart _samba.service_ if it ever stops for any reason.
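A drop-in along these lines is all it takes (values illustrative):

```ini
# /etc/systemd/system/samba.service.d/restart.conf
[Service]
Restart=always
RestartSec=5
```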
The [postgres-exporter][0] exposes PostgreSQL server statistics to
Prometheus. It connects to a specified PostgreSQL server (in this
case, a server on the local machine via UNIX socket) and collects data
from the `pg_stat_activity`, et al. views. It needs the `pg_monitor`
role in order to be allowed to read the relevant metrics.
Since we're setting up the exporter to connect via UNIX socket, it needs
a dedicated OS user to match the PostgreSQL user in order to
authenticate via the _peer_ method.
[0]: https://github.com/prometheus-community/postgres_exporter/
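The database-side setup is essentially just this (the role name is an
assumption, matching the exporter's OS user for _peer_ authentication):

```sql
CREATE ROLE postgres_exporter LOGIN;
GRANT pg_monitor TO postgres_exporter;
```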
WAL archives are not much good without a base backup onto which they
can be applied. Thus, we need to schedule WAL-G to create and upload a
backup periodically.
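A sketch of the units (names, schedule, and data directory are
illustrative):

```ini
# wal-g-backup.service
[Service]
Type=oneshot
User=postgres
ExecStart=/usr/bin/wal-g backup-push /var/lib/pgsql/data

# wal-g-backup.timer
[Timer]
OnCalendar=weekly
Persistent=true
```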
This role installs `wal-g` from the DCH Yum repository, and creates a
configuration file for it in `/etc/postgresql`. Additionally, it
installs a custom SELinux policy module that allows `wal-g` to run in
the `postgresql_t` domain (i.e. when spawned by the PostgreSQL server).
This role can be used to get a server certificate for PostgreSQL from an
ACME CA using `certbot`. It fetches the initial certificate and copies
it to the PostgreSQL configuration directory. It also sets up a
post-renewal hook script that copies updated certificates and reloads
the server.
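The hook could be as simple as this sketch (destination paths are
assumptions; `RENEWED_LINEAGE` is set by certbot for renewal hooks):

```sh
#!/bin/sh
install -o postgres -g postgres -m 0644 \
    "$RENEWED_LINEAGE/fullchain.pem" /etc/postgresql/server.crt
install -o postgres -g postgres -m 0600 \
    "$RENEWED_LINEAGE/privkey.pem" /etc/postgresql/server.key
systemctl reload postgresql
```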
This rewrite brings a lot of improvements and new functionality to the
*postgresql-server* role. The most noticeable change is the
introduction of the `postgresql_config_dir` variable, which can be used
to specify a different location for the PostgreSQL server configuration
files, separate from the data storage directory. By default, the
variable is set to `/etc/postgresql`. For some situations, it may be
necessary to disable this functionality, which can be accomplished by
setting the value of `postgresql_config_dir` to the same path as
`pgdata_dir`. Note also that the `postgresql-setup` tool, and the
corresponding `postgresql-check-db-dir` script, which are included in
the Fedora/Red Hat distribution of PostgreSQL, do not support having
separate configuration and data directories, so their use has to be
disabled.
Another significant improvement is to how the `postgresql.conf` file
is generated. Any setting can be set now, using the `postgresql_config`
variable; any key in this dictionary will be written to the
configuration file. Note that configuration file syntax requires
single quotes around string values, so these have to be included in the
YAML value.
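For example (note the embedded quotes on the string value):

```yaml
postgresql_config:
  listen_addresses: "'*'"
  shared_buffers: 256MB
  max_connections: 200
```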
To support deploying standby servers, the role now supports running a
command to restore from a backup instead of running `initdb`.
Additionally, the `postgresql_standby` variable can be set to `true`
to create the `recovery.signal` file, configuring the server as a
standby.
Sending SIGHUP to the main PID (i.e. conmon) ends up stopping the
service. What we really want is to send the signal to main PID _inside_
the container. We can achieve this by using `podman kill` instead of
`kill`.
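In the service unit, that might look like this (container name
illustrative):

```ini
[Service]
ExecReload=/usr/bin/podman kill --signal HUP minio
```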
Without making the firewall changes permanent, when a server tries to
renew its certificate after rebooting, it will fail as the ACME server
cannot connect to the HTTP port.
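The fix is to apply the rule to both the runtime and permanent
configurations:

```sh
firewall-cmd --add-service=http
firewall-cmd --permanent --add-service=http
```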
Sometimes, the `collectd-version` script crashes or fails to start at
boot. Configuring systemd to automatically restart it will ensure that
it's always running, so machines' versions are consistently inventoried.
The `squid.service` systemd unit now correctly initializes the
configured cache directories, so we do not need to do it explicitly
before starting the server.
The *samba-cert* role configures `lego` and HAProxy to obtain an X.509
certificate via the ACME HTTP-01 challenge. HAProxy is necessary
because LDAP server certificates need to have the apex domain in their
SAN field, and the ACME server may contact *any* domain controller
with an A record for that name. HAProxy will forward the
challenge request on to the first available host on port 5000, where
`lego` is listening to provide validation.
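A sketch of the forwarding configuration (host names illustrative):

```
frontend http
    bind :80
    acl acme path_beg /.well-known/acme-challenge/
    use_backend acme if acme

backend acme
    # lego listens on port 5000 on whichever DC is renewing; the first
    # healthy server receives the challenge request
    server dc0 dc0.pyrocufflink.blue:5000 check
    server dc1 dc1.pyrocufflink.blue:5000 check backup
```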
Issuing certificates this way has a couple of advantages:
1. No need for the wildcard certificate for the *pyrocufflink.blue*
domain any more
2. Renewals are automatic and handled by the server itself rather than
Ansible via scheduled Jenkins job
Item (2) is particularly interesting because it avoids the bi-monthly
issue where replacing the LDAP server certificate and restarting Samba
causes the Jenkins job to fail.
Naturally, for this to work correctly, all LDAP client applications
need to trust the certificates issued by the ACME server, in this case
*DCH Root CA R2*.
HAProxy uses a special configuration block, `resolvers`, to specify
how it should look up names in DNS. This configuration is used for
e.g. dynamically discovering backend servers via DNS A or SRV records.
Since resolvers are global, they need to be specified in the global
configuration file, rather than a per-application drop-in.
We will use this functionality for the ACME HTTP-01 challenge solver
for Samba AD domain controllers.
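For illustration (addresses are examples):

```
resolvers dns
    nameserver dc0 172.30.0.10:53
    nameserver dc1 172.30.0.11:53

backend example
    # Re-resolve the server's address at runtime via the resolvers above
    server app0 app.pyrocufflink.blue:443 check resolvers dns
```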
The current version of *haproxy* packaged in Fedora already enables
configuration via fragments in a drop-in directory, though it uses
a different path by default. I still like separating the global
configuration from the defaults, though, and keeping the main
`haproxy.cfg` file empty.
*dnf-automatic* is an add-on for `dnf` that performs scheduled,
automatic updates. It works pretty much how I would want it to:
triggered by a systemd timer, sends email reports upon completion, and
only reboots for kernel et al. updates.
In its default configuration, `dnf-automatic.timer` fires every day. I
want machines to update weekly, but I want them to update on different
days (so as to avoid issues if all the machines reboot at once). Thus,
the _dnf-automatic_ role uses a systemd unit extension to change the
schedule. The day of the week is chosen pseudo-randomly based on the
host name of the managed system.
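One way to express that in the unit extension (the time of day and the
hash expression are illustrative):

```ini
# dnf-automatic.timer drop-in, rendered by Ansible
[Timer]
OnCalendar=
OnCalendar={{ ["Mon","Tue","Wed","Thu","Fri","Sat","Sun"][(inventory_hostname | hash("md5") | int(base=16)) % 7] }} *-*-* 03:00
```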
Even with `Network=host`, Podman tries to write to
`/etc/containers/network` for some reason. Fortunately, it doesn't
actually need to, so we can trick it into working by mounting an empty
*tmpfs* filesystem there.
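In systemd terms, that is a single line in a drop-in for the service:

```ini
[Service]
TemporaryFileSystem=/etc/containers/network
```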
The summer 2024 enrollment form is more complicated than the other
forms on the HLC site, as it integrates directly with Invoice Ninja. As
such, it's handled by a different backend, which runs in Kubernetes.
The *promtail* service runs as an unprivileged user by default, which is
fine in most cases (i.e. when scraping only the Journal), but may not
always be sufficient to read logs from other files. Rather than run
Promtail as root in these cases, we can assign it the
CAP_DAC_READ_SEARCH capability, which will allow it to read any file,
but does not grant it any of root's other privileges.
To enable this functionality, the `promtail_dac_read_search` Ansible
variable can be set to `true` for a host or group. This will create a
systemd unit configuration extension that configures the service to have
the CAP_DAC_READ_SEARCH capability in its ambient set.
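The extension boils down to this (drop-in path illustrative):

```ini
# /etc/systemd/system/promtail.service.d/dac_read_search.conf
[Service]
AmbientCapabilities=CAP_DAC_READ_SEARCH
```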
*unifi1.pyrocufflink.blue* requires a proxy to access Yum repositories
on the Internet, so it has the `proxy` setting configured globally. The
proxy does NOT allow access to internal resources, however. The
internal repository is directly accessible by that machine, so it needs
to be configured thus.
The Squid "cache log" is where it writes general debug and error
messages. It is distinct from the "access log," which is where it
writes the status of every proxy request. We already had the latter
configured to go to syslog by default (so it would be captured in the
journal), but missed the former.
Promtail is the log sending client for Grafana Loki. For traditional
Linux systems, an RPM package is available from upstream, making
installation fairly simple. Configuration is stored in a YAML file, so
again, it's straightforward to configure via Ansible variables. Really,
the only interesting step is adding the _promtail_ user, which is
created by the RPM package, to the _systemd-journal_ group, so that
Promtail can read the systemd journal files.
The *dustinandtabitha.com* website no longer uses *formsubmit* (the time
for RSVP has **long** passed). Removing the configuration so the
file encrypted with Ansible Vault can go away.
A few hosts have `AuthorizedKeysCommand` set in their *sshd(8)*
configuration. This was my first attempt at centrally managing SSH
keys, using a script which fetched a list of keys for each user from an
HTTP server. This worked most of the time, but I didn't take good care
of the HTTP server, so the script would fail frequently. Now that all
hosts trust the SSH user CA, there is no longer any need for this
"feature."
The `TrustedUserCAKeys` setting for *sshd(8)* tells the server to accept
any certificates signed by keys listed in the specified file.
The authenticating username has to match one of the principals listed in
the certificate, of course.
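In `sshd_config`, that is a single directive (the CA public key path is
an assumption):

```
TrustedUserCAKeys /etc/ssh/user_ca.pub
```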
This role is applied to all machines, via the `base.yml` playbook.
Certificates issued by the user CA managed by SSHCA will therefore be
trusted everywhere. This brings us one step closer to eliminating the
dependency on Active Directory/Samba.
Normal users do not need shell access to the file server, and certainly
should not be allowed to e.g. forward ports through it. Using a `Match`
block, we can apply restrictions to users who do not need administrative
functionality. In this case, we restrict everyone who is not a member
of the *Server Admins* group in the PYROCUFFLINK AD domain.
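One possible shape for the block (the group pattern quoting and the
exact restrictions are assumptions):

```
Match Group "*,!server admins"
    AllowTcpForwarding no
    X11Forwarding no
    ForceCommand internal-sftp
```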
The [pam_ssh_agent_auth][0] PAM module authenticates users using keys in
their SSH agent. Using SSH agent forwarding, it can even authenticate
users with keys on a remote system. By adding it to the PAM stack for
`sudo`, we can configure the latter to authenticate users without
requiring a password. For servers especially, this is significantly
more secure than configuring `sudo` not to require a password, while
still being almost as convenient.
For this to work, users need to enable SSH agent forwarding on their
clients, and their public keys have to be listed in the
`/etc/security/sudo.authorized_keys` file. Additionally, although the
documentation suggests otherwise, the `SSH_AUTH_SOCK` environment
variable has to be added to the `env_keep` list in *sudoers(5)*.
[0]: https://github.com/jbeverly/pam_ssh_agent_auth
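Concretely, that amounts to something like:

```
# /etc/pam.d/sudo (fragment)
auth  sufficient  pam_ssh_agent_auth.so file=/etc/security/sudo.authorized_keys

# sudoers fragment
Defaults env_keep += "SSH_AUTH_SOCK"
```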
Running `squid -z` as *root* leaves behind temporary files in
`/dev/shm`. When *squid.service* starts squid, in the proper SELinux
domain, it is unable to access these files and crashes. To avoid this,
we mount a private *tmpfs* so no existing files are accessible in the
service's namespace.
Instead of hard-coding a single cache directory and a set of refresh
patterns, the *squid* role can now have custom cache rules defined with
the `squid_cache_dir` variable (which now takes a list of `cache_dir`
settings) and the `squid_refresh_pattern` variable (which takes a list
of refresh patterns). If neither of these are defined, the default
configuration is used.
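For example (values illustrative; each entry is the argument portion of
the corresponding squid directive):

```yaml
squid_cache_dir:
  - ufs /var/spool/squid 16384 16 256
squid_refresh_pattern:
  - '-i \.rpm$ 129600 100% 129600 refresh-ims'
```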
This is a breaking change, since `squid_cache_dir` used to refer to a
directory, and the previous default was to configure one cache path.
There are no extant users of this role, though, so it doesn't really
matter.
The default set of access control lists and access rules for Squid are
fine for allowing hosts on the local network access to the web in
general. For other uses, such as web filtering, more complex rules
may be needed. To that end, the *squid* role now supports some
additional variables. Notably, `squid_acl` contains a map of ACL names
to list entries and `squid_http_access` contains a list of access rules.
If these are set, their corresponding defaults are not included in the
rendered configuration file.
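A hypothetical example of the new variables:

```yaml
squid_acl:
  localnet:
    - src 172.30.0.0/24
  blocked:
    - dstdomain .example.org
squid_http_access:
  - deny blocked
  - allow localnet
  - deny all
```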
Recent versions of NUT have a *nut.target* unit that collects all of the
NUT-related services. Enabling any of the services individually does
effectively nothing, as it only adds the service as a `Wants` dependency
for *nut.target*, and that unit already has dependencies for all of
them. Thus, in order for the service to start at boot, *nut.target* has
to be enabled instead.
In situations where only *nut-monitor* should be enabled, enabling
*nut.target* is inappropriate, since that enables *nut-driver* and
*nut-server* as well. It's not clear why upstream made this change (it
was part of a [HUGE pull request][0]), but restoring the desired
behavior is easy enough by clearing the dependencies from *nut.target*.
Services that we want to start automatically can still be enabled
individually, and will start as long as *nut.target* is enabled.
[0]: https://github.com/networkupstools/nut/pull/330
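Clearing the dependencies is just an empty assignment in a drop-in:

```ini
# /etc/systemd/system/nut.target.d/override.conf
[Unit]
Wants=
```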
The Synapse server can sometimes take a very long time to start.
Increasing the start timeout should keep it from failing to come up when
the machine is under load.
The UniFi controller service can sometimes take a really long time to
start up. This most frequently happens after a full outage, when the VM
hosts are very busy bringing everything up.
The `vm-autostart` script fails with `bad system call` errors when
trying to start libvirt domains. Removing the system call filters works
around this. Ideally, we should figure out exactly which system call is
being rejected and allow it, but that's rather difficult to do and
probably not really worth the effort in this case.
The MinIO service often fails to start from a cold boot. Delaying
starting the service until the network is online, plus increasing the
startup timeout, should help with this. If not, enabling auto restart
will let systemd try to start the service again if it still fails to
come up on time.
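Together, the changes might look like this drop-in (values
illustrative):

```ini
[Unit]
After=network-online.target
Wants=network-online.target

[Service]
TimeoutStartSec=300
Restart=on-failure
RestartSec=10
```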
`upsmon` is the component of [NUT] that monitors (local or remote) UPS
devices and reacts to changes in their state. Notably, it is
responsible for powering down the system when there is insufficient
power to the system.
I've moved the Dark Chest of Wonders website to run in a container on
Kubernetes. This will keep it from breaking every time the OS is
updated on the web server, when the version of Python in Fedora changes.
Recent(-ish) versions of Fedora have a drop-in configuration directory
for `sshd`. This allows applications, etc. to define certain settings
for the SSH server, without having to manage the entire server
configuration. For Gitea specifically, we only need to set a few
settings for the *gitea* user, leaving the remaining settings alone.
This commit does not include any migration to undo the settings that
were originally set, but that should be as simple as `mv
/etc/ssh/sshd_config.rpmnew /etc/ssh/sshd_config && systemctl reload
sshd`.
The *ssh-host-certs* role, which is now applied as part of the
`base.yml` playbook and therefore applies to all managed nodes, is
responsible for installing the *sshca-cli* package and using it to
request signed SSH host certificates. The *sshca-cli-systemd*
sub-package includes systemd units that automate the process of
requesting and renewing host certificates. These units need to be
enabled and provided the URL of the SSHCA service. Additionally, the
SSH daemon needs to be configured to load the host certificates.
The *dch* repository, hosted on *file0.pyrocufflink.blue* and managed by
the *repohost* Ansible role, is where I plan to host RPM packages for
internal use (e.g. *sshca-cli*, *dch-selinux*, etc.). The *dch-yum*
role configures Yum/dnf to use this repository. Roles that need to
install a package from here will list this role as a dependency.
So it turns out Gitea's RPM package repository feature is less than
stellar. Since each organization/user can only have a single
repository, separating packages by OS would be extremely cumbersome.
Presumably, the feature was designed for projects that only build a
single RPM for each version, but most of my packages need multiple
builds, as they tend to link to system libraries. Further, only the
repository owner can publish to user-scoped repositories, so e.g.
Jenkins cannot publish anything to a repository under my *dustin*
account. This means I would ultimately have to create an Organization
for every OS/version I need to support, and make Jenkins a member of it.
That sounds tedious and annoying, so I decided against using that
feature for internal packages.
Instead, I decided to return to the old ways, publishing packages with
`rsync` and serving them with Apache. It's fairly straightforward to
set this up: just need a directory with the appropriate permissions for
users to upload packages, and configure Apache to serve from it.
One advantage Gitea's feature had over a plain directory is its
automatic management of repository metadata. Publishers only have to
upload the RPMs they want to serve, and Gitea handles generating the
index, database, etc. files necessary to make the packages available to
Yum/dnf. With a plain file host, the publisher would need to use
`createrepo` to generate the repository metadata and upload that as
well. For repositories with multiple packages, the publisher would need
a copy of every RPM file locally in order for them to be included in the
repository metadata. This, too, seems like it would be too much trouble
to be tenable, so I created a simple automatic metadata manager for the
file-based repo host. Using `inotifywatch`, the `repohost-createrepo`
script watches for file modifications in the repository base directory.
Whenever a file is added or changed, the directory containing it is
added to a queue. Every thirty seconds, the queue is processed; for
each unique directory in the queue, repository metadata are generated.
This implementation combines the flexibility of a plain file host,
supporting an effectively unlimited number of repositories with
fully-configurable permissions, and the ease of publishing of a simple
file upload.
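A simplified sketch of the watcher (using `inotifywait` here for
brevity; the real script batches changed directories and drains the
queue every thirty seconds):

```sh
#!/bin/sh
inotifywait -m -r -e close_write -e moved_to --format '%w' /srv/repo |
while read -r dir; do
    # Regenerate metadata for the directory containing the new RPM
    createrepo_c --update "$dir"
done
```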
The `net cache flush` command does not seem to always work to clear the
identity mapping cache used by winbind. Explicitly moving the file
does, though.
On a new DC, the `idmap.ldb` file does not yet exist the first time
`sysvolsync` runs. This causes a syntax error in the condition that
checks the modification timestamp of the file.
Forcing the PDC lookup to use localhost as the DNS server does not work
when first adding a new domain controller, as the `sysvolsync` script
runs before Samba starts. There isn't much advantage to using the local
DNS server over the system-defined server anyway.
The `winbind offline logon` setting seems to cause issues when one of
the domain controllers is offline. Rather than try the other DC,
winbind seems to just "give up" and return NT_STATUS_NO_SUCH_USER for
all authentication requests until the offline cache is flushed. There's
not really any reason to use this setting on servers anyway, since they
are always connected to the LAN, as opposed to laptops that may
occasionally disconnect. Let's disable this option in the hopes that it
makes logins more resilient to DC downtime. After all, there's not much
point in having multiple DCs if they all have to be available in order
to log in.
AWS is going to begin charging extra for routable IPv4 addresses soon.
There's really no point in having a relay in the cloud anymore anyway,
since a) all outbound messages are sent via the local relay and b) no
messages are sent to anyone except me.
The *authconfig* package has been gone from Fedora for ages. There's
no reason to have this no-op step any more, especially since it has the
side-effect of making a network request to refresh the dnf cache.
*nvr1.pyrocufflink.blue* has been migrated to Fedora CoreOS. As such,
it is no longer managed by Ansible; its configuration is done via
Butane/Ignition. It is no longer a member of the Active Directory
domain, but it does still run *collectd* and export Prometheus metrics.
The `scrape_collectd_extra_targets` variable can be used to specify a
list of additional targets to scrape, in addition to the hosts in the
*collectd-prometheus* group. This will allow us to scrape hosts that
are not managed by the configuration policy, but still expose Prometheus
metrics via collectd.
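For example (the port is collectd's *write_prometheus* default):

```yaml
scrape_collectd_extra_targets:
  - nvr1.pyrocufflink.blue:9103
```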
MinIO is supposed to automatically reload itself when the certificate
changes, but this does not appear to happen in all cases. To ensure the
updated certificate gets used, we need to send SIGHUP to the MinIO
server process.
Since Jellyfin is running on the file server, which also hosts a few
other websites that do not define virtual hosts, the HTTP-to-HTTPS
redirect was applied to *all* requests. To avoid this, we simply add a
rewrite condition so that the redirect only applies to requests for
Jellyfin.
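Roughly (host name illustrative):

```apache
RewriteEngine On
RewriteCond %{HTTP_HOST} ^jellyfin\.pyrocufflink\.blue$ [NC]
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=permanent,L]
```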
Jellyfin is a multimedia library manager. Clients can browse and stream
music, movies, and TV shows from the server and play them locally
(including in the browser).
Since Ubiquiti only publishes Debian packages for the Unifi Network
controller software, running it on Fedora has historically been nigh
impossible. Fortunately, a modern solution is available: containers.
The *linuxserver.io* project publishes a container image for the
controller software, making it fairly easy to deploy on any host with an
OCI runtime. I briefly considered creating my own image, since theirs
must be run as root, but I decided the maintenance burden would not be
worth it. Using Podman's user namespace functionality, I was able to
work around this requirement anyway.
When the Home Assistant container restarts, Podman relabels the entire
`/var/lib/homeassistant` directory as `container_file_t`. Since the
*homeassistant* user's home directory is `/var/lib/homeassistant`, its
`~/.ssh` directory is thus also relabeled, preventing the SSH daemon
from accessing it. Since Home Assistant itself does not need access to
this path, we can tell systemd to mount an empty tmpfs filesystem there
in the service unit's mount namespace. This way, when Podman relabels
the directory, it will change the label of the tmpfs mount point instead
of the actual directory.
Most of the Synapse server's state is in its SQLite database. It also
has a `media_store` directory that needs to be backed up, though.
In order to back up the SQLite database while the server is running, the
database must be in "WAL mode." By default, Synapse leaves the database
in the default "rollback journal mode," which disallows multiple
processes from accessing the database, even for read-only operations.
To change the journal mode:
```sh
sudo systemctl stop synapse
sudo -u synapse sqlite3 /var/lib/synapse/homeserver.db 'PRAGMA journal_mode=WAL;'
sudo systemctl start synapse
```
The Samba KDC log file seems to grow rather quickly sometimes, outpacing
the monthly rotation policy. Let's rotate it weekly and keep 4
historical versions.
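The logrotate stanza amounts to this (the log path is an assumption):

```
/var/log/samba/samba.kdc.log {
    weekly
    rotate 4
    missingok
    compress
}
```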
Kubernetes exports a *lot* of metrics in Prometheus format. I am not
sure what all is there, yet, but apparently several thousand time series
were added.
To allow anonymous access to the metrics, I added this ClusterRole,
bound to the anonymous user with a ClusterRoleBinding:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/metrics
    verbs:
      - get
  - nonResourceURLs:
      - /metrics
    verbs:
      - get
```
MinIO exposes metrics in Prometheus exposition format. By default, it
requires an authentication token to access the metrics, but I was unable
to get this to work. Fortunately, it can be configured to allow
anonymous access to the metrics, which is fine, in my opinion.
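Anonymous access is enabled with a single environment variable:

```sh
MINIO_PROMETHEUS_AUTH_TYPE=public
```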
The `journal2ntfy.py` script follows the systemd journal by spawning
`journalctl` as a child process and reading from its standard output
stream. Any command-line arguments passed to `journal2ntfy` are passed
to `journalctl`, which allows the caller to specify message filters.
For any matching journal message, `journal2ntfy` sends a message via
the *ntfy* web service.
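The core of the mechanism, sketched (the topic URL is illustrative):

```python
import subprocess
import sys

import requests

# Follow the journal, applying whatever filters the caller passed
proc = subprocess.Popen(
    ['journalctl', '--follow', '--lines=0', *sys.argv[1:]],
    stdout=subprocess.PIPE,
    text=True,
)
for line in proc.stdout:
    # Each matching journal message becomes an ntfy notification
    requests.post('https://ntfy.example.com/alerts', data=line.strip())
```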
For the BURP server, we're going to use `journal2ntfy` to generate
alerts about the RAID array. When I reconnect the disk that was in the
fireproof safe, the kernel will log a message from the *md* subsystem
indicating that the resynchronization process has begun. Then, when
the disks are again in sync, it will log another message, which will
let me know it is safe to archive the other disk.
The `tls cafile` setting in `smb.conf` is not necessary. It is used for
verifying peer certificates for mutual TLS authentication, not to
specify the intermediate certificate authority chain like I thought.
The setting cannot simply be left out, though. If it is not specified,
Samba will attempt to load a file from a built-in default path, which
will fail, causing the server to crash. This is avoided by setting the
value to the empty string.
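That is:

```
[global]
    tls cafile =
```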
[MinIO][0] is an S3-compatible object storage server. It is designed to
provide storage for cloud-native applications for on-premises
deployments.
MinIO has not been packaged for Fedora (yet?). As such, the best way to
deploy it is using its official container image. Here, we are using
`podman-systemd-generator` (Quadlet) to generate a systemd service
unit to manage the container process.
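A sketch of the Quadlet file (image tag, paths, and environment file
are assumptions):

```ini
# /etc/containers/systemd/minio.container
[Container]
Image=quay.io/minio/minio:latest
Exec=server /data
Volume=/srv/minio:/data:Z
EnvironmentFile=/etc/minio/minio.env

[Install]
WantedBy=multi-user.target
```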
The HTTP->HTTPS redirect for chmod777.sh was only working by
coincidence. It needs its own virtual host to ensure it works
irrespective of how other websites are configured.
Tabitha's Hatch Learning Center site has two user submission forms: one
for signing in/out students for class, and another for parents to
register new students for the program. These are handled by
*formsubmit* and store data in CSV spreadsheets.
If the Python bindings for SELinux policy management are not installed
when Ansible gathers host facts, no SELinux-related facts will be set.
Thus, any tasks that are conditional based on these facts will not run.
Typically, such tasks are required for SELinux-enabled hosts, but must
not be performed for non-SELinux hosts. If they are not run when they
should, the deployment may fail or applications may experience issues at
runtime.
To avoid these potential issues, the *base* role now forces Ansible to
gather facts again if it installed the Python SELinux bindings.
Note: one might suggest using `meta: clear_facts` instead of `setup` and
letting Ansible decide if and when to gather facts again. Unfortunately,
this for some reason doesn't work; the `clear_facts` meta task just
causes Ansible to crash with a "shared connection to {host} closed."
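The pattern, roughly (the package name varies by distribution and
release):

```yaml
- name: Install SELinux Python bindings
  package:
    name: python3-libselinux
  register: selinux_bindings

- name: Gather facts again so SELinux facts are populated
  setup:
  when: selinux_bindings is changed
```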
The *dch-selinux* package contains customized SELinux policy modules.
I haven't worked out exactly how to build and publish it through a
continuous integration pipeline yet, so for now it's just hosted in my
user `public_html` folder on the main file server.
Samba AD DC does not implement [DFS-R for replication of the SYSVOL][0]
contents. This does not make much of a difference to me, since
the SYSVOL is really only used for Group Policy. Windows machines may
log an error if they cannot access the (basically empty) GPO files, but
that's pretty much the only effect if the SYSVOL is in sync between
domain controllers.
Unfortunately, there is one side-effect of the missing DFS-R
functionality that does matter. On domain controllers, all user,
computer, and group accounts need to have Unix UID/GID numbers mapped.
This is different than regular member machines, which only need UID/GID
numbers for users that will/are allowed to log into them. LDAP entries
only have ID numbers mapped for the latter class of users, which does
not include machine accounts. As a result, Samba falls back to
generating local ID numbers for the rest of the accounts. Those ID
numbers are stored in a local database file,
`/var/lib/samba/private/idmap.ldb`. It would seem that it wouldn't
actually matter if accounts have different ID numbers on different
domain controllers, but there are evidently [situations][1] where DCs
refuse to allocate ID numbers at all, which can cause authentication to
fail. As such, the `idmap.ldb` file needs to be kept in sync.
If we're going to go through the effort of synchronizing `idmap.ldb`, we
might as well keep the SYSVOL in sync as well. To that end, I've
written a script to synchronize both the SYSVOL contents and the
`idmap.ldb` file. It performs a simple one-way synchronization using
`rsync` from the DC with the PDC emulator role, as discovered using DNS
SRV records. To ensure the `idmap.ldb` file is in a consistent state,
it only copies the most recent backup file. If the copied file differs
from the local one, the script stops Samba and restores the local
database from the backup. It then flushes Samba's caches and restarts
the service. Finally, it fixes the NT ACLs on the contents of the
SYSVOL.
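The conceptual core of the script (error handling and the `idmap.ldb`
handling omitted; names illustrative):

```sh
#!/bin/sh
domain=pyrocufflink.blue
# The PDC emulator is discovered via its well-known SRV record
pdc=$(dig +short -t SRV "_ldap._tcp.pdc._msdcs.${domain}" | awk '{print $4}')
rsync -a --delete "root@${pdc}:/var/lib/samba/sysvol/" /var/lib/samba/sysvol/
# Re-apply the NT ACLs on the synchronized SYSVOL contents
samba-tool ntacl sysvolreset
```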
Since the contents of the SYSVOL are owned by root, naturally the
synchronization process has to run as root as well. To attempt to limit
the scope of control this would give the process, we use as many of
systemd's sandboxing features as possible. Further, the SSH key pairs
the DCs use to authenticate to one another are restricted to only
running rsync. As such, the `sysvolsync` script itself cannot run
`tdbbackup` to back up `idmap.ldb`. To handle that, I've created a
systemd service and corresponding timer unit to run `tdbbackup`
periodically.
I considered for a long time how to best implement this process, and
although I chose this naïve implementation, I am not exactly happy with
it. Since I do not fully understand *why* keeping
the `idmap.ldb` file in sync is necessary, there are undoubtedly cases
where blindly copying it from the PDC emulator is not correct. There
are definitely cases where the contents of the SYSVOL can be updated on
a DC besides the PDC emulator, but again, we should not run into them
because we don't really use the SYSVOL at all. In the end, I think this
solution is good enough for our needs without being overly complicated.
[0]: https://wiki.samba.org/index.php?title=SysVol_replication_(DFS-R)&oldid=18120
[1]: https://lists.samba.org/archive/samba/2021-November/238370.html
Zigbee2MQTT now has a web GUI, which makes it *way* easier to manage the
Zigbee network. Now that I've got all the Philips Hue bulbs controlled
by Zigbee2MQTT instead of the Hue Hub, having access to the GUI is
awesome.
Gitea package names (e.g. OCI images, etc.) can contain `/` characters.
These are encoded as %2F in request paths. Apache needs to forward
these sequences to the Gitea server without decoding them.
Unfortunately, the `AllowEncodedSlashes` setting, which controls this
behavior, is a per-virtualhost setting that is *not* inherited from the
main server configuration, and therefore must be explicitly set inside
the `VirtualHost` block. This means Gitea needs its own virtual host
definition, and cannot rely on the default virtual host.
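The virtual host ends up looking something like this (server name and
backend port are assumptions):

```apache
<VirtualHost *:443>
    ServerName git.pyrocufflink.blue
    # Not inherited from the main server config; must be set per-vhost
    AllowEncodedSlashes NoDecode
    # nocanon passes the raw (still-encoded) path through to Gitea
    ProxyPass / http://127.0.0.1:3000/ nocanon
</VirtualHost>
```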
I moved the metrics Pi from the red network to the blue network. I
started to get uncomfortable with the firewall changes that were
required to host a service on the red network. I think it makes the
most sense to define the red network as egress only.
The only major change that affects the configuration policy is the
introduction of the `webhook.ALLOWED_HOST_LIST` setting. For some dumb
reason, the default value of this setting *denies* access to machines on
the local network. This makes no sense; why do they expect you to host
your CI or whatever on a *public* network? Of course, the only reason
given is "for security reasons."
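In `app.ini`, the fix is something like this (`private` matches hosts
on private networks):

```ini
[webhook]
ALLOWED_HOST_LIST = private
```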
This work-around is no longer necessary as the default Fedora policy now
covers the Samba DC daemon. It never really worked correctly, anyway,
because Samba doesn't start `winbindd` fast enough for the
`/run/samba/winbindd` directory to be created before systemd spawns the
`restorecon` process, so it would usually fail to start the service the
first time after a reboot.
Sometimes, Frigate crashes in situations that should be recoverable or
temporary. For example, it will fail to start if the MQTT server is
unreachable initially, and does not attempt to connect more than once.
To avoid having to manually restart the service once the MQTT server is
ready, we can configure the systemd unit to enable automatic restarts.
If the *vaultwarden* service terminates unexpectedly, e.g. due to a
power loss, `podman` may not successfully remove the container. We
therefore need to try to delete it before starting it again, or `podman`
will exit with an error because the container already exists.
Both *zwavejs2mqtt* and *zigbee2mqtt* have various bugs that can cause
them to crash in the face of errors that should be recoverable.
Specifically, when there are network errors, the processes do not always
handle these well. Especially during first startup, they tend to crash
instead of retry. Thus, we'll move the retry logic into systemd.
The *zwavejs2mqtt* and *zigbee2mqtt* services need to wait until the
system clock is fully synchronized before starting. If the system clock
is wrong, they may fail to validate the MQTT server certificate.
The *time-sync.target* unit is not started until after services that
sync the clock, e.g. using NTP. Notably, the *chrony-wait.service* unit
delays *time-sync.target* until `chrony waitsync` returns.
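Both concerns fit in one drop-in shared by the two services (values
illustrative):

```ini
[Unit]
After=time-sync.target
Wants=time-sync.target

[Service]
Restart=on-failure
RestartSec=30
```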
The *vlan99* interface needs to be created and activated by
`systemd-networkd` before `dnsmasq` can start and bind to it. Ordering
the *dnsmasq.service* unit after *network.target* and
*network-online.target* should ensure that this is the case.
*libvirt*'s native autostart functionality does not work well for
machines that migrate between hosts. Machines lose their auto-start
flag when they are migrated, and the flag is not restored if they are
migrated back. This makes the feature pretty useless for us.
To work around this limitation, I've added a script that is run during
boot that will start the machines listed in `/etc/vm-autostart`, if they
exist. That file can also insert a delay between starting two machines,
which may be useful to allow services to fully start on one machine
before starting another that may depend on them.
If `/` is mounted read-only, as is usually the case, the Proton VPN
watchdog cannot update the `remote_addrs` configuration file. It needs
to be stored in a directory that is guaranteed to be writable.
The *netboot/basementhud* Ansible role configures two network block
devices for the basement HUD machine:
* The immutable root filesystem
* An ephemeral swap device
The *netboot/jenkins-agent* Ansible role configures three NBD exports:
* A single, shared, read-only export containing the Jenkins agent root
filesystem, as a SquashFS filesystem
* For each defined agent host, a writable data volume for Jenkins
workspaces
* For each defined agent host, a writable data volume for Docker
Agent hosts must have some kind of unique value to identify their
persistent data volumes. Raspberry Pi devices, for example, can use the
SoC serial number.
The *pxe* role configures the TFTP and NBD stages of PXE network
booting. The TFTP server provides the files used for the boot stage,
which may either be a kernel and initramfs, or another bootloader like
SYSLINUX/PXELINUX or GRUB. The NBD server provides the root filesystem,
typically mounted by code in early userspace/initramfs.
The *pxe* role also creates a user group called *pxeadmins*. Users in
this group can publish content via TFTP; they have write-access to the
`/var/lib/tftpboot` directory.
The *tftp* role installs the *tftp-server* package. There is
practically no configuration for the TFTP server. It "just works" out
of the box, as long as its target directory exists.
The *nbd-server* role configures a machine as a Network Block Device
(NBD) server, using the reference `nbd-server` implementation. It
configures a systemd socket unit to listen on the port and accept
incoming connections, and a template service unit for systemd to
instantiate and pass each incoming connection.
The reference `nbd-server` is actually not very good. It does not clean
up closed connections reliably, especially if the client disconnects
unexpectedly. Fortunately, systemd provides the necessary tools to work
around these bugs. Specifically, spawning one process per connection
allows processes to be killed externally. Further, since systemd
creates the listening socket, it can control the keep-alive interval.
By setting this to a rather low value, we can clean up server processes
for disconnected clients more quickly.
Configuration of the server itself is minimal; most of the configuration
is done on a per-export basis using drop-in configuration files. Other
Ansible roles should create these configuration files to configure
application-specific exports. Nothing needs to be reloaded or restarted
for changes to take effect; the next incoming connection will spawn a
new process, which will use the latest configuration file automatically.
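A sketch of the socket unit and an export drop-in (the port is the NBD
default; paths and names are illustrative):

```ini
# nbd.socket: each accepted connection spawns an instance of nbd@.service
[Socket]
ListenStream=10809
Accept=yes
KeepAlive=yes
KeepAliveTimeSec=60

# /etc/nbd-server/conf.d/jenkins-agent.conf
[jenkins-agent-root]
exportname = /srv/nbd/jenkins-agent.squashfs
readonly = true
```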
Frigate needs to be able to connect to the MQTT broker immediately upon
startup or it will crash. Ordering the *frigate.service* unit after
*network-online.target* will help ensure Frigate starts when the system
boots.
The *systemd-resolved* role/playbook ensures the *systemd-resolved*
service is enabled and running, and ensures that the `/etc/resolv.conf`
file is a symlink to the appropriate managed configuration file.
The `-external.url` and `-external.alert.source` command line arguments
and their corresponding environment variables can be used to configure
the "Source" links associated with alerts created by `vmalert`.
The *metricspi* hosts several Victoria Metrics-adjacent applications.
These each expose their own HTTP interface that can be used for
debugging or introspecting state. To make these accessible on the
network, the *victoria-metrics-nginx* role now configures `proxy_pass`
directives for them in its nginx configuration.
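For example (ports are the applications' defaults):

```nginx
location /vmalert/ {
    proxy_pass http://127.0.0.1:8880/;
}
location /alertmanager/ {
    proxy_pass http://127.0.0.1:9093/;
}
```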
The *scrape-collectd* role generates the
`/etc/prometheus/scrape-collectd.yml` file. This file can be read by
Prometheus/Victoria Metrics/vmagent to identify the hosts running
*collectd* with the *write_prometheus* plugin, using the
`file_sd_configs` scrape configuration option.
All hosts in the *collectd-prometheus* group are listed as scrape
targets.
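On the scrape side, the file is consumed like this (fragment):

```yaml
scrape_configs:
  - job_name: collectd
    file_sd_configs:
      - files:
          - /etc/prometheus/scrape-collectd.yml
```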
*mtrcs0.pyrocufflink.red* is a Raspberry Pi CM4 on a Waveshare
CM4-IO-BASE-B carrier board with an NVMe SSD. It runs a custom OS built
using Buildroot, and is not a member of the *pyrocufflink.blue* AD
domain.
*mtrcs0.p.r* hosts Victoria Metrics/`vmagent`, `vmalert`, AlertManager,
and Grafana. I've created a unique group and playbook for it,
*metricspi*, to manage all these applications together.
The `grafana_ldap_root_ca_cert` variable can be used to set the path to
the root
CA certificate (bundle) Grafana uses to validate the certificate
presented by the configured LDAP server. By default, Grafana uses the
system root CA trust store, but this variable can be used in situations
where this is not suitable.
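In Grafana's `ldap.toml`, the variable ends up as (host and path
illustrative):

```toml
[[servers]]
host = "dc0.pyrocufflink.blue"
port = 636
use_ssl = true
root_ca_cert = "/etc/grafana/dch-root-ca-r2.pem"
```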