Promtail is the log-shipping agent for Grafana Loki. For traditional
Linux systems, an RPM package is available from upstream, making
installation fairly simple. Configuration is stored in a YAML file, so
again, it's straightforward to configure via Ansible variables. Really,
the only interesting step is adding the _promtail_ user, which is
created by the RPM package, to the _systemd-journal_ group, so that
Promtail can read the systemd journal files.
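The manual equivalent of that step would be something like:

```sh
# Let Promtail read the journal by joining the systemd-journal group
usermod -a -G systemd-journal promtail
systemctl restart promtail
```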
The *dustinandtabitha.com* website no longer uses *formsubmit* (the time
for RSVP has **long** passed). Removing the configuration means the
file encrypted with Ansible Vault can go away as well.
A few hosts have `AuthorizedKeysCommand` set in their *sshd(8)*
configuration. This was my first attempt at centrally managing SSH
keys, using a script which fetched a list of keys for each user from an
HTTP server. This worked most of the time, but I didn't take good care
of the HTTP server, so the script would fail frequently. Now that all
hosts trust the SSH user CA, there is no longer any need for this
"feature."
The `TrustedUserCAKeys` setting for *sshd(8)* tells the server to accept
any certificates signed by keys listed in the specified file.
The authenticating username has to match one of the principals listed in
the certificate, of course.
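In *sshd_config(5)* terms, this is a single directive (the path here is
illustrative):

```
TrustedUserCAKeys /etc/ssh/user_ca.pub
```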
This role is applied to all machines, via the `base.yml` playbook.
Certificates issued by the user CA managed by SSHCA will therefore be
trusted everywhere. This brings us one step closer to eliminating the
dependency on Active Directory/Samba.
Normal users do not need shell access to the file server, and certainly
should not be allowed to e.g. forward ports through it. Using a `Match`
block, we can apply restrictions to users who do not need administrative
functionality. In this case, we restrict everyone who is not a member
of the *Server Admins* group in the PYROCUFFLINK AD domain.
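A sketch of what such a `Match` block might look like; the group
pattern and the particular restrictions are assumptions, not the exact
policy:

```
# Apply to everyone who is not a member of Server Admins
Match Group "*,!server admins"
    AllowTcpForwarding no
    X11Forwarding no
    PermitTunnel no
```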
The [pam_ssh_agent_auth][0] PAM module authenticates users using keys in
their SSH agent. Using SSH agent forwarding, it can even authenticate
users with keys on a remote system. By adding it to the PAM stack for
`sudo`, we can configure the latter to authenticate users without
requiring a password. For servers especially, this is significantly
more secure than configuring `sudo` not to require a password, while
still being almost as convenient.
For this to work, users need to enable SSH agent forwarding on their
clients, and their public keys have to be listed in the
`/etc/security/sudo.authorized_keys` file. Additionally, although the
documentation suggests otherwise, the `SSH_AUTH_SOCK` environment
variable has to be added to the `env_keep` list in *sudoers(5)*.
[0]: https://github.com/jbeverly/pam_ssh_agent_auth
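A sketch of the two pieces, assuming Fedora's PAM layout:

```
# /etc/pam.d/sudo — try agent keys before falling back to password
auth    sufficient    pam_ssh_agent_auth.so file=/etc/security/sudo.authorized_keys

# sudoers — preserve the forwarded agent socket
Defaults env_keep += "SSH_AUTH_SOCK"
```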
Running `squid -z` as *root* leaves behind temporary files in
`/dev/shm`. When *squid.service* later starts Squid in the proper
SELinux domain, it is unable to access these files and crashes. To
avoid this, we mount a private *tmpfs* over `/dev/shm` so no existing
files are accessible in the service's namespace.
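With a systemd drop-in, that looks something like:

```
# /etc/systemd/system/squid.service.d/override.conf (sketch)
[Service]
TemporaryFileSystem=/dev/shm
```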
Instead of hard-coding a single cache directory and a set of refresh
patterns, the *squid* role can now have custom cache rules defined with
the `squid_cache_dir` variable (which now takes a list of `cache_dir`
settings) and the `squid_refresh_pattern` variable (which takes a list
of refresh patterns). If neither of these are defined, the default
configuration is used.
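For example (values are purely illustrative):

```yaml
squid_cache_dir:
  - ufs /var/spool/squid 10000 16 256
squid_refresh_pattern:
  - '^ftp:  1440  20%  10080'
  - '.      0     20%  4320'
```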
This is a breaking change, since `squid_cache_dir` used to refer to a
directory, and the previous default was to configure one cache path.
There are no extant users of this role, though, so it doesn't really
matter.
The default set of access control lists and access rules for Squid are
fine for allowing hosts on the local network access to the web in
general. For other uses, such as web filtering, more complex rules
may be needed. To that end, the *squid* role now supports some
additional variables. Notably, `squid_acl` contains a map of ACL names
to list entries and `squid_http_access` contains a list of access rules.
If these are set, their corresponding defaults are not included in the
rendered configuration file.
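Something along these lines (names and rules invented for
illustration):

```yaml
squid_acl:
  localnet:
    - src 172.16.0.0/12
  blocked:
    - dstdomain .example.com
squid_http_access:
  - deny blocked
  - allow localnet
  - deny all
```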
Recent versions of NUT have a *nut.target* unit that collects all of the
NUT-related services. Enabling any of the services individually does
effectively nothing, as it only adds the service as a `Wants` dependency
for *nut.target*, and that unit already has dependencies for all of
them. Thus, in order for the service to start at boot, *nut.target* has
to be enabled instead.
In situations where only *nut-monitor* should be enabled, enabling
*nut.target* is inappropriate, since that enables *nut-driver* and
*nut-server* as well. It's not clear why upstream made this change (it
was part of a [HUGE pull request][0]), but restoring the desired
behavior is easy enough by clearing the dependencies from *nut.target*.
Services that we want to start automatically can still be enabled
individually, and will start as long as *nut.target* is enabled.
[0]: https://github.com/networkupstools/nut/pull/330
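Since `Wants=` dependencies cannot be reset from a drop-in, one way to
clear them (a sketch, not necessarily how the role does it) is to
shadow the packaged unit with a copy that has the dependencies
stripped:

```sh
cp /usr/lib/systemd/system/nut.target /etc/systemd/system/nut.target
sed -i '/^Wants=/d' /etc/systemd/system/nut.target
systemctl daemon-reload
```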
The Synapse server can sometimes take a very long time to start.
Increasing the start timeout should keep it from failing to come up when
the machine is under load.
The UniFi controller service can sometimes take a really long time to
start up. This most frequently happens after a full outage, when the VM
hosts are very busy bringing everything up.
The `vm-autostart` script fails with `bad system call` errors when
trying to start libvirt domains. Removing the system call filters works
around this. Ideally, we should figure out exactly which system call is
being rejected and allow it, but that's rather difficult to do and
probably not really worth the effort in this case.
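Assuming the filters come from the unit itself, a drop-in that assigns
the empty string resets them:

```
# /etc/systemd/system/vm-autostart.service.d/override.conf (sketch)
[Service]
SystemCallFilter=
```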
The MinIO service often fails to start from a cold boot. Delaying
starting the service until the network is online, plus increasing the
startup timeout, should help with this. If not, enabling auto restart
will let systemd try to start the service again if it still fails to
come up on time.
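A drop-in along these lines (the values are guesses) captures all three
changes:

```
# /etc/systemd/system/minio.service.d/override.conf (sketch)
[Unit]
Wants=network-online.target
After=network-online.target

[Service]
TimeoutStartSec=5min
Restart=on-failure
RestartSec=10
```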
`upsmon` is the component of [NUT] that monitors (local or remote) UPS
devices and reacts to changes in their state. Notably, it is
responsible for shutting down the system when there is insufficient
power.
I've moved the Dark Chest of Wonders website to run in a container on
Kubernetes. This will keep it from breaking every time the OS on the
web server is updated and the version of Python in Fedora changes.
Recent(-ish) versions of Fedora have a drop-in configuration directory
for `sshd`. This allows applications, etc. to define certain settings
for the SSH server, without having to manage the entire server
configuration. For Gitea specifically, we only need to set a few
settings for the *gitea* user, leaving the remaining settings alone.
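The drop-in might look something like this (the file name is assumed;
the command is the one from Gitea's documentation):

```
# /etc/ssh/sshd_config.d/50-gitea.conf (sketch)
Match User gitea
    AuthorizedKeysCommandUser gitea
    AuthorizedKeysCommand /usr/local/bin/gitea keys -e git -u %u -t %t -k %k
```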
This commit does not include any migration to undo the settings that
were originally set, but that should be as simple as `mv
/etc/ssh/sshd_config.rpmnew /etc/ssh/sshd_config && systemctl reload
sshd`.
The *ssh-host-certs* role, which is now applied as part of the
`base.yml` playbook and therefore applies to all managed nodes, is
responsible for installing the *sshca-cli* package and using it to
request signed SSH host certificates. The *sshca-cli-systemd*
sub-package includes systemd units that automate the process of
requesting and renewing host certificates. These units need to be
enabled and provided the URL of the SSHCA service. Additionally, the
SSH daemon needs to be configured to load the host certificates.
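The sshd side is plain OpenSSH configuration; each host key gets a
corresponding certificate directive (paths illustrative):

```
# /etc/ssh/sshd_config.d/40-host-certs.conf (sketch)
HostKey /etc/ssh/ssh_host_ed25519_key
HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub
```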
The *dch* repository, hosted on *file0.pyrocufflink.blue* and managed by
the *repohost* Ansible role, is where I plan to host RPM packages for
internal use (e.g. *sshca-cli*, *dch-selinux*, etc.). The *dch-yum*
role configures Yum/dnf to use this repository. Roles that need to
install a package from here will list this role as a dependency.
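The resulting repo file would look something like this (the URL and
options are assumptions):

```
# /etc/yum.repos.d/dch.repo (sketch)
[dch]
name=DCH internal packages
baseurl=https://file0.pyrocufflink.blue/repo/$releasever/$basearch/
enabled=1
gpgcheck=0
```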
So it turns out Gitea's RPM package repository feature is less than
stellar. Since each organization/user can only have a single
repository, separating packages by OS would be extremely cumbersome.
Presumably, the feature was designed for projects that only build a
single RPM for each version, but most of my packages need multiple
builds, as they tend to link to system libraries. Further, only the
repository owner can publish to user-scoped repositories, so e.g.
Jenkins cannot publish anything to a repository under my *dustin*
account. This means I would ultimately have to create an Organization
for every OS/version I need to support, and make Jenkins a member of it.
That sounds tedious and annoying, so I decided against using that
feature for internal packages.
Instead, I decided to return to the old ways, publishing packages with
`rsync` and serving them with Apache. It's fairly straightforward to
set this up: we just need a directory with the appropriate permissions
for users to upload packages, and to configure Apache to serve from it.
One advantage Gitea's feature had over a plain directory is its
automatic management of repository metadata. Publishers only have to
upload the RPMs they want to serve, and Gitea handles generating the
index, database, etc. files necessary to make the packages available to
Yum/dnf. With a plain file host, the publisher would need to use
`createrepo` to generate the repository metadata and upload that as
well. For repositories with multiple packages, the publisher would need
a copy of every RPM file locally in order for them to be included in the
repository metadata. This, too, seems like it would be too much trouble
to be tenable, so I created a simple automatic metadata manager for the
file-based repo host. Using `inotifywatch`, the `repohost-createrepo`
script watches for file modifications in the repository base directory.
Whenever a file is added or changed, the directory containing it is
added to a queue. Every thirty seconds, the queue is processed; for
each unique directory in the queue, repository metadata are generated.
This implementation combines the flexibility of a plain file host,
supporting an effectively unlimited number of repositories with
fully-configurable permissions, and the ease of publishing of a simple
file upload.
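A minimal sketch of the idea, using `inotifywait` and `createrepo_c`
(the real script may differ in its details):

```sh
#!/bin/sh
# Watch the repository tree and queue directories whose contents change
REPO_BASE=/srv/repo     # assumed location
QUEUE=/run/repohost.queue

inotifywait -m -r -e close_write -e moved_to --format '%w' "$REPO_BASE" \
    >> "$QUEUE" &

# Every thirty seconds, regenerate metadata for each unique directory
while sleep 30; do
    [ -s "$QUEUE" ] || continue
    sort -u "$QUEUE" | while read -r dir; do
        createrepo_c --update "$dir"
    done
    : > "$QUEUE"
done
```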
The `net cache flush` command does not seem to always work to clear the
identity mapping cache used by winbind. Explicitly moving the file
does, though.
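Concretely, something like this (which cache file matters here is an
assumption; winbind keeps several under `/var/lib/samba`):

```sh
systemctl stop winbind
mv /var/lib/samba/gencache.tdb /var/lib/samba/gencache.tdb.old
systemctl start winbind
```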
On a new DC, the `idmap.ldb` file does not yet exist the first time
`sysvolsync` runs. This causes a syntax error in the condition that
checks the modification timestamp of the file.
Forcing the PDC lookup to use localhost as the DNS server does not work
when first adding a new domain controller, as the `sysvolsync` script
runs before Samba starts. There isn't much advantage to using the local
DNS server over the system-defined server anyway.
The `winbind offline logon` setting seems to cause issues when one of
the domain controllers is offline. Rather than try the other DC,
winbind seems to just "give up" and return `NT_STATUS_NO_SUCH_USER` for
all authentication requests until the offline cache is flushed. There's
not really any reason to use this setting on servers anyway, since they
are always connected to the LAN, as opposed to laptops that may
occasionally disconnect. Let's disable this option in the hopes that it
makes logins more resilient to DC downtime. After all, there's not much
point in having multiple DCs if they all have to be available in order
to log in.
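In *smb.conf(5)* terms:

```
[global]
    winbind offline logon = no
```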
AWS is going to begin charging extra for routable IPv4 addresses soon.
There's really no point in having a relay in the cloud anymore anyway,
since a) all outbound messages are sent via the local relay and b) no
messages are sent to anyone except me.
The *authconfig* package has been gone from Fedora for ages. There's
no reason to have this no-op step any more, especially since it has the
side-effect of making a network request to refresh the dnf cache.
*nvr1.pyrocufflink.blue* has been migrated to Fedora CoreOS. As such,
it is no longer managed by Ansible; its configuration is done via
Butane/Ignition. It is no longer a member of the Active Directory
domain, but it does still run *collectd* and export Prometheus metrics.
The `scrape_collectd_extra_targets` variable can be used to specify a
list of additional targets to scrape, in addition to the hosts in the
*collectd-prometheus* group. This will allow us to scrape hosts that
are not managed by the configuration policy, but still expose Prometheus
metrics via collectd.
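For example (the port assumes collectd's *write_prometheus* default):

```yaml
scrape_collectd_extra_targets:
  - nvr1.pyrocufflink.blue:9103
```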
MinIO is supposed to automatically reload itself when the certificate
changes, but this does not appear to happen in all cases. To ensure the
updated certificate gets used, we need to send SIGHUP to the MinIO
server process.
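With systemd, that can be as simple as (assuming the unit is named
*minio.service*):

```sh
systemctl kill --signal=SIGHUP minio.service
```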
Since Jellyfin is running on the file server, which also hosts a few
other websites that do not define virtual hosts, the HTTP-to-HTTPS
redirect was applied to *all* requests. To avoid this, we simply add a
rewrite condition so that the redirect only applies to requests for
Jellyfin.
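The condition might look something like this (Apache *mod_rewrite*; the
path is an assumption):

```
RewriteCond %{REQUEST_URI} ^/jellyfin
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R,L]
```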
Jellyfin is a multimedia library manager. Clients can browse and stream
music, movies, and TV shows from the server and play them locally
(including in the browser).
Since Ubiquiti only publishes Debian packages for the UniFi Network
controller software, running it on Fedora has historically been nigh
impossible. Fortunately, a modern solution is available: containers.
The *linuxserver.io* project publishes a container image for the
controller software, making it fairly easy to deploy on any host with an
OCI runtime. I briefly considered creating my own image, since theirs
must be run as root, but I decided the maintenance burden would not be
worth it. Using Podman's user namespace functionality, I was able to
work around this requirement anyway.
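The user-namespace trick is roughly this (image name and ports are
illustrative):

```sh
# Map the container's root user to an unprivileged range on the host
podman run -d --name unifi \
    --userns=auto \
    -p 8443:8443 \
    lscr.io/linuxserver/unifi-network-application
```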
When the Home Assistant container restarts, Podman relabels the entire
`/var/lib/homeassistant` directory as `container_file_t`. Since the
*homeassistant* user's home directory is `/var/lib/homeassistant`, its
`~/.ssh` directory is thus also relabeled, preventing the SSH daemon
from accessing it. Since Home Assistant itself does not need access to
this path, we can tell systemd to mount an empty tmpfs filesystem there
in the service unit's mount namespace. This way, when Podman relabels
the directory, it will change the label of the tmpfs mount point instead
of the actual directory.
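One way to do that is `TemporaryFileSystem=` in a drop-in for the
container's unit (the unit name is assumed):

```
# /etc/systemd/system/homeassistant.service.d/override.conf (sketch)
[Service]
TemporaryFileSystem=/var/lib/homeassistant/.ssh
```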
Most of the Synapse server's state is in its SQLite database. It also
has a `media_store` directory that needs to be backed up, though.
In order to back up the SQLite database while the server is running, the
database must be in "WAL mode." Synapse leaves the database in SQLite's
default "rollback journal" mode, which can block other processes from
accessing the database, even for read-only operations.
To change the journal mode:
```sh
sudo systemctl stop synapse
sudo -u synapse sqlite3 /var/lib/synapse/homeserver.db 'PRAGMA journal_mode=WAL;'
sudo systemctl start synapse
```