The `deploy.sh` script ensures the execution environment is correct by
configuring the Ansible Vault secret, unlocking the `rbw` vault, and
requesting an SSH client certificate. It then runs the specified
end-to-end deployment script from the `deploy` directory.
*db0.pyrocufflink.blue* will be the primary server in the new PostgreSQL
database cluster. We're starting with Fedora 39 so we can have
PostgreSQL 15, to match the version managed by the Postgres Operator in
the Kubernetes cluster right now.
I've actually had this playbook for a _long_ time, just never bothered
to commit it. It's useful the very first time Ansible is run against a
managed node, to configure all the basic stuff.
For the longest time, whenever I needed to create a new virtual machine,
I just used `Ctrl+R` to find the last `virt-install` command I had run
and tweaked it for the new machine. At some point, my `~/.zsh_history`
overflowed, though, so the command I had been running got lost. To
avoid this silliness in the future, I've created a script that runs
`virt-install` for me. As a bonus, it has some configuration flags for
the parameters that often vary between machines. For most machines,
though, the script can be run as simply as `newvm.sh name`.
I am going to use the *postgresql* group for the dedicated database
servers. The configuration for those machines will be quite a bit
different from that of the one existing member of the group, the
Nextcloud server. Rather than undefine/override all
the group-level settings at the host level, I have removed the Nextcloud
server from the *postgresql* group, and updated the `nextcloud.yml`
playbook to apply the *postgresql-server* role itself.
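A minimal sketch of what that looks like in the playbook (the play shown
here is illustrative; only the *postgresql-server* role entry reflects
the actual change):

```yaml
# nextcloud.yml (sketch)
- name: Configure the Nextcloud server
  hosts: nextcloud
  roles:
    # applied directly, now that the host is no longer in the
    # *postgresql* group
    - postgresql-server
    - nextcloud
```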
Eventually, I want to move the Nextcloud database to the central
database servers. At that point, I will remove the *postgresql-server*
role from the `nextcloud.yml` playbook.
The `datavol.yml` playbook can provision one or more data volumes on
a managed node, using the definitions in the `data_volumes` variable.
This variable must contain a list of dictionaries with the following
keys (see the example after the list):
* `dev`: The block device where the data volume is stored (e.g.
`/dev/vdb`)
* `fstype`: The type of filesystem to create on the block device
* `mountpoint`: The location in the filesystem hierarchy where the
volume is mounted
* `opts`: (Optional) options to pass to the `mkfs` program when
formatting the device
* `mountopts`: (Optional) additional options to pass to the `mount`
program when mounting the filesystem
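For example, a host with a dedicated database disk might define
something like this (device, filesystem, and mount point are just
examples):

```yaml
data_volumes:
  - dev: /dev/vdb
    fstype: xfs
    mountpoint: /var/lib/pgsql
    mountopts: noatime
```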
This role installs `wal-g` from the DCH Yum repository, and creates a
configuration file for it in `/etc/postgresql`. Additionally, it
installs a custom SELinux policy module that allows `wal-g` to run in
the `postgresql_t` domain (i.e. when spawned by the PostgreSQL server).
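The contents of that file are just normal `wal-g` settings; a
hypothetical example, assuming backups go to an S3-compatible store
(file name, bucket, and endpoint are placeholders):

```yaml
# /etc/postgresql/walg.yaml (hypothetical)
WALG_S3_PREFIX: s3://postgres-backups/db0
AWS_ENDPOINT: https://s3.example.com
WALG_COMPRESSION_METHOD: lz4
PGHOST: /var/run/postgresql
```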
This role can be used to get a server certificate for PostgreSQL from an
ACME CA using `certbot`. It fetches the initial certificate and copies
it to the PostgreSQL configuration directory. It also sets up a
post-renewal hook script that copies updated certificates and reloads
the server.
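Roughly, the hook amounts to something like this (certificate file
names, paths, and the service name are assumptions, not the role's
exact contents):

```yaml
- name: Install certbot deploy hook for PostgreSQL
  ansible.builtin.copy:
    dest: /etc/letsencrypt/renewal-hooks/deploy/postgresql
    mode: '0755'
    content: |
      #!/bin/sh
      # Copy the renewed certificate and key into the PostgreSQL
      # configuration directory, then reload the server.
      install -o postgres -g postgres -m 0644 \
          "$RENEWED_LINEAGE/fullchain.pem" /etc/postgresql/server.crt
      install -o postgres -g postgres -m 0600 \
          "$RENEWED_LINEAGE/privkey.pem" /etc/postgresql/server.key
      systemctl reload postgresql
```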
This rewrite brings a lot of improvements and new functionality to the
*postgresql-server* role. The most noticeable change is the
introduction of the `postgresql_config_dir` variable, which can be used
to specify a different location for the PostgreSQL server configuration
files, separate from the data storage directory. By default, the
variable is set to `/etc/postgresql`. For some situations, it may be
necessary to disable this functionality, which can be accomplished by
setting the value of `postgresql_config_dir` to the same path as
`pgdata_dir`. Note also that the `postgresql-setup` tool, and the
corresponding `postgresql-check-db-dir` script, which are included in
the Fedora/Red Hat distribution of PostgreSQL, do not support having
separate configuration and data directories, so their use has to be
disabled.
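In host or group variables, that looks like this (the second form is
the opt-out described above):

```yaml
# Default: configuration lives apart from the data directory
postgresql_config_dir: /etc/postgresql

# To disable the split layout, point it back at the data directory:
# postgresql_config_dir: "{{ pgdata_dir }}"
```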
Another significant improvement is to how the `postgresql.conf` file
is generated. Any setting can now be set, using the `postgresql_config`
variable; any key in this dictionary will be written to the
configuration file. Note that the configuration file syntax requires
single quotes around string values, so these have to be included in the
YAML value.
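For example (the settings themselves are illustrative; note the
embedded single quotes on string values):

```yaml
postgresql_config:
  listen_addresses: "'*'"
  max_connections: 200
  shared_buffers: "'2GB'"
  log_line_prefix: "'%m [%p] %q%u@%d '"
```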
To support deploying standby servers, the role now supports running a
command to restore from a backup instead of running `initdb`.
Additionally, the `postgresql_standby` variable can be set to `true`
to create the `recovery.signal` file, configuring the server as a
standby.
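A standby's variables might therefore look something like this (the
restore-command variable name is hypothetical; the role just needs a
command to run in place of `initdb`):

```yaml
postgresql_standby: true
# hypothetical variable name; restores the latest base backup with wal-g
postgresql_restore_command: wal-g backup-fetch /var/lib/pgsql/data LATEST
```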
Since the `samba-dc.yml` playbook executes on a single host at a time,
if the fact cache is not current, only the facts for the current host
will be available. This causes some tasks, especially the
configuration of the trusted SSH host keys for `sysvolsync`, to use
incorrect data. To avoid this, we need to explicitly gather facts for
all of the domain controllers before starting to configure any of them.
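One way to do that is a fact-gathering play over the whole group at the
top of the playbook; a sketch (group and role names are assumptions):

```yaml
# samba-dc.yml (sketch)
- name: Gather facts for every domain controller first
  hosts: samba-dc
  gather_facts: true

- name: Configure domain controllers one at a time
  hosts: samba-dc
  serial: 1
  roles:
    - samba-dc
```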
Sending SIGHUP to the main PID (i.e. conmon) ends up stopping the
service. What we really want is to send the signal to the main PID _inside_
the container. We can achieve this by using `podman kill` instead of
`kill`.
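For example, as an Ansible handler (the container name is a
placeholder; the same command works just as well in an `ExecReload=`
line):

```yaml
- name: reload container
  ansible.builtin.command: podman kill --signal HUP myapp
```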
*k8s-amd64-n0.pyrocufflink.blue*, *k8s-amd64-n1.pyrocufflink.blue*, and
*k8s-amd64-n2.pyrocufflink.blue*, which ran Fedora Linux, have been
replaced by *k8s-amd64-n4.pyrocufflink.blue*,
*k8s-amd64-n5.pyrocufflink.blue*, and *k8s-amd64-n6.pyrocufflink.blue*,
respectively. The new machines run Fedora CoreOS, and are thus not
managed by the Ansible configuration policy.
To improve the performance of persistent volumes accessed directly from
the Synology by Kubernetes pods, I've decided to expose the storage
network to the Kubernetes worker node VMs. This way, iSCSI traffic does
not have to go through the firewall.
I chose not to use the physical interfaces that are already directly
connected to the storage network for this, for two reasons: 1) I like
the physical separation of concerns and 2) it would add complexity to
the setup by introducing a bridge on top of the existing bond.
Nextcloud usually (always?) wants the `occ upgrade` command to be run
after an update. If the *nextcloud* package gets updated along with
the rest of the OS, Nextcloud will be down until I manually run that
command hours/days later.
Without making the firewall changes permanent, when a server tries to
renew its certificate after rebooting, it will fail as the ACME server
cannot connect to the HTTP port.
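Assuming firewalld, the fix is just to make the rule persistent as well
as immediate; a sketch with the `ansible.posix.firewalld` module:

```yaml
- name: Allow inbound HTTP for the ACME HTTP-01 challenge
  ansible.posix.firewalld:
    service: http
    state: enabled
    immediate: true
    permanent: true   # survives reboots, so renewals keep working
```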
Files in the Nextcloud trash bin do not need to be backed up. They are
often large (i.e. my Signal backups), and presumably, they are not
needed anyway; why would they be in the trash otherwise?
Sometimes, the `collectd-version` script crashes or fails to start at
boot. Configuring systemd to automatically restart it will ensure that
it's always running, so machines' versions are consistently inventoried.
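A unit extension along these lines is enough (the unit name follows the
script name; the restart policy values are assumptions):

```yaml
- name: Restart collectd-version automatically
  ansible.builtin.copy:
    dest: /etc/systemd/system/collectd-version.service.d/restart.conf
    content: |
      [Service]
      Restart=on-failure
      RestartSec=30
```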
New AD DC servers run Fedora 40. Their LDAP server certificates are
issued by *step-ca* via ACME, signed by *dch-ca r2*.
I've changed the naming convention for domain controllers again. I
found the random sequence of characters to be too difficult to remember
and identify. Using a short random word (chosen from the EFF word list
used by Diceware) should be a lot nicer. These names are chosen by the
`create-dc.sh` script.
Since I don't like to update Samba Active Directory Domain Controller
servers in-place (it's never worked as well as you would think it
should), I want the process for replacing them to be as automated as
possible. To that end, I've written `create-dc.sh`, which handles the
whole process of creating and configuring a new ADDC VM. The only
things it doesn't do are transfer the FSMO roles and demote existing DC
servers.
Installing Fedora on a bunch of machines, simultaneously or in rapid
succession, can be painfully slow, as several large files need to be
downloaded. To speed this up, we download those files via the proxy and
cache them on the proxy server.
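Conceptually, that just means letting Squid hold on to large objects; a
hypothetical snippet (sizes, patterns, and the drop-in path are
assumptions):

```yaml
- name: Cache large installer files on the proxy
  ansible.builtin.copy:
    dest: /etc/squid/conf.d/install-cache.conf
    content: |
      # allow large objects (ISOs, images, RPMs) into the cache
      maximum_object_size 8 GB
      # keep installer payloads cached even if the origin says otherwise
      refresh_pattern -i \.(iso|img|rpm)$ 10080 90% 43200 override-expire
```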
As a side-effect, the proxy needs to allow access to the Kickstart
"server" (i.e. my workstation, at least for now), since Anaconda will
use the configured proxy for everything it downloads.
*unifi2.pyrocufflink.blue*, which is connected to the management
network, can only access the Internet via the proxy. In order for
Zincati/`rpm-ostree` to automatically update the machine, the proxy
needs to allow access to the FCOS update servers.
The `squid.service` systemd unit now correctly initializes the
configured cache directories, so we do not need to do it explicitly
before starting the server.
The *samba-cert* role configures `lego` and HAProxy to obtain an X.509
certificate via the ACME HTTP-01 challenge. HAProxy is necessary
because LDAP server certificates need to have the apex domain in their
SAN field, and the ACME server may contact *any* domain controller
server with an A record for that name. HAProxy will forward the
challenge request on to the first available host on port 5000, where
`lego` is listening to provide validation.
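A sketch of the relevant HAProxy configuration (the drop-in path and
server names are placeholders; the real setup resolves the DCs
dynamically, as described below):

```yaml
- name: HAProxy ACME challenge forwarding (sketch)
  ansible.builtin.copy:
    dest: /etc/haproxy/conf.d/acme.cfg
    content: |
      frontend http-in
          bind :80
          acl acme path_beg /.well-known/acme-challenge/
          use_backend acme if acme

      backend acme
          # forward to the first DC whose lego responder is up
          balance first
          server dc-a dc-a.pyrocufflink.blue:5000 check
          server dc-b dc-b.pyrocufflink.blue:5000 check
```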
Issuing certificates this way has a couple of advantages:
1. No need for the wildcard certificate for the *pyrocufflink.blue*
domain any more
2. Renewals are automatic and handled by the server itself rather than
by Ansible via a scheduled Jenkins job
Item (2) is particularly interesting because it avoids the bi-monthly
issue where replacing the LDAP server certificate and restarting Samba
causes the Jenkins job to fail.
Naturally, for this to work correctly, all LDAP client applications
need to trust the certificates issued by the ACME server, in this case
*DCH Root CA R2*.
HAProxy uses a special configuration block, `resolvers`, to specify
how it should look up names in DNS. This configuration is used for
e.g. dynamically discovering backend servers via DNS A or SRV records.
Since resolvers are global, they need to be specified in the global
configuration file, rather than a per-application drop-in.
We will use this functionality for the ACME HTTP-01 challenge solver
for Samba AD domain controllers.
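A sketch of the resolvers block and how a backend might use it
(addresses, names, and the file path are placeholders):

```yaml
- name: HAProxy global resolvers configuration (sketch)
  ansible.builtin.copy:
    dest: /etc/haproxy/conf.d/00-global.cfg   # path is an assumption
    content: |
      resolvers internal
          nameserver local 127.0.0.1:53
          hold valid 10s

      # a backend can then discover the DCs from the apex A records, e.g.:
      # backend acme
      #     balance first
      #     server-template dc 4 pyrocufflink.blue:5000 resolvers internal check
```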
The current version of *haproxy* packaged in Fedora already enables
configuration via fragments in a drop-in directory, though it uses
a different path by default. I still like separating the global
configuration from the defaults, though, and keeping the main
`haproxy.cfg` file empty.
*dnf-automatic* is an add-on for `dnf` that performs scheduled,
automatic updates. It works pretty much how I would want it to: it is
triggered by a systemd timer, sends email reports upon completion, and
only reboots for kernel (et al.) updates.
In its default configuration, `dnf-automatic.timer` fires every day. I
want machines to update weekly, but I want them to update on different
days (so as to avoid issues if all the machines reboot at once). Thus,
the _dnf-automatic_ role uses a systemd unit extension to change the
schedule. The day-of-the-week is chosen pseudo-randomly based on the
host name of the managed system.
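The extension is tiny; a sketch using Ansible's seeded `random` filter
to pick a stable per-host weekday (the time of day is an assumption):

```yaml
- name: Spread dnf-automatic runs across the week
  ansible.builtin.copy:
    dest: /etc/systemd/system/dnf-automatic.timer.d/schedule.conf
    content: |
      [Timer]
      OnCalendar=
      OnCalendar={{ ['Mon','Tue','Wed','Thu','Fri','Sat','Sun'] | random(seed=inventory_hostname) }} *-*-* 06:00:00
```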
Even with `Network=host`, Podman tries to write to
`/etc/containers/network` for some reason. Fortunately, it doesn't
actually need to, so we can trick it into working by mounting an empty
*tmpfs* filesystem there.
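In the Quadlet unit, that's a one-line addition (the unit and image
names here are examples):

```yaml
- name: Example Quadlet container definition (sketch)
  ansible.builtin.copy:
    dest: /etc/containers/systemd/myapp.container
    content: |
      [Container]
      Image=registry.example.com/myapp:latest
      Network=host
      # Podman wants to write here even with host networking; an empty
      # tmpfs keeps it happy without touching the host's configuration
      Tmpfs=/etc/containers/network
```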
The summer 2024 enrollment form is more complicated than the other
forms on the HLC site, as it integrates directly with Invoice Ninja. As
such, it's handled by a different backend, which runs in Kubernetes.
The BIND server on the firewall is configured to write query logs and
RPZ rewrite logs to files under `/var/log/named`. We can scrape these
logs with Promtail and use the messages for analytics on the DNS-based
firewall, etc.
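On the Promtail side, this is a plain file scrape job; a sketch (label
names are my own, the path comes from the BIND configuration):

```yaml
scrape_configs:
  - job_name: named
    static_configs:
      - targets: [localhost]
        labels:
          job: named
          __path__: /var/log/named/*.log
```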
The UniFi controller can act as a syslog server, receiving log messages
from managed devices and writing them to files in the `logs/remote`
directory under the application data directory. We can scrape these
logs, in addition to the logs created by the UniFi server itself, with
Promtail to get more information about what's happening on the network.
The *promtail* service runs as an unprivileged user by default, which is
fine in most cases (i.e. when scraping only the Journal), but may not
always be sufficient to read logs from other files. Rather than run
Promtail as root in these cases, we can assign it the
CAP_DAC_READ_SEARCH capability, which will allow it to read any file,
but does not grant it any of root's other privileges.
To enable this functionality, the `promtail_dac_read_search` Ansible
variable can be set to `true` for a host or group. This will create a
systemd unit configuration extension that configures the service to have
the CAP_DAC_READ_SEARCH capability in its ambient set.
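The extension itself is a single directive (the drop-in file name is
arbitrary):

```yaml
- name: Allow Promtail to read any log file
  ansible.builtin.copy:
    dest: /etc/systemd/system/promtail.service.d/dac_read_search.conf
    content: |
      [Service]
      AmbientCapabilities=CAP_DAC_READ_SEARCH
  when: promtail_dac_read_search | default(false)
```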
*unifi1.pyrocufflink.blue* requires a proxy to access Yum repositories
on the Internet, so it has the `proxy` setting configured globally. The
proxy does NOT allow access to internal resources, however. The
internal repository is directly accessible by that machine, so that
repository needs to be configured to bypass the proxy.
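With the `yum_repository` module, that amounts to overriding the global
proxy for just that repository (the repo name and URL are examples;
`_none_` is the module's documented way to disable the proxy per-repo):

```yaml
- name: Internal repository, reachable without the proxy
  ansible.builtin.yum_repository:
    name: dch-internal
    description: DCH internal Yum repository
    baseurl: https://yum.example.com/fedora/$releasever/
    proxy: _none_    # bypass the globally configured proxy
```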
The Squid "cache log" is where it writes general debug and error
messages. It is distinct from the "access log," which is where it
writes the status of every proxy request. We already had the latter
configured to go to syslog by default (so it would be captured in the
journal), but missed the former.