The *pxe* role configures the TFTP and NBD stages of PXE network
booting. The TFTP server provides the files used for the boot stage,
which may either be a kernel and initramfs, or another bootloader like
SYSLINUX/PXELINUX or GRUB. The NBD server provides the root filesystem,
typically mounted by code in early userspace/initramfs.
The *pxe* role also creates a user group called *pxeadmins*. Users in
this group can publish content via TFTP; they have write-access to the
`/var/lib/tftpboot` directory.
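Roughly, the relevant tasks might look like this (the setgid, group-writable mode is an illustrative choice, not necessarily what the role actually sets):

```
- name: create the pxeadmins group
  group:
    name: pxeadmins
    state: present

- name: give pxeadmins write access to the TFTP root
  file:
    path: /var/lib/tftpboot
    state: directory
    owner: root
    group: pxeadmins
    # setgid + group-writable so published files stay group-owned
    mode: "2775"
```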
The *tftp* role installs the *tftp-server* package. There is
practically no configuration for the TFTP server. It "just works" out
of the box, as long as its target directory exists.
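For reference, a minimal sketch of what the role amounts to, assuming the Fedora packaging where the server is socket-activated via *tftp.socket*:

```
- name: install the TFTP server
  package:
    name: tftp-server
    state: present

- name: enable the TFTP socket unit
  systemd:
    name: tftp.socket
    enabled: true
    state: started
```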
The *nbd-server* role configures a machine as a Network Block Device
(NBD) server, using the reference `nbd-server` implementation. It
configures a systemd socket unit to listen on the port and accept
incoming connections, and a template service unit that systemd
instantiates for each incoming connection, passing the connection to
the spawned process.
The reference `nbd-server` is actually not very good. It does not clean
up closed connections reliably, especially if the client disconnects
unexpectedly. Fortunately, systemd provides the necessary tools to work
around these bugs. Specifically, spawning one process per connection
allows processes to be killed externally. Further, since systemd
creates the listening socket, it can control the keep-alive interval.
By setting this to a rather low value, we can clean up server processes
for disconnected clients more quickly.
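As a sketch, the socket unit the role installs might look something like this; the unit name and keep-alive timings are illustrative (10809 is the standard NBD port), and `Accept=yes` is what makes systemd spawn one `nbd-server@.service` instance per connection:

```
- name: install the nbd-server socket unit
  copy:
    dest: /etc/systemd/system/nbd-server.socket
    content: |
      [Socket]
      ListenStream=10809
      # spawn one nbd-server@.service instance per connection
      Accept=yes
      # aggressive keep-alives so dead client connections are noticed quickly
      KeepAlive=yes
      KeepAliveTimeSec=60
      KeepAliveIntervalSec=10
      KeepAliveProbes=3

      [Install]
      WantedBy=sockets.target
```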
Configuration of the server itself is minimal; most of the configuration
is done on a per-export basis using drop-in configuration files. Other
Ansible roles should create these configuration files to configure
application-specific exports. Nothing needs to be reloaded or restarted
for changes to take effect; the next incoming connection will spawn a
new process, which will use the latest configuration file automatically.
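For example, another role might drop in an export definition like this; the export name, image path, and drop-in directory are illustrative, and the main server configuration is assumed to pull the directory in via `includedir`:

```
- name: define the NBD export for the PXE root image
  copy:
    dest: /etc/nbd-server/conf.d/pxe-root.conf
    content: |
      [pxe-root]
      exportname = /srv/nbd/pxe-root.img
      readonly = true
```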
The `selinux_permissive` module fails on hosts that do not have SELinux
activated. We must skip running this task on those machines to avoid
fatal errors.
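The usual pattern is to guard the task with the SELinux fact, for example (using the `collectd_t` domain mentioned later in these notes):

```
- name: mark the collectd domain permissive
  selinux_permissive:
    name: collectd_t
    permissive: true
  # skip hosts where SELinux is not active
  when: ansible_selinux.status == 'enabled'
```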
Frigate needs to be able to connect to the MQTT broker immediately upon
startup or it will crash. Ordering the *frigate.service* unit after
*network-online.target* helps ensure Frigate starts successfully when
the system boots.
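A systemd drop-in is enough for this; something like the following (the drop-in file name is arbitrary, and a daemon-reload is still needed afterwards):

```
- name: create the drop-in directory for frigate.service
  file:
    path: /etc/systemd/system/frigate.service.d
    state: directory

- name: start Frigate only once the network is online
  copy:
    dest: /etc/systemd/system/frigate.service.d/network-online.conf
    content: |
      [Unit]
      Wants=network-online.target
      After=network-online.target
```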
The *systemd-resolved* role/playbook ensures the *systemd-resolved*
service is enabled and running, and ensures that the `/etc/resolv.conf`
file is a symlink to the appropriate managed configuration file.
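In Ansible terms, that boils down to roughly the following; whether the stub file or `/run/systemd/resolve/resolv.conf` is the right symlink target depends on the host:

```
- name: enable and start systemd-resolved
  systemd:
    name: systemd-resolved.service
    enabled: true
    state: started

- name: point /etc/resolv.conf at the resolved-managed stub file
  file:
    path: /etc/resolv.conf
    src: /run/systemd/resolve/stub-resolv.conf
    state: link
    force: true
```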
In order for Jenkins to apply configuration policy on machines that are
not members of the *pyrocufflink.blue* domain, it needs to use an SSH
private key for authentication.
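This can be done with Ansible's standard connection variables in the inventory or group variables for those hosts; the user name and key path here are only illustrative:

```
# group_vars for hosts outside the pyrocufflink.blue domain
ansible_user: ansible
ansible_ssh_private_key_file: /var/lib/jenkins/.ssh/id_ed25519
```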
The `-external.url` and `-external.alert.source` command line arguments
and their corresponding environment variables can be used to configure
the "Source" links associated with alerts created by `vmalert`.
The firewall hardware is too slow to run the *prometheus_speedtest*
program. It always showed *way* lower speeds than were actually
available. I've moved the service to the Kubernetes cluster and it
works a lot better there.
The *metricspi* host runs several Victoria Metrics-adjacent applications.
These each expose their own HTTP interface that can be used for
debugging or introspecting state. To make these accessible on the
network, the *victoria-metrics-nginx* role now configures `proxy_pass`
directives for them in its nginx configuration.
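A rough sketch of what the rendered configuration might contain, not the actual template; the ports are the upstream defaults and the server name is a placeholder:

```
- name: proxy the Victoria Metrics-adjacent applications
  copy:
    dest: /etc/nginx/conf.d/metrics.conf
    content: |
      server {
          listen 80;
          server_name metrics.example.org;

          # ports are the upstream defaults; adjust to match the deployment
          location /vmalert/ {
              proxy_pass http://127.0.0.1:8880;
          }
          location /alertmanager/ {
              proxy_pass http://127.0.0.1:9093;
          }
      }
```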
The *scrape-collectd* role generates the
`/etc/prometheus/scrape-collectd.yml` file. This file can be read by
Prometheus/Victoria Metrics/vmagent to identify the hosts running
*collectd* with the *write_prometheus* plugin, using the
`file_sd_configs` scrape configuration option.
All hosts in the *collectd-prometheus* group are listed as scrape
targets.
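The generated file uses the standard file-based service discovery format; with the *write_prometheus* default port (9103), it looks roughly like this (host names are placeholders):

```
- targets:
    - host1.example.org:9103
    - host2.example.org:9103
```

The scrape configuration then references it like so:

```
scrape_configs:
  - job_name: collectd
    file_sd_configs:
      - files:
          - /etc/prometheus/scrape-collectd.yml
```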
*mtrcs0.pyrocufflink.red* is a Raspberry Pi CM4 on a Waveshare
CM4-IO-BASE-B carrier board with an NVMe SSD. It runs a custom OS built
using Buildroot, and is not a member of the *pyrocufflink.blue* AD
domain.
*mtrcs0.p.r* hosts Victoria Metrics/`vmagent`, `vmalert`, AlertManager,
and Grafana. I've created a unique group and playbook for it,
*metricspi*, to manage all these applications together.
The `grafana_ldap_root_ca_cert` variable sets the path to the root
CA certificate (bundle) Grafana uses to validate the certificate
presented by the configured LDAP server. By default, Grafana uses the
system root CA trust store, but this variable can be used in situations
where this is not suitable.
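For example (the path is illustrative):

```
grafana_ldap_root_ca_cert: /etc/pki/tls/certs/internal-ca.pem
```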
`vmalert` is a component of Victoria Metrics. It handles alerting and
recording rules, periodically executing queries and dispatching alerts
or writing aggregated data back to the TSDB.
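Rule files use the familiar Prometheus format; an illustrative example:

```
groups:
  - name: availability
    interval: 1m
    rules:
      - alert: TargetDown
        expr: up == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: '{{ $labels.instance }} is not responding to scrapes'
      - record: job:up:count
        expr: count(up == 1) by (job)
```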
The Prometheus *blackbox_exporter* is a tool that can perform arbitrary,
generic ICMP, TCP, or HTTP "probes" against external services. This is
useful for applications that do not export their own metrics, and for
evaluating the health of protocol-level operations (e.g. TLS
certificate expiration).
The *blackbox-exporter* Ansible role installs and configures the
Blackbox Exporter on the target system. It fetches the specified binary
release from GitHub and copies it to the remote machine. It also
creates a systemd unit and configures the Blackbox Exporter's "modules"
from the `blackbox_modules` Ansible variable.
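Assuming `blackbox_modules` maps directly onto the `modules:` section of `blackbox.yml`, a value might look like:

```
blackbox_modules:
  http_2xx:
    prober: http
    timeout: 5s
  icmp:
    prober: icmp
```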
Some hosts may not need this plugin, or may not have it installed.
Notably, it is not needed or used on my systems based on Buildroot,
since the only current use case for it is to keep track of the Fedora
version.
There are a few minor differences between the way Fedora and Buildroot
package *nginx*:
* Fedora uses a user named *nginx* while Buildroot uses *www-data*
* Buildroot uses a Debian-like configuration layout (with
`sites-enabled` and `modules-enabled` directories)
This commit adjusts the *nginx* Ansible role to compensate for these
differences, eschewing Buildroot's configuration layout for the one used
by Fedora/Red Hat.
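For the user difference, the simplest approach is a variable with a Fedora default that gets overridden for Buildroot hosts; the variable name here is only illustrative:

```
# group_vars for Buildroot-based hosts: override the role default (nginx)
nginx_user: www-data
```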
The *victoria-metrics* role deploys a single-server instance of the
Victoria Metrics time series database server. It installs the selected
version by downloading the binary release from GitHub and copying it to
`/usr/local/sbin` on the managed node. Scrape configuration is optional
and can be specified with the `scrape_configs` variable.
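For instance, a minimal `scrape_configs` value that has Victoria Metrics scrape its own metrics (8428 is the default listen port):

```
scrape_configs:
  - job_name: victoria-metrics
    static_configs:
      - targets:
          - 127.0.0.1:8428
```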
Tasks that configure the SELinux policy obviously only make sense if the
host uses SELinux. Similarly, if the host does not use FirewallD,
configuring firewall rules doesn't work.
Although the `collectd-version` script is fairly generic and *should*
work for most Linux distributions, it cannot be installed on machines
that have an immutable root filesystem, e.g. Buildroot-based systems.
For Buildroot-based systems in particular, tracking the OS version makes
very little sense anyway. If we do end up with hosts running an OS
besides either Fedora or Buildroot, we can re-evaluate how to deploy
this feature.
The `/etc/collectd.d` directory is created by the RPM package on
machines running a Red Hat-based Linux distribution, but it may not
always be present on other machines.
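Hence the role needs something like:

```
- name: ensure the collectd configuration directory exists
  file:
    path: /etc/collectd.d
    state: directory
    mode: "0755"
```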
In addition to ignoring particular types of filesystems, e.g. OverlayFS,
we can also ignore filesystems by their mount point. This could be
useful, for example, for bind-mounted directories, such as those used on
Kubernetes nodes.
By default, the *df* plugin for collectd, which monitors filesystem
usage, collects data about all mounted filesystems. It can be
configured to ignore some filesystems, either by mount point, device, or
filesystem type. We will use this capability to avoid collecting data
about OverlayFS mounts, because, by definition, they do not represent a
real filesystem, but are composed of one or more other mounted
filesystems. Collecting
data about these just creates useless metrics, especially on machines
that run containers.
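The corresponding collectd configuration would look roughly like this; the mount point shown is only an example, and the `MountPoint` option also accepts regular expressions when the value is wrapped in slashes:

```
- name: configure the collectd df plugin
  copy:
    dest: /etc/collectd.d/df.conf
    content: |
      LoadPlugin df
      <Plugin df>
        FSType "overlay"
        # example of ignoring a bind-mounted directory by mount point
        MountPoint "/var/lib/kubelet/pods"
        # ignore the entries selected above instead of everything else
        IgnoreSelected true
      </Plugin>
```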
Setting the `remount_state` variable to `rw` by default will allow the
`remount.yml` playbook to be "chained" with other playbooks, e.g.:
```
ansible-playbook -l kubelet remount.yml collectd.yml -b
```
Some machines, such as the nodes in the Kubernetes cluster, do not use
*firewalld*. For these machines, we need to skip the `firewalld` tasks,
as they will fail. Setting the `host_uses_firewalld` variable to
`False` for these machines accomplishes this.
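The firewalld tasks then just check the variable, e.g. (this particular task is illustrative):

```
- name: open the collectd write_prometheus port
  firewalld:
    port: 9103/tcp
    permanent: true
    immediate: true
    state: enabled
  when: host_uses_firewalld | default(True)
```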
There is no specific playbook or role for Kubernetes. All OS
configuration is done at install time via kickstart scripts, and
deploying Kubernetes itself is done (manually) using `kubeadm init` and
`kubeadm join`.
It seems with each new release of Fedora, some feature or other of
*collectd* gets broken. In Fedora 36, the *interfaces* plugin does not
seem to work reliably, and the *md* plugin logs a *lot* of errors.
While these issues are investigated upstream, we either need to manage
our own policy for collectd or mark the `collectd_t` domain permissive.
I chose the latter because I'm lazy and I don't consider collectd to be
that big of a threat to security.