In order for Jenkins to apply configuration policy on machines that are
not members of the *pyrocufflink.blue* domain, it needs to use an SSH
private key for authentication.
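For illustration, the key could be supplied via Ansible connection
variables in the inventory; the user name and key path below are
assumptions, not the actual values:

```
mtrcs0.pyrocufflink.red:
  ansible_user: root
  ansible_ssh_private_key_file: /var/lib/jenkins/.ssh/id_ed25519
```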
The `-external.url` and `-external.alert.source` command line arguments
and their corresponding environment variables can be used to configure
the "Source" links associated with alerts created by `vmalert`.
The firewall hardware is too slow to run the *prometheus_speedtest*
program. It always showed *way* lower speeds than were actually
available. I've moved the service to the Kubernetes cluster and it
works a lot better there.
The *metricspi* hosts several Victoria Metrics-adjacent applications.
These each expose their own HTTP interface that can be used for
debugging or introspecting state. To make these accessible on the
network, the *victoria-metrics-nginx* role now configures `proxy_pass`
directives for them in its nginx configuration.
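The emitted configuration looks roughly like this (the path and port
here are illustrative; 8880 is `vmalert`'s default listen port):

```
location /vmalert/ {
    proxy_pass http://127.0.0.1:8880/;
}
```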
The *scrape-collectd* role generates the
`/etc/prometheus/scrape-collectd.yml` file. This file can be read by
Prometheus/Victoria Metrics/vmagent to identify the hosts running
*collectd* with the *write_prometheus* plugin, using the
`file_sd_configs` scrape configuration option.
All hosts in the *collectd-prometheus* group are listed as scrape
targets.
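A minimal sketch of the corresponding scrape configuration (the job
name is arbitrary):

```
scrape_configs:
  - job_name: collectd
    file_sd_configs:
      - files:
          - /etc/prometheus/scrape-collectd.yml
```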
*mtrcs0.pyrocufflink.red* is a Raspberry Pi CM4 on a Waveshare
CM4-IO-BASE-B carrier board with an NVMe SSD. It runs a custom OS built
using Buildroot, and is not a member of the *pyrocufflink.blue* AD
domain.
*mtrcs0.p.r* hosts Victoria Metrics/`vmagent`, `vmalert`, AlertManager,
and Grafana. I've created a dedicated group and playbook for it,
*metricspi*, to manage all these applications together.
The `grafana_ldap_root_ca_cert` variable can be used to set the path to the root
CA certificate (bundle) Grafana uses to validate the certificate
presented by the configured LDAP server. By default, Grafana uses the
system root CA trust store, but this variable can be used in situations
where this is not suitable.
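For example (the certificate path is hypothetical):

```
grafana_ldap_root_ca_cert: /etc/pki/tls/certs/pyrocufflink-ca.pem
```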
`vmalert` is a component of Victoria Metrics. It handles alerting and
recording rules, periodically executing queries and dispatching alerts
or writing aggregated data back to the TSDB.
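A minimal sketch of a rules file `vmalert` could evaluate (the rule
itself is illustrative; recording rules go in the same `groups`
structure):

```
groups:
  - name: availability
    rules:
      - alert: TargetDown
        expr: up == 0
        for: 5m
        labels:
          severity: warning
```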
The Prometheus *blackbox_exporter* is a tool that can perform generic
ICMP, TCP, or HTTP "probes" against external services. This is
useful for applications that do not export their own metrics, and for
evaluating the health of protocol-level operations (e.g. TLS
certificate expiration).
The *blackbox-exporter* Ansible role installs and configures the
Blackbox Exporter on the target system. It fetches the specified binary
release from GitHub and copies it to the remote machine. It also
creates a systemd unit and configures the Blackbox Exporter's "modules"
from the `blackbox_modules` Ansible variable.
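For illustration, `blackbox_modules` might look something like this
(the module names and settings are assumptions, not the deployed
values):

```
blackbox_modules:
  http_2xx:
    prober: http
    timeout: 5s
  icmp:
    prober: icmp
```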
Some hosts may not need this plugin, or may not have it installed.
Notably, it is not needed or used on my systems based on Buildroot,
since the only current use case for it is to keep track of the Fedora
version.
There are a few minor differences between the way Fedora and Buildroot
package *nginx*:
* Fedora uses a user named *nginx*, while Buildroot uses *www-data*
* Buildroot uses a Debian-like configuration layout (with
`sites-enabled` and `modules-enabled` directories)
This commit adjusts the *nginx* Ansible role to compensate for these
differences, eschewing Buildroot's configuration layout for the one used
by Fedora/Red Hat.
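A sketch of the first adjustment (the variable name is mine, and
whether Ansible reports the distribution as `Buildroot` this way is an
assumption to verify):

```
nginx_user: "{{ 'www-data' if ansible_facts['distribution'] == 'Buildroot' else 'nginx' }}"
```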
The *victoria-metrics* role deploys a single-server instance of the
Victoria Metrics time series database server. It installs the selected
version by downloading the binary release from GitHub and copying it to
`/usr/local/sbin` on the managed node. Scrape configuration is optional
and can be specified with the `scrape_configs` variable.
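For example, to have the server scrape itself (8428 is Victoria
Metrics' default listen port):

```
scrape_configs:
  - job_name: victoriametrics
    static_configs:
      - targets:
          - 127.0.0.1:8428
```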
Tasks that configure the SELinux policy obviously only make sense if the
host uses SELinux. Similarly, if the host does not use FirewallD,
configuring firewall rules doesn't work.
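The guard looks something like this (the task shown is illustrative,
not one of the role's actual tasks):

```
- name: Label the data directory for SELinux
  community.general.sefcontext:
    target: '/var/lib/victoria-metrics(/.*)?'
    setype: var_lib_t
  when: ansible_facts.selinux.status == 'enabled'
```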
Although the `collectd-version` script is fairly generic and *should*
work for most Linux distributions, it cannot be installed on machines
that have an immutable root filesystem, e.g. Buildroot-based systems.
For Buildroot-based systems in particular, tracking the OS version makes
very little sense anyway. If we do end up with hosts running an OS
besides either Fedora or Buildroot, we can re-evaluate how to deploy
this feature.
The `/etc/collectd.d` directory is created by the RPM package on
machines running a Red Hat-based Linux distribution, but it may not
always be present on other machines.
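Creating it explicitly with a task along these lines covers both cases
(a sketch, not necessarily the role's exact task):

```
- name: Ensure the collectd configuration directory exists
  ansible.builtin.file:
    path: /etc/collectd.d
    state: directory
    mode: '0755'
```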
In addition to ignoring particular types of filesystems, e.g. OverlayFS,
we can also ignore filesystems by their mount point. This could be
useful, for example, for bind-mounted directories, such as those used on
Kubernetes nodes.
By default, the *df* plugin for collectd, which monitors filesystem
usage, collects data about all mounted filesystems. It can be
configured to ignore some filesystems, either by mount point, device, or
filesystem type. We will use this capability to avoid collecting data
about OverlayFS mounts, because by definition, they do not represent a
real filesystem, but one or more other mounted filesystems. Collecting
data about these just creates useless metrics, especially on machines
that run containers.
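The resulting collectd configuration would look roughly like this (the
selections are illustrative):

```
<Plugin df>
  FSType "overlay"
  MountPoint "/var/lib/kubelet"
  IgnoreSelected true
</Plugin>
```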
Setting the `remount_state` variable to `rw` by default will allow the
`remount.yml` playbook to be "chained" with other playbooks, e.g.:
```
ansible-playbook -l kubelet remount.yml collectd.yml -b
```
Some machines, such as the nodes in the Kubernetes cluster, do not use
*firewalld*. For these machines, we need to skip the `firewalld` tasks,
as they will fail. The `host_uses_firewalld` variable can be set to
`False` for these machines to do so.
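For illustration (the group name is hypothetical, and the guarded task
is a sketch; 9103 is the default port of collectd's *write_prometheus*
plugin):

```
# group_vars/kubelet.yml
host_uses_firewalld: false

# a guarded task, e.g. in the collectd role
- name: Allow collectd metrics traffic
  ansible.posix.firewalld:
    port: 9103/tcp
    permanent: true
    state: enabled
  when: host_uses_firewalld
```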
There is no specific playbook or role for Kubernetes. All OS
configuration is done at install time via kickstart scripts, and
deploying Kubernetes itself is done (manually) using `kubeadm init` and
`kubeadm join`.
It seems with each new release of Fedora, some feature or other of
*collectd* gets broken. In Fedora 36, the *interfaces* plugin does not
seem to work reliably, and the *md* plugin logs a *lot* of errors.
While these issues are investigated upstream, we either need to manage
our own policy for collectd or mark the `collectd_t` domain permissive.
I chose the latter because I'm lazy and I don't consider collectd to be
that big of a threat to security.
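Marking the domain permissive is a one-task change, something like:

```
- name: Mark the collectd_t SELinux domain permissive
  community.general.selinux_permissive:
    domain: collectd_t
    permissive: true
```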
*nvr1.pyrocufflink.blue* is the new video recording server. It is a
1U rack-mounted physical machine based on the [Jetway
JBC150F596-3160-B][0] barebone system. It replaces
*nvr0.pyrocufflink.blue* in this role.
[0]: https://www.jetwaycomputer.com/JBC150F596.html
Podman 4 puts lock files in the configuration directory for [some stupid
reason][0]. There are so many issues here!
* It is now impossible to run `podman` as root with a read-only `/etc`.
* Why does it need the lock file at all when using `--network=host`?
Luckily, we can work around it fairly easily by mounting a tmpfs
filesystem over the directory it wants to put the lock file in. This
pretty much defeats the purpose of having a lock file, but it's likely
not needed anyway.
[0]: 836fa4c493
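A sketch of the workaround as an Ansible task; the lock directory path
is an assumption about where Podman 4 keeps it, so verify on the
target:

```
- name: Mount a tmpfs over Podman's lock directory
  ansible.posix.mount:
    path: /etc/containers/networks  # assumed lock location
    src: tmpfs
    fstype: tmpfs
    state: mounted
```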
The *sensors* plugin for collectd reads temperature information from the
I²C/SMBus using *lm_sensors*. Naturally, it is only useful on physical
machines, so it is not installed or enabled by default.
Instead of a simple list of disabled plugins, hosts and host groups can
now control whether plugins are enabled or disabled using the
`collectd_plugins` map. The map keys are plugin names, and the values
are booleans indicating whether the plugin is enabled.
Using this mechanism, some plugins can be disabled by default (e.g. the
*md* plugin), and enabling them per host or per host group is simpler.
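For example:

```
collectd_plugins:
  sensors: true   # only useful on physical machines
  md: false       # disabled by default
```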