Transitioning from push-based to pull-based monitoring with
Prometheus/collectd. The *write_prometheus* plugin will be installed on
all hosts, and Prometheus will be configured to scrape them directly.
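A minimal sketch of the corresponding Prometheus scrape job follows; the job name and target list are assumptions, and 9103 is the *write_prometheus* plugin's default port:

```yaml
# prometheus.yml (sketch): scrape the write_prometheus exporter on each host
scrape_configs:
  - job_name: collectd
    static_configs:
      - targets:
          - dc2.pyrocufflink.blue:9103
          - vmhost0.pyrocufflink.blue:9103
```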
The *collectd-prometheus* role now has a
`collectd_prometheus_allow_outsize` variable. This variable controls
whether or not external hosts are allowed to scrape data from *collectd*.
When set to `false` (the default), *collectd* will be
configured to listen on the loopback interface only, and the TCP port
will not be opened in the firewall.
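For a host that Prometheus must scrape over the network, the variable can be overridden, for example in `host_vars` (a hypothetical example):

```yaml
# host_vars/<host>.yml: allow scraping from hosts other than localhost
collectd_prometheus_allow_outsize: true
```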
Synapse supports exporting metrics in Prometheus format. It can do this
either as part of the main server, or in a separate listener. I chose
to use a separate listener so that the metrics are not exposed
publicly.
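A sketch of what a dedicated metrics listener looks like in `homeserver.yaml`; the port and bind address shown here are assumptions, not the values actually deployed:

```yaml
# homeserver.yaml (sketch): expose metrics on a separate, non-public listener
enable_metrics: true
listeners:
  - port: 9000
    type: metrics
    bind_addresses: ['127.0.0.1']   # example bind address only
```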
Fedora 34 does not include the *ntp* package, as it has been "obsoleted
by ntpsec." Until I can create a role for *ntpsec*,
*dc2.pyrocufflink.blue* cannot be an NTP server.
The *processes* plugin for collectd can be configured to monitor
additional information about specific processes. By specifying one or
more `Process` or `ProcessMatch` directives in the plugin configuration,
collectd will start monitoring the listed processes in detail.
The `collectd_processes` Ansible variable can contain a list of
processes to monitor. Each item must at least have a `name` property,
and may also have a `regex` property. If the latter is present, a
`ProcessMatch` directive will be emitted instead of a `Process`
directive.
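For example (the process names here are purely illustrative):

```yaml
collectd_processes:
  - name: sshd                # emits:  Process "sshd"
  - name: synapse             # emits:  ProcessMatch "synapse" "python.*synapse"
    regex: 'python.*synapse'
```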
Enabling this option dramatically improves the reliability of collectd
multicast traffic from physical machines and from VMs running on a
different VM host than the receiving machine.
1. Set a password for *root* on all machines (useful for logging in via
   the serial console if the network is down)
2. Set an authorized SSH key for root on all machines:
* For Fedora 34, use my FIDO2 security token key
* For all other hosts, use my ED25519 key
The *base* role will now set the password for the *root* user, if the
`root_password_hash` variable is defined. This ensures that there is a
way to log into machines directly, even if other authentication
mechanisms like Active Directory are unavailable.
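A hypothetical example; the hash would typically be generated with something like `mkpasswd --method=sha-512` and stored in Ansible Vault rather than in plain text:

```yaml
# group_vars/all.yml (sketch)
root_password_hash: '$6$examplesalt$examplehashvalue...'   # placeholder hash
```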
Occasionally, VMs running on the main libvirt VM hosts will freeze or
otherwise become unavailable via network. Sometimes, when this happens,
their normal consoles are unresponsive as well. Having the serial
console available as a fallback can sometimes be helpful in recovering
from such situations.
To ensure the serial console is available on all VMs, we use a "dynamic"
group, based on the virtualization type and role of the managed node.
All KVM-based virtual machines are included in a group named *kvm-vm*.
A play in `base.yml` applies the *serial-console* role to members of
this group.
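The play amounts to something like the following sketch:

```yaml
# base.yml (sketch): enable the serial console on all KVM guests
- hosts: kvm-vm
  roles:
    - serial-console
```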
The *serial-console* Ansible role enables and starts a systemd service
unit to activate a console getty on the specified serial console device
(by default: ttyS0). This is particularly useful for virtual machines,
allowing one to control them in the absence of a graphical VM management
tool.
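At its core, the role does something equivalent to the following task; the `serial_console_device` variable name is an assumption:

```yaml
# Enable and start the getty on the serial console device
- name: enable serial console getty
  systemd:
    name: serial-getty@{{ serial_console_device | default('ttyS0') }}.service
    enabled: true
    state: started
```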
The `hassdb.yml` playbook is no longer used; the new Home Assistant
deployment uses the built-in database again, since it is stored on NVMe
instead of an SD card.
Further, the current deployment is hosted on a machine with a single
filesystem, which therefore cannot be remounted read-only after applying
policy.
Some playbooks apply only to hosts that do not have read-only root
filesystems. For these, the `rw_limit` pattern will be empty. The
*Remount R/W* and *Remount R/O* stages should be skipped when this is
the case.
Filesystems like NFS and CIFS require "helper" utilities (i.e.
`mount.nfs` and `mount.cifs`, respectively). These need to be installed
in order for a system to be able to mount those filesystems.
The current shared storage system uses NFSv4, and as such, the
*nfs-utils* package needs to be installed on the VM hosts.
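The task itself is trivial; a sketch:

```yaml
# Install the NFSv4 mount helper on the VM hosts
- name: install NFS client utilities
  package:
    name: nfs-utils
    state: present
```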
With the transition away from *dhcpcd* on the VM hosts, there is no
longer any need for a custom wait script that must run prior to
attempting to mount the shared filesystem. This dramatically simplifies
the configuration necessary for shared storage.
I don't really see any reason why the shared storage configuration needs
to be managed by a separate role. The *vmhost* role is not really
generic anyway, and will probably not work for any other VM host
deployment besides the two machines running now. As such, I think it
makes sense to move the task to mount the shared filesystem into the
*vmhost* role and drop the *dch-storage-net* role.
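The relocated task is essentially a single `mount` invocation; the export and mount point below are placeholders, not the real values:

```yaml
# vmhost role (sketch): mount the shared VM image storage over NFSv4
- name: mount shared VM storage
  mount:
    src: storage.example.net:/srv/vm-images   # placeholder export
    path: /var/lib/libvirt/images             # placeholder mount point
    fstype: nfs4
    state: mounted
```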
The *libvirt-daemon-driver-network* package provides support for
managing virtual networks with libvirt. It is necessary in order to use
managed networks in VM configuration, as opposed to directly specifying
VM network interfaces in their domain configuration.
*vmhost0.pyrocufflink.blue* no longer uses `dhcpcd` for network
configuration; it now uses *systemd-networkd*.
The host-specific network settings for a VM host include the
configuration for the management interface, as well as the configuration
of the physical ports that make up the bonded interfaces.
Originally, the network configuration for the VM networks and the
storage network was configured using the *netifaces* role. This has
effectively stopped working in recent versions of Fedora, as it relied
on `dhcpcd`, which has not been maintained in Fedora for some time
and no longer behaves correctly. After evaluating *NetworkManager* as a
replacement, I decided that *systemd-networkd* is a more appropriate
solution.
There are effectively two "layers" of network configuration needed for
the VM hosts: the host-specific settings, and the common settings. The
host-specific settings include such properties as the IP address of the
management interface and the names of the physical ports that make up
the bonded interfaces. The common settings are the bonded interfaces,
the VLAN interfaces created on top of the bond, and the bridges that
provide access to VMs.
To configure the host-specific settings, each host simply needs the
appropriate `networkd_*` variables in its `host_vars` file. For the
common settings, we apply the *systemd-networkd* role again in the
`vmhost.yml` playbook with different values for these variables. Thus,
effectively, `systemd-networkd.yml` manages the host-specific settings,
while `vmhost.yml` manages the common settings.
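Conceptually, `vmhost.yml` does something like the following; the group name and variable names are hypothetical, since the role defines its own schema:

```yaml
# vmhost.yml (sketch): layer the common bond/VLAN/bridge definitions on
# top of the host-specific settings by applying the role a second time
- hosts: vmhost
  roles:
    - role: systemd-networkd
      vars:
        networkd_netdevs: '{{ vmhost_networkd_netdevs }}'    # hypothetical
        networkd_networks: '{{ vmhost_networkd_networks }}'  # hypothetical
```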
*systemd-networkd* is (currently) my preferred way to manage network
interfaces on machines running Fedora. The *systemd-networkd* role
provides a generic way to configure network links, devices, and
interfaces, using Ansible variables to generate network unit
configuration files.
*hass1.pyrocufflink.blue* and *hassdb0.pyrocufflink.blue* were part of
the old Home Assistant deployment. Everything has been migrated to
*hass2.pyrocufflink.blue*, so these machines can be decommissioned now.
I couldn't get RTMP to work on the Back Yard camera because the `ffmpeg`
process kept crashing:
```
ffmpeg.back_yard.clips_rtmp ERROR : av_interleaved_write_frame(): Connection reset by peer
ffmpeg.back_yard.clips_rtmp ERROR : [flv @ 0x5562090c8ec0] Failed to update header with correct duration.
ffmpeg.back_yard.clips_rtmp ERROR : [flv @ 0x5562090c8ec0] Failed to update header with correct filesize.
ffmpeg.back_yard.clips_rtmp ERROR : Error writing trailer of rtmp://127.0.0.1/live/back_yard: Connection reset by peer
watchdog.back_yard INFO : Terminating the existing ffmpeg process...
watchdog.back_yard INFO : Waiting for ffmpeg to exit gracefully...
```
I thought increasing the value of the `--shm-size` argument for `podman`
would help, but even going as high as 1024 mebibytes did not resolve the
problem.
Ultimately, I decided that it is not really necessary to view the full
4K stream in real time. The back yard camera supports three streams, so
I set them all up for different roles. I briefly considered using a
single 1080p stream for both object detection and RTMP streaming, but
this consumed considerable CPU time, so I decided against it for now. I
may re-evaluate that option if I decide to purchase a TPU.
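The resulting Frigate camera configuration looks roughly like this; the stream URLs and the exact mapping of streams to roles are assumptions:

```yaml
cameras:
  back_yard:
    ffmpeg:
      inputs:
        - path: rtsp://<camera>/stream2   # low-resolution sub-stream
          roles:
            - detect
        - path: rtsp://<camera>/stream1   # 1080p stream
          roles:
            - rtmp
        - path: rtsp://<camera>/stream0   # full-resolution 4K stream
          roles:
            - clips
```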
The `collectd_df` variable can be used to configure the *df* plugin for
collectd. It should contain a map of key-value pairs that correspond
exactly to the plugin's configuration options.
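For example (the options shown are real *df* plugin settings, but the particular values are chosen purely for illustration):

```yaml
collectd_df:
  FSType: tmpfs
  IgnoreSelected: true
  ValuesPercentage: true
```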
*nvr0.pyrocufflink.blue* hosts Frigate. It is deployed on a separate
subnet, for two reasons:
* To avoid streaming video from the cameras through the firewall
* To prevent any hosts on the LAN except Home Assistant from
communicating with Frigate, since it does not have any kind of
authentication or access control
Frigate is an NVR that uses machine learning to detect objects on camera
in real time. It integrates with Home Assistant to expose sensors which
can be used for automation, etc.
The only official way to deploy Frigate is with a container, so we use
Podman and systemd to manage it.
For hosts that cannot send metrics via multicast (e.g. because they are
on a different subnet), *collectd* needs to listen on the all-hosts
unicast address.
The VM hosts have multiple network interfaces with IPv6 addresses, so
collectd may not always choose the correct one when sending metrics.
Thus, we have to explicitly tell it to use the management interface, to
keep it from sending data over the SAN interface.