Before the `burp` tool gained the `-Q` option, the only way to disable
the progress counter was through the configuration file. Since we do
not want any output from automatic backups (except, of course, for
catastrophic failures), as any output would end up being e-mailed by
cron, the progress counter had to be disabled globally. This meant that
on-demand runs on a terminal could not have a progress counter, which
was pretty disappointing.
Now that `burp` has `-Q`, this is no longer the case. Scheduled backups
can run with `-Q`, but ad-hoc runs can omit it to get a progress
counter.
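For the scheduled case, this amounts to something like the following
sketch (it assumes the client is invoked from cron with the usual timed
backup action, and the schedule shown is arbitrary):

```yaml
- name: Schedule quiet burp backups
  ansible.builtin.cron:
    name: burp timed backup
    minute: '*/20'
    job: /usr/sbin/burp -a t -Q
```

Interactive runs simply invoke `burp -a b` from a terminal and keep the
progress counter.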
Send logs to the systemd journal for easier viewing and disable logging
to a file. Also, the `samba_dc_log_level` variable can control the log
level (0-10, 0 being off, 10 being insane debugging).
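For example, in host or group variables (the variable name is from the
role; rendering it into `smb.conf` as `logging = systemd` and
`log level = 3` is assumed to be handled by the role's template):

```yaml
samba_dc_log_level: 3  # 0-10; rendered as "log level = 3" in smb.conf
```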
Docker is effectively deprecated by Fedora/Red Hat. It is a pain in the
ass to work with anyway. Podman integrates better with systemd, and is
in general more aligned with how I prefer to deploy and manage
applications.
I am following the same pattern here that I have used for Home
Assistant, ZWaveJS2MQTT, etc. The systemd service starts the container
with `podman`, passing the necessary arguments for UID/GID mapping, etc.
Note that, by default, Vaultwarden expects to be able to bind to port
80; since the container is unprivileged, we have to configure it (or
rather, its embedded HTTP server [Rocket](https://rocket.rs)) to listen
on a different port. We also configure it to listen only on the
loopback, since it is being proxied by Apache to the outside network.
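Concretely, this comes down to a couple of environment variables passed
to the container. `ROCKET_ADDRESS` and `ROCKET_PORT` are the settings
Vaultwarden honors; the variable name and port below are assumptions:

```yaml
vaultwarden_env:
  ROCKET_ADDRESS: 127.0.0.1
  ROCKET_PORT: '8080'
```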
To migrate the data from the Docker volume, we just have to copy the
files and fix their ownership.
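Roughly (the volume name and destination path are illustrative, not the
real ones):

```yaml
- name: Migrate data from the old Docker volume
  ansible.builtin.shell: |
    cp -a /var/lib/docker/volumes/bitwarden_rs/_data/. /var/lib/vaultwarden/
    chown -R vaultwarden:vaultwarden /var/lib/vaultwarden
```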
The *bitwarden_rs* project was recently renamed to *Vaultwarden*, so I
took this opportunity to update the name in most places within the
*bitwarden_rs* role.
Sometimes I need to configure a machine to be a domain member without
actually adding it to the domain. Now I can, by running
`ansible-playbook` with `--skip-tags domain-join`.
I honestly don't remember why the `use rfc2307` setting was only enabled
on the first DC. All DCs seem to need this setting in order to use the
UID/GID numbers from the directory, instead of using auto-generated
numbers.
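The setting itself is a one-liner in `smb.conf`. A sketch, assuming the
role manages it with `lineinfile` (the actual mechanism may differ):

```yaml
- name: Enable rfc2307 idmap on every DC
  ansible.builtin.lineinfile:
    path: /etc/samba/smb.conf
    insertafter: '^\[global\]'
    line: "idmap_ldb:use rfc2307 = yes"
```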
If the remote address configuration for strongSwan is not valid when the
Proton VPN watchdog starts, it will now regenerate it immediately. This
can happen, for example, if the Internet has been down for a while, and
the watchdog has iterated through all of the servers in the cache.
Restarting the service will now force it to reconfigure the tunnel and
bring the VPN back up.
The `collectd-version` script uses the *collectd* UNIX socket to send
custom values to *collectd* to track the OS version. Since these values
obviously cannot change while the system is running, the values are
specified with a very long interval. This avoids having to continuously
insert the values, either with a long-running process or by repeatedly
running a script. The values only need to be inserted once when
*collectd* starts.
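At its core, the script writes a `PUTVAL` line to the socket using
collectd's plain-text protocol. A minimal sketch, assuming a custom
`os_version` type (see the types database below) and nmap-ncat for the
socket I/O:

```yaml
- name: Publish the OS version once, with a one-year interval
  ansible.builtin.shell: |
    . /etc/os-release
    printf 'PUTVAL "%s/os_version/os_version" interval=31536000 N:%s\n' \
        "$(hostname -f)" "${VERSION_ID}" \
      | nc -U --send-only /run/collectd/socket
```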
All values sent to *collectd* must have an associated type. The type
defines the acceptable range of values. Types are defined in a simple
text file database. *collectd* loads all of the databases specified by
`TypesDB` directives in its configuration file. When configuring a
custom types database, the default database needs to be specified
explicitly; it will not be loaded automatically if there are any
`TypesDB` directives in the configuration.
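For example (paths per Fedora's collectd package; the `os_version` type
name is an assumption):

```yaml
- name: List both type databases explicitly
  ansible.builtin.copy:
    dest: /etc/collectd.d/types.conf
    content: |
      TypesDB "/usr/share/collectd/types.db"
      TypesDB "/etc/collectd.d/custom-types.db"

- name: Define the custom type
  ansible.builtin.copy:
    dest: /etc/collectd.d/custom-types.db
    content: |
      os_version  value:GAUGE:0:U
```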
The *unixsock* plugin for *collectd* provides a socket-based interface
that other software can use to communicate with *collectd*. Notably,
this can be used to publish custom values, query existing values, and
flush caches.
The socket is created at `/run/collectd/socket`. The `/run/collectd`
directory is managed by systemd; it will be created automatically when
the service starts and cleaned up when it stops.
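The plugin configuration is small; a sketch (the socket group and
permissions are assumptions):

```yaml
- name: Configure the unixsock plugin
  ansible.builtin.copy:
    dest: /etc/collectd.d/unixsock.conf
    content: |
      LoadPlugin unixsock
      <Plugin unixsock>
        SocketFile "/run/collectd/socket"
        SocketGroup "collectd"
        SocketPerms "0660"
      </Plugin>
```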
The *collectd-prometheus* role now has a
`collectd_prometheus_allow_outsize` variable. This variable controls
whether or not external hosts are allowed to scrape data from *collectd*.
When set to `false` (the default), *collectd* will be
configured to listen on the loopback interface only, and the TCP port
will not be opened in the firewall.
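For a host that an external Prometheus server needs to scrape, the
default can be overridden in its host variables:

```yaml
collectd_prometheus_allow_outsize: true
```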
Synapse supports exporting metrics in Prometheus format. It can do this
either as part of the main server, or in a separate listener. I chose
to use a separate listener so that the metrics are not exposed
publicly.
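In `homeserver.yaml`, that is an extra entry in `listeners` (the port
and bind address here are assumptions):

```yaml
enable_metrics: true
listeners:
  - type: metrics
    port: 9000
    bind_addresses: ['127.0.0.1']
```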
The *processes* plugin for collectd can be configured to monitor
additional information about specific processes. By specifying one or
more `Process` or `ProcessMatch` directives in the plugin configuration,
collectd will start monitoring the listed processes in detail.
The `collectd_processes` Ansible variable can contain a list of
processes to monitor. Each item must at least have a `name` property,
and may also have a `regex` property. If the latter is present, a
`ProcessMatch` directive will be emitted instead of a `Process`
directive.
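For example (the process names are illustrative):

```yaml
collectd_processes:
  - name: httpd
  - name: synapse
    regex: 'python.*synapse'
```

The first item produces a `Process "httpd"` directive; the second
produces `ProcessMatch "synapse" "python.*synapse"`.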
The *base* role will now set the password for the *root* user, if the
`root_password_hash` variable is defined. This ensures that there is a
way to log into machines directly, even if other authentication
mechanisms like Active Directory are unavailable.
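For example (the hash is a placeholder; one can be generated with
`openssl passwd -6`):

```yaml
root_password_hash: '$6$examplesalt$...'
```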
The *serial-console* Ansible role enables and starts a systemd service
unit to activate a console getty on the specified serial console device
(by default: ttyS0). This is particularly useful for virtual machines,
allowing one to control them in the absence of a graphical VM management
tool.
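Functionally, the role boils down to something like this (using the
default device):

```yaml
- name: Enable a getty on the serial console
  ansible.builtin.systemd:
    name: serial-getty@ttyS0.service
    enabled: true
    state: started
```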
Filesystems like NFS and CIFS require "helper" utilities (i.e.
`mount.nfs` and `mount.cifs`, respectively). These need to be installed
in order for a system to be able to mount those filesystems.
The current shared storage system uses NFSv4, and as such, the
*nfs-utils* package needs to be installed on the VM hosts.
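In Ansible terms, this is just a package task:

```yaml
- name: Install the NFS mount helper
  ansible.builtin.package:
    name: nfs-utils
    state: present
```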
With the transition away from *dhcpcd* on the VM hosts, there is no
longer any need for a custom wait script that must run prior to
attempting to mount the shared filesystem. This dramatically simplifies
the configuration necessary for shared storage.
I don't really see any reason why the shared storage configuration needs
to be managed by a separate role. The *vmhost* role is not really
generic anyway, and will probably not work for any other VM host
deployment besides the two machines running now. As such, I think it
makes sense to move the task to mount the shared filesystem into the
*vmhost* role and drop the *dch-storage-net* role.
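The mount task itself is straightforward (the export and mount point
below are illustrative, not the real paths):

```yaml
- name: Mount the shared VM image storage
  ansible.posix.mount:
    path: /var/lib/libvirt/images
    src: storage0.example.net:/srv/vm-images
    fstype: nfs4
    state: mounted
```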
The *libvirt-daemon-driver-network* package provides support for
managing virtual networks with libvirt. It is necessary in order to use
managed networks in VM configuration, as opposed to directly specifying
VM network interfaces in their domain configuration.
*systemd-networkd* is (currently) my preferred way to manage network
interfaces on machines running Fedora. The *systemd-networkd* role
provides a generic way to configure network links, devices, and
interfaces, using Ansible variables to generate network unit
configuration files.
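As a rough sketch of the idea (the variable shape shown here is
hypothetical; the role's actual variable names may differ), each entry
is rendered into a systemd.network(5) unit:

```yaml
systemd_networkd_networks:
  - name: br0
    content: |
      [Match]
      Name=br0

      [Network]
      DHCP=yes
```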
The `collectd_df` variable can be used to configure the *df* plugin for
collectd. It should contain a map of key-value pairs that correspond
exactly to the plugin's configuration options.
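For example, using the plugin's own option names:

```yaml
collectd_df:
  MountPoint: /
  IgnoreSelected: false
  ValuesPercentage: true
```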
*nvr0.pyrocufflink.blue* hosts Frigate. It is deployed on a separate
subnet, for two reasons:
* To avoid streaming video from the cameras through the firewall
* To prevent any hosts on the LAN except Home Assistant from
communicating with Frigate, since it does not have any kind of
authentication or access control
Frigate is an NVR that uses machine learning to detect objects on camera
in real time. It integrates with Home Assistant to expose sensors which
can be used for automation, etc.
The only official way to deploy Frigate is with a container, so we use
Podman and systemd to manage it.
The production deployment of *dnsmasq* for Home Assistant has deviated
from how the *hass-dhcp* role configures it. This change brings the
role back in sync with how things are actually deployed.
ZwaveJS2Mqtt includes a very powerful web-based UI for configuring and
controlling the Z-Wave network. This functionality is no longer
available within Home Assistant itself, so being able to access the
ZwaveJS2Mqtt UI is crucial to operating the network.
I wanted to make the UI available at */zwave/*, which requires using
*mod_rewrite* to conditionally proxy requests based on the `Connection`
HTTP header, since the UI passes both HTTP and WebSocket requests to the
same paths. *mod_rewrite* configuration is not inherited from the main
server configuration to virtual hosts, so the
`RewriteRule`/`RewriteCond` directives have to be specified within the
`<VirtualHost>` block. This means that the Home Assistant proxy
configuration has to be within its own virtual host, and the
Zwavejs2Mqtt configuration has to be there as well.
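A hedged sketch of the relevant fragment inside the `<VirtualHost>`
block (8091 is ZwaveJS2Mqtt's default web UI port; the destination file
is illustrative):

```yaml
- name: Proxy the ZwaveJS2Mqtt UI under /zwave/
  ansible.builtin.copy:
    dest: /etc/httpd/conf.d/zwavejs-proxy.inc
    content: |
      RewriteEngine On
      RewriteCond %{HTTP:Connection} Upgrade [NC]
      RewriteCond %{HTTP:Upgrade} websocket [NC]
      RewriteRule ^/zwave/(.*)$ ws://127.0.0.1:8091/$1 [P,L]
      ProxyPass /zwave/ http://127.0.0.1:8091/
      ProxyPassReverse /zwave/ http://127.0.0.1:8091/
```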
*hass2.pyrocufflink.blue* is a Raspberry Pi Compute Module 4-based
system, currently mounted in a WaveShare CM4 Mini Base Board (A). With
an NVMe SSD for primary storage, it runs significantly faster than a
standard Raspberry Pi 4, and blows the old Raspberry Pi 3-based Home
Assistant deployment out of the water. It has a Zooz 700 series Z-Wave
Plus S2 USB stick and a ConBee II Zigbee USB stick attached to its USB
2.0 ports. It runs a customized Fedora Minimal distribution.
Zigbee2MQTT is very similar to ZwaveJS2Mqtt: it is a daemon process that
communicates with the Zigbee radio and integrates with Home Assistant
using MQTT. Naturally, I decided to deploy it in the same way as
ZwaveJS2Mqtt, using a systemd unit to run it in a container with Podman.
Mosquitto 2.x introduced two significant changes compared to 1.6 (a
configuration sketch follows the list):
* There is no longer a "default" listener; all listeners are configured
in the same way
* The daemon drops privileges *before* reading TLS certificates and
private keys
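A sketch of a 2.x-style listener (the port and file paths are
illustrative); because of the second change, the key material must now
be readable by the unprivileged *mosquitto* user:

```yaml
- name: Configure the TLS listener
  ansible.builtin.copy:
    dest: /etc/mosquitto/conf.d/listener.conf
    content: |
      listener 8883
      cafile /etc/mosquitto/tls/ca.crt
      certfile /etc/mosquitto/tls/server.crt
      keyfile /etc/mosquitto/tls/server.key
```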
Home Assistant no longer recommends using the built-in libopenzwave
integration for communicating with Z-Wave devices. Evidently, OpenZWave
is no longer maintained, and community efforts have shifted toward
Z-Wave JS.
Z-Wave JS is architecturally much different than the legacy Z-Wave
integration. Instead of running the network controller inside the Home
Assistant process, a separate daemon communicates with the Z-Wave radio.
Home Assistant integrates with that daemon using a WebSockets API. This
has the advantage of decoupling the network operation from the lifecycle
of the Home Assistant process: restarting Home Assistant (e.g. to load
new configuration changes) does not take the Z-Wave network offline.
ZwaveJS2Mqtt is a distribution of the Z-Wave JS daemon, as well as a
web-based user interface for configuring it. Although its name implies
that it uses MQTT for communication, this feature is actually optional,
and the native WebSockets API can still be used for integration with
Home Assistant.
I decided to follow the same deployment pattern for ZwaveJS2Mqtt as for
Home Assistant itself: run the application from a container image using
Podman. This of course simplifies the installation of the application
significantly, leaving most of that work up to the maintainer of the
container image. Podman provides the container runtime, managing the
privileges, etc. The systemd service unit starts Podman, configuring an
ephemeral container on each run. The container uses the default network
namespace, avoiding the unnecessary overhead of port mapping. It uses
Podman's "rootless" mode, via the `--uidmap` and `--gidmap` arguments,
mapping users inside the container, including root, to unprivileged
users on the host. The Z-Wave radio, which is specified by the
`zwavejs_device` Ansible variable, is passed into the container via the
`--device` argument.
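The device path comes from host variables; for example (the path shown
is illustrative, but a `/dev/serial/by-id/` path is preferable since it
is stable across reboots):

```yaml
zwavejs_device: /dev/serial/by-id/usb-Silicon_Labs_CP2102N_USB_to_UART_Bridge_Controller_0001-if00-port0
```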
Installing Home Assistant in a Python virtualenv is rather tedious,
especially on non-x86 machines. The main issue is Python packages that
include native extensions, as many of these do not have binary wheels
available for aarch64, etc. on PyPI. Thus, to install these, they have
to be built from source, which then requires the appropriate development
packages to be installed. Additionally, compiling native code on a
Raspberry Pi is excruciatingly slow. I have considered various ways of
mitigating this, but all would require a substantial time investment,
both up front and ongoing, making them rather pointless. Eventually, I
settled on just deploying the official Home Assistant container image
with Podman.
Although Podman includes a tool for generating systemd service unit
files for running containers, I ended up creating my own for several
reasons. First and foremost, the generated unit files configure the
containers to run as *root*, but I wanted to run Home Assistant as an
unprivileged user. Unfortunately, I could not seem to get the container
to work when dropping privileges using the `User` directive of the unit.
Fortunately, `podman` has `--uidmap` and `--gidmap` arguments, which I
was able to use to map UID/GID 0 in the container to the *homeassistant*
user on the host. Another drawback of the generated unit files is that
they specify a "forking" type service, which is not really necessary.
Podman/conmon supports the systemd notify protocol, but the generator
has not been updated to make use of that yet.
Recent versions of Home Assistant are more strict with respect to how
reverse proxies are handled. In order to use one, it must be explicitly
listed in the configuration file. Therefore, the *homeassistant*
Ansible role will now create a stub `configuration.yaml`, based on the
one generated by Home Assistant itself when it starts for the first time
on a new machine, that includes the appropriate configuration for a
reverse proxy running on the same machine. The stub configuration will
not overwrite an existing configuration file, so it is only useful when
deploying Home Assistant for the first time on a new machine.
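The reverse-proxy part of such a stub presumably looks something like
this (these are the standard Home Assistant `http` options for trusting
a proxy on the same machine):

```yaml
http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 127.0.0.1
    - "::1"
```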
Overall, although I think a 300+ MB container image is ridiculous,
deploying Home Assistant this way should make it a lot easier to manage,
especially when updating.
Zezere is the Fedora IoT device provisioning service. It is the
software that runs *provision.fedoraproject.org*, but it can be
self-hosted (if you can figure it out; there is no documentation
whatsoever).
The main use case for running Zezere locally is to automatically add
trusted SSH public keys to Fedora IoT devices, without depending on a
cloud service. This playbook sets up Zezere with the very minimal
configuration needed to meet this goal.
This commit introduces the *grafana* role and the corresponding
`grafana.yml` playbook. The role installs Grafana using the system
package manager, and configures the server (including LDAP
authentication).
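A sketch of the LDAP piece (the role's actual variables and template
are not shown here; `[auth.ldap]` and `/etc/grafana/ldap.toml` are
Grafana's standard locations):

```yaml
- name: Enable LDAP authentication in grafana.ini
  community.general.ini_file:
    path: /etc/grafana/grafana.ini
    section: auth.ldap
    option: enabled
    value: 'true'

- name: Point Grafana at the LDAP configuration
  community.general.ini_file:
    path: /etc/grafana/grafana.ini
    section: auth.ldap
    option: config_file
    value: /etc/grafana/ldap.toml
```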