The `-external.url` and `-external.alert.source` command line arguments
and their corresponding environment variables can be used to configure
the "Source" links associated with alerts created by `vmalert`.
The *metricspi* host runs several Victoria Metrics-adjacent applications.
These each expose their own HTTP interface that can be used for
debugging or introspecting state. To make these accessible on the
network, the *victoria-metrics-nginx* role now configures `proxy_pass`
directives for them in its nginx configuration.
The *scrape-collectd* role generates the
`/etc/prometheus/scrape-collectd.yml` file. This file can be read by
Prometheus/Victoria Metrics/vmagent to identify the hosts running
*collectd* with the *write_prometheus* plugin, using the
`file_sd_configs` scrape configuration option.
All hosts in the *collectd-prometheus* group are listed as scrape
targets.
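A minimal sketch of what the generated file and the matching scrape configuration might look like; the host names are illustrative, and port 9103 is the *write_prometheus* default:

```
# /etc/prometheus/scrape-collectd.yml (generated)
- targets:
    - host0.pyrocufflink.blue:9103
    - host1.pyrocufflink.blue:9103

# Referenced from the Prometheus/vmagent configuration:
# scrape_configs:
#   - job_name: collectd
#     file_sd_configs:
#       - files:
#           - /etc/prometheus/scrape-collectd.yml
```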
*mtrcs0.pyrocufflink.red* is a Raspberry Pi CM4 on a Waveshare
CM4-IO-BASE-B carrier board with an NVMe SSD. It runs a custom OS built
using Buildroot, and is not a member of the *pyrocufflink.blue* AD
domain.
*mtrcs0.pyrocufflink.red* hosts Victoria Metrics/`vmagent`, `vmalert`, AlertManager,
and Grafana. I've created a unique group and playbook for it,
*metricspi*, to manage all these applications together.
The `grafana_ldap_root_ca_cert` variable can be used to set the path to the root
CA certificate (bundle) Grafana uses to validate the certificate
presented by the configured LDAP server. By default, Grafana uses the
system root CA trust store, but this variable can be used in situations
where this is not suitable.
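For example (the path is illustrative):

```
grafana_ldap_root_ca_cert: /etc/pki/tls/certs/pyrocufflink-ca.crt
```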
`vmalert` is a component of Victoria Metrics. It handles alerting and
recording rules, periodically executing queries and dispatching alerts
or writing aggregated data back to the TSDB.
The Prometheus *blackbox_exporter* is a tool that can perform arbitrary,
generic ICMP, TCP, or HTTP "probes" against external services. This is
useful for applications that do not export their own metrics, and for
evaluating the health of protocol-level operations (e.g. TLS
certificate expiration).
The *blackbox-exporter* Ansible role installs and configures the
Blackbox Exporter on the target system. It fetches the specified binary
release from GitHub and copies it to the remote machine. It also
creates a systemd unit and configures the Blackbox exporter's "modules"
from the `blackbox_modules` Ansible variable.
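A minimal sketch of the `blackbox_modules` variable, assuming it maps directly onto the Blackbox Exporter's `modules:` configuration section:

```
blackbox_modules:
  http_2xx:
    prober: http
    timeout: 5s
  icmp:
    prober: icmp
```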
Some hosts may not need this plugin, or may not have it installed.
Notably, it is not needed or used on my systems based on Buildroot,
since the only current use case for it is to keep track of the Fedora
version.
There are a few minor differences between the way Fedora and Buildroot
package *nginx*:
* Fedora uses a user named *nginx* while Buildroot uses *www-data*
* Buildroot uses a Debian-like configuration layout (with
`sites-enabled` and `modules-enabled` directories)
This commit adjusts the *nginx* Ansible role to compensate for these
differences, eschewing Buildroot's configuration layout in favor of the
one used by Fedora/Red Hat.
The *victoria-metrics* role deploys a single-server instance of the
Victoria Metrics time series database server. It installs the selected
version by downloading the binary release from GitHub and copying it to
`/usr/local/sbin` on the managed node. Scrape configuration is optional
and can be specified with the `scrape_configs` variable.
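For example, assuming the variable is rendered verbatim into a Prometheus-compatible scrape configuration (the target is illustrative; 8428 is the Victoria Metrics default port):

```
scrape_configs:
  - job_name: victoria-metrics
    static_configs:
      - targets:
          - localhost:8428
```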
Tasks that configure the SELinux policy obviously only make sense if the
host uses SELinux. Similarly, if the host does not use FirewallD,
configuring firewall rules doesn't work.
The `/etc/collectd.d` directory is created by the RPM package on
machines running a Red Hat-based Linux distribution, but it may not
always be present on other machines.
In addition to ignoring particular types of filesystems, e.g. OverlayFS,
we can also ignore filesystems by their mount point. This could be
useful, for example, for bind-mounted directories, such as those used on
Kubernetes nodes.
By default, the *df* plugin for collectd, which monitors filesystem
usage, collects data about all mounted filesystems. It can be
configured to ignore some filesystems, either by mount point, device, or
filesystem type. We will use this capability to avoid collecting data
about OverlayFS mounts, because by definition, they do not represent a
real filesystem, but one or more other mounted filesystems. Collecting
data about these just creates useless metrics, especially on machines
that run containers.
Some machines, such as the nodes in the Kubernetes cluster, do not use
*firewalld*. For these machines, we need to skip the `firewalld` tasks,
as they will fail. The `host_uses_firewalld` variable can be set to
`False` for these machines to do so.
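For example, in the group variables for the Kubernetes nodes (the file location is illustrative):

```
# group_vars/kubernetes/firewalld.yml
host_uses_firewalld: false
```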
*nvr1.pyrocufflink.blue* is the new video recording server. It is a
1U rack-mounted physical machine based on the [Jetway
JBC150F596-3160-B][0] barebone system. It replaces
*nvr0.pyrocufflink.blue* in this role.
[0]: https://www.jetwaycomputer.com/JBC150F596.html
Podman 4 puts lock files in the configuration directory for [some stupid
reason][0]. There are so many issues here!
* It is now impossible to run `podman` as root with a read-only `/etc`.
* Why does it need the lock file at all when using `--network=host`?
Luckily, we can work around it fairly easily by mounting a tmpfs
filesystem over the directory it wants to put the lock file in. This
pretty much defeats the purpose of having a lock file, but it's likely
not needed anyway.
[0]: 836fa4c493
The *sensors* plugin for collectd reads temperature information from the
I²C/SMBus using *lm_sensors*. Naturally, it is only useful on physical
machines, so it is not installed or enabled by default.
Instead of a simple list of disabled plugins, hosts and host groups can
now control whether plugins are enabled or disabled using the
`collectd_plugins` map. The map keys are plugin names, and the values
are booleans indicating if the plugin is enabled.
Using this mechanism, some plugins can be disabled by default (e.g. the
*md* plugin), and enabling them per host or per host group is simpler.
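For instance, a host or group could override the defaults like this:

```
collectd_plugins:
  md: false        # no software RAID on this machine
  sensors: true    # physical machine: collect lm_sensors readings
```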
Mosquitto can save retained messages, persistent clients, etc. to the
filesystem and restore them at startup. This allows state to be
maintained even after the process restarts.
The KDC service, as managed by Samba, continuously logs to two files
that need to be rotated. The upstream configuration for logrotate only
manages one of these files, and does not correctly signal the service
after rotating, as it expects the service to be managed by systemd
instead of Samba. As such, we need to adjust the configuration to
handle both files and send SIGHUP directly to the process.
Promoting the new site I have been working on at *dustin.hatch.is* to my
main domain, *dustin.hatch.name*. The new site is just static content,
generated and uploaded by a Jenkins job.
Finally have a certificate for *dustin.hatch.name* now, too!
This resolves two issues with fetching the Proton VPN server list:
1. If a connection error occurs when fetching the list, it will be
ignored, just as with HTTP errors
2. If any errors are encountered when fetching the list, and a valid
cache was loaded, its contents are returned, regardless of the
timestamp of the cache file.
To handle the RSVP form on *dustinandtabitha.com*, we are going to use
*formsubmit*. It runs on the same machine that hosts the website, so
there's no dealing with CORS. The */submit/rsvp* path, which is proxied
to the backend, is the RSVP form's target.
*formsubmit* is a simple, customizable HTML form submission handler. I
designed it for Tabitha to use to collect information from forms on her
websites. Notably, we will use it for the RSVP page on our wedding
invitation site.
The state history database is entirely too big. It takes over an hour
to create a backup of it, which usually causes BURP to time out. The
data it stores isn't particularly interesting anyway. Instead of trying
to back it up and ultimately not getting any backup at all, we'll just
skip it altogether to ensure we have a consistent backup of everything
else that is actually important.
Uploading large files can take a very long time. If the process takes
longer than the configured timeout in Apache, it will be aborted and the
client will receive an HTTP 504 Gateway Timeout error. Increasing the
timeout will help alleviate this for files up to a certain size.
Notably, it now lets me upload Signal backups without errors.
Nextcloud thinks it needs to run the upgrade/migration tool if the
version number in its configuration file does not match the running
version. It then updates the config file with the correct version. The
next time the configuration policy is applied, however, the version will
revert back to whatever is set in the template. This will re-trigger
the upgrade notification.
To avoid this problem, we now set the version in the configuration file
dynamically. Nextcloud writes its version number in a constant in
`version.php`.
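A sketch of how the version could be read at deploy time, assuming the Fedora package layout under `/usr/share/nextcloud`; the actual tasks in the role may differ:

```
- name: Read the installed Nextcloud version
  slurp:
    src: /usr/share/nextcloud/version.php
  register: nextcloud_version_php

- name: Extract the version string for the config.php template
  set_fact:
    nextcloud_version: >-
      {{ (nextcloud_version_php.content | b64decode)
         | regex_findall("OC_VersionString = '([^']+)'")
         | first }}
```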
Nextcloud uses double backslashes in its fully-qualified path names.
Although single backslashes work, the application will replace them,
leading to a constant conflict between itself and the Ansible template.
The first time launching a container after pulling a new image, it can
take several minutes for the container to actually start. Podman has to
set up the overlay filesystems, which is very slow on a Raspberry Pi.
With the default start timeout, systemd may end up killing the process
before the container is completely set up. Thus, we need to increase
the timeout to ensure there is plenty of time for Podman to work.
Processes running in containers only have access to a limited set of
devices, based on their SELinux type label. The USB serial devices
exposed by the Zwave and Zigbee adapters are not labelled correctly by
default to allow them to be used in containers.
Using `chcon` to change the type label of the device before starting the
container seems to work, but it is a bit kludgy. It would probably be
better to use a SELinux file context rule and/or a udev rule to ensure
the label is set correctly when the device node is created.
Although Home Assistant itself will start fine if the network is not yet
available, some integrations will not. Notably, the Matrix integration
will fail to load if it cannot contact the homeserver when it is first
initialized. To avoid this problem, we can just delay starting Home
Assistant until the network is available.
Before the `burp` tool gained the `-Q` option, the only way to disable
the progress counter was through the configuration file. Since we do
not want any output from automatic backups (except of course
catastrophic failures), since it would end up being e-mailed by cron,
the progress counter had to be disabled globally. This meant that
on-demand runs on a terminal could not have a progress counter, which
was pretty disappointing.
Now that `burp` has `-Q`, this is no longer the case. Scheduled backups
can run with `-Q`, but ad-hoc runs can omit it to get a progress
counter.
Send logs to the systemd journal for easier viewing and disable logging
to a file. Also, the `samba_dc_log_level` variable can control the log
level (0-10, 0 being off, 10 being insane debugging).
Docker is effectively deprecated by Fedora/Red Hat. It is a pain in the
ass to work with anyway. Podman integrates better with systemd, and is
in general more aligned with how I prefer to deploy and manage
applications.
I am following the same pattern here that I have used for Home
Assistant, ZWaveJS2MQTT, etc. The systemd service starts the container
with `podman`, passing the necessary arguments for UID/GID mapping, etc.
Note that, by default, Vaultwarden expects to be able to bind to port
80; since the container is unprivileged, we have to configure it (or
rather, its embedded HTTP server [Rocket](https://rocket.rs)) to listen
on a different port. We also configure it to listen only on the
loopback, since it is being proxied by Apache to the outside network.
To migrate the data from the Docker volume, we just have to copy the
files and fix their ownership.
The *bitwarden_rs* project was recently renamed to *Vaultwarden*, so I
took this opportunity to update the name in most places within the
*bitwarden_rs* role.
Sometimes I need to configure a machine to be a domain member without
actually adding it to the domain. Now I can by running
`ansible-playbook` with `--skip-tags domain-join`.
I honestly don't remember why the `use rfc2307` setting was only enabled
on the first DC. All DCs seem to need this setting in order to use the
UID/GID numbers from the directory, instead of using auto-generated
numbers.
If the remote address configuration for strongSwan is not valid when the
Proton VPN watchdog starts, it will now regenerate it immediately. This
can happen, for example, if the Internet has been down for a while, and
the watchdog has iterated through all of the servers in the cache.
Restarting the service will now force it to reconfigure the tunnel and
bring the VPN back up.
The `collectd-version` script uses the *collectd* UNIX socket to send
custom values to *collectd* to track the OS version. Since these values
obviously cannot change while the system is running, the values are
specified with a very long interval. This avoids having to continuously
insert the values, either with a long-running process or by repeatedly
running a script. The values only need to be inserted once when
*collectd* starts.
All values sent to *collectd* must have an associated type. The type
defines the acceptable range of values. Types are defined in a simple
text file database. *collectd* loads all of the databases specified by
`TypesDB` directives in its configuration file. When configuring a
custom types database, the default database needs to be specified
explicitly; it will not be loaded automatically if there are any
`TypesDB` directives in the configuration.
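A sketch of the resulting drop-in configuration; the paths are assumptions (the default database lives at `/usr/share/collectd/types.db` on Fedora):

```
- name: Configure collectd types databases
  copy:
    dest: /etc/collectd.d/types.conf
    content: |
      # The default database must be listed explicitly once any TypesDB
      # directive is present.
      TypesDB "/usr/share/collectd/types.db"
      TypesDB "/usr/local/share/collectd/custom-types.db"
```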
The *unixsock* plugin for *collectd* provides a socket-based interface
that other software can use to communicate with *collectd*. Notably,
this can be used to publish custom values, query existing values, and
flush caches.
The socket is created at `/run/collectd/socket`. The `/run/collectd`
directory is managed by systemd; it will be created automatically when
the service starts and cleaned up when it stops.
The *collectd-prometheus* role now has a
`collectd_prometheus_allow_outsize` variable. This variable controls
whether or not external hosts are allowed to scrape data from *collectd*.
When set to `false`, as is the default value, *collectd* will be
configured to listen on the loopback interface only, and the TCP port
will not be opened in the firewall.
Synapse supports exporting metrics in Prometheus format. It can do this
either as part of the main server, or in a separate listener. I chose
to use a separate listener so that the metrics are not exposed
publicly.
The *processes* plugin for collectd can be configured to monitor
additional information about specific processes. By specifying one or
more `Process` or `ProcessMatch` directives in the plugin configuration,
collectd will start monitoring the listed processes in detail.
The `collectd_processes` Ansible variable can contain a list of
processes to monitor. Each item must at least have a `name` property,
and may also have a `regex` property. If the latter is present, a
`ProcessMatch` directive will be emitted instead of a `Process`
directive.
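For example (the process names are illustrative):

```
collectd_processes:
  - name: httpd            # emits a Process directive
  - name: gitea            # emits a ProcessMatch directive
    regex: 'gitea.*'
```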
The *base* role will now set the password for the *root* user, if the
`root_password_hash` variable is defined. This ensures that there is a
way to log into machines directly, even if other authentication
mechanisms like Active Directory are unavailable.
The *serial-console* Ansible role enables and starts a systemd service
unit to activate a console getty on the specified serial console device
(by default: ttyS0). This is particularly useful for virtual machines,
allowing one to control them in the absence of a graphical VM management
tool.
Filesystems like NFS and CIFS require "helper" utilities (i.e.
`mount.nfs` and `mount.cifs`, respectively). These need to be installed
in order for a system to be able to mount those filesystems.
The current shared storage system uses NFSv4, and as such, the
*nfs-utils* package needs to be installed on the VM hosts.
With the transition away from *dhcpcd* on the VM hosts, there is no
longer any need for a custom wait script that must run prior to
attempting to mount the shared filesystem. This dramatically simplifies
the configuration necessary for shared storage.
I don't really see any reason why the shared storage configuration needs
to be managed by a separate role. The *vmhost* role is not really
generic anyway, and will probably not work for any other VM host
deployment besides the two machines running now. As such, I think it
makes sense to move the task to mount the shared filesystem into the
*vmhost* role and drop the *dch-storage-net* role.
The *libvirt-daemon-driver-network* package provides support for
managing virtual networks with libvirt. It is necessary in order to use
managed networks in VM configuration, as opposed to directly specifying
VM network interfaces in their domain configuration.
*systemd-networkd* is (currently) my preferred way to manage network
interfaces on machines running Fedora. The *systemd-networkd* role
provides a generic way to configure network links, devices, and
interfaces, using Ansible variables to generate network unit
configuration files.
The `collectd_df` variable can be used to configure the *df* plugin for
collectd. It should contain a map of key-value pairs that correspond
exactly to the plugin's configuration options.
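For example, assuming the keys are written verbatim as *df* plugin options:

```
collectd_df:
  FSType: overlay
  MountPoint: /var/lib/containers/storage
  IgnoreSelected: true
```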
*nvr0.pyrocufflink.blue* hosts Frigate. It is deployed on a separate
subnet, for two reasons:
* To avoid streaming video from the cameras through the firewall
* To prevent any hosts on the LAN except Home Assistant from
communicating with Frigate, since it does not have any kind of
authentication or access control
Frigate is an NVR that uses machine learning to detect objects on camera
in real time. It integrates with Home Assistant to expose sensors which
can be used for automation, etc.
The only official way to deploy Frigate is with a container, so we use
Podman and systemd to manage it.
The production deployment of *dnsmasq* for Home Assistant has deviated
from how the *hass-dhcp* role configures it. This commit brings the
role back into sync with how things really are.
ZwaveJS2Mqtt includes a very powerful web-based UI for configuring and
controlling the Z-Wave network. This functionality is no longer
available within Home Assistant itself, so being able to access the
ZwaveJS2Mqtt UI is crucial to operating the network.
I wanted to make the UI available at */zwave/*, which requires using
*mod_rewrite* to conditionally proxy requests based on the `Connection`
HTTP header, since the UI passes both HTTP and WebSocket requests to the
same paths. *mod_rewrite* configuration is not inherited from the main
server configuration to virtual hosts, so the
`RewriteRule`/`RewriteCond` directives have to be specified within the
`<VirtualHost>` block. This means that the Home Assistant proxy
configuration has to be within its own virtual host, and the
Zwavejs2Mqtt configuration has to be there as well.
*hass2.pyrocufflink.blue* is a Raspberry Pi Compute Module 4-based
system, currently mounted in a WaveShare CM4 Mini Base Board (A). With
an NVMe SSD for primary storage, it runs significantly faster than a
standard Raspberry Pi 4, and blows the old Raspberry Pi 3-based Home
Assistant deployment out of the water. It has a Zooz 700 series Z-Wave
Plus S2 USB stick and a ConBee II Zigbee USB stick attached to its USB
2.0 ports. It runs a customized Fedora Minimal distribution.
Zigbee2MQTT is very similar to ZwaveJS2Mqtt: it is a daemon process that
communicates with the Zigbee radio and integrates with Home Assistant
using MQTT. Naturally, I decided to deploy it in the same way as
ZwaveJS2Mqtt, using a systemd unit to run it in a container with Podman.
Mosquitto 2.x included two significant changes from 1.6:
* There is no longer a "default" listener; all listeners are configured
in the same way
* The daemon drops privileges *before* reading TLS certificates and
private keys
Home Assistant no longer recommends using the built-in libopenzwave
integration for communicating with Z-Wave devices. Evidently, OpenZWave
is no longer maintained, and community efforts have shifted toward
Z-Wave JS.
Z-Wave JS is architecturally much different than the legacy Z-Wave
integration. Instead of running the network controller inside the Home
Assistant process, a separate daemon communicates with the Z-Wave radio.
Home Assistant integrates with that daemon using a WebSockets API. This
has the advantage of decoupling the network operation from the lifecycle
of the Home Assistant process: restarting Home Assistant (e.g. to load
new configuration changes) does not take the Z-Wave network offline.
ZwaveJS2Mqtt is a distribution of the Z-Wave JS daemon, as well as a
web-based user interface for configuring it. Although its name implies
that it uses MQTT for communication, this feature is actually optional,
and the native WebSockets API can still be used for integration with
Home Assistant.
I decided to follow the same deployment pattern for ZwaveJS2Mqtt as for
Home Assistant itself: run the application from a container image using
Podman. This of course simplifies the installation of the application
significantly, leaving most of that work up to the maintainer of the
container image. Podman provides the container runtime, managing the
privileges, etc. The systemd service unit starts Podman, configuring an
ephemeral container on each run. The container uses the default network
namespace, avoiding the unnecessary overhead of port mapping. It uses
Podman's "rootless" mode, via the `--uidmap` and `--gidmap` arguments,
mapping users inside the container, including root, to unprivileged
users on the host. The Z-Wave radio, which is specified by the
`zwavejs_device` Ansible variable, is passed into the container via the
`--device` argument.
Installing Home Assistant in a Python virtualenv is rather tedious,
especially on non-x86 machines. The main issue is Python packages that
include native extensions, as many of these do not have binary wheels
available for aarch64, etc. on PyPI. Thus, to install these, they have
to be built from source, which then requires the appropriate development
packages to be installed. Additionally, compiling native code on a
Raspberry Pi is excruciatingly slow. I have considered various ways of
mitigating this, but all would require a substantial time investment,
both up front and ongoing, making them rather pointless. Eventually, I
settled on just deploying the official Home Assistant container image
with Podman.
Although Podman includes a tool for generating systemd service unit
files for running containers, I ended up creating my own for several
reasons. First and foremost, the generated unit files configure the
containers to run as *root*, but I wanted to run Home Assistant as an
unprivileged user. Unfortunately, I could not seem to get the container
to work when dropping privileges using the `User` directive of the unit.
Fortunately, `podman` has `--uidmap` and `--gidmap` arguments, which I
was able to use to map UID/GID 0 in the container to the *homeassistant*
user on the host. Another drawback of the generated unit files is that
they specify a "forking" type service, which is not really necessary.
Podman/conmon supports the systemd notify protocol, but the generator
has not been updated to make use of that yet.
Recent versions of Home Assistant are more strict with respect to how
reverse proxies are handled. In order to use one, it must be explicitly
listed in the configuration file. Therefore, the *homeassistant*
Ansible role will now create a stub `configuration.yaml`, based on the
one generated by Home Assistant itself when it starts for the first time
on a new machine, that includes the appropriate configuration for a
reverse proxy running on the same machine. The stub configuration will
not overwrite an existing configuration file, so it is only useful when
deploying Home Assistant for the first time on a new machine.
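The relevant part of the stub presumably resembles the standard Home Assistant configuration for a reverse proxy on the same machine (the role's actual template is not reproduced here):

```
# configuration.yaml (stub)
http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 127.0.0.1
    - ::1
```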
Overall, although I think a 300+ MB container image is ridiculous,
deploying Home Assistant this way should make it a lot easier to manage,
especially when updating.
Zezere is the Fedora IoT device provisioning service. It is the
software that runs *provision.fedoraproject.org*, but it can be
self-hosted (if you can figure it out; there is no documentation
whatsoever).
The main use case for running Zezere locally is to automatically add
trusted SSH public keys to Fedora IoT devices, without depending on a
cloud service. This playbook sets up Zezere with the very minimal
configuration needed to meet this goal.
This commit introduces the *grafana* role and the corresponding
`grafana.yml` playbook. The role installs Grafana using the system
package manager, and configures the server (including LDAP
authentication).
Occasionally, ProtonVPN servers randomly reject the EAP authentication
credentials. When this happens, the tunnel fails and is not restarted
automatically by strongSwan. As such, the watchdog needs to react to
this event as well.
Since the Nextcloud configuration file is managed by the configuration
policy, all of the settings configurable through the web UI need to be
templated. One important group of settings is the outbound email
configuration. This can now be configured using the `nextcloud_smtp`
Ansible variable.
This simple role installs the *redis* package and starts the associated
service. It leaves the configuration as provided by upstream, at least
for now.
Fedora now includes a packaged version of Nextcloud. This will be
_much_ easier to maintain than the tarball-based distribution method.
There are some minor differences in how the Fedora package works,
compared to the upstream tarball. Notably, it puts the configuration
file in `/etc/` and makes it read-only, and it stores persistent data
separate from the application. These differences require modifications
to the Apache and PHP-FPM configuration, but the package also included
examples to make this easier. Since the `config.php` is read-only now,
it has to be managed by the configuration policy; it cannot be modified
by the Administration web UI.
One major problem with the current DNS-over-VPN implementation is that
the ProtonVPN servers are prone to random outages. When the server
we're using goes down, there is not a straightforward way to switch to
another one. At first I tried creating a fake DNS zone with A records
for each ProtonVPN server, all for the same name. This ultimately did
not work, but I am not sure I understand why. strongSwan would
correctly resolve the name each time it tried to connect, and send IKE
initialization requests to a different address each time, but would
reject the responses from all except the first address it used. The
only way to get it working again was to restart the daemon.
Since strongSwan is apparently not going to be able to handle this kind
of fallback on its own, I decided to write a script to do it externally.
Enter `protonvpn-watchdog.py`. This script reads the syslog messages
from strongSwan (via the systemd journal, using `journalctl`'s JSON
output) and reacts when it receives the "giving up after X tries"
message. This message indicates that strongSwan has lost connection to
the current server and has not been able to reestablish it within the
retry period. When this happens, the script will consult the cached
list of ProtonVPN servers and find the next one available. It keeps
track of the ones that have failed in the past, and will not connect to
them again, so as not to simply bounce back-and-forth between two
(possibly dead) servers. Approximately every hour, it will attempt to
refresh the server list, to ensure that the most accurate server scores
and availability are known.
*Mosquitto* implements an MQTT server. It is the recommended
implementation for using MQTT with Home Assistant.
I have added this role to deploy Mosquitto on the Home Assistant server.
It will be used to send data from custom sensors, such as the
temperature/pressure/humidity sensor connected to the living room wall
display.
Since there are no other plain HTTP virtual hosts, the one defined for
chmod777.sh became the "default." Since it explicitly redirects all
requests to https://chmod777.sh, it caused all non-HTTPS requests to be
redirected there, regardless of the requested name. This was
particularly confusing for Tabitha, as she frequently forgets to put
https://…, and would find herself at my stupid blog instead of
Nextcloud.
The *esp4* kernel module does not load automatically on Fedora. Without
this module, strongSwan can establish IKE SAs, but not ESP SAs. Listing
the module name in a file in `/etc/modules-load.d` configures the
*systemd-modules-load* service to load it at boot.
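A minimal sketch of the corresponding task (the file name is illustrative):

```
- name: Load the esp4 module at boot
  copy:
    dest: /etc/modules-load.d/esp4.conf
    content: |
      esp4
```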
I believe the reason the VPN was not auto-restarting was because I had
incorrectly specified the `keyingtries` and `dpd_delay` configuration options.
These are properties of the top-level connection, not the child. I must
have placed them in the `children` block by accident.
The *cert* role must be defined as a role dependency now, so that the
role can define a handler to "listen" for the "certificate changed"
event. This change happened on *master*, before the *matrix* branch was
merged.
Graylog 3.3 is currently installed on logs0. Attempting to install the
*graylog-3.1-repository* package causes a transaction conflict, making
the task and playbook fail.
Before the advent of `ansible-vault`, and long before `certbot`/`lego`,
I used to keep certificate files (and especially private key files) out
of the Git repository. Now that certificates are stored in a separate
repository, and only symlinks are stored in the configuration policy,
this no longer makes any sense. In particular, it prevents the continuous
enforcement process from installing Let's Encrypt certificates that have
been automatically renewed.
The *websites/proxy-matrix* role configures the Internet-facing reverse
proxy to handle the *hatch.chat* domain. Most Matrix communication
happens over the default HTTPS port, and as such will be directed
through the reverse proxy.
The *synapse* role and the corresponding `synapse.yml` playbook deploy
Synapse, the reference Matrix homeserver implementation.
Deploying Synapse itself is fairly straightforward: it is packaged by
Fedora and therefore can simply be installed via `dnf` and started by
`systemd`. Making the service available on the Internet, however, is
more involved. The Matrix protocol mostly works over HTTPS on the
standard port (443), so a typical reverse proxy deployment is mostly
sufficient. Some parts of the Matrix protocol, however, involve
communication over an alternate port (8448). This could be handled by a
reverse proxy as well, but since it is a fairly unique port, it could
also be handled by NAT/port forwarding. In order to support both
deployment scenarios (as well as the hypothetical scenario wherein the
Synapse machine is directly accessible from the Internet), the *synapse*
role supports specifying an optional `matrix_tls_cert` variable. If
this variable is set, it should contain the path to a certificate file
on the Ansible control machine that will be used for the "direct"
connections (i.e. on port 8448). If it is not set, the default Apache
certificate will be used for both virtual hosts.
Synapse has a pretty extensive configuration schema, but most of the
options are set to their default values by the *synapse* role. Other
than substituting secret keys, the only exposed configuration option is
the LDAP authentication provider.
Since the *bitwarden_rs* role relies on Docker for distribution and process
management (at least for now), it needs to ensure that the `docker`
service starts automatically.
Because the various "webapp.*" users' home directories are under
`/srv/www`, the default SELinux context type is `httpd_sys_content_t`.
The SSH daemon is not allowed to read files with this label, so it
cannot load the contents of these users' `authorized_keys` files. To
address this, we have to explicitly set the SELinux type to
`ssh_home_t`.
BIND sends its normal application logs (as opposed to query logs) to the
`default_debug` channel. By sending these log messages to syslog, they
can be routed and rotated using the normal system policies. Using a
separate dedicated log file just ends up consuming a lot of space, as it
is not managed by any policy.
I am not sure what the point of having both `ssl_request_log` and
`ssl_access_log` is. The former includes the TLS ciphers used in the
connection, which is not particularly interesting information. To save
space on the log volume of web servers using Apache, we should just stop
creating this log file.
Changing/renewing a certificate generally requires restarting or
reloading some service. Since the *cert* role is intended to be generic
and reusable, it naturally does not know what action to take to effect
the change. It works well for the initial deployment of a new
application, since the service is reloaded anyway in order for the new
configuration to be applied. It fails, however, for continuous
enforcement, when a certificate is renewed automatically (i.e. by
`lego`) but no other changes are being made. This has caused a number
of disruptions when some certificate expires and its replacement is
available but has not yet been loaded.
To address this issue, I have added a handler "topic" notification to
the *cert* role. When either the certificate or private key file is
replaced, the relevant task will "notify" a generic handler "topic."
This allows some other role to define a specific handler, which
"listens" for these notifications, and takes the appropriate action for
its respective service.
For this mechanism to work, though, the *cert* role can only be used as
a dependency of another role. That role must define the handler and
configure it to listen to the generic "certificate changed" topic. As
such, each of the roles that are associated with a certificate deployed
by the *cert* role now declare it as a dependency, and the top-level
playbooks only include those roles.
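A sketch of the pattern, with the role and service names purely illustrative:

```
# roles/example-app/meta/main.yml
dependencies:
  - role: cert
    cert_src: example.cer
    cert_dest: /etc/pki/tls/certs/example.cer

# roles/example-app/handlers/main.yml
- name: reload example-app on certificate change
  service:
    name: example-app
    state: reloaded
  listen: certificate changed
```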
The *collectd-prometheus* role configures the *write_prometheus* plugin
for collectd. This plugin exposes data collected or received by the
collectd process in the Prometheus Exposition Format over HTTP. It
provides the same functionality as the "official" collectd Exporter
maintained by the Prometheus team, but integrates natively into the
collectd process, and is much more complete.
The main intent of this role is to provide a mechanism to combine the
collectd data from all Pyrocufflink hosts and insert it into Prometheus.
By configuring the collectd instance on the Prometheus server itself to
enable and use the *write_prometheus* plugin and to receive the
multicast data from other hosts, collectd itself provides the desired
functionality.
For hosts with multiple network interfaces, collectd may not send
multicast messages through the correct interface. To ensure that it
does, the `Interface` configuration option can be specified with each
`Server` option. To define this option, entries in the
`collectd_network_servers` list can now have an `interface` property.
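For example, using collectd's default multicast group and port (the `host`, `port`, and `interface` key names are assumptions about the role's schema):

```
collectd_network_servers:
  - host: 'ff18::efc0:4a42'
    port: 25826
    interface: eth0
```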
The *collectd* role, with its corresponding `collectd.yml` playbook,
installs *collectd* onto the managed node and manages basic
configuration for it. By default, it will enable several plugins,
including the `network` plugin. The `collectd_disable_plugins` variable
can be set to a list of plugin names that should NOT be enabled.
The default configuration for the `network` plugin instructs *collectd*
to send metrics to the default IPv6 multicast group. Any host that has
joined this group and is listening on the specified UDP port (default
25826) can receive the data. This allows for nearly zero configuration,
as the configuration does not need to be updated if the name or IP
address of the receiver changes.
This configuration is ready to be deployed without any variable changes
to all Pyrocufflink servers. Once *collectd* is running on the servers,
we can set up a *collectd* instance to receive the data and store them
in a time series database (i.e. Prometheus).
Since Apache HTTPD does not have any built-in log rotation capability,
we need `logrotate`. Somewhere along the line, the *logrotate* package
stopped being installed by default. Additionally, with Fedora 30, it
changed from including a drop-in file for (Ana)cron to providing a
systemd timer unit.
The *logrotate* role will ensure that the *logrotate* package is
installed, and that the *logrotate.timer* service is enabled and
running. This in turn will ensure that `logrotate` runs daily. Of
course, since the systemd units were added in Fedora 30, machines to
which this role is applied must be running at least that version.
By listing the *logrotate* role as a dependency of the *httpd* role, we
can ensure that `logrotate` manages the Apache error and access log
files on any server that runs Apache HTTPD.
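The dependency is the usual Ansible role-dependency declaration:

```
# roles/httpd/meta/main.yml
dependencies:
  - role: logrotate
```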
By default, strongSwan will only attempt key negotiation once and then
give up. If the VPN connection is closed because of a network issue, it
is unlikely that a single attempt to reconnect will work, so let's keep
trying until it succeeds.
The *motioneye* role installs motionEye on a Fedora machine using `pip`.
It configures Apache to proxy for motionEye for outside (HTTPS) access.
The official installation instructions and default configuration for
motionEye assume it will be running as root. There is, however, no
specific reason for this, as it works just fine as an unprivileged user.
The only minor surprise is that the `conf_path` configuration setting
must be writable, as this is where motionEye places generated
configuration for `motion`. This path does not, however, have to
include the `motioneye.conf` file itself, which can still be read-only.
This commit adds a new playbook, `protonvpn.yml`, and its supporting
roles *strongswan-swanctl* and *protonvpn*. This playbook configures
strongSwan to connect to ProtonVPN using IPsec/IKEv2.
With this playbook, we configure the name servers on the Pyrocufflink
network to route all DNS requests through the Cloudflare public DNS
recursive servers at 1.1.1.1/1.0.0.1 over ProtonVPN. Using this setup,
we have the benefit of the speed of using a public DNS server (which is
*significantly* faster than running our own recursive server, usually by
1-2 seconds per request), and the benefit of anonymity from ProtonVPN.
Using the public DNS server alone is great for performance, but allows
the server operator (in this case Cloudflare) to track and analyze usage
patterns. Using ProtonVPN gives us anonymity (assuming we trust
ProtonVPN not to do the very same tracking), but can have a negative
performance impact if it's used for all Internet traffic. By combining
these solutions, we can get the benefits of both!
This commit adds two new variables to the *named* role:
`named_queries_syslog` and `named_rpz_syslog`. These variables control
whether BIND will send query and RPZ log messages to the local syslog
daemon, respectively.
BIND response policy zones (RPZ) support provides a mechanism for
overriding the responses to DNS queries based on a wide range of
criteria. In the simplest form, a response policy zone can be used to
provide different responses to different clients, or "block" some DNS
names.
For the Pyrocufflink and related networks, I plan to use an RPZ to
implement ad/tracker blocking. The goal will be to generate an RPZ
definition from a collection of host lists (e.g. those used by uBlock
Origin) periodically.
This commit introduces basic support for RPZ configuration in the
*named* role. It can be activated by providing a list of "response
policy" definitions (e.g. `zone "name"`) in the `named_response_policy`
variable, and defining the corresponding zones in `named_zones`.
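A rough, illustrative sketch only; the exact schema the *named* role expects for `named_zones` is not shown above:

```
named_response_policy:
  - zone "rpz.pyrocufflink.blue"

named_zones:
  - name: rpz.pyrocufflink.blue
    type: master
    file: rpz.pyrocufflink.blue.zone
```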
Normally, Home Assistant uses a SQLite database for storing state
history. On a Raspberry Pi with only an SD card for storage like
*hass1.pyrocufflink.blue*, this can become extremely slow, especially
for large data sets. To speed up features like history and logbook,
Home Assistant supports using an external database engine such as
PostgreSQL or MariaDB.
The *hassdb* role and corresponding `hassdb.yml` playbook deploy a
PostgreSQL server for Home Assistant to use. It needs only to create
the role and database, as Home Assistant manages its own schema.
The *postgresql-setup* service is no longer necessary, as upstream has
fixed the SELinux policy to allow root to invoke the `postgresql-setup`
command directly.
This commit adds a task to generate a PostgreSQL configuration file from
a template. Previously, the default configuration file generated by
`initdb` was sufficient, but in order to enable SSL connections, some
changes to it are required.
Naturally, SSL connections require a server certificate, so the
*postgresql-server* role will now also copy certificate files to the
managed node, if any.
Fedora has renamed the *strongswan* service to *strongswan-starter*.
The *strongswan* service now controls strongSwan via Vici, which uses a
different configuration format and is not compatible with the files in
`/etc/strongswan/ipsec.d`. As I am migrating everything to Wireguard
now, it does not make sense to rewrite all of the IPsec configuration in
this new format, so using the legacy format with the renamed service
makes more sense.
Because the Home Assistant user's home directory is on `/var`, Python
packages installed in the "user site" do not get the correct SELinux
labels and thus run in the wrong domain. This causes a lot of AVC
denials and other issues that prevent Home Assistant from working
correctly.
To resolve this issue, Home Assistant is now installed in a virtual
environment at `/usr/local/homeassistant`. This directory is still
owned by the Home Assistant user, allowing Home Assistant to manage
packages installed there. Since it is rooted under `/usr`, files are
labelled correctly and processes launched from executables there will
run in the correct domain.
The main network, *pyrocufflink.blue* (172.30.0.0/26) is now on VLAN 1
instead of VLAN 30. This changed when I replaced the Cisco SG200-26
with the UniFi Switch 48, to simplify configuration of all of the
Ubiquiti devices.
For some time, I have been trying to design a new configuration for the
reverse proxy on port 443 to correctly handle all the types of traffic
on that port. In the original implementation, all traffic on port 443
was forwarded by the gateway to HAProxy. HAProxy then used TLS SNI to
route connections to the correct backend server based on the requested
host name. This allowed both HTTPS and OpenVPN-over-TLS to use the same
port; however, it was not without issues. A layer 4 (TCP) proxy like
this "hides" the real source address of clients connecting to the
backend, which makes IP-based security (e.g. rate limiting, blacklists,
etc.) impossible at the application level. In particular, Nextcloud,
which implements rate limiting, was constantly imposing login delays on
all users, because legitimate traffic was indistinguishable from
Internet background noise.
To alleviate these issues, I needed to change the proxy to operate in
layer 7 (HTTP) mode, so that headers like *X-Forwarded-For* and
*X-Forwarded-Host* could be added. Unfortunately, this was not easy,
because of the simultaneous requirement to forward OpenVPN traffic.
HAProxy can only do SNI inspection in TCP mode. So, I began looking for
an alternate way to proxy both HTTP and non-HTTP traffic on the same
port.
The HTTP protocol defines the `CONNECT` method, which is used by forward
proxies to tunnel HTTPS over plain HTTP. OpenVPN clients support
tunneling OpenVPN over HTTP using this method as well. HAProxy has
limited support for the CONNECT method (i.e. it doesn't do DNS
resolution, and I could find no way of restricting the destination) with
the `http_proxy` option, so I looked for alternate proxy servers that
had more complete support. Unsurprisingly, Apache HTTPD has the most
complete implementation of the `CONNECT` method (Nginx doesn't support
it at all). Using a name-based virtual host on port 443, Apache will
accept requests for *vpn.pyrocufflink.net* (using TLS SNI) and allow the
clients to use the `CONNECT` method to create a tunnel to the OpenVPN
server. This requires OpenVPN clients to a) use *stunnel* to wrap plain
HTTP proxy connections in TLS and b) configure OpenVPN to use the
TLS-wrapped HTTP proxy.
With Apache accepting all incoming connections, it was trivial to also
configure it as a layer 7 reverse proxy for Bitwarden, Gitea, Jenkins,
and Nextcloud. Unfortunately, proxying for the other websites
(darkchestofwonders.us, chmod777.sh, dustin.hatch.name) was not quite as
straightforward. These websites would need to have an internal name
that differed from their external name, and thus a certificate valid for
that name. Rather than reconfigure all of these sites and set all of
that up, I decided to just move the responsibility for handling direct
connections from outside to *web0* and eliminate the dedicated
reverse proxy. This was not possible before, because Apache could not
forward the OpenVPN traffic directly, but now with the forward proxy
configuration, there is no reason to have a separate server for these
connections.
Overall, I am pleased with how this turned out. It makes the OpenVPN
configuration simpler (*stunnel* no longer needs to run on the OpenVPN
server itself, since Apache is handling TLS termination), eliminates a
network hop for the websites, makes the reverse proxy configuration for
the other web applications much easier to understand, and resolves the
original issue of losing client connection information.
A name-based HTTP (not HTTPS) virtual host for *pyrocufflink.net* is
necessary to ensure requests are handled properly, now that there is
another HTTP virtual host (chmod777.sh) defined on the same server.
This commit updates the configuration for *pyrocufflink.net* to use the
wildcard certificate managed by *lego* instead of a unique certificate
managed by *certbot*.
*chmod777.sh* is a simple static website, generated by Hugo. It is
built and published from a Jenkins pipeline, which runs automatically
when new commits are pushed to Gitea.
The HTTPS certificate for this site is signed by Let's Encrypt and
managed by `lego` in the `certs` submodule.
This commit adds front-end and back-end configuration for HAProxy to
proxy HTTP/HTTPS for
*nextcloud.pyrocufflink.net*/*nextcloud.pyrocufflink.blue* to
*cloud0.pyrocufflink.blue*.
The *nextcloud* role installs Nextcloud from the specified release
archive, downloading it to the control machine first if necessary, and
configures Apache and PHP-FPM to serve it.
The `nextcloud.yml` playbook uses the *cert* role to install the X.509
certificate for the Nextcloud server, sets up Apache HTTPD with the
*apache* role, and installs Nextcloud using the *nextcloud* role.
The host *cloud0.pyrocufflink.blue* is the Nextcloud server for
Pyrocufflink.
The *cert* role is intended to be a generic, reusable role to copy an
X.509 certificate and/or private key file to managed nodes. It is
intended to be included in a playbook with at least the `cert_src` and
`cert_dest` variables defined, e.g.:
```
- hosts: whatever
  roles:
    - role: cert
      cert_src: whatever.cer
      cert_dest: /path/to/whatever.cer
```
The `haproxy_ssl_default_bind_options` variable is not defined for
machines running Fedora, because this parameter is not used in the
default configuration file there.
I seem to have forgotten how I got the RPM for Gitea. I think I built
it, but I cannot find the spec file, nor the RPM package. Since this is
clearly not reproducible, I decided to switch to using the binary
provided by upstream for now, until either I or Fedora get around to
making a better RPM.
Installing Gitea from the upstream binary is simple: just download it
and copy it to `/usr/local/bin`. Of course, the OS user and systemd
unit have to be managed by configuration policy when it's installed this
way.
*burp1.pyrocufflink.blue* will replace *burp0.pyrocufflink.blue* as the
BURP server for Pyrocufflink. It is a physical machine (Fitlet), making
it simpler to manage the USB drives. The old virtual machine will be
decommissioned soon.
Ansible replaced the `version_compare` filter with a `version_compare`
test that does the same thing. The former is completely gone now,
causing the template to fail to render, so its usage of that filter
needs to be updated.
Using the generic *burp.pyrocufflink.blue* name will allow easier
transition to a new BURP server. However, since this is not the actual
name, it cannot be used for task delegation, so a separate variable is
required to store the real name of the BURP server. This is only used
during client deployment, and not by BURP itself.
The *graylog* role installs Graylog from the *graylog2.org* Yum
repository and manages basic server configuration. It augments the
default systemd unit to provide the `CAP_NET_BIND_SERVICE` capability to
the Graylog server process via ambient capabilities, thereby allowing
the server to bind to the privileged Syslog UDP port.
The `Alias` configuration for Certbot needs to be configured before any
other locations, to ensure the `/.well-known` path is always served from
the local filesystem. If another drop-in configuration file (e.g.
`bitwarden.conf`) is ordered before it, it may override this
configuration and prevent Let's Encrypt from working.
In order to allow Jenkins to connect to the Docker daemon socket, the
socket must be owned by the *docker* group, and the *jenkins* user must
be a member of it.
This commit adds an HAProxy backend for Bitwarden, and adds ACL rules to
the frontend to proxy traffic to *bitwarden.pyrocufflink.blue* or
*bitwarden.pyrocufflink.net* to it.
Since the same certificate is used for LDAPS and RADIUS (EAP-TLS), it
makes more sense to store it only once, with the latter file as a symlink
to the former.
This commit configures *bw0.pyrocufflink.blue* as a BURP client, so that
the Bitwarden data can be backed up. A pre-backup script is used to
take a consistent snapshot of the SQLite database before copying it to
the BURP server.
The BURP server runs as user *burp*, and as such, requires that the
client-specific configuration files be owned by that user so they can be
read when a client connects.
Newer versions of the BURP client require `status_port` to be set. This
commit updates the `burp.conf.j2` template to more closely match the
default configuration shipped with the *burp* package, including setting
this new value.
Newer versions of Gitea need a JWT secret for OAuth2. Gitea will
attempt to generate one at startup if it is not already specified in the
configuration file, but this will fail since the file is not writable by
the user running the service. As such, it must be set via configuration
policy.
The point of the "wheel host" is to serve as a repository of Python
packages (wheels) built by Jenkins for consumption by `pip` et al. For
applications and libraries that do not provide all of their dependencies
as binary packages, this makes a convenient way to install them without
requiring all of the build tools and dependencies on the destination
machine.
The idea here is that a Jenkins job runs `pip wheel` for a distribution
package name or `requirements.txt` file and then uploads the resulting
wheel files using `rsync`. Apache is configured to serve the upload
directory with an index compatible with `pip`'s `--find-links`.
The *hass-dhcp* role installs dnsmasq and configures it to serve DHCP
requests on the Home Assistant network. Since this network is not
routed, the regular DHCP relay/server setup will not work.
This commit adds a systemd unit to enable the Kernel Same-page Merging
daemon on VM hosts. This allows much greater virtual machine density,
especially when many VMs are running the same guest OS.
Debian does not support system-wide SSL cipher suite profiles of course,
so these options need to be specified explicitly when deploying HAProxy
on Debian-based machines.
This commit updates the net-ifaces scripts for both *vmhost0* and
*vmhost1* to create VLAN and bridge interfaces for the Management and
Home Assistant networks.
The *taiga* role installs the three components of Taiga:
* taiga-back
* taiga-events
* taiga-front
*taiga-back* is a Python application. Its dependencies are installed via
`pip` in the *taiga* user's site-packages, and the application itself is
installed by unpacking the archive. *taiga-events* is a Node.js
application. Its dependencies are installed by `npm`, and it is itself
installed by unpacking the archive. Finally, *taiga-front* is a
single-page browser application that is installed by unpacking the
archive, and served by Apache.
Taiga requires PostgreSQL and RabbitMQ.
The *websites/pyrocufflink.net* role configures the public web server to
host *pyrocufflink.net*. This site has two functions:
* It redirects `/` to http://dustin.hatch.name/
* It proxies user home directories (i.e. /~dustin/) to the file server
The `ServerName` directive needs to be set inside the default SSL vhost,
as this property does not get inherited from the global configuration,
and it needs to be set in order for SNI to work correctly.