The `TrustedUserCAKeys` setting for *sshd(8)* tells the server to accept
any certificates signed by keys listed in the specified file.
The authenticating username has to match one of the principals listed in
the certificate, of course.
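For reference, the relevant `sshd_config` directive looks something like
this (the path of the CA public key file is an assumption):

```
# sshd_config drop-in (sketch); the file holds the SSHCA user CA public key
TrustedUserCAKeys /etc/ssh/sshca_user_ca.pub
```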
This role is applied to all machines, via the `base.yml` playbook.
Certificates issued by the user CA managed by SSHCA will therefore be
trusted everywhere. This brings us one step closer to eliminating the
dependency on Active Directory/Samba.
Normal users do not need shell access to the file server, and certainly
should not be allowed to e.g. forward ports through it. Using a `Match`
block, we can apply restrictions to users who do not need administrative
functionality. In this case, we restrict everyone who is not a member
of the *Server Admins* group in the PYROCUFFLINK AD domain.
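In `sshd_config` terms, the restrictions look roughly like the sketch
below. The real policy matches everyone *not* in *Server Admins*; the
group pattern here is simplified to a hypothetical non-admin group,
since the exact spelling depends on how the AD groups are exposed to
sshd:

```
# sketch: deny tunnelling/forwarding to unprivileged users
Match Group users
    AllowTcpForwarding no
    AllowAgentForwarding no
    X11Forwarding no
```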
By default, `sudo` requires users to authenticate with their passwords
before granting them elevated privileges. It can be configured to
allow (some) users to run (some) privileged commands without prompting
for a password (i.e. `NOPASSWD`); however, this has real security
implications. Disabling the password requirement would effectively
grant *any* program running as the user root privileges. Prompting for a
password prevents malicious software from running privileged commands
without the user knowing.
Unfortunately, handling `sudo` authentication for Ansible is quite
cumbersome. For interactive use, the `--ask-become-pass`/`-K` argument
is useful, though entering the password for each invocation of
`ansible-playbook` while iterating on configuration policy development
is a bit tedious. For non-interactive use, though, the password of
course needs to be stored somewhere. Encrypting it with Ansible Vault
is one way to protect it, but it still ends up stored on disk somewhere
and needs to be handled carefully.
*pam_ssh_agent_auth* provides an acceptable solution to both issues. It
is better than disabling `sudo` authentication entirely, but a lot more
convenient than dealing with passwords. It uses the calling user's SSH
agent to assert that the user has access to a private key corresponding
to one of the authorized public keys. Using SSH agent forwarding, that
private key can even exist on a remote machine. If the user does not
have a corresponding private key, `sudo` will fall back to normal
password-based authentication.
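The setup boils down to two pieces: a PAM rule for `sudo`, and keeping
the agent socket in the environment when `sudo` runs. A sketch (the
authorized keys path is a common choice, not necessarily what the role
uses):

```
# /etc/pam.d/sudo (sketch): try SSH agent authentication first
auth       sufficient   pam_ssh_agent_auth.so file=/etc/security/authorized_keys

# sudoers drop-in (sketch): preserve the agent socket for PAM
Defaults   env_keep += "SSH_AUTH_SOCK"
```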
The security of this solution depends heavily on the client storing its
keys appropriately. FIDO2 keys are supported, though when used with
Ansible, it is quite annoying to have to touch the token for _every
task_ on _every machine_. Thus, I have created new FIDO2 keys for both
my laptop and my desktop that have the `no-touch-required` option
enabled. This means that in order to use `sudo` remotely, I still need
to have my token plugged in to my computer, but I do not have to tap it
every time it's used.
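Generating such a key looks roughly like this (the `application` string
and file name are only illustrative):

```sh
# sketch: a FIDO2-backed key that still requires the token to be present,
# but does not require a touch for every signature
ssh-keygen -t ed25519-sk -O no-touch-required -O application=ssh:sudo \
    -f ~/.ssh/id_ed25519_sk_sudo
```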
For Jenkins, a hardware token is obviously impossible, but using a
dedicated key stored as a Jenkins credential is probably sufficient.
Unfortunately, the automatic transfer switch does not seem to work
correctly. When the standby source is a UPS running on battery, it does
*not* switch sources if the primary fails. In other words, when the
power is out and both UPSes are running on battery, it will NOT switch
to the second one when the first dies. It has no trouble switching
when the second source is mains power, though, which is very strange.
I have tried adjusting all the settings, including nominal input
voltage, sensitivity, and frequency tolerance, but none seem to have
any effect.
Since it is more important for the machines to shut down safely than it
is to have an extra 10-15 minutes of runtime during an outage, the best
solution for now is to configure the hosts to shut down as soon as the
first UPS battery gets low. This is largely a waste of the second UPS,
but at least it will help prevent data loss.
Increase the delay after starting the Kubernetes cluster, to hopefully
allow things to "settle down" enough that starting services on
follow-up VMs doesn't time out.
Start Kubernetes earlier. Start Synapse later (it takes a long time to
start up and often times out when the VM hosts are under heavy load).
Start SMTP relay later as it's not really needed.
The network device for the test/*pyrocufflink.red* network is named
`br1`. This needs to match in the systemd-networkd configuration or
libvirt will not be able to attach virtual machines to the bridge.
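For reference, the matching pair of unit files looks something like
this (file names and the `ConfigureWithoutCarrier` choice are
assumptions):

```
# /etc/systemd/network/br1.netdev (sketch)
[NetDev]
Name=br1
Kind=bridge

# /etc/systemd/network/br1.network (sketch)
[Match]
Name=br1

[Network]
ConfigureWithoutCarrier=yes
```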
`upsmon` is the component of [NUT] that monitors (local or remote) UPS
devices and reacts to changes in their state. Notably, it is
responsible for powering down the system when the remaining battery
power is insufficient.
Since *file0.pyrocufflink.blue* now hosts a couple of VirtualHosts,
accessing its HTTP server by the *files.pyrocufflink.blue* alias no
longer works, as Apache routes unknown hostnames to the first
VirtualHost, rather than the global configuration. To resolve this, we
must set `ServerName` to the alias.
The *ssh-host-certs* role, which is now applied as part of the
`base.yml` playbook and therefore applies to all managed nodes, is
responsible for installing the *sshca-cli* package and using it to
request signed SSH host certificates. The *sshca-cli-systemd*
sub-package includes systemd units that automate the process of
requesting and renewing host certificates. These units need to be
enabled and provided the URL of the SSHCA service. Additionally, the
SSH daemon needs to be configured to load the host certificates.
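The sshd side of that amounts to pointing the daemon at the certificate
the renewal units maintain; a sketch, assuming the conventional OpenSSH
file names:

```
# sshd_config drop-in (sketch)
HostKey /etc/ssh/ssh_host_ed25519_key
HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub
```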
So it turns out Gitea's RPM package repository feature is less than
stellar. Since each organization/user can only have a single
repository, separating packages by OS would be extremely cumbersome.
Presumably, the feature was designed for projects that only build a
single RPM for each version, but most of my packages need multiple
builds, as they tend to link to system libraries. Further, only the
repository owner can publish to user-scoped repositories, so e.g.
Jenkins cannot publish anything to a repository under my *dustin*
account. This means I would ultimately have to create an Organization
for every OS/version I need to support, and make Jenkins a member of it.
That sounds tedious and annoying, so I decided against using that
feature for internal packages.
Instead, I decided to return to the old ways, publishing packages with
`rsync` and serving them with Apache. It's fairly straightforward to
set this up: we just need a directory with the appropriate permissions
for users to upload packages, and to configure Apache to serve from it.
One advantage Gitea's feature had over a plain directory is its
automatic management of repository metadata. Publishers only have to
upload the RPMs they want to serve, and Gitea handles generating the
index, database, etc. files necessary to make the packages available to
Yum/dnf. With a plain file host, the publisher would need to use
`createrepo` to generate the repository metadata and upload that as
well. For repositories with multiple packages, the publisher would need
a copy of every RPM file locally in order for them to be included in the
repository metadata. This, too, seems like it would be too much trouble
to be tenable, so I created a simple automatic metadata manager for the
file-based repo host. Using `inotifywatch`, the `repohost-createrepo`
script watches for file modifications in the repository base directory.
Whenever a file is added or changed, the directory containing it is
added to a queue. Every thirty seconds, the queue is processed; for
each unique directory in the queue, repository metadata are generated.
This implementation combines the flexibility of a plain file host,
supporting an effectively unlimited number of repositories with
fully-configurable permissions, and the ease of publishing of a simple
file upload.
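The real script lives in the repository, but the gist of the approach
is something like the following sketch (the paths, queue file, and use
of `inotifywait`/`createrepo_c` are illustrative assumptions):

```sh
#!/bin/sh
# Sketch: watch the repository tree, queue the directory of each changed
# file, and regenerate metadata for every queued directory every 30s.
REPO_ROOT=/srv/repo
QUEUE=/run/repohost/queue

# %w prints the directory containing the file that triggered the event
inotifywait -m -r -e close_write -e moved_to --format '%w' "$REPO_ROOT" \
    >> "$QUEUE" &

while sleep 30; do
    [ -s "$QUEUE" ] || continue
    sort -u "$QUEUE" | while read -r dir; do
        createrepo_c --update "$dir"
    done
    : > "$QUEUE"
done
```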
AWS is going to begin charging extra for routable IPv4 addresses soon.
There's really no point in having a relay in the cloud anymore anyway,
since a) all outbound messages are sent via the local relay and b) no
messages are sent to anyone except me.
I've added a new Kubernetes worker node,
*k8s-aarch64-n1.pyrocufflink.blue*. This machine is a Raspberry Pi CM4
mounted on a Waveshare CM4-IO-Base A and clipped onto the DIN rail.
It's got 8 GB of RAM and 32 GB of eMMC storage. I intend to use it to
build container images locally, instead of bringing up cloud instances.
Zincati is the automatic update manager on Fedora CoreOS. It exposes
Prometheus metrics for host/update statistics, which are useful to track
the progress of automatic updates and identify update issues.
Zincati actually exposes its metrics via a Unix socket on the
filesystem. Another process, [local_exporter], is required to expose
the metrics from this socket via HTTP so Prometheus can scrape them.
[local_exporter]: https://github.com/lucab/local_exporter
*collectd* is now running on *k8s-aarch64-n0.pyrocufflink.blue*,
exposing system metrics. As it is not a member of the AD domain, it has
to be explicitly listed in the `scrape_collectd_extra_targets` variable.
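In the inventory, that looks something like this (assuming the variable
takes `host:port` entries; 9103 is *write_prometheus*'s default port):

```yaml
scrape_collectd_extra_targets:
  - k8s-aarch64-n0.pyrocufflink.blue:9103
```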
*nvr1.pyrocufflink.blue* has been migrated to Fedora CoreOS. As such,
it is no longer managed by Ansible; its configuration is done via
Butane/Ignition. It is no longer a member of the Active Directory
domain, but it does still run *collectd* and export Prometheus metrics.
Since Ubiquiti only publishes Debian packages for the Unifi Network
controller software, running it on Fedora has historically been nigh
impossible. Fortunately, a modern solution is available: containers.
The *linuxserver.io* project publishes a container image for the
controller software, making it fairly easy to deploy on any host with an
OCI runtime. I briefly considered creating my own image, since theirs
must be run as root, but I decided the maintenance burden would not be
worth it. Using Podman's user namespace functionality, I was able to
work around this requirement anyway.
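Roughly what that looks like with Podman; the image tag, ports, and
volume path are assumptions, and `--userns=auto` is one way to map the
container's root user onto an unprivileged UID range on the host:

```sh
podman run -d --name unifi \
    --userns=auto \
    -p 8443:8443 -p 3478:3478/udp \
    -v /var/lib/unifi:/config:U \
    lscr.io/linuxserver/unifi-controller:latest
```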
Sometimes I need to connect to a machine when there is an AD issue (e.g.
domain controllers are down, clocks are out of sync, etc.) but I can't
do it from my desktop.
When the RAID array is being resynchronized after the archived disk has
been reconnected, md changes the disk status from "missing" to "spare."
Once the synchronization is complete, it changes from "spare" to
"active." We only want to trigger the "disk needs archived" alert once
the synchronization process is complete; otherwise, both the "disks need
swapped" and "disk needs archived" alerts would be active at the same
time, which makes no sense. By adjusting the query for the "disk needs
archived" alert to consider disks in both "missing" and "spare" status,
we can delay firing that alert until the proper time.
The Frigate server has a RAID array that it uses to store video
recordings. Since there have been a few occasions where the array has
suddenly stopped functioning, probably because of the cheap SATA
controller, it will be nice to get an alert as soon as the kernel
detects the problem, so as to minimize data loss.
Most of the Synapse server's state is in its SQLite database. It also
has a `media_store` directory that needs to be backed up, though.
In order to back up the SQLite database while the server is running, the
database must be in "WAL mode." By default, Synapse leaves the database
in the default "rollback journal mode," which disallows multiple
processes from accessing the database, even for read-only operations.
To change the journal mode:
```sh
sudo systemctl stop synapse
sudo -u synapse sqlite3 /var/lib/synapse/homeserver.db 'PRAGMA journal_mode=WAL;'
sudo systemctl start synapse
```
Kubernetes exports a *lot* of metrics in Prometheus format. I am not
sure what all is there, yet, but apparently several thousand time series
were added.
To allow anonymous access to the metrics, I added this ClusterRole (and
a binding for it):
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- nonResourceURLs:
  - /metrics
  verbs:
  - get
```
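The binding itself is not shown above; it presumably looks something
like this (the binding name is arbitrary):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-anonymous
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:anonymous
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
```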
MinIO exposes metrics in Prometheus exposition format. By default, it
requires an authentication token to access the metrics, but I was unable
to get this to work. Fortunately, it can be configured to allow
anonymous access to the metrics, which is fine, in my opinion.
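The knob for that is an environment variable on the MinIO server
(sketch; how it gets set depends on how MinIO is deployed):

```sh
# "public" disables authentication on the /minio/v2/metrics/* endpoints
MINIO_PROMETHEUS_AUTH_TYPE=public
```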
The `journal2ntfy.py` script follows the systemd journal by spawning
`journalctl` as a child process and reading from its standard output
stream. Any command-line arguments passed to `journal2ntfy` are passed
to `journalctl`, which allows the caller to specify message filters.
For any matching journal message, `journal2ntfy` sends a message via
the *ntfy* web service.
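For example, a hypothetical invocation that forwards kernel messages
about an MD RAID device (`-k` limits the journal to kernel messages,
`--grep` filters on the message text):

```sh
journal2ntfy.py -k --grep 'md0'
```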
For the BURP server, we're going to use `journal2ntfy` to generate
alerts about the RAID array. When I reconnect the disk that was in the
fireproof safe, the kernel will log a message from the *md* subsystem
indicating that the resynchronization process has begun. Then, when
the disks are again in sync, it will log another message, which will
let me know it is safe to archive the other disk.
This alert will fire once the MD RAID resynchronization process has
completed and both disks in the array are online. It will clear when
one disk is disconnected and moved to the safe.
When BURP fails to even *start* a backup, it does not trigger a
notification at all. As a result, I may not notice for a few days when
backups are not happening. That was the case this week, when clients'
backups were failing immediately, because of a file permissions issue on
the server. To hopefully avoid missing backups for too long in the
future, I've added two new alerts:
* The *no recent backups* alert fires if there have not been *any* BURP
backups recently. This may also fire, for example, if the BURP
exporter is not working, or if there is something wrong with the BURP
data volume.
* The *missed client backup* alert fires if an active BURP client (i.e.
one that has had at least one backup in the past 90 days) has not been
backed up in the last 24 hours.
Using a 30-day window for the `tlast_change_over_time` function
effectively "caps out" the value at 30 days. Thus, the alert reminding
me to swap the BURP backup volume will never fire, since the value will
never be greater than the 30-day threshold. Using a wider window
resolves that issue (though the query will still produce inaccurate
results beyond the window).
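With made-up metric and label names, the adjusted expression looks
something like this; the 90-day lookbehind window is wider than the
30-day threshold, so the "last change" timestamp can actually become
old enough to trip the comparison:

```
tlast_change_over_time(md_disks_active[90d]) < (time() - 30 * 86400)
```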
The `tls cafile` setting in `smb.conf` is not necessary. It is used for
verifying peer certificates for mutual TLS authentication, not to
specify the intermediate certificate authority chain like I thought.
The setting cannot simply be left out, though. If it is not specified,
Samba will attempt to load a file from a built-in default path, which
will fail, causing the server to crash. This is avoided by setting the
value to the empty string.
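In `smb.conf`, that is simply:

```
tls cafile =
```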
The `tlast_change_over_time` function needs an interval wide enough to
consider the range of time we are interested in. In this case, we want
to see if the BURP volume has been swapped in the last thirty days, so
the interval needs to be `30d`.
*dc0.p.b* has been gone for a while now. All the current domain
controllers use LDAPS certificates signed by Let's Encrypt and include
the *pyrocufflink.blue* name, so we can now use the apex domain A record
to connect to the directory.
This alert counts how long it's been since the number of "active" disks
in the RAID array on the BURP server has changed. The assumption is
that the number will typically be `1`, but it will be `2` once the
second disk has synchronized, before the swap occurs.
1. Grafana 8 changed the format of the query string parameters for the
Explore page.
2. vmalert no longer needs the `http.pathPrefix` argument when behind a
reverse proxy; instead, it uses the request path like the other
Victoria Metrics components.
The way I am handling swapping out the BURP disk now is by using the
Linux MD RAID driver to manage a RAID 1 mirror array. The array
normally operates with one disk missing, as that disk is kept in the
fireproof safe.
When it is time to swap the disks, I reattach the offline disk, let the
array resync, then disconnect and store the other disk.
This works considerably better than the previous method, as it does not
require BURP or the NFS server to be offline during the synchronization.
The rule is: "if it is accessible on the Internet, its name ends in
.net."
Although Vaultwarden can be accessed by either name, the one specified
in the Domain URL setting is the only one that works for WebAuthn.
Domain controllers only allow users in the *Domain Admins* AD group to
use `sudo` by default. *dustin* and *jenkins* need to be able to apply
configuration policy to these machines, but they are not members of said
group.
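The fix is a sudoers drop-in along these lines (sketch; the file name
is arbitrary):

```
# /etc/sudoers.d/ansible (sketch)
dustin   ALL=(ALL) ALL
jenkins  ALL=(ALL) ALL
```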
I changed the naming convention for domain controller machines. They
are no longer "numbered," since the plan is to rotate through them
quickly. For each release of Fedora, we'll create two new domain
controllers, replacing the existing ones. Their names are now randomly
generated and contain letters and numbers, so the Blackbox Exporter
check for DNS records needs to account for this.
Gitea package names (e.g. OCI images, etc.) can contain `/` characters.
These are encoded as `%2F` in request paths. Apache needs to forward
these sequences to the Gitea server without decoding them.
Unfortunately, the `AllowEncodedSlashes` setting, which controls this
behavior, is a per-virtualhost setting that is *not* inherited from the
main server configuration, and therefore must be explicitly set inside
the `VirtualHost` block. This means Gitea needs its own virtual host
definition, and cannot rely on the default virtual host.
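A sketch of such a virtual host; the hostname and backend port are
assumptions, and `nocanon` keeps `mod_proxy` from re-encoding the path
on the way to Gitea:

```
<VirtualHost *:443>
    ServerName git.pyrocufflink.blue
    AllowEncodedSlashes NoDecode
    ProxyPass / http://127.0.0.1:3000/ nocanon
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>
```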
When I added the *systemd-networkd* configuration for the Kubernetes
network interface on the VM hosts, I only added the `.netdev`
configuration and forgot the `.network` part. Without the latter,
*systemd-networkd* creates the interface, but does not configure or
activate it, so it is not able to handle traffic for the VMs attached to
the bridge.
The *netboot/jenkins-agent* Ansible role configures three NBD exports:
* A single, shared, read-only export containing the Jenkins agent root
filesystem, as a SquashFS filesystem
* For each defined agent host, a writable data volume for Jenkins
workspaces
* For each defined agent host, a writable data volume for Docker
Agent hosts must have some kind of unique value to identify their
persistent data volumes. Raspberry Pi devices, for example, can use the
SoC serial number.
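In `nbd-server` terms, the exports end up looking roughly like this
(export names, paths, and the example serial number are all
illustrative):

```
# /etc/nbd-server/config (sketch)
[generic]

[jenkins-agent-root]
exportname = /srv/netboot/jenkins-agent/rootfs.squashfs
readonly = true

[workspace-10000000abcdef01]
exportname = /srv/netboot/jenkins-agent/10000000abcdef01/workspace.img

[docker-10000000abcdef01]
exportname = /srv/netboot/jenkins-agent/10000000abcdef01/docker.img
```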
The `-external.url` and `-external.alert.source` command line arguments
and their corresponding environment variables can be used to configure
the "Source" links associated with alerts created by `vmalert`.
The firewall hardware is too slow to run the *prometheus_speedtest*
program. It always showed *way* lower speeds than were actually
available. I've moved the service to the Kubernetes cluster and it
works a lot better there.
*mtrcs0.pyrocufflink.red* is a Raspberry Pi CM4 on a Waveshare
CM4-IO-BASE-B carrier board with a NVMe SSD. It runs a custom OS built
using Buildroot, and is not a member of the *pyrocufflink.blue* AD
domain.
*mtrcs0.p.r* hosts Victoria Metrics/`vmagent`, `vmalert`, AlertManager,
and Grafana. I've created a unique group and playbook for it,
*metricspi*, to manage all these applications together.
In addition to ignoring particular types of filesystems, e.g. OverlayFS,
we can also ignore filesystems by their mount point. This could be
useful, for example, for bind-mounted directories, such as those used on
Kubernetes nodes.
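In `collectd.conf`, that combination looks something like this (the
kubelet path pattern is just an example):

```
<Plugin df>
  FSType "overlay"
  MountPoint "/^\/var\/lib\/kubelet\//"
  IgnoreSelected true
</Plugin>
```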
There is no specific playbook or role for Kubernetes. All OS
configuration is done at install time via kickstart scripts, and
deploying Kubernetes itself is done (manually) using `kubeadm init` and
`kubeadm join`.
Adding a second camera to the back yard, on the North side of the porch,
to try and figure out how the possums keep getting under the porch even
with the chicken wire around it!
We're trying to discover how the possums are getting into and out of the
house. Let's enable continuous video recording from the back yard
camera so we can observe them and come up with a plan to get rid of
them.
To handle the RSVP form on *dustinandtabitha.com*, we are going to use
*formsubmit*. It runs on the same machine that hosts the website, so
there's no dealing with CORS. The */submit/rsvp* path, which is proxied
to the backend, is the RSVP form's target.
The state history database is entirely too big. It takes over an hour
to create a backup of it, which usually causes BURP to time out. The
data it stores isn't particularly interesting anyway. Instead of trying
to back it up and ultimately not getting any backup at all, we'll just
skip it altogether to ensure we have a consistent backup of everything
else that is actually important.
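In the client's BURP configuration, that is just an exclude rule (the
path is an assumption about where the Home Assistant recorder database
lives):

```
exclude = /var/lib/hass/home-assistant_v2.db
```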
If Nextcloud does not have the Internet-facing reverse proxy listed in
its "trusted proxies" setting, it will mark all traffic as being from
the proxy itself. This breaks brute force detection, etc.
Transitioning from push-based to pull-based monitoring with
Prometheus/collectd. The *write_prometheus* plugin will be installed on
all hosts, and Prometheus will be configured to scrape them directly.
Having this option enabled dramatically improves the reliability of
collectd multicast traffic from physical machines and VMs on a separate
VM host from the receiving machine.
1. Set a password for *root* on all machines (useful for logging in via
serial console if network is down)
2. Set an authorized SSH key for root on all machines:
* For Fedora 34, use my FIDO2 security token key
* For all other hosts, use my ED25519 key
Filesystems like NFS and CIFS require "helper" utilities (i.e.
`mount.nfs` and `mount.cifs`, respectively). These need to be installed
in order for a system to be able to mount those filesystems.
The current shared storage system uses NFSv4, and as such, the
*nfs-utils* package needs to be installed on the VM hosts.
Originally, the network configuration for the VM networks and the
storage network was configured using the *netifaces* role. This has
effectively stopped working in recent versions of Fedora, as it sort of
relied on `dhcpcd`, which has not been maintained in Fedora for a while
and no longer behaves correctly. After evaluating *NetworkManager* as a
replacement, I decided that *systemd-networkd* is a more appropriate
solution.
There are effectively two "layers" of network configuration needed for
the VM hosts: the host-specific settings, and the common settings. The
host-specific settings include such properties as the IP address of the
management interface and the names of the physical ports that make up
the bonded interfaces. The common settings are the bonded interfaces,
the VLAN interfaces created on top of the bond, and the bridges that
provide access to VMs.
To configure the host-specific settings, each host simply needs the
appropriate `networkd_*` variables in its `host_vars` file. For the
common settings, we apply the *systemd-networkd* role again in the
`vmhost.yml` with different values for these variables. Thus,
effectively, `systemd-networkd.yml` manages the host-specific settings,
while `vmhost.yml` manages the common settings.
I couldn't get RTMP to work on the Back Yard camera because the `ffmpeg`
process kept crashing:
```
ffmpeg.back_yard.clips_rtmp ERROR : av_interleaved_write_frame(): Connection reset by peer
ffmpeg.back_yard.clips_rtmp ERROR : [flv @ 0x5562090c8ec0] Failed to update header with correct duration.
ffmpeg.back_yard.clips_rtmp ERROR : [flv @ 0x5562090c8ec0] Failed to update header with correct filesize.
ffmpeg.back_yard.clips_rtmp ERROR : Error writing trailer of rtmp://127.0.0.1/live/back_yard: Connection reset by peer
watchdog.back_yard INFO : Terminating the existing ffmpeg process...
watchdog.back_yard INFO : Waiting for ffmpeg to exit gracefully...
```
I thought increasing the value of the `--shm-size` argument for
`podman` would help, but even going as high as 1024 mebibytes did not
resolve the problem.
Ultimately, I decided that it is not really necessary to view the full
4k stream in real time. The back yard camera supports three streams, so
I set them all up for different roles. I briefly considered using a
single 1080p stream for both object detection and RTMP streaming, but
this consumed considerable CPU time, so I decided against it for now. I
may re-evaluate that option if I decide to purchase a TPU.
*nvr0.pyrocufflink.blue* hosts Frigate. It is deployed on a separate
subnet, for two reasons:
* To avoid streaming video from the cameras through the firewall
* To prevent any hosts on the LAN except Home Assistant from
communicating with Frigate, since it does not have any kind of
authentication or access control
For hosts that cannot send metrics via multicast (e.g. because they are
on a different subnet), *collectd* needs to listen on the all-hosts
unicast address.
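Presumably that means adding a wildcard `Listen` alongside the
multicast one in the *network* plugin configuration; a sketch
(239.192.74.66 and 25826 are collectd's defaults):

```
<Plugin network>
  Listen "239.192.74.66" "25826"
  Listen "0.0.0.0" "25826"
</Plugin>
```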
ZwaveJS2Mqtt includes a very powerful web-based UI for configuring and
controlling the Z-Wave network. This functionality is no longer
available within Home Assistant itself, so being able to access the
ZwaveJS2Mqtt UI is crucial to operating the network.
I wanted to make the UI available at */zwave/*, which requires using
*mod_rewrite* to conditionally proxy requests based on the `Connection`
HTTP header, since the UI passes both HTTP and WebSocket requests to the
same paths. *mod_rewrite* configuration is not inherited from the main
server configuration to virtual hosts, so the
`RewriteRule`/`RewriteCond` directives have to be specified within the
`<VirtualHost>` block. This means that the Home Assistant proxy
configuration has to be within its own virtual host, and the
ZwaveJS2Mqtt configuration has to be there as well.
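The relevant part of that virtual host ends up looking something like
the sketch below; 8091 is ZwaveJS2Mqtt's default port, and the exact
paths are assumptions:

```
RewriteEngine On
RewriteCond %{HTTP:Connection} Upgrade [NC]
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteRule ^/zwave/(.*) ws://127.0.0.1:8091/$1 [P,L]
ProxyPass /zwave/ http://127.0.0.1:8091/
ProxyPassReverse /zwave/ http://127.0.0.1:8091/
```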
Mosquitto 2.x included two significant changes from 1.6:
* There is no longer a "default" listener; all listeners are configured
in the same way
* The daemon drops privileges *before* reading TLS certificates and
private keys
Although configuration policy is not yet available for Prometheus
itself, the `collectd.yml` playbook also uses the *prometheus* host
group. Specifically, hosts in this group are configured to receive
collectd data from other hosts and expose those data through the
`write_prometheus` plugin.
This commit introduces the *grafana* role and the corresponding
`grafana.yml` playbook. The role installs Grafana using the system
package manager, and configures the server (including LDAP
authentication).
Since the Nextcloud configuration file is managed by the configuration
policy, all of the settings configurable through the web UI need to be
templated. One important group of settings is the outbound email
configuration. This can now be configured using the `nextcloud_smtp`
Ansible variable.
Fedora now includes a packaged version of Nextcloud. This will be
_much_ easier to maintain than the tarball-based distribution method.
There are some minor differences in how the Fedora package works,
compared to the upstream tarball. Notably, it puts the configuration
file in `/etc/` and makes it read-only, and it stores persistent data
separate from the application. These differences require modifications
to the Apache and PHP-FPM configuration, but the package also included
examples to make this easier. Since the `config.php` is read-only now,
it has to be managed by the configuration policy; it cannot be modified
by the Administration web UI.
*Mosquitto* implements an MQTT server. It is the recommended
implementation for using MQTT with Home Assistant.
I have added this role to deploy Mosquitto on the Home Assistant server.
It will be used to send data from custom sensors, such as the
temperature/pressure/humidity sensor connected to the living room wall
display.
When there is a network issue that prevents DNS names from being
resolved, it can be difficult to troubleshoot. For example, last night,
the Samba domain controller crashed, so *pyrocufflink.blue* names were
unavailable. Furthermore, the domain controller VM was apparently
locked up, so I could not SSH into it directly, and it needed to be
rebooted. Since the VM host's name did not resolve, I could not find
its address to log into it and reboot the VM. I resorted to scanning
the SSH keys of every IP address on the network until I found the one
that matched the cached key in `~/.ssh/known_hosts`. This was cumbersome
and annoying.
Assigning DHCP reservations to the VM hosts will ensure that when a
situation like this arises again, I can quickly connect to the correct
VM host and manage its virtual machines, as its address is recorded in
the configuration policy.
The *synapse* role and the corresponding `synapse.yml` playbook deploy
Synapse, the reference Matrix homeserver implementation.
Deploying Synapse itself is fairly straightforward: it is packaged by
Fedora and therefore can simply be installed via `dnf` and started by
`systemd`. Making the service available on the Internet, however, is
more involved. The Matrix protocol mostly works over HTTPS on the
standard port (443), so a typical reverse proxy deployment is mostly
sufficient. Some parts of the Matrix protocol, however, involve
communication over an alternate port (8448). This could be handled by a
reverse proxy as well, but since it is a fairly unique port, it could
also be handled by NAT/port forwarding. In order to support both
deployment scenarios (as well as the hypothetical scenario wherein the
Synapse machine is directly accessible from the Internet), the *synapse*
role supports specifying an optional `matrix_tls_cert` variable. If
this variable is set, it should contain the path to a certificate file
on the Ansible control machine that will be used for the "direct"
connections (i.e. on port 8448). If it is not set, the default Apache
certificate will be used for both virtual hosts.
Synapse has a pretty extensive configuration schema, but most of the
options are set to their default values by the *synapse* role. Other
than substituting secret keys, the only exposed configuration option is
the LDAP authentication provider.
Since DNS traffic is only allowed to be sent over the VPN, it is not
possible to
resolve the VPN server name unless the VPN is already connected. This
naturally creates a chicken-and-egg scenario, which we can resolve by
manually providing the IP address of the server we want to connect to.
This commit adds a new playbook, `protonvpn.yml`, and its supporting
roles *strongswan-swanctl* and *protonvpn*. This playbook configures
strongSwan to connect to ProtonVPN using IPsec/IKEv2.
With this playbook, we configure the name servers on the Pyrocufflink
network to route all DNS requests through the Cloudflare public DNS
recursive servers at 1.1.1.1/1.0.0.1 over ProtonVPN. Using this setup,
we have the benefit of the speed of using a public DNS server (which is
*significantly* faster than running our own recursive server, usually by
1-2 seconds per request), and the benefit of anonymity from ProtonVPN.
Using the public DNS server alone is great for performance, but allows
the server operator (in this case Cloudflare) to track and analyze usage
patterns. Using ProtonVPN gives us anonymity (assuming we trust
ProtonVPN not to do the very same tracking), but can have a negative
performance impact if it's used for all Internet traffic. By combining
these solutions, we can get the benefits of both!
This commit adds two new variables to the *named* role:
`named_queries_syslog` and `named_rpz_syslog`. These variables control
whether BIND will send query and RPZ log messages to the local syslog
daemon, respectively.
BIND response policy zones (RPZ) support provides a mechanism for
overriding the responses to DNS queries based on a wide range of
criteria. In the simplest form, a response policy zone can be used to
provide different responses to different clients, or "block" some DNS
names.
For the Pyrocufflink and related networks, I plan to use an RPZ to
implement ad/tracker blocking. The goal will be to generate an RPZ
definition from a collection of host lists (e.g. those used by uBlock
Origin) periodically.
This commit introduces basic support for RPZ configuration in the
*named* role. It can be activated by providing a list of "response
policy" definitions (e.g. `zone "name"`) in the `named_response_policy`
variable, and defining the corresponding zones in `named_zones`.
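The BIND configuration this generates presumably ends up along these
lines (the zone name and file are illustrative):

```
# named.conf (sketch)
options {
    response-policy { zone "rpz.pyrocufflink.blue"; };
};

zone "rpz.pyrocufflink.blue" {
    type master;
    file "rpz.pyrocufflink.blue.zone";
};
```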
The TCP ports 80 and 443 are NAT-forwarded by the gateway to 172.30.0.6.
This address was originally occupied by *rprx0.pyrocufflink.blue*, which
operated a reverse proxy for Bitwarden, Gitea, Jenkins, Nextcloud,
OpenVPN, and the public web sites. Now, *web0.pyrocufflink.blue* is
configured to proxy for those services that it does not directly host
itself, effectively making it a web server, a reverse proxy, and a
forward proxy (for OpenVPN only). Thus, it must take over this address
in order to receive forwarded traffic from the Internet.
This commit updates the configuration for *pyrocufflink.net* to use the
wildcard certificate managed by *lego* instead of a unique certificate
managed by *certbot*.
*chmod777.sh* is a simple static website, generated by Hugo. It is
built and published from a Jenkins pipeline, which runs automatically
when new commits are pushed to Gitea.
The HTTPS certificate for this site is signed by Let's Encrypt and
managed by `lego` in the `certs` submodule.
The *nextcloud* role installs Nextcloud from the specified release
archive, downloading it to the control machine first if necessary, and
configures Apache and PHP-FPM to serve it.
The `nextcloud.yml` playbook uses the *cert* role to install the X.509
certificate for the Nextcloud server, sets up Apache HTTPD with the
*apache* role, and installs Nextcloud using the *nextcloud* role.
The host *cloud0.pyrocufflink.blue* is the Nextcloud server for
Pyrocufflink.
*burp1.pyrocufflink.blue* will replace *burp0.pyrocufflink.blue* as the
BURP server for Pyrocufflink. It is a physical machine (Fitlet), making
it simpler to manage the USB drives. The old virtual machine will be
decommissioned soon.
Using the generic *burp.pyrocufflink.blue* name will allow easier
transition to a new BURP server. However, since this is not the actual
name, it cannot be used for task delegation, so a separate variable is
required to store the real name of the BURP server. This is only used
during client deployment, and not by BURP itself.
In order to allow Jenkins to connect to the Docker daemon socket, the
socket must be owned by the *docker* group, and the *jenkins* user must
be a member of it.
*serial0.pyrocufflink.blue* has a manually-configured IP address now, to
ensure it always has an address, even if the DHCP server is
unavailable. Recording it here to ensure the address does not
accidentally get reused.
This commit configures *bw0.pyrocufflink.blue* as a BURP client, so that
the Bitwarden data can be backed up. A pre-backup script is used to
take a consistent snapshot of the SQLite database before copying it to
the BURP server.
*dns1.pyrocufflink.blue* has been decommissioned. Having a second DNS
server never really worked correctly for some reason, and the
maintenance overhead of the Raspberry Pi is just not worth it right now.
The DHCP service has been moved to *dns0.pyrocufflink.blue*.
The Zabbix server resolves *localhost* to `::1`, but Postfix resolves it
to `127.0.0.1`. This causes Postfix to reject incoming mail from Zabbix
with "Relay access denied." Explicitly setting the `mynetworks` setting
to include both the IPv4 and IPv6 loopback addresses will ensure that no
mail is rejected from local processes, regardless of how name resolution
happens.
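In `main.cf`, that amounts to something like:

```
mynetworks = 127.0.0.0/8 [::1]/128
```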