Since the MinIO server that Restic uses to store snapshots has a
certificate signed by the DCH CA, we need to trust the root certificate
in order to communicate with it. Existing servers already have this CA
trusted via the `pyrocufflink.yml` playbook, but new servers are
(usually) no longer AD domain members and thus not covered by that
playbook, so we need to trust the certificate explicitly now.
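A minimal sketch of the tasks involved, assuming the CA certificate is
shipped with the role (the file name is illustrative):

```yaml
- name: install the DCH root CA certificate
  ansible.builtin.copy:
    src: dch-root-ca.crt        # file name is illustrative
    dest: /etc/pki/ca-trust/source/anchors/dch-root-ca.crt
    mode: '0644'
  register: dch_root_ca

- name: update the system CA trust store
  ansible.builtin.command: update-ca-trust extract
  when: dch_root_ca is changed
```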
Running `dnf` from the command line works without explicitly
configuring the proxy, because it inherits the proxy environment
variables that PAM sets in the user's login shell. The `dnf` Ansible
module does not inherit those variables, however, so we need to
explicitly configure the `proxy` setting in `dnf.conf` in order to
install packages via Ansible.
Since `dnf` does not have separate settings for different protocols
(e.g. HTTP, HTTPS, FTP), we need a way to specify which of the
configured proxies to use if there are multiple. As such, the
*useproxy* role will attempt to use the value of the `dnf_proxy`
variable, if it is set, falling back to `yum_proxy` and finally
`http_proxy`. This should cover most situations without any explicit
configuration, but allows flexibility for other cases.
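A sketch of how the role might apply this, using
`community.general.ini_file`; the task layout is an assumption, but the
fallback order matches the description above:

```yaml
- name: configure the dnf proxy
  community.general.ini_file:
    path: /etc/dnf/dnf.conf
    section: main
    option: proxy
    value: "{{ dnf_proxy | default(yum_proxy | default(http_proxy)) }}"
  when: dnf_proxy is defined or yum_proxy is defined or http_proxy is defined
```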
The linuxserver.io Unifi container stored Unifi server and device logs
under `/var/lib/unifi/logs`, while the new container stores them under
`/var/log/unifi`.
The Unifi Network controller runs a syslog server (listening on UDP port
5514) where Unifi devices can send their logs. We need to open the port
in the firewall in order for it to receive log messages and write them
to disk.
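Assuming the host uses firewalld, a task along these lines (a sketch)
opens the port:

```yaml
- name: allow UniFi devices to send syslog to the controller
  ansible.posix.firewalld:
    port: 5514/udp
    permanent: true
    immediate: true
    state: enabled
```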
I've moved the Unifi controller back to running on a Fedora Linux
machine. It therefore needs access to Fedora RPM repositories, as well
as the internal "dch" RPM repository, for system packages.
I also created a new custom container image for the Unifi Network
software (the linuxserver.io one sucks), so the server needs access to
the OCI repo on Gitea.
Some time ago, _libvirt_ was refactored to use separate daemons and
sockets for each of its responsibilities, and the original "monolithic"
`libvirtd` was made obsolete. The Fedora packages have more recently
been adjusted to favor this new approach, and now default to omitting
the monolithic daemon entirely (when `install_weak_deps` is disabled).
One interesting packaging snafu, though, is that without the weak
dependencies, there is _no_ way for clients to connect by default.
Clients run `which virt-ssh-helper` to see if it is installed, which it
is, but `which` is not. They then fall back to running `nc`, which is
_also_ not installed. So even though the tools they actually need are
present, their logic for detecting this is broken. As such, we need to
explicitly install `which` to satisfy them.
Hosts that must use the proxy in order to access the Internet need to
have that configured very early on, before any package installation is
attempted.
The _linuxserver.io_ image for UniFi Network is deprecated. It sucked
anyway. I've created a simple image based on Debian that installs the
_unifi_ package from the upstream apt repository. This image doesn't
require running anything as _root_, so it doesn't need a user namespace.
There are some groups that all hosts should belong to in almost all
cases. Rather than have to remember to add the `--group` arguments for
each of these, the `newvm.sh` script will now enable them by default.
For hosts that should _not_ belong to (at least one of) these groups,
the `--no-default-groups` argument can be provided to suppress that
behavior.
The default groups, initially, are _chrony_ and _collectd_.
I continually struggle with the clocks on machines (physical and
virtual, even the Roku devices!) getting out of sync. I have been putting off
fixing this because I wanted to set up a Windows-compatible NTP server
(i.e. on the domain controllers, with Kerberos signing), but there's
really no reason to wait for that to fix the clocks on all the
non-Windows machines, especially since there are exactly 0 Windows
machines on the network right now.
The *chrony* role and corresponding `chrony.yml` playbook are generic,
configured via the `chrony_pools`, `chrony_servers`, and `chrony_allow`
variables. The values for these variables will configure the firewall
to act as an NTP server, synchronizing with the NTP pool on the
Internet, while all other machines will synchronize with it. This
allows machines on networks without Internet access to keep their clocks
in sync.
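For example, the variables might be set something like this (group
names, networks, and host names are illustrative):

```yaml
# group_vars for the firewall: act as an NTP server, syncing with the
# public pool and allowing the internal networks to query it.
chrony_pools:
  - pool.ntp.org
chrony_allow:
  - 172.30.0.0/16
---
# group_vars for everything else: synchronize with the firewall.
chrony_servers:
  - firewall.pyrocufflink.blue
```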
The point of the `users.yml` playbook is to manage static users for
machines that are not members of the AD domain. Since this playbook is
included in `site.yml`, it gets applied to _all_ machines, even those
that _are_ (or will become) domain members. Thus, we want to avoid
actually doing anything on those machines.
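One way to do that (a sketch only; the actual condition may differ) is
to guard the role with a check for domain membership, here approximated
by Ansible group membership:

```yaml
- name: manage static users
  hosts: all
  roles:
    - role: users
      # Skip hosts that are (or will become) AD domain members; using the
      # Ansible group as a proxy for domain membership is an assumption.
      when: "'pyrocufflink' not in group_names"
```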
*nut1.pyrocufflink.blue* is a member of the *pyrocufflink.blue* AD
domain. I'm not sure how it got to be so without belonging to the
_pyrocufflink_ Ansible group...
We don't want Jenkins attempting to manage test VMs. I thought of
various ways to exclude them, but in the end, I think a simple name
match will work fine.
The host provisioner _should_ manage test VMs, though, so it will need
to be configured to set the `PYROCUFFLINK_EXCLUDE_TEST` environment
variable to `false` to override the default behavior.
This commit adds tasks to the `vmhost.yml` playbook to ensure the
*jenkins* user has the Host Provisioner's SSH key in its
`authorized_keys` file. This allows the Host Provisioner to log in and
access the read-only _libvirt_ socket in order to construct the dynamic
Ansible inventory.
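A sketch of the task (the variable holding the Host Provisioner's
public key is illustrative):

```yaml
- name: authorize the Host Provisioner key for the jenkins user
  ansible.posix.authorized_key:
    user: jenkins
    key: "{{ host_provisioner_ssh_key }}"   # variable name is illustrative
    state: present
```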
The script that runs on first boot of a new machine to trigger host
provisioning can read the name of the configuration policy branch to
check out from a QEMU firmware configuration option. This commit
adds a `--cfg-branch` argument to `newvm.sh` that sets that value. This
will be useful for testing new policy on a new VM.
This commit adds a new `--group` argument to the `newvm` script, which
adds the host to an Ansible group by listing it in the _libvirt_ domain
metadata. Multiple groups can be specified by repeating the argument.
Additionally, the VM title is now always set to the machine's FQDN, which
is what the dynamic inventory plugin uses to determine the inventory
hostname.
The dynamic inventory plugin parses the _libvirt_ domain metadata and
extracts group membership from the `<dch:groups>` XML element. Each
`<dch:group>` sub-element specifies a group to which the host belongs.
Unfortunately, `virt-install` does not support modifying the
`<metadata>` element in the _libvirt_ domain XML document, so we have
to resort to using `virsh`. To ensure the metadata are set before the
guest OS boots and tries to access them, we fork and run `virsh` in
a separate process.
In order to fully automate host provisioning, we need to eliminate the
manual step of adding hosts to the Ansible inventory. Ansible has had
the _community.libvirt.libvirt_ inventory plugin for quite a while, but
by itself it is insufficient, as it has no way to add hosts to groups
dynamically. It does expose the domain XML, but parsing it and
extracting group memberships with Jinja templates would be pretty
terrible. Thus, I decided the easiest and most appropriate
option would be to develop my own dynamic inventory plugin.
* Supports multiple _libvirt_ servers
* Can connect to the read-only _libvirt_ socket
* Can optionally exclude VMs that are powered off
* Can exclude VMs based on their operating system (if the _libosinfo_
metadata is specified in the domain metadata)
* Can add hosts to groups as specified in the domain metadata
* Exposes guest info as inventory host variables (requires QEMU guest
agent running in the VM and does not work with a read-only _libvirt_
connection)
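An inventory source for the plugin might look roughly like this; the
option names here are illustrative, not the plugin's actual schema:

```yaml
plugin: dch.libvirt                 # plugin name is illustrative
servers:                            # multiple libvirt servers
  - qemu+ssh://vmhost0.pyrocufflink.blue/system
  - qemu+ssh://vmhost1.pyrocufflink.blue/system
readonly: true                      # use the read-only socket
exclude_powered_off: true
fetch_guest_info: false             # guest info needs a read-write connection
```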
The `root_authorized_keys` variable was originally defined only for the
*pyrocufflink* group. This used to effectively be "all" machines, since
everything was a member of the AD domain. Now that we're moving away
from that deployment model, we still want to have the break-glass
option, so we need to define the authorized keys for the _all_ group.
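For example, in the `all` group variables (the key material itself is
elided):

```yaml
root_authorized_keys:
  - ssh-ed25519 AAAA... break-glass
```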
This was the last group that had an entire file encrypted with Ansible
Vault. Now that the Synapse server is long gone, rather than convert it
to having individually-encrypted values, we can get rid of it entirely.
While having a password set for _root_ provides a convenient way of
accessing a machine even if it is not available via SSH, using a static
password in this way is quite insecure and not worth the risk. I may
try to come up with a better way to set a unique password for each
machine eventually, but for now, having this password here is too
dangerous to keep.
The `site.yml` playbook imports all of the other playbooks, providing a
way to deploy _everything_. Normally, this would only be done for a
single host, as part of its initial provisioning, to quickly apply all
common configuration and any application-specific configuration for
whatever roles the host happens to hold.
The `host-setup.yml` playbook provides an entry point for applying all
common configuration: basically anything we want to do to _every_
machine, regardless of its location or role.
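As an abridged sketch (the real `site.yml` imports many more
playbooks):

```yaml
# site.yml (abridged): import every other playbook in order
- import_playbook: host-setup.yml
- import_playbook: users.yml
- import_playbook: chrony.yml
- import_playbook: vmhost.yml
```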
[ARA Records Ansible][0] is a web-based reporting tool for Ansible. It
consists of a callback plugin that submits task/playbook results to an
HTTP API and a browser GUI to display them.
[0]: https://ara.recordsansible.org/
This role can ensure PostgreSQL users and databases are created for
applications that are not themselves managed by Ansible. Notably, we
need to do this for anything deployed in Kubernetes that uses the
central database server.
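A sketch of the core tasks, using the _community.postgresql_
collection; the variable structure is an assumption:

```yaml
- name: ensure application database users exist
  community.postgresql.postgresql_user:
    name: "{{ item.user }}"
    password: "{{ item.password }}"
  loop: "{{ postgresql_app_databases }}"   # variable name is illustrative
  no_log: true

- name: ensure application databases exist
  community.postgresql.postgresql_db:
    name: "{{ item.name }}"
    owner: "{{ item.user }}"
  loop: "{{ postgresql_app_databases }}"
```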
Moving the SSH host and user certificate configuration roles out of
`base.yml` into their own playbooks. This will make it easier to deploy
them separately, and target different sets of hosts. The main driver
for this change is the OVH VPS; being external, it cannot communicate
with SSHCA and thus cannot have a signed host certificate. As such, we
do not want to try to configure the SSHCA client on it at all.
This plugin sends a notification using _ntfy_ whenever a playbook
fails. This will be especially useful for automated deployments, where
the playbook was not launched manually.
The *k8s-iot-net-ctrl* group is for the Raspberry Pi that has the Zigbee
and Z-Wave controllers connected to it. This node runs the Zigbee2MQTT
and ZWaveJS2MQTT servers as Kubernetes pods.
Since the _hostvds_ group is not defined in the static inventory but by
the OpenStack inventory plugin via `hostvds.openstack.yml`, when the
static inventory is used by itself, Ansible fails to load it with an
error:
> Section [vps:children] includes undefined group: hostvds
To fix this, we could explicitly define an empty _hostvds_ group in the
static inventory, but since we aren't currently running any HostVDS
instances, we might as well just get rid of it.
The _iscsi.socket_ unit gets enabled by default when the
_iscsi-initiator-utils_ package is installed, but it won't start
automatically until the next boot. Without this service running,
Longhorn volumes will not be able to attach to the node, so we need to
explicitly ensure it is running before any workloads are assigned to the
node.
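A sketch of the task (the unit name is taken from the description
above):

```yaml
- name: ensure the iSCSI socket unit is running
  ansible.builtin.systemd:
    name: iscsi.socket
    enabled: true
    state: started
```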
Fedora 41 introduced versioned package names for Kubernetes components,
including CRI-O. The intent is to allow multiple versions of Kubernetes
to be available (but not necessarily installed) within a given Fedora
release. In order to use these packages, we need to set the desired
Kubernetes version, via the new `kubernetes_version` Ansible variable.
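For example (the version value and the way the role derives package
names from the variable are assumptions):

```yaml
# e.g. in group_vars:  kubernetes_version: "1.31"
- name: install versioned Kubernetes packages
  ansible.builtin.dnf:
    name:
      - "kubernetes{{ kubernetes_version }}"
      - "kubernetes{{ kubernetes_version }}-kubeadm"
      - "cri-o{{ kubernetes_version }}"
    state: present
```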
At some point, the _qemu-kvm_ package became a meta-package that
installs _everything_ QEMU-related. All drivers, backends, frontends,
etc. get pulled in, which results in a huge amount of wasted space.
Recently, the VM hosts started getting alerts about their `/` filesystem
getting too full, which is how I discovered this.
We can dramatically reduce the disk space footprint by installing only
the "core" package and the drivers we need for our servers.
After making and applying this change, which marks the listed packages
as "leaf" installs, I then manually uninstalled the _qemu-kvm_ package.
This uninstalled everything else that is not specifically listed.
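The resulting package list looks something like this (the specific
driver sub-packages are illustrative):

```yaml
- name: install only the QEMU components we need
  ansible.builtin.dnf:
    name:
      - qemu-kvm-core
      - qemu-device-display-virtio-gpu    # driver packages are illustrative;
      - qemu-device-usb-host              # the real list depends on the guests
    state: present
    install_weak_deps: false
```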
It turns out, $0.99/mo might be _too_ cheap for a cloud server. Running
the Blackbox Exporter+vmagent on the HostVDS instance worked for a few
days, but then it started having frequent timeouts when probing the
websites. I tried redeploying the instance, switching to a larger
instance, and moving it to different networks. Unfortunately, none of
this seemed to help.
Switching over to a VPS running in OVH cloud. OVH VPS servers are
managed statically, as opposed to via API, so we can't use Pulumi to
create them. This one was created for me when I signed up for an OVH
account.
Using the Ansible OpenStack inventory plugin, we can automatically fetch
information about running instances in HostVDS. We're deriving group
membership from the `groups` metadata tag.
The OpenStack API password must be specified in a `secure.yaml` file.
We're omitting this from the repository because there's no apparent way
to encrypt it.
The inventory plugin tends to prefer IPv6 addresses over IPv4 when
populating `ansible_host`, even if the control machine does not have
IPv6 connectivity. Thus, we have to compose the relevant variables
ourselves with a Jinja2 expression.
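The relevant part of `hostvds.openstack.yml` might look roughly like
this; the hostvar names in the expression are assumptions about what
the plugin exposes:

```yaml
plugin: openstack.cloud.openstack
compose:
  # Force an IPv4 address into ansible_host (hostvar names are assumptions)
  ansible_host: openstack.public_v4 | default(openstack.accessIPv4)
```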
HostVDS provides public access to their OpenStack API, which we can use
to manage cloud instances. This particular instance will be used to run
the remote blackbox exporter/vmagent to monitor website availability.
Need to expose Victoria Metrics to the Internet so the `vmagent` process
on the VPS can push the metrics it has scraped from its Blackbox
exporter. Authelia needs to allow access to the `/insert/` paths, of
course.
The _remote-blackbox_ group defines a system that runs
_blackbox-exporter_ and _vmagent_ in a remote (cloud) location. This
system will monitor our public web sites. This will give a better idea
of their availability from the perspective of a user on the Internet,
which can be affected by factors that are not necessarily visible from
within the network.
Like the _blackbox-exporter_ role, the _vmagent_ role now deploys
`vmagent` as a container. This simplifies the process considerably,
eliminating the download/transfer step.
While refactoring this role, I also changed how the trusted CA
certificates are handled. Rather than copy files, the role now expects
a `vmagent_ca_certs` variable. This variable is a mapping of
certificate name (file name without extension) to PEM contents. This
allows certificates to be defined using normal host/group variables.
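For example (the certificate name is illustrative):

```yaml
vmagent_ca_certs:
  # certificate name (file name without extension) -> PEM contents
  dch-root-ca: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```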