The Raspberry Pi CM4 nodes on the DeskPi Super 6c cluster board are
members of the _cm4-k8s-node_ group. This group is a child of
_k8s-node_ and overrides the data volume configuration and node
labels.
The `datavol.yml` playbook can now create LVM volume groups and logical
volumes. This will be useful for physical hosts with static storage.
LVM LVs and VGs are defined using the `logical_volumes` Ansible
variable, which contains a mapping of VG names to their properties.
Each VG must have two properties: `pvs`, which is a list of LVM physical
volumes to add to the VG, and `lvs`, a list of LVs and their properties,
including `name` and `size`. For example:
```yaml
logical_volumes:
  kubernetes:
    pvs:
      - /dev/nvme0n1
    lvs:
      - name: containers
        size: 64G
      - name: kubelet
        size: 32G
```
The _apps.du5t1n.xyz_ site now obtains its certificate from Let's
Encrypt using the Apache _mod_md_ (managed domain) module. This
dramatically simplifies the deployment of this certificate, eliminating
the need for _cert-manager_ to obtain it, _cert-exporter_ to add it to
_certs.git_, and Jenkins to push it out to the web server.
Apache supports fetching server certificates via ACME (e.g. from Let's
Encrypt) using a new module called _mod_md_. Configuring the module is
fairly straightforward, mostly consisting of `MDomain` directives that
indicate what certificates to request. Unfortunately, there is one
rather annoying quirk: the certificates it obtains are not immediately
available to use, and the server must be reloaded in order to start
using them. Fortunately, the module provides a notification mechanism
via the `MDNotifyCmd` directive, which will run the specified command
after obtaining a certificate. The command is executed with the
privileges of the web server, which does not have permission to reload
itself, so we have to build in some indirection in order to trigger the
reload: the notification runs a script that creates an empty file in the
server's state directory; systemd is watching for that file to be
created, then starts another service unit to trigger the actual reload,
and then removes the trigger file.
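As a rough sketch of that arrangement (the unit names, trigger-file
path, and reload command below are assumptions, not the actual role
contents), the role might install something like:
```yaml
- name: Install a path unit that watches for the mod_md trigger file
  copy:
    dest: /etc/systemd/system/httpd-md-reload.path
    content: |
      [Path]
      # Created by the MDNotifyCmd script after a certificate is obtained
      PathExists=/var/lib/httpd/md-reload-needed
      Unit=httpd-md-reload.service

      [Install]
      WantedBy=multi-user.target

- name: Install a service unit that reloads Apache and clears the trigger
  copy:
    dest: /etc/systemd/system/httpd-md-reload.service
    content: |
      [Service]
      Type=oneshot
      ExecStart=/usr/bin/systemctl reload httpd.service
      ExecStartPost=/usr/bin/rm -f /var/lib/httpd/md-reload-needed
```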
Website roles, etc. that want to switch to using _mod_md_ to manage
their certificates should depend on this role and add an `MDomain`
directive to their Apache configuration file fragments.
We don't want the iSCSI and NFS client tools to be installed on control
plane nodes. Let's move this task to the _k8s-worker_ role so it will
only apply to worker nodes.
Since the _haproxy_ role relies on other roles to provide drop-in
configuration files for actual proxy configuration, we cannot start the
service in the base role. If there are any issues with the drop-in
files that are added later, the service will not be able to start,
causing the playbook to fail and thus never be able to update the broken
configuration. The dependent roles need to be responsible for starting
the service once they have put their configuration files in place.
The _haproxy_ role only installs HAProxy and provides some basic global
configuration; it expects another role to depend on it and provide
concrete proxy configuration with drop-in configuration files. Thus, we
need a role specifically for the Kubernetes control plane nodes to
provide the configuration to proxy for the API server.
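As a sketch of what that control-plane role might provide (the drop-in
path, bind port, and backend host names are assumptions, not the real
configuration), the drop-in and the service start could look roughly
like this:
```yaml
- name: Install the HAProxy drop-in for the Kubernetes API server
  copy:
    dest: /etc/haproxy/conf.d/k8s-apiserver.cfg
    content: |
      # HAProxy listens on 8443; the API servers themselves hold 6443
      frontend k8s-apiserver
          bind *:8443
          mode tcp
          default_backend k8s-apiserver

      backend k8s-apiserver
          mode tcp
          option tcp-check
          server cp0 cp0.example.com:6443 check
          server cp1 cp1.example.com:6443 check
          server cp2 cp2.example.com:6443 check

- name: Start HAProxy now that concrete proxy configuration is in place
  service:
    name: haproxy
    state: started
    enabled: true
```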
Control plane nodes will now run _keepalived_, to provide a "floating"
IP address that is assigned to one of the nodes at a time. This
address (172.30.0.169) is now the target of the DNS A record for
_kubernetes.pyrocufflink.blue_, so clients will always communicate with
the server that currently holds the floating address, whichever that may
be.
I was originally inspired by the official Kubernetes [High Availability
Considerations][0] document when designing this. At first, I planned to
deploy _keepalived_ and HAProxy as DaemonSets on the control plane
nodes, but this ended up being somewhat problematic whenever all of the
control plane nodes would go down at once, as the _keepalived_ and
HAProxy pods would not get scheduled and thus no clients could
communicate with the API servers.
[0]: 9d7cfab6fe/docs/ha-considerations.md
[keepalived][0] is a free implementation of the Virtual Router
Redundancy Protocol (VRRP), which is a simple method for automatically
assigning an IP address to one of several potential hosts based on
certain criteria. It is particularly useful in conjunction with a load
balancer like HAProxy, to provide layer 3 redundancy in addition to
layer 7. We will use it for both the reverse proxy for the public
websites and the Kubernetes API server.
[0]: https://www.keepalived.org/
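For the Kubernetes API, the VRRP configuration might look roughly like
this (the interface name, router ID, and priority are assumptions; the
floating address is the one mentioned above):
```yaml
- name: Configure the VRRP instance for the Kubernetes API floating address
  copy:
    dest: /etc/keepalived/keepalived.conf
    mode: "0600"
    content: |
      vrrp_instance k8s_api {
          state BACKUP
          interface eth0
          virtual_router_id 51
          priority 100
          advert_int 1
          virtual_ipaddress {
              172.30.0.169
          }
      }
```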
The `kubernetes.yml` playbook now applies the _kubelet_ role to hosts in
the _k8s-controller_ group. This will prepare them to join the cluster
as control plane nodes, but will not actually add them to the cluster.
I didn't realize this playbook wasn't even in the Git repository when I
added it to `site.yml`.
This playbook manages the `scrape-collectd` ConfigMap, which is used by
Victoria Metrics to identify the hosts it should scrape to retrieve
metrics from _collectd_.
Machines that are not part of the Kubernetes cluster need to be
explicitly listed in this ConfigMap in order for Victoria Metrics to
scrape collectd metrics from them.
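A rough sketch of what this ConfigMap might contain, assuming it holds
a Prometheus-style static target list and that the hosts expose metrics
via collectd's _write_prometheus_ plugin on its default port (the key
name, namespace, and host names are illustrative; the real schema may
differ):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: scrape-collectd
  namespace: monitoring
data:
  scrape.yml: |
    scrape_configs:
      - job_name: collectd
        static_configs:
          - targets:
              - host1.example.com:9103
              - host2.example.com:9103
```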
The man page for _containers-certs.d(5)_ says that subdirectories of
`/etc/containers/certs.d` should be named `host:port`; however, this is
a bit misleading. It seems, instead, that the directory name must match
the registry name exactly as it appears in image references. So, for a
server that supports HTTPS on port 443, where the port is omitted from
the image name, it must also be omitted from the `certs.d` subdirectory
name.
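For example (the registry name here is hypothetical), a CA certificate
for images pulled as `registry.example.com/app:latest` over HTTPS on
port 443 would be installed like this:
```yaml
- name: Create the certs.d directory, named without the :443 suffix
  file:
    path: /etc/containers/certs.d/registry.example.com
    state: directory

- name: Install the registry CA certificate
  copy:
    src: registry-ca.crt
    dest: /etc/containers/certs.d/registry.example.com/ca.crt
```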
`/etc/containers/registries.conf.d` is distinct from
`/etc/containers/registries.d`. The latter contains YAML files relating
to image signatures, while the former contains TOML files relating to
registry locations.
It turns out _nginx_ has a built-in default value for `access_log` and
`error_log`, even if they are omitted from the configuration file. To
actually disable writing logs to a file, we need to explicitly specify
`off`.
If `virt-install` fails before the VM starts for the first time, the
`virsh event` process running in the background will never terminate and
therefore the main process will `wait` forever. We can avoid this by
killing the background process if `virt-install` fails.
Although the `newvm.sh` script had a `--vcpus` argument, its value was
never being used.
The `--cpu host` argument for `virt-install` is deprecated in favor of
an explicit host CPU mode such as `--cpu host-passthrough`.
VMs don't really need graphical consoles; serial terminals are good
enough, or even better given that they are logged. For the few cases
where a graphical console is actually necessary, the `newvm.sh` script
can add one with the `--graphics` argument.
Since the kickstart scripts are now generated from templates by Jenkins,
we need to fetch the final rendered artifacts from the PXE server,
rather than the source files from Gitea.
One major weakness with Ansible's "lookup" plugins is that they are
evaluated _every single time they are used_, even indirectly. This
means, for example, that a shell command could be run many times,
potentially returning different values each time, or that an expensive
calculation could be repeated even though it always produces the same
result. Ansible does not have a built-in way
to cache the result of a `lookup` or `query` call, so I created this
one. It's inspired by [ansible-cached-lookup][0], which didn't actually
work and is apparently unmaintained. Instead of using a hard-coded
file-based caching system, however, my plugin uses Ansible's
configuration and plugin infrastructure to store values with any
available cache plugin.
Although looking up the _pyrocufflink.net_ wildcard certificate with the
Kubernetes API isn't particularly expensive by itself right now, I can
envision several other uses that may be. Having this plugin available
could speed up future playbooks.
[0]: https://pypi.org/project/ansible-cached-lookup
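A hypothetical usage sketch of the caching plugin (the plugin name,
calling convention, and the wrapped command are illustrative, not its
actual interface):
```yaml
vars:
  # Evaluate the expensive lookup once and cache the result with whatever
  # cache plugin is configured, instead of re-running it on every reference
  wildcard_cert: "{{ lookup('cached', 'pipe', 'fetch-wildcard-cert') }}"
```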
Using files for certificates and private keys is less than ideal.
The only way to "share" a certificate between multiple hosts is with
symbolic links, which means the configuration policy has to be prepared
for each managed system. As we're moving toward a much more dynamic
environment, this becomes problematic; the host-provisioner will never
be able to copy a certificate to a new host that was just created.
Further, I have never really liked the idea of storing certificates and
private keys in Git anyway, even if it is in a submodule with limited
access.
Now that we're serving kickstart files from the PXE server, we need to
have a correctly-configured HTTPD server, with valid HTTPS certificates,
running there.
The _containers-image_ role configures _containers-registries.conf(5)_ and
_containers-certs.d(5)_, which are used by CRI-O (and `podman`).
Specifically, we'll use these to redirect requests for images on Docker
Hub (docker.io) to the internal caching proxy.
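A sketch of the kind of drop-in this produces (the mirror hostname and
file name are assumptions):
```yaml
- name: Redirect Docker Hub pulls through the caching proxy
  copy:
    dest: /etc/containers/registries.conf.d/docker-io-mirror.conf
    content: |
      [[registry]]
      prefix = "docker.io"
      location = "docker.io"

      [[registry.mirror]]
      location = "mirror.example.com"
```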
Docker Hub's rate limits are so low now that they've started to affect
my home lab. Deploying a caching proxy and directing all pull requests
through it should prevent exceeding the limit. It will also keep
containers from failing to start when access to the Internet is down,
as long as their images have been cached recently.
The *lego-nginx* role automates obtaining certificates for *nginx* via
ACME using `lego`. It generates a shell script with the appropriate
arguments for `lego run`, runs it once to obtain a certificate
initially, then schedules it to run periodically via a systemd timer
unit. Using `lego`'s "hook" capability, the script signals the `nginx`
server process to reload. This uses `doas` for now, but could be
adapted easily to use `sudo`, if the need ever arises.
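A rough sketch of the kind of wrapper script the role generates (the
domain, e-mail address, paths, and hook command are assumptions):
```yaml
- name: Generate the lego wrapper script
  copy:
    dest: /usr/local/bin/lego-run
    mode: "0755"
    content: |
      #!/bin/sh
      exec lego --accept-tos \
          --email hostmaster@example.com \
          --domains www.example.com \
          --http --http.webroot /var/lib/lego/webroot \
          --path /var/lib/lego \
          run --run-hook 'doas nginx -s reload'
```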
The `host-setup.yml` playbook now imports the `datavol.yml` playbook, so
that new machines (particularly those provisioned by the
host-provisioner) get their data volumes formatted and mounted
automatically.
Since we use the proxy when PXE booting to speed up Live OS image and
RPM package downloads, we need to allow machines using it to access the
kickstart files which are now hosted on the PXE server. Virtual
machines on the Kubernetes network (_pyrocufflink.black_) also need
access to those kickstarts, so we need to mark that subnet as trusted.
Now that kickstart scripts are generated from templates by a Jenkins
job, they need to be stored somewhere besides Gitea. It makes sense to
serve them from the PXE server, since it's involved in the installation
process anyway (at least for physical machines). Thus, we need a path
where the generated files can be uploaded by Jenkins and served by
Apache.
The version of Samba in Fedora 42 has some really weird bugs. In
this case, it seems `net ads kerberos kinit -P` no longer works. It
prints a vague `NT_STATUS_INTERNAL_ERROR` message, with no other
indication of what went wrong. Fortunately, it's still possible to get
a ticket-granting ticket for the machine account using the host keytab.
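A sketch of the workaround (assuming the machine account principal is
the uppercase host name followed by `$`, which is how Samba normally
registers it in the host keytab):
```yaml
- name: Obtain a Kerberos TGT for the machine account from the host keytab
  command: "kinit -k {{ ansible_hostname | upper }}$"
```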
The _nginx_ access log files are absolutely spammed with requests from
Restic and WAL-G, to the point where they fill the log volume on
_chromie_ every day. They're not particularly useful anyway; I've never
looked at them, and any information they contain can be obtained in
another way, if necessary, for troubleshooting.
We don't want `podman` pulling a new container image and updating
without our consent. The image will already be there on the first
start, since we pulled it in an Ansible task.
The `:Z` flag tells the container runtime to run `chcon` recursively on
the specified path, in order to ensure that the files are accessible
inside the container. For a very large volume like the MinIO storage
directory, this can take an extremely long time. It's really only
necessary on the first startup anyway, because the context won't change
after that. To avoid spending a bunch of time, we can set the context
correctly when we create the directory, and then not worry about it
after that.
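A sketch of that approach (the path and ownership are assumptions):
create the directory with the same SELinux type that `:Z` would have
applied, so the volume can be mounted without a relabel flag.
```yaml
- name: Create the MinIO data directory with the container file context
  file:
    path: /srv/minio/data
    state: directory
    owner: minio
    group: minio
    setype: container_file_t
```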
Using the Kubernetes API to create bootstrap tokens makes it possible
for the host-provisioner to automatically add new machines to the
Kubernetes cluster. The host provisioner cannot connect to existing
machines, and thus cannot run the `kubeadm token create` command on
a control plane node. With the appropriate permissions assigned to the
service account associated with the pod it runs in, though, it can
directly create the secret via the API.
There are actually two pieces of information required for a node to
join a cluster, though: a bootstrap token and the CA certificate. When
using the `kubeadm token create` command to issue a bootstrap token, it
also provides (a hash of) the CA certificate with the command it prints.
When creating the token manually, we need an alternative method for
obtaining and distributing the CA certificate, so we use the
`cluster-info` ConfigMap. This contains a stub `kubeconfig` file that
includes the CA certificate and can be used by the `kubeadm join`
command together with a join configuration file. Generating both of
these files
may be a bit more involved than computing the CA certificate hash and
passing that on the command line, but there are a couple of advantages.
First, it's more extensible, as the join configuration file can specify
additional configuration for the node (which we may want to use later).
It's also somewhat more secure, since the token is not passed as a
command-line argument.
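Bootstrap tokens are stored as Secrets in the _kube-system_ namespace
with a well-known format; roughly (the token ID, token secret, and
expiration below are placeholders):
```yaml
apiVersion: v1
kind: Secret
metadata:
  # The name must be "bootstrap-token-" followed by the token ID
  name: bootstrap-token-abcdef
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: abcdef
  token-secret: 0123456789abcdef
  expiration: "2030-01-01T00:00:00Z"
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:kubeadm:default-node-token
```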
Interestingly, the most difficult part of this implementation was
getting the expiration timestamp. Ansible exposes very little date math
capability; notably lacking is the ability to construct a `timedelta`
object, so the only way to get a timestamp in the future is to convert
the `datetime` object returned by `now` to a Unix timestamp and add some
number of seconds to it. Further, there is no direct way to get a
`datetime` object from the computed Unix timestamp value, but we can
rely on the fact that Python class methods can be called on instances,
too, so `now().fromtimestamp()` works the same as
`datetime.fromtimestamp()`.
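The resulting expression looks roughly like this (the 24-hour TTL and
variable name are assumptions):
```yaml
vars:
  # fromtimestamp() is a classmethod, but it can be called on the datetime
  # instance returned by now()
  token_expiration: >-
    {{ now(utc=true).fromtimestamp(now(utc=true).timestamp() + 86400)
       .strftime('%Y-%m-%dT%H:%M:%SZ') }}
```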
I did something stupid to this machine trying to clear up its
`/var/lib/containers/storage` volume and now it won't start any new
pods. Killing it and replacing it.
Having the VM hosts as members of the domain has been troublesome since
the very beginning. In full shutdown events, it's often difficult or
impossible to log in to the VM hosts while the domain controller VMs are
down or still coming up, even with winbind caching.
Now that we have the `users.yml` playbook, the SSH certificate
authority, and `doas`+*pam_ssh_agent_auth*, we really don't need the AD
domain for centralized authentication.
To ensure the `users.yml` playbook is idempotent in cases where the
users it manages are also managed by other playbooks, we have to set
`append: true`. This prevents the managed user(s) from being removed
from additional groups other playbooks may have added them to.
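A minimal sketch of the relevant task (the variable name and loop
structure are illustrative):
```yaml
- name: Manage user accounts
  user:
    name: "{{ item.name }}"
    groups: "{{ item.groups | default([]) }}"
    append: true
  loop: "{{ users }}"
```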