Commit Graph

1130 Commits (2ee86f6344a50fe383b7acccd7f83d3c9a6edd3e)

Author SHA1 Message Date
Dustin a3a2dde6ab callbacks: Add ntfy callback plugin
This plugin sends a notification using _ntfy_ whenever a playbook
fails.  This will be useful especially for automated deployments when
the playbook was not launched manually.
2025-02-01 17:36:58 -06:00
Dustin f705e98fab hosts: Add k8s-iot-net-ctrl group
The *k8s-iot-net-ctrl* group is for the Raspberry Pi that has the Zigbee
and Z-Wave controllers connected to it.  This node runs the Zigbee2MQTT
and ZWaveJS2MQTT servers as Kubernetes pods.
2025-01-31 19:49:51 -06:00
Dustin b1c29fc12a hosts: Remove hostvds group
Since the _hostvds_ group is not defined in the static inventory but by
the OpenStack inventory plugin via `hostvds.openstack.yml`, when the
static inventory is used by itself, Ansible fails to load it with an
error:

> Section [vps:children] includes undefined group: hostvds

To fix this, we could explicitly define an empty _hostvds_ group in the
static inventory, but since we aren't currently running any HostVDS
instances, we might as well just get rid of it.
2025-01-31 19:45:58 -06:00
Dustin 878a099752 r/kubelet: Ensure iscsi service is running
The _iscsi.socket_ unit gets enabled by default when the
_iscsi-initiator-utils_ package is installed, but it won't start
automatically until the next boot.  Without this service running,
Longhorn volumes will not be able to attach to the node, so we need to
explicitly ensure it is running before any workloads are assigned to the
node.
2025-01-31 19:01:27 -06:00
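A rough sketch of the sort of task this implies (the unit name here is an assumption, not necessarily the role's exact wording):

```yaml
# Sketch: make sure the iSCSI initiator is running now, not merely
# enabled for the next boot, before workloads land on the node.
- name: Ensure the iSCSI initiator service is running
  ansible.builtin.systemd:
    name: iscsid.service
    state: started
    enabled: true
```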
Dustin a9a6a30e59 r/{cri-o,kubelet}: Support versioned packages
Fedora 41 introduced versioned package names for Kubernetes components,
including CRI-O.  The intent is to allow multiple versions of Kubernetes
to be available (but not necessarily installed) within a given Fedora
release.  In order to use these packages, we need to set the desired
Kubernetes version, via the new `kubernetes_version` Ansible variable.
2025-01-31 18:57:21 -06:00
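A hedged sketch of how the variable might drive package selection (the package-name patterns are assumptions based on Fedora's versioned naming, not the role's exact task):

```yaml
# group_vars (illustrative):
kubernetes_version: "1.31"

# tasks (sketch): install the versioned packages for the chosen release.
- name: Install versioned Kubernetes packages
  ansible.builtin.dnf:
    name:
      - "kubernetes{{ kubernetes_version }}-kubeadm"
      - "kubernetes{{ kubernetes_version }}-client"
      - "cri-o{{ kubernetes_version }}"
    state: present
```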
Dustin cbc4d29bd6 r/base: Install python3-libdnf5
Fedora 41 uses _dnf5_ by default.  Since _dnf5_ is written in C++, its
Python API is an optional feature that must be installed separately.
2025-01-31 18:55:58 -06:00
Dustin ec4fa25bd8 Merge remote-tracking branch 'refs/remotes/origin/master' 2025-01-30 21:15:40 -06:00
Dustin a58dbb74c5 r/vmhost: Clean up qemu packages
At some point, the _qemu-kvm_ package became a meta-package that
installs _everything_ QEMU-related.  All drivers, backends, frontends,
etc. get pulled in, which results in a huge amount of wasted space.
Recently, the VM hosts started getting alerts about their `/` filesystem
getting too full, which is how I discovered this.

We can dramatically reduce the disk space footprint by installing only
the "core" package and the drivers we need for our servers.

After making and applying this change, which marks the listed packages
as "leaf" installs, I then manually uninstalled the _qemu-kvm_ package.
This uninstalled everything else that is not specifically listed.
2025-01-28 17:36:35 -06:00
Dustin 272e89d65a Merge remote-tracking branch 'refs/remotes/origin/master' 2025-01-28 17:34:37 -06:00
Dustin c00d6f49de hosts: Add OVH VPS
It turns out, $0.99/mo might be _too_ cheap for a cloud server.  Running
the Blackbox Exporter+vmagent on the HostVDS instance worked for a few
days, but then it started having frequent timeouts when probing the
websites.  I tried redeploying the instance, switching to a larger
instance, and moving it to different networks.  Unfortunately, none of
this seemed to help.

Switching over to a VPS running in OVH cloud.  OVH VPS servers are
managed statically, as opposed to via API, so we can't use Pulumi to
create them.  This one was created for me when I signed up for an OVH
account.
2025-01-26 13:08:59 -06:00
Dustin 33f315334e users: Configure sudo on some machines
`doas` is not available on Alma Linux, so we still have to use `sudo` on
the VPS.
2025-01-26 13:08:59 -06:00
Dustin 319cc80a9f inventory: Configure for HostVDS openstack
Using the Ansible OpenStack inventory plugin, we can automatically fetch
information about running instances in HostVDS.  We're deriving group
membership from the `groups` metadata tag.

The OpenStack API password must be specified in a `secure.yaml` file.
We're omitting this from the repository because there's no apparent way
to encrypt it.

The inventory plugin tends to prefer IPv6 addresses over IPv4 when
populating `ansible_host`, even if the control machine does not have
IPv6 connectivity.  Thus, we have to compose the relevant variables
ourselves with a Jinja2 expression.
2025-01-26 13:08:59 -06:00
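An inventory configuration in this spirit might look like the following sketch (the option keys are standard for the plugin, but the exact expressions and the metadata layout are assumptions):

```yaml
# hostvds.openstack.yml (sketch)
plugin: openstack.cloud.openstack
# Derive Ansible group membership from the "groups" metadata tag:
keyed_groups:
  - key: openstack.metadata.groups
    separator: ""
# Prefer the IPv4 address, since the control machine lacks IPv6:
compose:
  ansible_host: public_v4 | default(private_v4)
```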
Dustin f868cea05c pulumi: Manage HostVDS instances
HostVDS provides public access to their OpenStack API, which we can use
to manage cloud instances.  This particular instance will be used to run
the remote blackbox exporter/vmagent to monitor website availability.
2025-01-26 13:08:59 -06:00
Dustin 304cacb95b dch-proxy: Proxy Victoria Metrics
Need to expose Victoria Metrics to the Internet so the `vmagent` process
on the VPS can push the metrics it has scraped from its Blackbox
exporter.  Authelia needs to allow access to the `/insert/` paths, of
course.
2025-01-26 13:08:59 -06:00
Dustin ad0bd7d4a5 remote-blackbox: Add group
The _remote-blackbox_ group defines a system that runs
_blackbox-exporter_ and _vmagent_ in a remote (cloud) location.  This
system will monitor our public web sites.  This will give a better idea
of their availability from the perspective of a user on the Internet,
which can be affected by factors that are not visible from within the
network.
2025-01-26 13:08:59 -06:00
Dustin 3e8ac36f88 r/vmagent: Rework as container deployment
Like the _blackbox-exporter_ role, the _vmagent_ role now deploys
`vmagent` as a container.  This simplifies the process considerably,
eliminating the download/transfer step.

While refactoring this role, I also changed how the trusted CA
certificates are handled.  Rather than copy files, the role now expects
a `vmagent_ca_certs` variable.  This variable is a mapping of
certificate name (file name without extension) to PEM contents.  This
allows certificates to be defined using normal host/group variables.
2025-01-26 13:08:59 -06:00
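As a sketch, the variable might be populated in host or group variables like this (the certificate name and contents are placeholders):

```yaml
# group_vars (illustrative): map certificate name -> PEM contents.
vmagent_ca_certs:
  pyrocufflink-ca: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```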
Dustin dcf1e5adfc r/blackbox-exporter: Rework to run as container
Instead of downloading the `blackbox_exporter` binary from GitHub and
copying it to the managed node, the _blackbox-exporter_ role now
installs _podman_ and configures a systemd container unit (Quadlet) to
run it in a container.  This simplifies the deployment considerably, and
will make updating easier (just run the playbook with `-e
blackbox_exporter_pull_image=true`).
2025-01-26 13:06:54 -06:00
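A minimal Quadlet unit in the spirit of what the role installs might look like this sketch (image, port, and volume paths are assumptions):

```ini
# /etc/containers/systemd/blackbox-exporter.container (sketch)
[Unit]
Description=Blackbox Exporter

[Container]
Image=quay.io/prometheus/blackbox-exporter:latest
PublishPort=9115:9115
Volume=/etc/blackbox-exporter:/etc/blackbox_exporter:Z

[Install]
WantedBy=multi-user.target
```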
Dustin f5bee79bac hosts: Decommission bw0.p.b
Vaultwarden is now hosted in Kubernetes.
2025-01-10 20:09:53 -06:00
Dustin 3ebf91c524 dch-proxy: Update Vaultwarden backend
Vaultwarden is now hosted in Kubernetes.  The old
_bw0.pyrocufflink.blue_ will be decommissioned.
2025-01-10 20:03:35 -06:00
Dustin 81663a654d gw1/squid: Allow access to Gitea kickstarts from p.r
Since the canonical location for Anaconda kickstart scripts is now
Gitea, we need to allow hosts to access them from there.

Also allowing access from the _pyrocufflink.red_ network for e.g.
installation testing.
2024-12-27 13:07:11 -06:00
Dustin e51e933661 r/gitea: Serve kickstarts over HTTP
I want to use Gitea as the canonical source for Anaconda kickstart
scripts.  There are certain situations, however, where they cannot be
accessed via HTTPS, such as on a Raspberry Pi without an RTC, since it
cannot validate the certificate without the correct time.  Thus, the
web server must not force an HTTPS redirect for these, but serve them
directly.
2024-12-27 10:51:00 -06:00
Dustin a00ffd10df r/jellyfin: Fix system.xml template whitespace
Jellyfin is one of those stupid programs that thinks it needs to mutate
its own config.  At startup, it apparently reads `system.xml` and then
writes it back out.  When it does this, it trims the final newline from
the file.  Then, the next time Ansible runs, the template rewrites the
file with the trailing newline, and thus determines that the file has
changed and restarts the service.  This cycle has been going on for a
while and is rather annoying.
2024-12-12 06:36:23 -06:00
Dustin 15cb675297 r/kubelet: Pass --config arg to service
The systemd unit configuration installed by Fedora's _kubeadm_ package
does not pass the `--config` argument to the kubelet service.  Without
this argument, the kubelet will not read the configuration file
generated by `kubeadm` from the `kubelet-config` ConfigMap.  Thus,
various features will not work correctly, including server TLS
bootstrap.
2024-12-07 09:35:57 -06:00
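A drop-in restoring the argument might look roughly like this (the paths follow kubeadm's defaults, but are assumptions here):

```ini
# /etc/systemd/system/kubelet.service.d/10-config.conf (sketch)
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet \
    --config=/var/lib/kubelet/config.yaml \
    --kubeconfig=/etc/kubernetes/kubelet.conf
```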
Dustin d2e8b9237f Enable doas become plugin for non AD members
The new servers that are not members of the AD domain use `doas` instead
of `sudo`.
2024-11-25 22:01:40 -06:00
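As a sketch, selecting the become plugin for those hosts could be as simple as a group variable (the group it lands in is an assumption):

```yaml
# group_vars for the non-AD hosts (illustrative):
ansible_become_method: community.general.doas
```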
Dustin bc7e7c2475 applyConfigPolicy: Configure SSH user certificate
In order to manage servers that are not members of the
_pyrocufflink.blue_ AD domain, Jenkins needs a user certificate signed
by the SSH CA.  Unfortunately, there is not really a good way to get a
certificate issued on demand in a non-interactive way, as SSHCA relies
on OIDC ID tokens which are issued by Authelia, and Authelica requires
browser-based interactive login and consent.  Until I can come up with a
better option, I've manually signed a certificate for Jenkins to use.

The Jenkins SSH Credentials plugin does not support certificates
directly, so in order to use one, we have to explicitly configure `ssh`
to load it via the `CertificateFile` option.
2024-11-25 21:17:44 -06:00
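The client-side configuration implied here might look like this sketch (host pattern and key paths are assumptions):

```
# ~/.ssh/config for the Jenkins user (sketch)
Host *.pyrocufflink.black
    IdentityFile ~/.ssh/jenkins_ed25519
    CertificateFile ~/.ssh/jenkins_ed25519-cert.pub
```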
Dustin d993d59bee Deploy new Kubernetes nodes
The *stor-* nodes are dedicated to Longhorn replicas.  The other nodes
handle general workloads.
2024-11-24 10:33:21 -06:00
Dustin e41b6a619e newvm: Add domain argument
Now that we have multiple domains (_pyrocufflink.blue_ for AD domain
members and _pyrocufflink.black_ for the new machines), we need a way to
specify the domain for new machines when they are created.  Thus, the
`newvm.sh` script accepts either an FQDN or a `--domain` argument.  The
DHCP server will register the DNS name in the zone containing the
machine's domain name.
2024-11-24 10:33:21 -06:00
Dustin 7a5f01f8a3 r/doas: Configure sudo alternative
In the spirit of replacing bloated tools full of unnecessary
functionality with smaller, more focused alternatives, we can use `doas`
instead of `sudo`.  Originally a BSD tool, the Linux port supports PAM,
so we can still use `pam_ssh_agent_auth` for passwordless
authentication.
2024-11-24 10:33:21 -06:00
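A minimal sketch of the pieces involved (the rule, PAM stack fragment, and key file path are assumptions, not the role's exact contents):

```
# /etc/doas.conf (sketch): defer authentication to PAM
permit persist :wheel

# /etc/pam.d/doas (fragment, sketch): let a forwarded SSH agent key
# satisfy authentication instead of a password
auth  sufficient  pam_ssh_agent_auth.so  file=/etc/security/authorized_keys
```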
Dustin c95a96a33c users: Manage static user accounts
The Samba AD domain performs two important functions: centralized user
identity mapping via LDAP, and centralized authentication via
Kerberos/GSSAPI.  Unfortunately, Samba, on both domain controllers and
members, is quite frustrating.  The client, _winbind_, frequently just
stops working and needs to have its cache flushed in order to resolve
user IDs again.  It also takes quite a lot of memory, something rather
precious on Raspberry Pis.  The DC is also somewhat flaky at times, and
cumbersome to upgrade.  In short, I really would like to get rid of as
much of it as possible.

For most use cases, OIDC can replace Kerberos.  For SSH specifically, we
can use SSH certificates (which are issued based on OIDC tokens).
Unfortunately, user and group accounts still need ID numbers assigned,
which is what _winbind_ does.  In reality, there's only one user that's
necessary: _dustin_.  It doesn't make sense to bring along all the
baggage of Samba just to map that one account.  Instead, it's a lot
simpler and more robust to create it statically.
2024-11-24 10:33:21 -06:00
Dustin 0f600b9e6e kubernetes: Manage worker nodes
So far, I have been managing Kubernetes worker nodes with Fedora CoreOS
Ignition, but I have decided to move everything back to Fedora and
Ansible.  I like the idea of an immutable operating system, but the FCOS
implementation is not really what I want.  I like the automated updates,
but that can be accomplished with _dnf-automatic_.  I do _not_ like
giving up control of when to upgrade to the next Fedora release.
Mostly, I never did come up with a good way to manage application-level
configuration on FCOS machines.  None of my experiments (Cue+tmpl,
KCL+etcd+Luci) were successful, which mostly resulted in my manually
managing configuration on nodes individually.  Managing OS-level
configuration is also rather cumbersome, since it requires redeploying
the machine entirely.  Altogether, I just don't think FCOS fits with my
model of managing systems.

This commit introduces a new playbook, `kubernetes.yml`, and a handful of
new roles to manage Kubernetes worker nodes running Fedora Linux.  It
also adds two new deploy scripts, `k8s-worker.sh` and `k8s-longhorn.sh`,
which fully automate the process of bringing up worker nodes.
2024-11-24 10:33:21 -06:00
Dustin 164f3b5e0f r/wal-g-pg: Handle versioned storage locations
The target location for WAL archives and backups saved by WAL-G should
be separated based on the major version of PostgreSQL with which they
are compatible.  This will make it easier to restore those backups,
since they can only be restored into a cluster of the same version.

Unfortunately, WAL-G does not natively handle this.  In fact, it doesn't
really have any way of knowing the version of the PostgreSQL server it
is backing up, at least when it is uploading WAL archives.  Thus, we
have to include the version number in the target path (S3 prefix)
manually.  We can't rely on Ansible to do this, because there is no way
to ensure Ansible runs at the appropriate point during the upgrade
process.  As such, we need to be able to modify the target location as
part of the upgrade, without causing a conflict with Ansible the next
time it runs.

To that end, I've changed how the _wal-g-pg_ role creates the
configuration file for WAL-G.  Instead of rendering directly to
`wal-g.yml`, the role renders a template, `wal-g.yml.in`.  This template
can include a `@PGVERSION@` specifier.  The `wal-g-config` script will
then use `sed` to replace that specifier with the version of PostgreSQL
installed on the server, rendering the final `wal-g.yml`.  This script
is called both by Ansible in a handler after generating the template
configuration, and also as a post-upgrade action by the
`postgresql-upgrade` script.

I originally wanted the `wal-g-config` script to use the version of
PostgreSQL specified in the `PG_VERSION` file within the data directory.
This would ensure that WAL-G always uploads/downloads files for the
matching version.  Unfortunately, this introduced a dependency conflict:
the WAL-G configuration needs to be present before a backup can be
restored, but the data directory is empty until after the backup has
been restored.  Thus, we have to use the installed server version,
rather than the data directory version.  This leaves a small window
where WAL-G may be configured to point to the wrong target if the
`postgresql-upgrade` script fails and thus does not trigger regenerating
the configuration file.  This could result in new WAL archives/backups
being uploaded to the old target location.  These files would be
incompatible with the other files in that location, and could
potentially overwrite existing files.  This is rather unlikely, since
the PostgreSQL server will not start if the _postgresql-upgrade.service_
failed.  The only time it should be possible is if the upgrade fails in
such a way that it leaves an empty but valid data directory, and then
the machine is rebooted.
2024-11-17 10:27:31 -06:00
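The template-and-substitute flow described above can be sketched roughly as follows (paths, the placeholder file contents, and the way the version is detected are illustrative, not the repository's actual `wal-g-config` script):

```shell
#!/bin/sh
# Sketch of a wal-g-config-style script: substitute the installed
# PostgreSQL major version into the WAL-G configuration template.
set -eu

confdir=${1:-/tmp/walg-demo}
mkdir -p "$confdir"

# Ansible renders a template like this; @PGVERSION@ is filled in
# later, outside of Ansible's control:
cat > "$confdir/wal-g.yml.in" <<'EOF'
WALG_S3_PREFIX: s3://backups/postgresql/pg@PGVERSION@
EOF

# Use the *installed* server version (not the data directory's
# PG_VERSION file), so the config can exist before a restore.
pgversion=16  # e.g. from: psql --version

sed "s/@PGVERSION@/${pgversion}/" \
    "$confdir/wal-g.yml.in" > "$confdir/wal-g.yml"
cat "$confdir/wal-g.yml"
```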
Dustin e861883627 r/pgsql-server-base: Add post-upgrade capability
The `postgresql-upgrade` script will now run any executables located in
the `/etc/postgresql/post-upgrade.d` directory.  This will allow making
arbitrary changes to the system after a PostgreSQL major version
upgrade.  Notably, we will use this capability to change the WAL-G
configuration to upload WAL archives and backups to the correct
version-specific location.
2024-11-17 10:27:31 -06:00
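The hook-dispatch behavior described here amounts to something like the following sketch (directory layout and hook name are illustrative, not the script's actual contents):

```shell
#!/bin/sh
# Sketch of post-upgrade hook dispatch: run every executable in the
# post-upgrade.d directory, in lexical order.
set -eu

hookdir=/tmp/post-upgrade-demo/post-upgrade.d
mkdir -p "$hookdir"

# Example hook, as the wal-g-pg role might install one:
cat > "$hookdir/50-wal-g-config" <<'EOF'
#!/bin/sh
echo "regenerating wal-g.yml" > /tmp/post-upgrade-demo/ran
EOF
chmod +x "$hookdir/50-wal-g-config"

# Dispatch: skip anything that is not executable.
for hook in "$hookdir"/*; do
    if [ -x "$hook" ]; then
        "$hook"
    fi
done
```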
Dustin 965742d2b0 r/postgresql-server-base: Factor out prep steps
There's a bit of a dependency loop between the _postgresql-server_ role
and other roles that supplement it, like _wal-g-pg_ and
_postgresql-cert_.  The latter roles need PostgreSQL installed, but when
those roles are used, the server cannot be started until they have been
applied.  To resolve this situation, I've broken out the initial
installation steps from the _postgresql-server_ role into
_postgresql-server-base_.  Roles that need PostgreSQL installed, but
need to be applied before the server can start, can depend on this role.
2024-11-17 10:27:31 -06:00
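In Ansible terms, the dependent roles would declare the base role in their metadata, roughly like this sketch:

```yaml
# roles/wal-g-pg/meta/main.yml (sketch): ensure PostgreSQL is
# installed, but not yet started, before this role runs.
dependencies:
  - role: postgresql-server-base
```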
Dustin 53b39338dd r/postgresql-server: Add script to upgrade database
The `postgresql-upgrade.sh` script arranges to run `pg_upgrade` after a
major PostgreSQL version update.  It's scheduled by a systemd unit,
_postgresql-upgrade.service_, which runs only after an OS update.
2024-11-17 10:27:31 -06:00
Dustin 0048a87630 r/postgresql-server: Set become on postgres tasks
Tasks that must run as the _postgres_ user need to explicitly enable
`become`, in case it is not already enabled at the playbook level.  This
can happen, for example, when the playbook is running directly as root.
2024-11-16 11:50:28 -06:00
Dustin 2d5f9e66c1 chromie: Scrape logs from serial consoles
Now that we have the serial terminal server managing `picocom` processes
for each serial port, and those `picocom` processes are configured to
log console output to files, we can configure Promtail to scrape these
log files and send them to Loki.
2024-11-10 18:34:49 -06:00
Dustin a82700a257 chromie: Configure serial terminal server 2024-11-10 13:15:08 -06:00
Dustin 6115762847 r/serterm: Deploy serial terminal multiplexer
Using `tmux`, we can spawn a bunch of `picocom` processes for the serial
ports connected to other servers' console ports.  The
_serial-terminal-server_ service manages the `tmux` server process,
while the individual _serial-terminal-server-window@.service_ units
create a window in the `tmux` session.

The serial terminal server runs as a dedicated user.  The SSH server is
configured to force this user to connect to the `tmux` session.  This
should help ensure the serial consoles are accessible, even if the
Active Directory server is unavailable.
2024-11-10 13:15:08 -06:00
Dustin 8b9cf1985a r/wal-g-pg: Schedule weekly delete jobs
WAL-G slows down significantly when too many backups are kept.  We need
to periodically clean up old backups to maintain a reasonable level of
performance, and also keep from wasting space with useless old backups.
2024-11-05 19:28:57 -06:00
Dustin eaf9cbef9a Merge remote-tracking branch 'origin/frigate-exporter' 2024-11-05 07:01:31 -06:00
Dustin c1dc52ac29 Merge branch 'loki' 2024-11-05 07:01:13 -06:00
Dustin 39d9985fbd r/loki-caddy: Caddy reverse proxy for Loki
Caddy handles TLS termination for Loki, automatically requesting and
renewing its certificate via ACME.
2024-11-05 06:54:27 -06:00
Dustin 010f652060 hosts: Add loki1.p.b
_loki1.pyrocufflink.blue_ replaces _loki0.pyrocufflink.blue_.  The
former runs Fedora Linux and is managed by Ansible, while the latter ran
Fedora CoreOS and was managed by Ignition and _cfg_.
2024-11-05 06:54:27 -06:00
Dustin abfd35a68e raid-array: Create udev rules to auto re-add disks
This udev rule will automatically re-add disks to the RAID array when
they are connected.  `mdadm --udev-rules` is supposed to be able to
generate such a rule based on the `POLICY` definitions in
`/etc/mdadm.conf`, but I was not able to get that to work; it always
printed an empty rule file, no matter what I put in `mdadm.conf`.
2024-11-05 06:52:20 -06:00
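A hand-written rule of the kind described might look like this sketch (the match keys are assumptions about how the array's member disks are identified):

```
# /etc/udev/rules.d/65-md-incremental.rules (sketch): feed
# re-attached RAID member disks back to md as they appear.
ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="linux_raid_member", \
    RUN+="/usr/sbin/mdadm --incremental $devnode"
```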
Dustin 168bfee911 r/websites: Add apps.du5t1n.xyz F-Droid repo
I want to publish the _20125_ Status application to an F-Droid
repository to make it easy for Tabitha to install and update.  F-Droid
repositories are similar to other package repositories: a collection of
packages and some metadata files.  Although there is a fully-fledged
server-side software package that can manage F-Droid repositories, it's
not required: the metadata files can be pre-generated and then hosted by
a static web server just fine.

This commit adds configuration for the web server and reverse proxy to
host the F-Droid repository at _apps.du5t1n.xyz_.
2024-11-05 06:47:02 -06:00
Dustin 7e8aee072e r/bitwarden_rs: Redirect to canonical host name
Bitwarden has not worked correctly for clients using the non-canonical
domain name (i.e. _bitwarden.pyrocufflink.blue_) for quite some time.
This still trips me up occasionally, though, so hopefully adding a
server-side redirect will help.  Eventually, I'll probably remove the
non-canonical name entirely.
2024-11-05 06:37:03 -06:00
Dustin 0807afde57 r/dch-proxy: Use separate sockets for HTTP v4/v6
Although listening on only an IPv6 socket works fine for the HTTP
front-end, it results in HAProxy logging client requests as IPv4-mapped
IPv6 addresses.  This is fine for visual inspection, but it breaks Loki's
`ip` filter.
2024-11-05 06:34:55 -06:00
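The separate-socket arrangement amounts to something like this sketch (frontend name and port are illustrative):

```
# haproxy.cfg fragment (sketch): bind v4 and v6 separately so client
# addresses are logged natively, not as IPv4-mapped IPv6.
frontend http
    bind 0.0.0.0:80
    bind [::]:80 v6only
```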
Dustin 90351ce59e r/dch-proxy: Include host name in log messages
When troubleshooting configuration or connection issues, it will be
helpful to have the value of the HTTP Host header present in log
messages emitted by HAProxy.  This will help reason about HAProxy's
routing decisions.

For TLS connections, of course, we don't have access to the Host header,
but we can use the value of the TLS SNI field.  Note that the requisite
`content set-var` directive MUST come before the `content accept`;
HAProxy stops processing all `tcp-request content ...` directives once
it has encountered a decision.
2024-11-05 06:32:49 -06:00
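The ordering constraint described above can be sketched like this (frontend name, port, and log format are illustrative):

```
# haproxy.cfg fragment (sketch): capture SNI *before* the accept
# decision; once HAProxy accepts, it stops evaluating further
# tcp-request content rules.
frontend tls
    mode tcp
    bind :443
    tcp-request inspect-delay 5s
    tcp-request content set-var(txn.sni) req.ssl_sni
    tcp-request content accept if { req.ssl_hello_type 1 }
    log-format "%ci:%cp %ft sni=%[var(txn.sni)]"
```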
Dustin 370a1df7ac dch-proxy: Proxy for dynk8s-provisioner
The reverse proxy needs to handle traffic for the _dynk8s-provisioner_
in order for the ephemeral Jenkins worker nodes in the cloud to work
properly.
2024-11-05 06:30:02 -06:00
Dustin 3ca94d2bf4 r/haproxy: Enable Prometheus metrics
HAProxy can export stats in Prometheus format, but this requires
special configuration of a dedicated front-end.  To support this, the
_haproxy_ Ansible role now has a pair of variables,
`haproxy_enable_stats` and `haproxy_stats_port`, which control whether
or not the stats front-end is enabled, and if so, what port it listens
on.  Note that on Fedora with the default SELinux policy, the port must
be labelled either `http_port_t` or `http_cache_port_t`.
2024-11-05 06:23:49 -06:00
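The dedicated front-end the role generates might look roughly like this sketch (the port stands in for `haproxy_stats_port`):

```
# haproxy.cfg fragment (sketch): expose Prometheus metrics on a
# dedicated front-end.
frontend stats
    mode http
    bind :8404
    http-request use-service prometheus-exporter if { path /metrics }
    stats enable
    stats uri /stats
```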