Scraping metrics from the Kubernetes API server has started taking 20+
seconds recently. Until I figure out the underlying cause, I'm
increasing the scrape timeout so that the _vmagent_ doesn't give up and
report the API server as "down."
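In a Prometheus-compatible scrape config, that amounts to raising `scrape_timeout` for the job (the job name and values below are illustrative, not the actual ones in this cluster; note that the timeout must not exceed the scrape interval):

```yaml
# Illustrative excerpt only; actual job name and values may differ.
scrape_configs:
  - job_name: kubernetes-apiservers
    scrape_interval: 30s
    scrape_timeout: 30s  # raised from the 10s default so slow scrapes aren't marked down
```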
I've completely blocked all outgoing unencrypted DNS traffic at the
firewall now, which prevents _cert-manager_ from using its default
behavior of querying the authoritative name servers for its managed
domains to poll for ACME challenge DNS TXT record availability.
Fortunately, it has an option to use a recursive resolver (i.e. the
network-provided DNS server) instead.
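On the cert-manager controller, this behavior is selected with the `--dns01-recursive-nameservers` and `--dns01-recursive-nameservers-only` flags; roughly (the resolver address below is a placeholder for the network-provided DNS server):

```yaml
# Illustrative controller args excerpt; the resolver address is a placeholder.
args:
  - --dns01-recursive-nameservers=10.0.0.1:53
  - --dns01-recursive-nameservers-only
```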
`mqtt2vl` is a relatively simple service I developed to read log
messages from an MQTT topic (i.e. those published by ESPHome devices)
and stream them to Victoria Logs over HTTPS.
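The core of such a service is a small transform from an MQTT message into the newline-delimited JSON that VictoriaLogs ingests on its `/insert/jsonline` endpoint. A minimal sketch of that transform (the function name and field layout are my own; the actual service's structure may differ):

```python
import json
from datetime import datetime, timezone

def mqtt_to_jsonline(topic, payload, ts=None):
    """Convert one MQTT log message into a VictoriaLogs JSON-line record.

    VictoriaLogs expects the message text in `_msg` and the timestamp
    in `_time`; any other fields become log stream labels/fields.
    """
    ts = ts or datetime.now(timezone.utc)
    return json.dumps({
        "_time": ts.isoformat(),
        "_msg": payload.decode("utf-8", errors="replace"),
        # Keep the originating topic (e.g. the ESPHome device name) as a field
        "topic": topic,
    })

# Example: a log line as published by an ESPHome device
line = mqtt_to_jsonline("esphome/thermostat/debug", b"[I][wifi:123] connected")
```

The MQTT subscription loop and the HTTPS POST of batched lines are omitted here; they wrap this transform.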
The legacy alerting feature (which we never used) had been deprecated
for a long time and was removed in Grafana 11. The corresponding
configuration block must be removed from the config file or Grafana will
not start.
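For reference, any leftover legacy block along these lines has to go (the exact options in our config may have differed):

```ini
; Illustrative grafana.ini excerpt; delete the whole [alerting] section
; before upgrading to Grafana 11, or startup fails.
[alerting]
enabled = false
```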
Authelia made breaking changes to the OIDC issuer configuration in 4.39,
specifically around what claims are present in identity tokens. Without
a claims policy set, clients will _not_ get the correct claims, which
breaks authentication and authorization in many cases (including
Kubernetes).
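A minimal claims policy that restores the commonly expected ID token claims looks roughly like this (the client ID and claim list are illustrative, not our actual configuration):

```yaml
# Illustrative Authelia 4.39 excerpt; client_id and claims are placeholders.
identity_providers:
  oidc:
    claims_policies:
      default:
        id_token: ['groups', 'email', 'email_verified', 'name', 'preferred_username']
    clients:
      - client_id: kubernetes
        claims_policy: default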
While I was fixing that, I went ahead and fixed a few of the other
deprecation warnings. There are still two that show up at startup, but
fixing them will be a bit more involved, it seems.
This CronJob schedules a periodic run of `restic forget`, which deletes
snapshots according to the specified retention period (14 daily, 4
weekly, 12 monthly).
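The underlying command for that policy is roughly the following (repository location and credentials come from the environment; adding `--prune` to actually reclaim the space is an assumption on my part):

```sh
# Illustrative invocation; repository details omitted.
restic forget --keep-daily 14 --keep-weekly 4 --keep-monthly 12 --prune
```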
This task used to run on my workstation, scheduled by a systemd timer
unit. I've kept the same schedule and retention period as before. Now,
instead of relying on my PC to be on and awake, the cleanup will occur
more regularly. There's also the added benefit of getting the logs into
Loki.
Occasionally, some documents may have odd rendering errors that
prevent the archival process from working correctly. I'm less concerned
about the archive document than simply having centralized storage for
paperwork, so enabling this "continue on soft render error" feature is
appropriate. As far as I can tell, it has no visible effect for the
documents that could not be imported at all without it.
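Assuming the document manager here is Paperless-ngx, this feature maps to an OCRmyPDF option passed through via the user-args setting, roughly:

```yaml
# Assumption: Paperless-ngx forwarding the flag to OCRmyPDF.
PAPERLESS_OCR_USER_ARGS: '{"continue_on_soft_render_error": true}'
```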
*unifi3.pyrocufflink.blue* has been replaced by
*unifi-nuptials.host.pyrocufflink.black*. The former was the last
Fedora CoreOS machine in use, so the entire Zincati scrape job is no
longer needed.