Without `disableNameSuffixHash` set, Kustomize will create a uniquely-named
ConfigMap any time the contents of the source file change. It will also
update any Deployment, StatefulSet, etc. resources to point to the new
ConfigMap. This has the effect of restarting any pods that refer to the
ConfigMap whenever its contents change.
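As a sketch, a `configMapGenerator` entry like the following (resource and file names are hypothetical) produces a hash-suffixed ConfigMap and rewrites references to it:

```yaml
# kustomization.yaml (sketch; names are placeholders)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

configMapGenerator:
  - name: app-config
    files:
      - config.yaml
    # Leaving the option below unset keeps the default behavior: the
    # generated ConfigMap gets a content-hash suffix (app-config-<hash>),
    # and references in Deployments etc. are updated to match.
    # options:
    #   disableNameSuffixHash: true
```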
I had avoided using this initially because Kustomize does *not* delete
previous ConfigMap resources whenever it creates a new one. Now that we
have Argo CD, though, this is not an issue, as it will clean up the old
resources whenever it synchronizes.
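For that cleanup to happen automatically, the Argo CD Application needs pruning enabled in its sync policy; a minimal sketch (the name, namespace, and repository URL are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app                              # placeholder
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/infra.git   # placeholder
    path: apps/example-app
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: example-app
  syncPolicy:
    automated:
      prune: true      # delete resources (e.g. stale ConfigMaps) no longer in Git
      selfHeal: true
```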
[Argo CD] is a Kubernetes-native GitOps/continuous deployment manager.
It monitors the state of Kubernetes resources, such as Pods,
Deployments, ConfigMaps, Secrets, and Custom Resources, and synchronizes
them with their canonical definitions from a Git repository.
*Argo CD* consists of various components, including a Repository
Service, an Application Controller, a Notification Controller, and an
API server/Web UI. It also has some optional components, such as a
bundled Dex server for authentication/authorization, and an
ApplicationSet controller, which we will not be using.
[Argo CD]: https://argo-cd.readthedocs.io/
[Firefly III][0] is a free and open source, web-based personal finance
management application. It features a double-entry bookkeeping system
for tracking transactions, plus other classification options like
budgets, categories, and tags. It has a rule engine that can
automatically manipulate transactions, plus several other really useful
features.
The application itself is a mostly standard browser-based GUI written in
PHP. There is an official container image, though it is not
particularly well designed and must be run as root (it does drop
privileges before launching the actual application, thankfully). I may
decide to create a better image later.
Along with the main application, there is a separate tool for importing
transactions from a CSV file. Its design is rather interesting: though
it is a web-based application, it does not have any authentication or
user management, but uses a user API key to access the main Firefly III
application. This effectively requires us to have one instance of the
importer per user. While not ideal, it isn't particularly problematic
since there are only two of us (and Tabitha may not even end up using
it; she seems to like YNAB).
[0]: https://www.firefly-iii.org/
This configuration is for the instance of MinIO running on the BURP
server, which will be used to store PostgreSQL backups created by the
Postgres Operator.
By default, Authelia requires the user to explicitly consent to allow
an application access to personal information *every time the user
authenticates*. This is rather annoying; luckily, Authelia provides a
way to remember the consent for a period of time.
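In Authelia's OIDC client configuration, this looks roughly like the following (the client ID is a placeholder, and the exact keys may vary by Authelia version):

```yaml
identity_providers:
  oidc:
    clients:
      - client_id: example-app                # placeholder
        consent_mode: pre-configured
        # Remember the user's consent for this long before asking again
        pre_configured_consent_duration: 1w
```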
For convenience, clients on the internal network do not need to
authenticate in order to access *scanserv-js*. There isn't anything
particularly sensitive about this application, anyway.
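A sketch of the corresponding Authelia access-control rule (the hostname and subnet are placeholders):

```yaml
access_control:
  rules:
    - domain: scanservjs.example.com   # placeholder hostname
      policy: bypass                   # no authentication required
      networks:
        - 192.168.0.0/24               # placeholder internal subnet
```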
Enabling OpenID Connect authentication for the Kubernetes API server
will allow clients, particularly `kubectl`, to log in without needing
TLS certificates and private keys.
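The API server is configured via command-line flags; a sketch, assuming the issuer URL and client ID shown here (both placeholders):

```
--oidc-issuer-url=https://auth.example.com    # placeholder issuer
--oidc-client-id=kubernetes                   # placeholder client ID
--oidc-username-claim=preferred_username
--oidc-groups-claim=groups
```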
Authelia can act as an OpenID Connect identity provider. This allows
it to provide authentication/authorization for other applications
besides those inside the Kubernetes cluster using it for Ingress
authentication.
To start with, we'll configure an OIDC client for Jenkins.
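A sketch of such a client definition (the hostname is a placeholder, the hashed secret is elided, and the redirect URI assumes the Jenkins OpenID Connect plugin's callback path):

```yaml
identity_providers:
  oidc:
    clients:
      - client_id: jenkins
        client_name: Jenkins
        client_secret: <hashed secret>   # generate with the authelia CLI
        authorization_policy: one_factor
        redirect_uris:
          # placeholder host; path assumes the Jenkins OIDC plugin
          - https://jenkins.example.com/securityRealm/finishLogin
        scopes:
          - openid
          - profile
          - email
          - groups
```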
I am not entirely sure why, but it seems like the kubelet *always*
misses the first check in the readiness probe. This causes a full
60-second delay before the Authelia pod is marked as "ready," even
though it was actually ready within a second of the container starting.
To avoid this very long delay, during which Authelia is unreachable,
even though it is working fine, we can add a startup probe with a much
shorter check interval. The kubelet will not start readiness probes
until the startup probe returns successfully, so it won't miss the first
one any more.
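A sketch of the probe configuration, assuming Authelia's default port and health endpoint:

```yaml
# Container probes (port and path assume Authelia defaults)
startupProbe:
  httpGet:
    path: /api/health
    port: 9091
  periodSeconds: 1       # check every second during startup
  failureThreshold: 60   # allow up to a minute to become healthy
readinessProbe:
  httpGet:
    path: /api/health
    port: 9091
  periodSeconds: 30      # readiness checks only begin after startup succeeds
```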
Authelia is a general authentication provider that works (primarily)
by integrating with *nginx* using its subrequest mechanism. It works
great with Kubernetes/*ingress-nginx* to provide authentication for
services running in the cluster, especially those that do not provide
their own authentication system.
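With *ingress-nginx*, the integration is a pair of annotations on the Ingress; a sketch (the service address and sign-in hostname are placeholders, and the verification endpoint path varies by Authelia version):

```yaml
metadata:
  annotations:
    # Auth subrequest sent to Authelia before proxying the request
    nginx.ingress.kubernetes.io/auth-url: http://authelia.authelia.svc.cluster.local/api/verify
    # Where to redirect unauthenticated users
    nginx.ingress.kubernetes.io/auth-signin: https://auth.example.com
```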
Authelia needs a database to store session data. It supports various
engines, but since we're only running a very small instance with no real
need for HA, SQLite on a Longhorn persistent volume is sufficient.
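In the configuration file, this amounts to a short `storage` stanza (the path is an assumption; it just needs to sit on the persistent volume):

```yaml
storage:
  local:
    # SQLite database file on the Longhorn-backed persistent volume
    path: /config/db.sqlite3
```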
Configuration is done mostly through a YAML document, although some
secret values are stored in separate files, which are pointed to by
environment variables.
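For example, Authelia's `*_FILE` environment variables point at mounted secret files; a sketch from the container spec (mount paths are placeholders):

```yaml
env:
  - name: AUTHELIA_SESSION_SECRET_FILE
    value: /secrets/session-secret              # placeholder path
  - name: AUTHELIA_STORAGE_ENCRYPTION_KEY_FILE
    value: /secrets/storage-encryption-key      # placeholder path
```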