Commit Graph

9 Commits (32175156ac50ea473a57770bf3c43a7685878981)

Dustin 1392a7c181 jenkins: Add storage for Gentoo Portage/binpkgs
Jenkins jobs that build Gentoo-based systems, like Aimee OS, need a
persistent storage volume for the Gentoo ebuild repository.  A Job
initially populates the repository using `emerge-webrsync`, and a
CronJob then keeps it up to date by running `emaint sync` daily.
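
A minimal sketch of what such a CronJob might look like (the name,
image, schedule, and claim name are assumptions, not the actual
manifest):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: portage-sync              # hypothetical name
spec:
  schedule: "0 3 * * *"           # daily
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: sync
            image: gentoo/stage3  # assumed image
            command: ["emaint", "sync", "--auto"]
            volumeMounts:
            - name: portage
              mountPath: /var/db/repos/gentoo
          volumes:
          - name: portage
            persistentVolumeClaim:
              claimName: gentoo-repo   # hypothetical PVC name
```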

In addition to the Portage repository, we also need a volume to store
built binary packages.  Jenkins job pods can mount this volume to make
binary packages they build available for subsequent runs.
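
For example, a job pod spec might mount it at Portage's default
`PKGDIR` (the claim name here is an assumption):

```yaml
# Fragment of a Jenkins agent pod spec
volumeMounts:
- name: binpkgs
  mountPath: /var/cache/binpkgs   # Portage's default PKGDIR
volumes:
- name: binpkgs
  persistentVolumeClaim:
    claimName: gentoo-binpkgs     # hypothetical PVC name
```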

Both of these volumes are also exposed outside the cluster via
`rsync` in daemon mode, which can be useful for e.g. local builds.
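
A sketch of the rsync daemon configuration, e.g. shipped to the pod as
a ConfigMap (module names and paths are assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rsyncd-config             # hypothetical name
data:
  rsyncd.conf: |
    [portage]
    path = /srv/portage
    read only = yes
    [binpkgs]
    path = /srv/binpkgs
    read only = yes
```

A local machine could then sync with e.g.
`rsync -a rsync://<host>/binpkgs/ /var/cache/binpkgs/`.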
2025-01-09 20:15:46 -06:00
Dustin fa80b15a71 jenkins: Remove Argo CD sync hook
Since Jenkins no longer uses a Longhorn volume, this sync hook is not
useful.
2024-07-04 06:53:58 -05:00
Dustin 7f3287297b jenkins: Migrate to iSCSI persistent volume
Managing the Jenkins volume with Longhorn has become increasingly
problematic.  Because of its large size, whenever Longhorn needs to
rebuild/replicate it (which happens often for no apparent reason), it
can take several hours.  While the synchronization is happening, the
entire cluster suffers from degraded performance.

Instead of using Longhorn, I've decided to try storing the data directly
on the Synology NAS and expose it to Kubernetes via iSCSI.  The Synology
offers many of the same features as Longhorn, including
snapshots/rollbacks and backups.  Using the NAS allows the volume to be
available to any Kubernetes node, without keeping multiple copies of
the data.

In order to expose the iSCSI service on the NAS to the Kubernetes nodes,
I had to make the storage VLAN routable.  I kept it as IPv6-only,
though, as an extra precaution against unauthorized access.  The
firewall only allows nodes on the Kubernetes network to access the NAS
via iSCSI.
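
A hedged sketch of the resulting PersistentVolume (the IQN, portal
address, size, and filesystem are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
spec:
  capacity:
    storage: 100Gi                # assumed size
  accessModes:
  - ReadWriteOnce
  iscsi:
    targetPortal: "[fd00:0:0:1::1]:3260"  # NAS on the IPv6-only storage VLAN
    iqn: iqn.2000-01.com.synology:nas.jenkins   # placeholder IQN
    lun: 1
    fsType: ext4
```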

I originally tried proxying the iSCSI connection via the VM hosts;
however, this failed because of how iSCSI target discovery works.  The
provided "target host" is really only used to identify available LUNs;
follow-up communication is done with the IP address returned by the
discovery process.  Since the NAS would return its IP address, which
differed from the proxy address, the connection would fail.  Thus, I
resorted to reconfiguring the storage network and connecting directly
to the NAS.

To migrate the contents of the volume, I temporarily created a PVC with
a different name and bound it to the iSCSI PersistentVolume.  Using a
pod with both the original PVC and the new PVC mounted, I used `rsync`
to copy the data.  Once the copy completed, I deleted the Pod and both
PVCs, then created a new PVC with the original name (i.e. `jenkins`),
bound to the iSCSI PV.  While doing this, Longhorn, for some reason,
kept re-creating the PVC whenever I deleted it, no matter how I
requested the deletion: whether I deleted the PV, the PVC, or the
Volume, via the Kubernetes API or the Longhorn UI, it would be
recreated almost immediately.  Fortunately, there was enough of a
delay between deletion and recreation that I was able to create the
new PVC manually.  Once I did that, Longhorn gave up.
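
A sketch of the sort of migration pod described above (the image and
claim names are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-migrate           # hypothetical one-off pod
spec:
  restartPolicy: Never
  containers:
  - name: rsync
    image: alpine                 # assumed; rsync installed via apk
    command: ["sleep", "infinity"]  # exec in and run rsync by hand
    volumeMounts:
    - name: old
      mountPath: /old
    - name: new
      mountPath: /new
  volumes:
  - name: old
    persistentVolumeClaim:
      claimName: jenkins          # original Longhorn-backed PVC
  - name: new
    persistentVolumeClaim:
      claimName: jenkins-iscsi    # temporary PVC bound to the iSCSI PV
```

The copy itself would then be something like
`rsync -aHAX /old/ /new/` from inside the container.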
2024-06-23 09:53:15 -05:00
Dustin c5188d042b jenkins: Add default imagePullSecrets for jobs
Setting the `imagePullSecrets` property on the default service account
for the *jenkins-jobs* namespace allows jobs to run from private
container images automatically, without additional configuration in the
pipeline definitions.
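
A sketch of the patched ServiceAccount (the secret name is an
assumption):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: jenkins-jobs
imagePullSecrets:
- name: registry-credentials      # hypothetical pull secret
```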
2023-11-10 15:13:19 -06:00
Dustin 7797da19f9 jenkins: Add Argo CD pre-sync hook
Argo CD will delete and re-create this Job each time it synchronizes the
*jenkins* application.  The job creates a snapshot of the Jenkins volume
using an HTTP request to the Longhorn UI.
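
Roughly, the hook would look something like this (the Longhorn
endpoint and names are assumptions, not the actual request):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: jenkins-snapshot          # hypothetical name
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: snapshot
        image: curlimages/curl
        # assumed Longhorn endpoint for creating a volume snapshot
        args: ["-X", "POST",
               "http://longhorn-frontend.longhorn-system/v1/volumes/jenkins?action=snapshotCreate"]
```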
2023-10-22 21:50:25 -05:00
Dustin 860bfb1e2c jenkins: Set instance label for Argo CD
Argo CD wants every resource managed by an application to have that
application's name as the value of the `app.kubernetes.io/instance`
label.
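
For example:

```yaml
metadata:
  labels:
    app.kubernetes.io/instance: jenkins
```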
2023-10-14 07:24:42 -05:00
Dustin b13479a297 jenkins: Remove dockerconfigjson
This is no longer necessary.
2022-12-28 11:05:40 -06:00
Dustin 10ee364612 jenkins: Add ssh_known_hosts ConfigMap
When cloning/fetching a Git repository in a Jenkins pipeline, the Git
Client plugin uses the configured *Host Key Verification Strategy* to
verify the SSH host key of the remote Git server.  Unfortunately, there
does not seem to be any way to use the configured strategy from the
`git` command line in a Pipeline job, so e.g. `git push` does not
respect it.  This causes jobs to fail to push changes to the remote if
the container they're using does not already have the SSH host key for
the remote in its known hosts database.

This commit adds a ConfigMap to the *jenkins-jobs* namespace that can be
mounted in containers to populate the SSH host key database.
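
A minimal sketch (the name and key contents are assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ssh-known-hosts           # assumed name
  namespace: jenkins-jobs
data:
  ssh_known_hosts: |
    # one "hostname key-type base64-key" line per Git remote
```

Mounting the key at `/etc/ssh/ssh_known_hosts` makes OpenSSH, and
therefore `git push`, trust the listed remotes.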
2022-12-10 12:19:33 -06:00
Dustin 404fadc68a jenkins: Run Jenkins in Kubernetes
Running Jenkins in Kubernetes is relatively straightforward.  The
Kubernetes plugin automatically discovers all the connection and
authentication configuration, so a `kubeconfig` file is no longer
necessary.  I did set the *Jenkins tunnel* option, though, so that
agents will connect directly to the Jenkins JNLP port instead of going
through the ingress controller.
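
With the Configuration as Code plugin, the tunnel setting might look
like this (the Service name and port are assumptions):

```yaml
jenkins:
  clouds:
  - kubernetes:
      name: kubernetes
      jenkinsTunnel: "jenkins-agent.jenkins:50000"  # bypass the ingress for JNLP
```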

Jobs now run in pods in the *jenkins-jobs* namespace instead of the
*jenkins* namespace.  The latter is now where the Jenkins controller
runs, and the controller should not have permission to modify its own
resources.
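
A hedged sketch of how that separation might be expressed in RBAC (the
role and ServiceAccount names are assumptions):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-agents            # hypothetical name
  namespace: jenkins-jobs         # agent permissions apply here only
subjects:
- kind: ServiceAccount
  name: jenkins                   # assumed controller ServiceAccount
  namespace: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins-agent-manager     # hypothetical role: create/delete/exec pods
```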
2022-11-25 13:38:10 -06:00