By default, strongSwan will only attempt key negotiation once and then
give up. If the VPN connection is closed because of a network issue, it
is unlikely that a single attempt to reconnect will work, so let's keep
trying until it succeeds.
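The relevant connection options in `ipsec.conf` look something like this (a minimal sketch, assuming the legacy stroke-based configuration; the connection name is illustrative):

```
conn protonvpn
    # %forever: retry key negotiation indefinitely instead of giving up
    keyingtries = %forever
    # re-establish the tunnel if the peer stops responding or closes it
    dpdaction = restart
    closeaction = restart
```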
The *motioneye* role installs motionEye on a Fedora machine using `pip`.
It configures Apache as a reverse proxy for motionEye, providing
outside (HTTPS) access.
The official installation instructions and default configuration for
motionEye assume it will be running as root. There is, however, no
specific reason for this, as it works just fine as an unprivileged user.
The only minor surprise is that the `conf_path` configuration setting
must be writable, as this is where motionEye places generated
configuration for `motion`. This path does not, however, have to
include the `motioneye.conf` file itself, which can still be read-only.
This commit adds a new playbook, `protonvpn.yml`, and its supporting
roles *strongswan-swanctl* and *protonvpn*. This playbook configures
strongSwan to connect to ProtonVPN using IPsec/IKEv2.
With this playbook, we configure the name servers on the Pyrocufflink
network to route all DNS requests through the Cloudflare public DNS
recursive servers at 1.1.1.1/1.0.0.1 over ProtonVPN. Using this setup,
we have the benefit of the speed of using a public DNS server (which is
*significantly* faster than running our own recursive server, usually by
1-2 seconds per request), and the benefit of anonymity from ProtonVPN.
Using the public DNS server alone is great for performance, but allows
the server operator (in this case Cloudflare) to track and analyze usage
patterns. Using ProtonVPN gives us anonymity (assuming we trust
ProtonVPN not to do the very same tracking), but can have a negative
performance impact if it's used for all Internet traffic. By combining
these solutions, we can get the benefits of both!
This commit adds two new variables to the *named* role:
`named_queries_syslog` and `named_rpz_syslog`. These variables control
whether BIND will send query and RPZ log messages to the local syslog
daemon, respectively.
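For example, in a host or group variables file (boolean values assumed):

```
named_queries_syslog: true
named_rpz_syslog: true
```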
BIND response policy zones (RPZ) support provides a mechanism for
overriding the responses to DNS queries based on a wide range of
criteria. In the simplest form, a response policy zone can be used to
provide different responses to different clients, or "block" some DNS
names.
For the Pyrocufflink and related networks, I plan to use an RPZ to
implement ad/tracker blocking. The goal will be to generate an RPZ
definition from a collection of host lists (e.g. those used by uBlock
Origin) periodically.
This commit introduces basic support for RPZ configuration in the
*named* role. It can be activated by providing a list of "response
policy" definitions (e.g. `zone "name"`) in the `named_response_policy`
variable, and defining the corresponding zones in `named_zones`.
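A hypothetical example; the exact structure of `named_zones` entries follows whatever format the role already uses, so this is only a sketch:

```
named_response_policy:
  - zone "rpz.local"
named_zones:
  - name: rpz.local
    type: master
    file: rpz.local.zone
```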
Normally, Home Assistant uses a SQLite database for storing state
history. On a Raspberry Pi with only an SD card for storage like
*hass1.pyrocufflink.blue*, this can become extremely slow, especially
for large data sets. To speed up features like history and logbook,
Home Assistant supports using an external database engine such as
PostgreSQL or MariaDB.
The *hassdb* role and corresponding `hassdb.yml` playbook deploy a
PostgreSQL server for Home Assistant to use. They need only create the
role and database, as Home Assistant manages its own schema.
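A sketch of the essential tasks, using the stock PostgreSQL modules (role and database names are assumptions):

```
- name: create the Home Assistant database role
  postgresql_user:
    name: hass
    password: "{{ hassdb_password }}"
  become: true
  become_user: postgres

- name: create the Home Assistant database
  postgresql_db:
    name: hass
    owner: hass
  become: true
  become_user: postgres
```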
The *postgresql-setup* service is no longer necessary, as upstream has
fixed the SELinux policy to allow root to invoke the `postgresql-setup`
command directly.
This commit adds a task to generate a PostgreSQL configuration file from
a template. Previously, the default configuration file generated by
`initdb` was sufficient, but in order to enable SSL connections, some
changes to it are required.
Naturally, SSL connections require a server certificate, so the
*postgresql-server* role will now also copy certificate files to the
managed node, if any.
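The relevant `postgresql.conf` settings amount to something like the following (file paths are assumptions):

```
ssl = on
ssl_cert_file = '/var/lib/pgsql/data/server.crt'
ssl_key_file = '/var/lib/pgsql/data/server.key'
```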
Fedora has renamed the *strongswan* service to *strongswan-starter*.
The *strongswan* service now controls strongSwan via Vici, which uses a
different configuration format and is not compatible with the files in
`/etc/strongswan/ipsec.d`. As I am migrating everything to WireGuard
now, it does not make sense to rewrite all of the IPsec configuration in
this new format, so using the legacy format with the renamed service
makes more sense.
Because the Home Assistant user's home directory is on `/var`, Python
packages installed in the "user site" do not get the correct SELinux
labels and thus run in the wrong domain. This causes a lot of AVC
denials and other issues that prevent Home Assistant from working
correctly.
To resolve this issue, Home Assistant is now installed in a virtual
environment at `/usr/local/homeassistant`. This directory is still
owned by the Home Assistant user, allowing Home Assistant to manage
packages installed there. Since it is rooted under `/usr`, files are
labelled correctly and processes launched from executables there will
run in the correct domain.
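A rough sketch of the installation steps as Ansible tasks (the user name and paths follow the description above; the actual tasks in the role may differ):

```
- name: create the Home Assistant virtual environment
  command: python3 -m venv /usr/local/homeassistant
  args:
    creates: /usr/local/homeassistant/bin/python

- name: install Home Assistant into the virtual environment
  command: /usr/local/homeassistant/bin/pip install homeassistant
  become: true
  become_user: homeassistant
```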
The main network, *pyrocufflink.blue* (172.30.0.0/26) is now on VLAN 1
instead of VLAN 30. This changed when I replaced the Cisco SG200-26
with the UniFi Switch 48, to simplify configuration of all of the
Ubiquiti devices.
For some time, I have been trying to design a new configuration for the
reverse proxy on port 443 to correctly handle all the types of traffic
on that port. In the original implementation, all traffic on port 443
was forwarded by the gateway to HAProxy. HAProxy then used TLS SNI to
route connections to the correct backend server based on the requested
host name. This allowed both HTTPS and OpenVPN-over-TLS to use the same
port; however, it was not without issues. A layer 4 (TCP) proxy like
this "hides" the real source address of clients connecting to the
backend, which makes IP-based security (e.g. rate limiting, blacklists,
etc.) impossible at the application level. In particular, Nextcloud,
which implements rate limiting, was constantly imposing login delays on
all users, because legitimate traffic was indistinguishable from
Internet background noise.
To alleviate these issues, I needed to change the proxy to operate in
layer 7 (HTTP) mode, so that headers like *X-Forwarded-For* and
*X-Forwarded-Host* could be added. Unfortunately, this was not easy,
because of the simultaneous requirement to forward OpenVPN traffic.
HAProxy can only do SNI inspection in TCP mode. So, I began looking for
an alternate way to proxy both HTTP and non-HTTP traffic on the same
port.
The HTTP protocol defines the `CONNECT` method, which is used by forward
proxies to tunnel HTTPS over plain HTTP. OpenVPN clients support
tunneling OpenVPN over HTTP using this method as well. HAProxy has
only limited support for the CONNECT method via its `http_proxy` option
(it does not do DNS resolution, and I could find no way of restricting
the destination), so I looked for alternate proxy servers that
had more complete support. Unsurprisingly, Apache HTTPD has the most
complete implementation of the `CONNECT` method (Nginx doesn't support
it at all). Using a name-based virtual host on port 443, Apache will
accept requests for *vpn.pyrocufflink.net* (using TLS SNI) and allow the
clients to use the `CONNECT` method to create a tunnel to the OpenVPN
server. This requires OpenVPN clients to a) use *stunnel* to wrap plain
HTTP proxy connections in TLS and b) configure OpenVPN to use the
TLS-wrapped HTTP proxy.
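The Apache side of this boils down to something like the following (certificate directives omitted; the OpenVPN port is an assumption):

```
<VirtualHost *:443>
    ServerName vpn.pyrocufflink.net
    SSLEngine on
    # act as a forward proxy, for CONNECT tunnels only
    ProxyRequests On
    # restrict tunnel destinations to the OpenVPN port
    AllowCONNECT 1194
</VirtualHost>
```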
With Apache accepting all incoming connections, it was trivial to also
configure it as a layer 7 reverse proxy for Bitwarden, Gitea, Jenkins,
and Nextcloud. Unfortunately, proxying for the other websites
(darkchestofwonders.us, chmod777.sh, dustin.hatch.name) was not quite as
straightforward. These websites would need to have an internal name
that differed from their external name, and thus a certificate valid for
that name. Rather than reconfigure all of these sites and set all of
that up, I decided to just move the responsibility for handling direct
connections from outside to *web0* and eliminate the dedicated
reverse proxy. This was not possible before, because Apache could not
forward the OpenVPN traffic directly, but now with the forward proxy
configuration, there is no reason to have a separate server for these
connections.
Overall, I am pleased with how this turned out. It makes the OpenVPN
configuration simpler (*stunnel* no longer needs to run on the OpenVPN
server itself, since Apache is handling TLS termination), eliminates a
network hop for the websites, makes the reverse proxy configuration for
the other web applications much easier to understand, and resolves the
original issue of losing client connection information.
A name-based HTTP (not HTTPS) virtual host for *pyrocufflink.net* is
necessary to ensure requests are handled properly, now that there is
another HTTP virtual host (chmod777.sh) defined on the same server.
This commit updates the configuration for *pyrocufflink.net* to use the
wildcard certificate managed by *lego* instead of a unique certificate
managed by *certbot*.
*chmod777.sh* is a simple static website, generated by Hugo. It is
built and published from a Jenkins pipeline, which runs automatically
when new commits are pushed to Gitea.
The HTTPS certificate for this site is signed by Let's Encrypt and
managed by `lego` in the `certs` submodule.
This commit adds front-end and back-end configuration for HAProxy to
proxy HTTP/HTTPS for
*nextcloud.pyrocufflink.net*/*nextcloud.pyrocufflink.blue* to
*cloud0.pyrocufflink.blue*.
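Sketched from the description above, the HAProxy configuration looks roughly like this (ACL/backend names and the backend port are assumptions):

```
frontend https
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    use_backend nextcloud if { req.ssl_sni -i nextcloud.pyrocufflink.net nextcloud.pyrocufflink.blue }

backend nextcloud
    mode tcp
    server cloud0 cloud0.pyrocufflink.blue:443
```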
The *nextcloud* role installs Nextcloud from the specified release
archive, downloading it to the control machine first if necessary, and
configures Apache and PHP-FPM to serve it.
The `nextcloud.yml` playbook uses the *cert* role to install the X.509
certificate for the Nextcloud server, sets up Apache HTTPD with the
*apache* role, and installs Nextcloud using the *nextcloud* role.
The host *cloud0.pyrocufflink.blue* is the Nextcloud server for
Pyrocufflink.
The *cert* role is intended to be a generic, reusable role to copy an
X.509 certificate and/or private key file to managed nodes. It is
intended to be included in a playbook with at least the `cert_src` and
`cert_dest` variables defined, e.g.:
```
- hosts: whatever
  roles:
    - role: cert
      cert_src: whatever.cer
      cert_dest: /path/to/whatever.cer
```
The `haproxy_ssl_default_bind_options` variable is not defined for
machines running Fedora, because this parameter is not used in the
default configuration file there.
I seem to have forgotten how I got the RPM for Gitea. I think I built
it, but I cannot find the spec file, nor the RPM package. Since this is
clearly not reproducible, I decided to switch to using the binary
provided by upstream for now, until either I or Fedora get around to
making a better RPM.
Installing Gitea from the upstream binary is simple: just download it
and copy it to `/usr/local/bin`. Of course, the OS user and systemd
unit have to be managed by configuration policy when it's installed this
way.
*burp1.pyrocufflink.blue* will replace *burp0.pyrocufflink.blue* as the
BURP server for Pyrocufflink. It is a physical machine (Fitlet), making
it simpler to manage the USB drives. The old virtual machine will be
decommissioned soon.
Ansible replaced the `version_compare` filter with a `version_compare`
test that does the same thing. The filter is completely gone now,
causing the template to fail to render, so its usage needs to be
updated to the test syntax.
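That is, an expression like the first line below has to become the second (the variable and version are illustrative):

```
{{ ansible_distribution_version | version_compare('30', '>=') }}
{{ ansible_distribution_version is version_compare('30', '>=') }}
```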
Using the generic *burp.pyrocufflink.blue* name will allow easier
transition to a new BURP server. However, since this is not the actual
name, it cannot be used for task delegation, so a separate variable is
required to store the real name of the BURP server. This is only used
during client deployment, and not by BURP itself.
The *graylog* role installs Graylog from the *graylog2.org* Yum
repository and manages basic server configuration. It augments the
default systemd unit to provide the `CAP_NET_BIND_SERVICE` capability to
the Graylog server process via ambient capabilities, thereby allowing
the server to bind to the privileged Syslog UDP port.
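The unit extension is essentially a drop-in like this:

```
[Service]
AmbientCapabilities=CAP_NET_BIND_SERVICE
```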
The `Alias` configuration for Certbot needs to be configured before any
other locations, to ensure the `/.well-known` path is always served from
the local filesystem. If another drop-in configuration file (e.g.
`bitwarden.conf`) is ordered before it, it may override this
configuration and prevent Let's Encrypt from working.
In order to allow Jenkins to connect to the Docker daemon socket, the
socket must be owned by the *docker* group, and the *jenkins* user must
be a member of it.
This commit adds an HAProxy backend for Bitwarden, and adds ACL rules
to the frontend to route traffic for *bitwarden.pyrocufflink.blue* or
*bitwarden.pyrocufflink.net* to it.
Since the same certificate is used for LDAPS and RADIUS (EAP-TLS), it
makes more sense to store it only once, with the latter file as a symlink
to the former.
This commit configures *bw0.pyrocufflink.blue* as a BURP client, so that
the Bitwarden data can be backed up. A pre-backup script is used to
take a consistent snapshot of the SQLite database before copying it to
the BURP server.
The BURP server runs as user *burp* and, as such, requires that the
client-specific configuration files be owned by that user so they can be
read when a client connects.
Newer versions of the BURP client require `status_port` to be set. This
commit updates the `burp.conf.j2` template to more closely match the
default configuration shipped with the *burp* package, including setting
this new value.
Newer versions of Gitea need a JWT secret for OAuth2. Gitea will
attempt to generate one at startup if it is not already specified in the
configuration file, but this will fail since the file is not writable by
the user running the service. As such, it must be set via configuration
policy.
The point of the "wheel host" is to serve as a repository of Python
packages (wheels) built by Jenkins for consumption by `pip` et al. For
applications and libraries that do not provide all of their dependencies
as binary packages, this makes a convenient way to install them without
requiring all of the build tools and dependencies on the destination
machine.
The idea here is that a Jenkins job runs `pip wheel` for a distribution
package name or `requirements.txt` file and then uploads the resulting
wheel files using `rsync`. Apache is configured to serve the upload
directory with an index compatible with `pip`'s `--find-links`.
The *hass-dhcp* role installs dnsmasq and configures it to serve DHCP
requests on the Home Assistant network. Since this network is not
routed, the regular DHCP relay/server setup will not work.
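A minimal sketch of the dnsmasq configuration; the interface name and address range are assumptions:

```
# only answer on the Home Assistant network
interface=hass0
# hand out leases from a dedicated pool
dhcp-range=172.30.1.50,172.30.1.150,12h
```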
This commit adds a systemd unit to enable the Kernel Same-page Merging
daemon on VM hosts. This allows much greater virtual machine density,
especially when many VMs are running the same guest OS.
Debian does not support system-wide SSL cipher suite profiles, so
so these options need to be specified explicitly when deploying HAProxy
on Debian-based machines.
This commit updates the net-ifaces scripts for both *vmhost0* and
*vmhost1* to create VLAN and bridge interfaces for the Management and
Home Assistant networks.
The *taiga* role installs the three components of Taiga:
* taiga-back
* taiga-events
* taiga-front
*taiga-back* is a Python application. Its dependencies are installed via
`pip` in the *taiga* user's site-packages, and the application itself is
installed by unpacking the archive. *taiga-events* is a Node.js
application; its dependencies are installed by `npm`, and it is itself
installed by unpacking the archive. Finally, *taiga-front* is a
single-page browser application that is installed by unpacking the
archive, and served by Apache.
Taiga requires PostgreSQL and RabbitMQ.
The *websites/pyrocufflink.net* role configures the public web server to
host *pyrocufflink.net*. This site has two functions:
* It redirects `/` to http://dustin.hatch.name/
* It proxies user home directories (i.e. /~dustin/) to the file server
The `ServerName` directive needs to be set inside the default SSL vhost,
as this property does not get inherited from the global configuration,
and it needs to be set in order for SNI to work correctly.
By default, per-user directories (i.e. `/~username/`) are disabled in
Fedora's configuration of Apache. This commit introduces a new variable,
`apache_userdir`, which can be used to enable this feature. Its value
names the directory under each user's home directory that will be
served, if it is accessible; any string other than *disabled* enables
the feature. Normally, the value would be `public_html`.
To support multiple websites with separate Let's Encrypt certificates,
the *certbot* role needs to be applied as a dependency of each
individual website role. This will allow each application to specify a
different value for `certbot_domains`.
The default systemd unit configuration for *certbot-renew.service* runs
the `certbot renew …` command as root. This can cause permissions
issues, since this Ansible role expects the *certbot* user to be able to
access all configuration, data, and log files. As such, this commit adds
a systemd unit extension for *certbot-renew.service* to run the command
as *certbot*.
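The extension is a simple drop-in (the group is an assumption):

```
[Service]
User=certbot
Group=certbot
```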
Moving the route definitions to global scope, and defining an address
pool, will allow other clients besides *dhatch-d4b* to connect to and
use the OpenVPN tunnel service. This may be useful in situations where
IPsec is blocked.
The VPN capability of the UniFi Security Gateway is extremely limited.
It does not support road-warrior IPsec/IKEv2 configuration, and its
OpenVPN configuration is inflexible. As with DHCP, the best solution is
to simply move service to another machine.
To that end, I created a new VM, *vpn0.pyrocufflink.blue*, to host both
strongSwan and OpenVPN. For this to work, the necessary TCP/UDP ports
need to be forwarded, of course, and all of the remote subnets need
static routes on the gateway, specifying this machine as the next hop.
Additionally, ICMP redirects need to be disabled, to prevent confusing
the routing tables of devices on the same subnet as the VPN gateway.
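Disabling ICMP redirects is a pair of sysctl settings, e.g. with the `sysctl` module:

```
- name: disable ICMP redirects
  sysctl:
    name: "{{ item }}"
    value: '0'
  loop:
    - net.ipv4.conf.all.send_redirects
    - net.ipv4.conf.default.send_redirects
```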
The routes to FireMon networks are now defined using the
`firemon_networks` Ansible variable. The global `route` and
client-specific `iroute` options are generated from the CIDR blocks
specified in this list.
The *aria2* role installs the *aria2* download manager and sets it up to
run as a system service with RPC enabled. It also sets up the web UI,
though that must be installed manually from an archive, for now.
For hosts that do not have any TSIG keys, the `named_keys` variable
still must be defined (to an empty iterable) in order for the template
to expand properly.
To avoid having a single point of failure, a second recursive DNS server
is necessary. This will be useful in cases where the VM hosts must both
be taken offline, but Internet access is still required.
The new server, *dns1.pyrocufflink.blue*, has all the same zones defined
as the original. It forwards the *pyrocufflink.blue* zone and
corresponding reverse zones to the domain controllers, and acts as a
slave for the *pyrocufflink.red* zone.
*smtp1.pyrocufflink.blue* is a VM that will replace
*smtp0.pyrocufflink.blue*, a Raspberry Pi.
I decided that there is little use in having the availability guarantee of
a discrete machine for the SMTP relay. The only system that would NEED
to send mail if the VM host fails is Zabbix, which operates as its own
relay anyway. As such, the main relay can be a VM, and the Raspberry Pi
can be repurposed as a recursive DNS server.
The *koji-web* role installs and configures the Koji Web GUI front-end
for Koji. It requires Apache and mod_wsgi. A client certificate is
required for authentication to the hub, and must be placed in the
host-specific subdirectory of `certs/koji`.
The *koji-client* role is a generic role that can be used to configure
the Koji client library/`koji` CLI tool. By default, it manages the
default configuration at `/etc/koji`, but by using the
`koji_client_dir`, `koji_client_user`, and `koji_client_id` variables,
it can be used to configure per-user client configuration as well.
The *kojira* role sets up the Koji repository agent to manage
repository metadata for build tags. It runs as a daemon, usually on the
same machine as the Koji hub. A client certificate is required for
authentication, and must be supplied by placing it in the
`certs/koji/{{ inventory_hostname }}` directory.
The *koji-gc* role sets up the Koji garbage collector utility to run
periodically. It uses cron for scheduling. A client certificate is
required for authentication, and must be supplied by placing it in the
`certs/koji/{{ inventory_hostname }}` directory.
The minimal Fedora installation does not include a cron implementation.
The *cronie* role can be applied to hosts installed in this way to
ensure that cron is available for task scheduling.
The *burp-client* role installs and configures a BURP client. It should
support RHEL/CentOS/Fedora and Gentoo.
To manage the client password and other server-mandated configuration,
the role uses Ansible's delegation feature to generate a configuration
file in the "clientconfdir" on the BURP server.
A cron task is scheduled to run `burp -a t` every hour. This
allows the server to configure backup timebands and intervals.
The *burp-server* role installs and configures a BURP server. It is
adapted from a previous iteration, and should support CentOS/RHEL/Fedora
and Gentoo, as well as both BURP 1.x and 2.x (depending on which version
gets installed by the system package manager).
To manage the certificate authority, the *burp-server* role uses the
`burp_ca` command. This has the advantage of not requiring any external
certificate management, but effectively binds the CA to a specific
machine.
The value of the `shlib_directory` parameter depends on the system architecture.
Specifically, x86_64 machines use `/usr/lib64/postfix`, while everything
else uses `/usr/lib/postfix`. This role was originally deployed on a
Raspberry Pi, so the original path was correct. Attempting to deploy it
on an x86_64 machine revealed the error.
This commit adds a new task that loads a variables file based on the
architecture. Each file defines an `arch_libdir` variable, which is
expanded in the `postfix_shlib_directory` variable as needed.
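A sketch of the pattern (file names are illustrative):

```
# vars/x86_64.yml
arch_libdir: lib64

# vars/default.yml
arch_libdir: lib

# tasks/main.yml
- name: load architecture-specific variables
  include_vars: "{{ item }}"
  with_first_found:
    - "{{ ansible_architecture }}.yml"
    - default.yml

# defaults/main.yml
postfix_shlib_directory: /usr/{{ arch_libdir }}/postfix
```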
*myala.pyrocufflink.jazz* no longer hosts any public-facing websites,
and is in fact shut down. To prevent HAProxy from failing to start
because it cannot resolve the name, this backend needs to be removed.
The *fileserver* role configures Samba as a file sharing server. It uses
the *samba* role to handle cross-distribution installation of Samba
itself, and is focused primarily on configuring shared folders.
This commit adds an *after* ordering dependency on the network device
unit to the *wait-global-address@.service* template unit. Without this
dependency, the service will wait forever for a global address if the
device does not exist. With the dependency, though, if the device does
not appear within the default timeout, the wait service will never
start, causing all dependent services to fail, but allowing the boot
process to continue.
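With systemd's device-unit naming for network interfaces, the added dependency looks like this in the template unit:

```
[Unit]
After=sys-subsystem-net-devices-%i.device
```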
The *websites/darkchestofwonders.us* role prepares a machine to host
http://darkchestofwonders.us/. The website itself is published via rsync
by Jenkins.
The *websites/dustin.hatch.name* role configures a server to host
http://dustin.hatch.name/. The role only applies basic configuration;
the actual website application is published by Jenkins.
The Apache service needs to be reloaded after the *certbot* role
configures it to serve the `/.well-known/acme-challenge` path, so that
the changes can take effect before the `certbot` command is run to
request the certificate(s).
If another role that depends on the *apache* role accidentally creates
an invalid configuration, it will be impossible to correct it by
subsequent invocations of its playbook. This is because the *apache*
role always tries to start the service, which will fail if the
configuration is invalid, thus aborting the playbook. With this early
abort, there is no way for later tasks to correct the error.
Playbooks that include the *apache* role should have a task that is
executed after all the roles have been applied to ensure the service is
running.
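For example, at the end of such a playbook:

```
  post_tasks:
    - name: ensure Apache is running
      service:
        name: httpd
        state: started
```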
The *dch-storage-net* role configures a machine to connect the storage
network and mount shared folders from the storage appliance.
The `wait-global-address.sh` script and corresponding
*wait-global-address@.service* systemd unit template are necessary to
ensure that the storage network is actually available before attempting
to mount the shared volumes. This is particularly important at boot,
since `dhcpcd` does not implement any kind of signaling that can be used
by *network-online.target*, so the network is considered "online" as
soon as the `dhcpcd` process has started. This typically results in
"network unreachable" errors.
The *net-ifaces* role manages a script that creates virtual network
interfaces, such as bridge, bond, and VLAN, that `dhcpcd`/`dhclient`
alone cannot. This provides a lightweight alternative to
*systemd-networkd* and *NetworkManager*.
Though the default for the `fqdn` value is listed as `both` in
*dhcpcd.conf(5)*, the current behavior of `dhcpcd` suggests that it may
actually be `none`. Without explicitly setting `fqdn both`, the value of
the kernel node name is sent as-is in the *hostname* option (12). If the
node name is set to the FQDN, then dynamic DNS gets broken, since the
DHCP server always appends its domain name to the provided hostname.
Setting `fqdn both` causes `dhcpcd` to send the FQDN in the *FQDN*
option (81), which the DHCP server interprets correctly.
Using a list to specify the values for the `allowinterfaces` and
`denyinterfaces` parameters in `dhcpcd.conf` makes the configuration
policy cleaner and more type-safe.
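For example (the variable name is an assumption about the role's interface):

```
dhcpcd_allowinterfaces:
  - eth0
  - br0
```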
Today I realized that `dhcpcd` has been logging several hundred thousand
of these messages every second:
libudev: received NULL device
This was causing both `dhcpcd` and `systemd-journald` to consume 100%
CPU.
I am not entirely sure what a "device management" module is in the
context of `dhcpcd`, but it does not seem to be required. Setting the
`nodev` option in `dhcpcd.conf` suppresses the messages, and seems to
have no effect on the operation of the daemon.
Traffic from the management network is not allowed except for specific
services. NTP is required, of course, for time synchronization with the
pyrocufflink.blue domain controllers. RADIUS is necessary for WiFi
authentication, which is also handled by the DCs.
The `ifconfig` global directive specifies the IP address added to the
tunnel interface device, not the network. The `push route` directives
need to include this address to correctly send route information to
clients.
The *dch-openvpn-server* role installs and configures OpenVPN and
stunnel to provide both native OpenVPN service as well as
OpenVPN-over-TLS. The latter uses stunnel, listening on TCP port 9876,
to allow better firewall traversal and TCP port sharing via reverse
proxy.
The `apache_server_tokens` variable can now be set, which controls the
value of the `ServerTokens` directive. If the variable is set, the
`ServerTokens` directive will be added to the `00-servername.conf` file.
The `samba_interfaces` variable can now be defined to populate the
`interfaces` global configuration parameter in `smb.conf`. This
parameter controls the interfaces or addresses to which the Samba server
binds, and also the IP addresses that are registered in DNS.
The *certbot* role now supports copying the data for an existing Let's
Encrypt account to the managed node using an archive. If an archive
named for the inventory hostname (typically the FQDN) of the managed
node is found in the `accounts` directory under the `files` directory of
the *certbot* role, it will be copied to the managed node and extracted
at `/var/lib/letsencrypt/accounts`. This takes the place of running
`certbot register` to sign up for a new account.
The *install* tag is applied to any task that installs a package.
The *user* tag is applied to any task that creates an OS user or group.
The *group* tag is applied to any task that creates an OS user group.
For machines that do not use firewalld, the *zabbix-agent* role will now
skip attempting to open the Zabbix agent port using the `firewalld`
module. The `host_uses_firewalld` variable controls this behavior.
The *certbot* role installs and configures the `certbot` ACME client. It
adjusts the default configuration to allow the tool to run as an
unprivileged user, and then configures Apache to work with the *webroot*
plugin. It registers for an account and requests a certificate for the
domains specified by the `certbot_domains` Ansible variable. Finally, it
enables the *certbot-renew.timer* systemd unit to schedule automatic
renewal of all Let's Encrypt certificates.
The *dch-proxy* role sets up HAProxy to provide a reverse proxy for all
public-facing web services on the Pyrocufflink network. It uses the TLS
Server Name Indication (SNI) extension to determine the proper backend
server based on the name requested by the client.
For now, only Gitea is configured; the name *git.pyrocufflink.blue* is
proxied to *git0.pyrocufflink.blue*. All other names are proxied to
Myala.
The *haproxy* role installs HAProxy and sets up basic configuration for it.
It configures the systemd unit to launch the service with the `-f
/etc/haproxy` arguments, which will cause it to load all files from the
`/etc/haproxy` directory, instead of just `/etc/haproxy/haproxy.cfg`.
This will allow other roles to add frontend and backend configuration by
adding additional files to this directory.
The *sshd* role can be used to configure the OpenSSH daemon. It supports
configuring a few options globally, as well as a limited set of options
in `Match` blocks (e.g. per-user/group configuration).
The `trustca` role can be used to add CA certificates to the system
trust store. It requires a variable, `ca`, to be defined, referring to
the name of a file containing a CA certificate to install.
The `gitea_ssh_domain` and `gitea_http_domain` variables can be used to
configure the host portion of the URLs for cloning Git repositories over
SSH and HTTPS, respectively. By default, both values are the FQDN of the
machine hosting Gitea.
The *gitea* role installs Gitea using the system package manager and
configures Apache as a reverse proxy for it.
The configuration file requires a number of "secret" values that need to
be unique. These must be specified as Ansible variables:
* `gitea_internal_token`
* `gitea_secret_key`
* `gitea_lfs_jwt_secret`
The `gitea generate` command can be used to create these values.
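For example (subcommand names as documented by Gitea; each invocation prints a new random value):

    gitea generate secret INTERNAL_TOKEN
    gitea generate secret SECRET_KEY
    gitea generate secret LFS_JWT_SECRET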
Normally, Gitea expects to run its own setup tool to generate the
configuration file and create the administrative user. Since the
configuration file is generated from the template instead, no
administrative user is created automatically. Luckily, the `gitea`
command includes a tool to create users, so the administrator can be
created manually, e.g.:
    sudo -u gitea gitea admin create-user -c /etc/gitea/app.ini \
        --admin \
        --name giteadmin \
        --password giteadmin \
        --email giteadmin@example.org