The *sshd* role can be used to configure the OpenSSH daemon. It supports
configuring a few options globally, as well as a limited set of options
in `Match` blocks (e.g. per-user/group configuration).
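For illustration, the variables consumed by the role might look
something like this (a sketch only; these variable names are
assumptions, not taken from the role):
    sshd_permit_root_login: 'no'
    sshd_match:
      - condition: Group sftponly
        options:
          ChrootDirectory: /srv/sftp
          ForceCommand: internal-sftp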
The `trustca` role can be used to add CA certificates to the system
trust store. It requires a variable, `ca`, to be defined, referring to
the name of a file containing a CA certificate to install.
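A minimal sketch of applying the role (the certificate file name is
illustrative):
    - hosts: all
      roles:
        - role: trustca
          vars:
            ca: example-root-ca.pem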
The `gitea_ssh_domain` and `gitea_http_domain` variables can be used to
configure the host portion of the URLs for cloning Git repositories over
SSH and HTTPS, respectively. By default, both values are the FQDN of the
machine hosting Gitea.
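For example, to serve clone URLs from a dedicated host name instead of
the FQDN (the value below is illustrative):
    gitea_ssh_domain: git.example.org
    gitea_http_domain: git.example.org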
The *gitea* role installs Gitea using the system package manager and
configures Apache as a reverse proxy for it.
The configuration file requires a number of "secret" values that need to
be unique. These must be specified as Ansible variables:
* `gitea_internal_token`
* `gitea_secret_key`
* `gitea_lfs_jwt_secret`
The `gitea generate` command can be used to create these values.
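A sketch of one way to wire these up, assuming the generated values are
kept in an Ansible Vault file (the `vault_*` names are hypothetical):
    # Values produced by e.g. `gitea generate secret INTERNAL_TOKEN`
    gitea_internal_token: '{{ vault_gitea_internal_token }}'
    gitea_secret_key: '{{ vault_gitea_secret_key }}'
    gitea_lfs_jwt_secret: '{{ vault_gitea_lfs_jwt_secret }}'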
Normally, Gitea expects to run its own setup tool to generate the
configuration file and create the administrative user. Since the
configuration file is generated from the template instead, no
administrative user is created automatically. Luckily, the `gitea`
command includes a tool to create users, so the administrator can be
created manually, e.g.:
    sudo -u gitea gitea admin create-user -c /etc/gitea/app.ini \
        --admin \
        --name giteadmin \
        --password giteadmin \
        --email giteadmin@example.org
In order to enable LDAPS/STARTTLS support in Samba, the `tls enabled`
option must be set to `yes` and the `tls keyfile` and `tls certfile`
options must be set to the path of the private key and certificate
files, respectively, that Samba will use. The `samba_tls_enabled`,
`samba_tls_keyfile`, and `samba_tls_certfile` Ansible variables can be
used to control these values.
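For example (the key and certificate paths are illustrative):
    samba_tls_enabled: true
    samba_tls_keyfile: /etc/samba/tls/samba.key
    samba_tls_certfile: /etc/samba/tls/samba.crt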
The `socket options` directive does not need to be specified in
`smb.conf`. I think I copied it from an example many years ago and never
bothered to remove it. It is definitely not required; at best it does
nothing for performance, and more likely it actually hurts it.
This commit adjusts the firewall and networking configuration on dc0 to
host the Pyrocufflink remote access IPsec VPN locally instead of
forwarding it to the internal VPN server.
The *dch-vpn-server* role configures strongSwan to act as an IPsec
responder for `vpn.pyrocufflink.net` and provide an IKEv2/IPsec VPN for
remote access clients, as well as the reverse VPN to FireMon.
The *strongswan* role is intended to be used as a dependency of other
roles that use strongSwan for IPsec configuration. It deploys some basic
configuration and configures the *strongswan* service, but does not
configure any connections, secrets, etc.
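A role such as *dch-vpn-server* can pull it in through its role
dependencies, e.g.:
    # roles/dch-vpn-server/meta/main.yml
    dependencies:
      - role: strongswan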
Using `state=absent` with the `file` module in a `with_items` loop to
delete the "default" module and site configuration files and the example
certificates is incredibly slow. Especially on the Raspberry Pi, it can
take several minutes to apply this role, even when there are no changes
to make. Using the `command` module and running `rm` to remove these
files, while not as idempotent, is significantly faster. The main
drawback is that the items in the list are not checked individually, so
new items to remove have to be appended to the end of the list instead
of being inserted in alphabetical order.
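A sketch of the faster approach, assuming FreeRADIUS-style paths (the
file list is abridged and illustrative); checking only the final item
with `removes` is what forces new entries to be appended:
    - name: remove default configuration and example certificates
      command: >
        rm -f
        /etc/raddb/sites-enabled/default
        /etc/raddb/sites-enabled/inner-tunnel
        /etc/raddb/certs/server.pem
      args:
        removes: /etc/raddb/certs/server.pem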
The *freeradius* role is used to install and configure FreeRADIUS. The
configuration system for it is extremely complicated, with dozens of
files in several directories. The default configuration has a plethora
of options enabled that are not needed in most cases, so they are
disabled here. Since the initial (and perhaps only) use case I have for
RADIUS is WiFi authentication via certificates, only the EAP-TLS
mechanism is enabled currently.
The *postfix* role installs and configures the Postfix MTA. It currently
supports a number of modes, including direct transfer and relay. Relay
mode supports STARTTLS security and PLAIN authentication.
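A hypothetical variable layout for relay mode (none of these names are
taken from the role; they only illustrate the shape of the
configuration):
    postfix_mode: relay                    # or direct transfer
    postfix_relay_host: smtp.example.org
    postfix_relay_starttls: true           # STARTTLS security
    postfix_relay_username: mailer         # PLAIN authentication
    postfix_relay_password: '{{ vault_postfix_relay_password }}'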
Since the location of the configuration drop-in directory can vary by
distribution, it is important to expand the `zbx_agent_config_dir`
variable in the `Include` parameter.
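The relevant line of the agent configuration template would look
something like this (the template file name is assumed):
    # templates/zabbix_agentd.conf.j2 (excerpt)
    Include={{ zbx_agent_config_dir }}/*.conf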
The *zabbix-agent* role installs the Zabbix monitoring agent on the
managed node, and sets it up to communicate with the Zabbix server
specified by the `zabbix_server` variable. This role "should" be
compatible with most distributions; it has been tested with Fedora and
Gentoo.
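A minimal sketch of applying the role (host pattern and server name are
illustrative):
    - hosts: all
      vars:
        zabbix_server: zabbix.example.org
      roles:
        - zabbix-agent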
The *zabbix-server* role deploys the Zabbix server database, daemon, and
web interface. It requires the *apache* role to configure Apache HTTPD
to serve the web UI.
The *apache* role installs and configures the Apache HTTPD server and
its *mod_ssl* module. It currently only works on Fedora/RHEL-based
distributions.
The `ad` identity mapper backend is apparently now (as of Samba 4.6)
the only one that can use the shell, home directory, etc. attributes
from the directory.
The *ssh-hostkeys* role is used to manage the global SSH host key
database. This file is consulted by the `ssh` command when verifying
remote host keys on first connect. If the host key is found here, it is
copied to the user's host key database file without prompting for
verification.
The *jenkins-slave* role prepares a host to have the Jenkins slave
agent deployed on it. Deploying the agent itself is done by the Jenkins
master, through the web UI.
The service principal name added to `/etc/krb5.keytab` had a trailing
`}` character because of a typo in the Ansible task. This resulted in
GSSAPI authentication failing because server processes could not find
the host key in the key table.
This commit introduces a new role, *hostname*, that is used by the
`hostname.yml` playbook to set the hostname. It also writes
`/etc/hosts` using a template.
It is occasionally necessary to advertise multiple prefixes on the same
interface, particularly when those prefixes are not on-link. The *radvd*
role thus now expects each item in the `radvd_interfaces` list to have a
`prefixes` property, which itself is a list of prefixes to advertise.
Prefixes can specify properties such as `on_link`, `autonomous`,
`preferred_lifetime`, etc.
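A sketch of the expected structure, using the properties described
above (the exact key names for the interface and prefix entries are
assumptions):
    radvd_interfaces:
      - name: eth0
        prefixes:
          - prefix: 2001:db8:0:1::/64
            on_link: true
            autonomous: true
          - prefix: 2001:db8:0:2::/64
            on_link: false
            preferred_lifetime: 3600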
Marking packets that match port-forwarding rules and then allowing
traffic carrying that mark did not work well. Packets often seemed to
get dropped for no apparent reason, and outside connections to NAT'd
services were sometimes slow as a result. Explicitly listing every
destination host/port in the `forward` table seems to resolve this
issue.
The *filter* table is responsible for deciding which packets will be
accepted and which will be rejected. It has three chains, which classify
packets according to whether they are destined for the local machine
(input), passing through this machine (forward), or originating from the
local machine (output).
The *dch-gw* role now configures all three chains in this table. For
now, it defines basic rules, mostly based on TCP/UDP destination port:
* Traffic destined for a service hosted by the local machine (DNS, DHCP,
SSH) is allowed if it does not come from the Internet
* Traffic passing through the machine is allowed if:
* It is passing between internal networks
* It is destined for a host on the FireMon network (VPN)
* It was NATed to an internal host (marked 323)
* It is destined for the Internet
* Only DHCP, HTTP, and DNS are allowed to originate from the local
machine
This configuration requires an `internet_iface` variable, which
indicates the name of the network interface connected to the Internet
directly.
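For example, for a gateway whose uplink is a PPPoE session (the
interface name is illustrative):
    internet_iface: ppp0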
`dhcpcd` needs to start after the `network` service has started, as the
latter creates the interfaces to which the former needs to delegate IPv6
prefixes.
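One way to express that ordering is a systemd drop-in installed by
tasks like the following (the drop-in path is an assumption; the role
may implement this differently):
    - name: create dhcpcd drop-in directory
      file:
        path: /etc/systemd/system/dhcpcd.service.d
        state: directory
    - name: start dhcpcd after the network service
      copy:
        dest: /etc/systemd/system/dhcpcd.service.d/after-network.conf
        content: |
          [Unit]
          After=network.service
          Requires=network.service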
The *nftables* role handles installation and basic configuration of the
userspace components for nftables.
Note that this role currently only works on Fedora, and requires
*nftables* 0.8 or later for wildcard includes.
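The wildcard include is what drives that version requirement; the main
ruleset deployed by the role might contain little more than the
following (the paths are assumptions):
    - name: install main nftables configuration
      copy:
        dest: /etc/sysconfig/nftables.conf
        content: |
          include "/etc/nftables.d/*.nft"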
The *networking* service, which is actually a legacy init script, is
provided by the *initscripts* package on RHEL and its derivatives. This
service needs to be running in order for the configuration generated by
the *rhel-network* role to be applied to the managed node.
The `network.yml` playbook is used to configure the network interfaces
on a managed node. Currently, it only supports the Red Hat configuration
style (i.e. `/etc/sysconfig/network-scripts/ifcfg-*` files).
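A hypothetical excerpt from the playbook, assuming it simply applies
the *rhel-network* role described above:
    # network.yml (sketch)
    - hosts: all
      roles:
        - rhel-network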