Provide and deploy Let's Encrypt certificates via Nginx
2021-11-08
I use Let's Encrypt to obtain valid TLS certificates for various services, including mail. Each service runs on its own virtual machine or on a separate physical machine. Since I only have one public IP address, I use nginx as reverse proxy and TLS termination point for all traffic running over HTTP/HTTPS. As this machine is the only one accessible from the world wide web, it also handles the certificate requests for all other machines which need valid TLS certificates. After some web research I noticed that there is no common way to distribute certificates across a network of machines, so the following is my way of handling this problem. Comments with improvements or recommendations for other approaches are highly welcome.
Providing certificates
I use certbot to request new certificates. The possible ways to get a new one are well described in the manual, so I jump straight to providing certificates to other machines. There are multiple ways to get the certificates to other machines. One way is to use ssh to copy the certificates directly, but not all of my machines are running ssh. Another way is to use nginx to provide the encrypted certificates and keys via HTTPS within the local network. I wrote a small script which parses a simple configuration, encrypts the selected certificates and copies them into the document root of some nginx virtual host. The script resides as nn_provision_certs within /usr/local/bin/.
The script assumes that a folder ~/certs exists within your home directory which contains a passfile with the shared password for encryption; otherwise, the script creates a random password. Now link every certificate that shall be distributed into that directory, for example ln -s /etc/letsencrypt/live/mail.domain.com ~/certs/mail.domain.com. The script follows every link in the certs directory and creates a tar archive which gets encrypted with openssl and the password from the passfile. The result is copied to the document root used for provisioning and nginx gets informed about the new files.
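A minimal sketch of what such a script can look like; the archive naming scheme and the final nginx reload are my assumptions here, while the document root matches the /deploy location configured further below:

#!/bin/sh
# nn_provision_certs -- encrypt linked certificates and copy them
# into the nginx document root (sketch, adjust paths to your setup)
set -eu

CERT_DIR="$HOME/certs"          # contains one symlink per certificate
PASSFILE="$CERT_DIR/passfile"   # shared encryption password
DEPLOY_ROOT="/var/www/deploy"   # document root served by nginx

# Create a random password if none exists yet.
if [ ! -f "$PASSFILE" ]; then
    openssl rand -base64 32 > "$PASSFILE"
    chmod 600 "$PASSFILE"
fi

# Follow every symlink, archive the certificate directory and
# encrypt the archive with the shared password.
for link in "$CERT_DIR"/*; do
    [ -L "$link" ] || continue
    name=$(basename "$link")
    tar -C "$(readlink -f "$link")" -cf "/tmp/$name.tar" .
    openssl enc -aes-256-cbc -pbkdf2 -salt \
        -pass "file:$PASSFILE" \
        -in "/tmp/$name.tar" -out "$DEPLOY_ROOT/$name.tar.enc"
    rm "/tmp/$name.tar"
done

# Let nginx pick up the new files (one way to "inform" it).
systemctl reload nginx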
To automate the process one can use either the certbot deploy hook, which runs after every successful renewal of a certificate, or a timer. I use systemd timers for easier monitoring with monit and for sending mails with the result to my account. A systemd timer is a special systemd unit which runs at specific points in time and triggers a normal systemd service unit. It is recommended that the timer and the service unit have the same name. Systemd provides extensive documentation via man systemd.service and man systemd.timer, so I won't explain too much here.
First create the service file /etc/systemd/system/nn_provision_certs.service
with the following content:
[Unit]
Description=Provision Let's Encrypt certificates via nginx
[Service]
Type=oneshot
ExecStart=/usr/local/bin/nn_provision_certs
Environment=HOME=/root
[Install]
WantedBy=default.target
Adjust the Environment line if your home directory is not /root. To test the new unit file run systemctl daemon-reload && systemctl start nn_provision_certs.service.
Now create the timer file /etc/systemd/system/nn_provision_certs.timer
with the following content:
[Unit]
Description=Provision certificates once a day
[Timer]
# Run daily at 2 am
OnCalendar=*-*-* 02:00:00
# Run the job later if it was missed because the machine was offline
Persistent=true
[Install]
WantedBy=timers.target
This timer runs every day at 2 AM and is run retroactively if the machine was turned off at that time. Enable the timer with systemctl daemon-reload, systemctl enable --now nn_provision_certs.timer. You can check that the timer is active with systemctl list-timers.
Notification time
This runs the certificate provisioning every day, but if something goes wrong it remains undetected. This can be changed fairly easily by sending a mail if the job fails. I use msmtp to send mails from a machine to my mail server. The setup is straightforward and well described in the Debian and Arch Linux wikis. Once msmtp works and can send mails, create a new script /usr/local/bin/nn_systemd_mail and make it executable.
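A minimal sketch, assuming msmtp provides the sendmail-compatible interface at /usr/bin/sendmail:

#!/bin/sh
# nn_systemd_mail <recipient> <unit>
# Mails the status of a failed systemd unit to the given recipient.
/usr/bin/sendmail -t <<MAIL
To: $1
From: systemd <root@$(hostname -f)>
Subject: Unit $2 failed
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8

$(systemctl status --full "$2")
MAIL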
This script can be used to send information about failed units to some recipient.
To use it from systemd we need another service file /etc/systemd/system/status_mail@.service:
[Unit]
Description=status email for %i to the admins
[Service]
Type=oneshot
ExecStart=/usr/local/bin/nn_systemd_mail admins@domain.de %i
User=nobody
Group=systemd-journal
This unit file can now send mails for any other unit. You can test it with something like systemctl daemon-reload && systemctl start status_mail@dbus.service. You should get a mail with the output of systemctl status --full dbus.service. Now, attach it to the provisioning unit file by adding the line
OnFailure=status_mail@%n.service
to the [Unit] section. The %n specifier passes the name of the failing unit to the mail service.
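The [Unit] section of nn_provision_certs.service then looks like this:

[Unit]
Description=Provision Let's Encrypt certificates via nginx
OnFailure=status_mail@%n.service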
Now, the provisioning of Let's Encrypt certificates is mostly done. The backend automatically renews the certificates (thanks to certbot), the nn_provision_certs script encrypts them and moves them to the correct place, and if some error happens we get a mail.
Make the certificates accessible
As a last step on the provisioning side we need to tell nginx to serve our certificates to the internal network. I took the virtual host configuration of a simple proxy and added the following lines:
# More configuration for the virtual host
# certificate deployment here
location /deploy {
alias /var/www/deploy/;
# allow HEAD requests from all networks,
# all other requests only from the internal network
limit_except HEAD {
allow <internal network ip>/24;
deny all;
}
}
# More configuration for the virtual host
This adds a new location /deploy to the virtual host, whose document root is an alias to /var/www/deploy/, the path used by the provisioning script. The limit_except HEAD directive restricts all request methods except HEAD to the internal network, while HEAD requests are allowed from anywhere. Now you can use curl or some other tool to download the certificates within your internal network.
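For example, with the archive naming scheme from the provisioning sketch above and a hypothetical proxy host name:

curl -O https://proxy.domain.com/deploy/mail.domain.com.tar.enc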
On the target machine
With this setup every machine that needs a Let's Encrypt certificate can just download it and deploy it to the correct place. The following script is fairly long because it does more than just download the certificates.
The first part defines the configuration directory and pass file as well as the download URL. Adjust these settings to your needs and copy the password from the provisioning machine so both sides use the same one. A sketch of this part, with a hypothetical URL and install path:
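#!/bin/sh
# nn_deploy_certs -- download and install certificates
# (sketch; script name, URL and install path are assumptions)
set -eu

CERT_DIR="$HOME/certs"                      # one empty file per domain to deploy
PASSFILE="$CERT_DIR/passfile"               # must match the provisioning password
BASE_URL="https://proxy.domain.com/deploy"  # URL of the /deploy location
INSTALL_DIR="/etc/ssl"                      # where certificates get installed locally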
The next part is only a utility function which prints the valid until date for each certificate. It could look like this, assuming the layout defined above:
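# Print the "valid until" date for every deployed certificate.
show_dates() {
    for f in "$CERT_DIR"/*; do
        domain=$(basename "$f")
        [ "$domain" = "passfile" ] && continue
        printf '%s: ' "$domain"
        openssl x509 -enddate -noout -in "$INSTALL_DIR/$domain/fullchain.pem"
    done
}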
The last part is quite a bit longer, but the option parsing should look familiar. The update function starts with a trap which removes the temporary directory at the end or when the script is interrupted. Now to the hot part: like the provisioning script, this one uses the ~/certs folder and takes every file except the passfile as a domain name. So to add a domain for deployment just run touch ~/certs/<domain.name>. The script tries to download the new certificate and checks that the file is properly encrypted. Afterwards it decrypts the file into a temp folder and checks whether the key and fullchain differ from the ones installed locally. If so, the fullchain and key get rotated such that both the current and the last key and chain remain on the machine. If everything runs well, the mail service or whatever else consumes the certificate gets told to load the new one. A sketch of this part, under the same assumptions as above:
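update() {
    tmp=$(mktemp -d)
    # Remove the temporary directory on exit or interruption.
    trap 'rm -rf "$tmp"' EXIT INT TERM

    for f in "$CERT_DIR"/*; do
        domain=$(basename "$f")
        [ "$domain" = "passfile" ] && continue

        # Download the encrypted archive and unpack it into the temp
        # folder; a failed download or decryption aborts the script
        # before anything on the machine is touched.
        curl -sf -o "$tmp/$domain.tar.enc" "$BASE_URL/$domain.tar.enc"
        mkdir "$tmp/$domain"
        openssl enc -d -aes-256-cbc -pbkdf2 \
            -pass "file:$PASSFILE" \
            -in "$tmp/$domain.tar.enc" | tar -C "$tmp/$domain" -xf -

        # Rotate only if key or fullchain actually changed; the
        # previous pair stays on the machine as *.old.
        dest="$INSTALL_DIR/$domain"
        mkdir -p "$dest"
        if ! cmp -s "$tmp/$domain/fullchain.pem" "$dest/fullchain.pem" ||
           ! cmp -s "$tmp/$domain/privkey.pem" "$dest/privkey.pem"; then
            mv "$dest/fullchain.pem" "$dest/fullchain.pem.old" 2>/dev/null || true
            mv "$dest/privkey.pem" "$dest/privkey.pem.old" 2>/dev/null || true
            cp "$tmp/$domain/fullchain.pem" "$tmp/$domain/privkey.pem" "$dest/"
            systemctl reload postfix   # hypothetical consumer of the certificate
        fi
    done
}

# Option parsing: -d prints the dates, everything else runs the update.
case "${1:-}" in
    -d) show_dates ;;
    *)  update ;;
esac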
I use systemd timers again to run the script on a daily basis, and the same mail mechanism to get informed if some error happens.
Conclusion
Okay, this post got longer and contains a lot more code than I expected. I have been running this setup for half a year without complications or certificate problems. Feel free to leave me a comment about your implementation or improvements to my solution.