Setting up a dockerised Caddy-based webserver on Hetzner Cloud
This blog—along with a number of other websites I run—had been hosted by Dreamhost since 2008. But last year, when they announced they’d now have to charge VAT on top of the $13/mth I was already paying them, I figured it was time to shop around for alternatives.
It wasn’t just about price either – I was also increasingly uncomfortable with having my data stored in the USA, and as my needs had progressed beyond just basic PHP hosting to building Jekyll sites and deploying with Git hooks, Dreamhost’s “not quite a VPS” basic tier became more and more awkward to work with.
I settled on Hetzner Cloud’s CX22 VPS as a suitable alternative – as long as your processing or bandwidth requirements aren’t significant, they just can’t be beaten on price. With VAT and an (optional extra!) IPv4 address, it costs me £4/mth – cheaper than a Digital Ocean Small Droplet or a Mythic Beasts VPS 1, but with four times the RAM and twice the storage. Wow. I even snagged a €20 voucher on the Hetzner Community site, which effectively gave me the first four months for free.
Dreamhost also used to handle the DNS for some of my domains, so I needed to find a replacement for that too. I went with Cloudflare. I used Luar Roji’s Dreamhost DNS exporter to save me half an hour’s work copying and pasting between the two sites.
Setting up the VPS
The Hetzner Cloud setup wizard makes it super easy to boot and configure a new VPS. You pick a distribution (I chose Ubuntu, as I’m most familiar with that), upload an SSH public key for the root user, and boom, you’re ready to SSH in.
You’ll then want to do basic setup and security – see some examples here, here, and here. In my case:
Set up a non-root user account
adduser zarino
usermod -aG sudo zarino
su zarino
cd /home/zarino
mkdir .ssh
chmod 700 .ssh
Then I copied my SSH public key to /home/zarino/.ssh/authorized_keys on the remote server, with ssh-copy-id zarino@<hetzner-ip-address> from my Mac (because I already had ssh-copy-id installed via Homebrew). I guess you could copy/paste your SSH key in by hand.
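If you don’t have ssh-copy-id, appending the key by hand from your local machine looks something like this (assuming your public key lives at ~/.ssh/id_ed25519.pub):

cat ~/.ssh/id_ed25519.pub | ssh zarino@<hetzner-ip-address> 'cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'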
Tighten login requirements
Secure your logins by editing /etc/ssh/sshd_config:
- Uncomment the #PermitRootLogin… line and change prohibit-password to no
- Uncomment the #PasswordAuthentication… line and change yes to no
- Change UsePAM yes to UsePAM no
- Confirm the following are set (or are the default):
  - ChallengeResponseAuthentication no (was replaced by KbdInteractiveAuthentication in Ubuntu 22.04)
  - KerberosAuthentication no
  - GSSAPIAuthentication no
  - X11Forwarding no
  - PermitUserEnvironment no
  - DebianBanner no
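For reference, once those edits are made, the relevant lines of my sshd_config look roughly like this (a sketch – option names and defaults can vary slightly between Ubuntu releases):

PermitRootLogin no
PasswordAuthentication no
UsePAM no
KbdInteractiveAuthentication no
KerberosAuthentication no
GSSAPIAuthentication no
X11Forwarding no
PermitUserEnvironment no
DebianBanner no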
Then validate the syntax of sshd_config with sudo sshd -t. Assuming it’s fine, restart SSH with sudo systemctl restart ssh. Then, in a new terminal on your local machine (without closing your current SSH session in the original terminal – just in case!):
- Confirm ssh root@<hetzner-ip-address> is refused
- Confirm ssh -o PubkeyAuthentication=no zarino@<hetzner-ip-address> is refused
- Confirm ssh zarino@<hetzner-ip-address> is accepted
You can now do the rest of the setup in that new, non-root user account.
Set up firewall
With logins secured, it’s time to set up Ubuntu’s firewall:
sudo ufw allow ssh
sudo ufw allow http
sudo ufw allow https
sudo ufw enable
sudo ufw status will show you that ports 22, 80, and 443 are allowed. Everything else is denied.
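The “everything else is denied” part comes from ufw’s default incoming policy. These are the defaults anyway, but if you’d rather state them explicitly:

sudo ufw default deny incoming
sudo ufw default allow outgoing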
Update system packages
I also updated system packages and removed a few packages I knew I wouldn’t need (unused packages are just a potential source of vulnerabilities!) – although, in the end, I think none of them had ever been installed in the first place:
sudo apt update
sudo apt full-upgrade -y
sudo apt autoremove -y
sudo apt-get purge --auto-remove telnetd ftp vsftpd samba nfs-kernel-server nfs-common
Set up unattended upgrades
sudo apt install unattended-upgrades
sudo systemctl enable unattended-upgrades
sudo systemctl start unattended-upgrades
Then confirm /etc/apt/apt.conf.d/20auto-upgrades contains:
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
APT::Periodic::AutocleanInterval "7";
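If that file doesn’t exist, or is missing those lines, you can regenerate it interactively (or simply create it by hand):

sudo dpkg-reconfigure --priority=low unattended-upgrades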
And perform a dry run with sudo unattended-upgrades --dry-run --debug.
You could also enable email notifications about security updates, by adding the following two lines to /etc/apt/apt.conf.d/50unattended-upgrades:
Unattended-Upgrade::Mail "your@email.com";
Unattended-Upgrade::MailOnlyOnError "true";
Set up Fail2Ban
Fail2Ban automatically blocks further connection attempts from IP addresses that fail authentication more than a given number of times. It’s useful for reducing spam in your logs from bots that just try (and fail) to connect to your SSH port again and again.
sudo apt install fail2ban
The fail2ban systemd service will automatically be enabled and started (see sudo systemctl status fail2ban.service).
And create a file at /etc/fail2ban/jail.local with content like this (substituting in values that make sense in your situation):
[DEFAULT]
bantime = 60m
findtime = 60m
maxretry = 3
destemail = you@example.com
sender = fail2ban@example.com
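On Ubuntu the sshd jail is normally enabled out of the box (via /etc/fail2ban/jail.d/defaults-debian.conf), but if you want to be explicit about it – or override settings for just that jail – you can add a per-jail section to the same jail.local:

[sshd]
enabled = true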
Note: The default fail2ban config will look for failures in the standard log file locations, like /var/log/auth.log for SSH. Modern versions of Ubuntu write their logs to the systemd journal (readable with journalctl), but if you have the rsyslog service running (which is enabled and started by default in Ubuntu 24.04) it will replicate the logs to the standard file locations, meaning fail2ban will still be able to find them. You can check whether rsyslog is running with sudo systemctl status rsyslog, and if it’s not running, you might want to set backend = systemd in your jail.local so that fail2ban knows to check the systemd journal for failed authentication attempts.
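In other words, if you do need the journal backend, it’s one extra line in the [DEFAULT] section of your jail.local:

[DEFAULT]
backend = systemd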
You can test your fail2ban config with:
sudo fail2ban-server -t
If you see an error about allowipv6 not being defined, you can either ignore it (fail2ban will try to work out automatically whether your server has an IPv6 address, and will ban suspicious attempts from IPv6 clients if it does), or you can add the line allowipv6 = yes to both the /etc/fail2ban/jail.local file shown above and the [Definition] config group in /etc/fail2ban/fail2ban.local, eg:
[Definition]
allowipv6 = yes
Finally, restart the fail2ban systemd service, to pick up your new configuration:
sudo systemctl restart fail2ban.service
You can check fail2ban started cleanly with:
sudo systemctl status fail2ban.service
Beware that Fail2Ban could potentially block you (well, your IP address) from accessing your own server for the bantime period, if you fail SSH authentication maxretry times within the findtime window. So you should make sure automatic SSH key authentication into your non-root user account is already set up and working fine (see above) before enabling Fail2Ban and ending your current SSH session. Maybe try logging in, in a new window, first!
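If you do lock yourself out anyway (the Hetzner Cloud web console still works, since it doesn’t go via SSH), you can lift the ban manually with something like the following, substituting your actual IP address and the relevant jail name:

sudo fail2ban-client set sshd unbanip <your-ip-address>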
You can check which “jails” fail2ban has created with:
sudo fail2ban-client status
And then see how many IP addresses are in each of those jails, by passing the name of the jail as a second argument – eg, for the sshd service jail:
sudo fail2ban-client status sshd
See my post on setting up daily monitoring emails if you’d like to receive a regular email summary of how many IP addresses fail2ban has banned.
Set up Docker
Last time I set up a webserver (an EC2 instance in 2020) I configured a LAMP stack from scratch. It was horrendous. This time, I decided to use Docker as much as possible – both to compartmentalise projects and pieces of software, and also to create a reproducible build that could be torn down and recreated on another server if I ever needed to.
Install Docker by following the instructions here and then follow the Linux post-install steps, namely:
- sudo groupadd docker (already existed)
- sudo usermod -aG docker $USER
- Log out and back in
- Confirm your user can run docker commands without sudo, eg: docker run hello-world
- sudo systemctl enable docker.service
- sudo systemctl enable containerd.service
I also enabled the “local” logging driver with "log-driver": "local" in Docker’s daemon.json.
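In other words, /etc/docker/daemon.json ends up containing something along these lines (create the file if it doesn’t already exist, and restart the docker service afterwards for it to take effect):

{
  "log-driver": "local"
}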
And finally, because I’m a lazy typist, I enabled bash completions for docker commands with docker completion bash > /etc/bash_completion.d/docker-compose.sh (run in a root shell).
Set up sendmail, logwatch, and sysstat
These aren’t necessary, but I wanted some form of regular monitoring of my server, via a daily/weekly email.
Setting these three things up is a little beyond the scope of this post – so I’ve written a more specific one here!
Set up Caddy via docker-compose
I created a directory at /opt/personal-hosting to store my docker provisioning stuff, and also initialised that as a Git repo, so that I could track changes, and pull it to my local machine, to make editing easier.
I also created a directory at /srv to store the source files for the simpler domains I wanted to host (eg: Jekyll’s static output for this blog, and the PHP source files for my parents’ website).
In all, the files and directories of interest were:
/
├ opt/
│ └ personal-hosting/
│ ├ etc-caddy/
│ │ ├ access_log.conf
│ │ ├ Caddyfile
│ │ └ security_headers.conf
│ ├ script/
│ │ └ caddy-reload
│ └ docker-compose.yml
├ srv/
│ ├ zarino.co.uk/
│ │ └ …
│ └ zappia.co.uk/
│ └ …
└ var/
└ log/
└ caddy/
docker-compose.yml
My initial docker-compose.yml looked like this:
services:
caddy:
container_name: caddy
hostname: caddy
image: caddy:latest
restart: unless-stopped
depends_on:
- php-fpm
cap_add:
- NET_ADMIN
ports:
- "80:80"
- "443:443"
- "443:443/udp"
networks:
- caddynet
volumes:
# Share directory containing Caddyfile, rather than Caddyfile itself,
# because of https://github.com/caddyserver/caddy-docker/issues/364
- ./etc-caddy:/etc/caddy:ro
# Share vhost directories, to serve static files from.
- /srv:/srv
# Share /var/log/caddy (created with `mkdir` and chmodded to be writeable).
- /var/log/caddy:/var/log/caddy
# Persist Caddy data and config across container restarts.
- caddy_data:/data
- caddy_config:/config
php-fpm:
container_name: php-fpm
hostname: php-fpm
image: php:fpm
restart: unless-stopped
networks:
- caddynet
volumes:
- /srv:/var/www/html
networks:
caddynet:
attachable: true
driver: bridge
volumes:
caddy_data:
caddy_config:
With this, I am able to start and stop the entire set of containers with:
docker compose up -d
docker compose down
Or start/stop an individual container with, eg:
docker compose up -d caddy
docker compose down caddy
Most well-maintained Docker images send their logging output to stdout, so you can read that output with, eg:
docker compose logs -f --tail 20 caddy
Commands that I run often, I tend to put into their own file, eg script/caddy-reload, which I run every time I’ve edited my Caddyfile:
docker compose exec caddy caddy reload --config /etc/caddy/Caddyfile
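The script itself is just a couple of lines – something along these lines, where the cd simply ensures docker compose runs from the repo root, wherever the script is invoked from:

#!/bin/sh
# Change to the directory above script/ (the repo root, next to docker-compose.yml).
cd "$(dirname "$0")/.."
docker compose exec caddy caddy reload --config /etc/caddy/Caddyfile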
Of course I can also just open a shell inside a service’s container, if I need to do anything more involved (swap bash for sh below if the image in question doesn’t include bash), eg:
docker compose exec caddy bash
Caddy config
My Caddyfile looks like this:
{
log default {
output stdout
format json
}
}
www.zarino.co.uk {
redir https://zarino.co.uk{uri}
}
zarino.co.uk {
import access_log.conf "zarino.co.uk"
import security_headers.conf
encode zstd gzip
root * /srv/zarino.co.uk
file_server
handle_errors {
rewrite * /{err.status_code}/
file_server
}
}
www.zappia.co.uk {
redir https://zappia.co.uk{uri}
}
zappia.co.uk {
import access_log.conf "zappia.co.uk"
import security_headers.conf
encode zstd gzip
root * /srv/zappia.co.uk
php_fastcgi php-fpm:9000 {
# Tell php-fpm where to find the PHP files _inside_ the Docker container.
# (Our docker-compose.yml maps /srv on the host to /var/www/html inside the container.)
root /var/www/html/zappia.co.uk
}
file_server
}
To save repeating the same config options again and again for each domain Caddy is hosting, I broke those out into their own partial files I could then import. Namely, access_log.conf:
log {
output file /var/log/caddy/{args[0]}.log
}
And security_headers.conf:
header /* {
Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
X-Content-Type-Options nosniff
X-Frame-Options sameorigin
Referrer-Policy strict-origin-when-cross-origin
Content-Security-Policy "default-src https:; font-src https: data:; img-src https: data: 'self' about:; script-src 'unsafe-inline' https: data:; style-src 'unsafe-inline' https:; connect-src https: data: 'self'"
}
Caddy handles the registration and management of SSL certificates for every domain, automatically. Which is, frankly, witchcraft.
Other things to note:
- The Caddyfile format is a breath of fresh air compared to nginx configs or (god forbid) Apache configs, but it still has its own quirks. In particular, note that directives inside your site blocks are re-ordered by Caddy before being applied, which can result in unexpected behaviour. I try to ensure the order of directives inside my blocks roughly matches the order that Caddy expects them, so there’s less opportunity for surprise.
- The zarino.co.uk site is simply hosting a bunch of static HTML, CSS, and image files, pre-compiled by Jekyll. So all it needs is a file_server block to handle that.
- The zappia.co.uk site, in comparison, is a PHP site. So it needs both the php_fastcgi block, and the file_server directive for any non-PHP static files.
- The php_fastcgi block is communicating with the php-fpm container, over port 9000. It’s really nice being able to refer to services from my docker-compose.yml file, by their hostname, in this Caddyfile – especially when you set up containers for each WordPress site you’re hosting, for example (see the sketch below).
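To illustrate that last point: a hypothetical extra site block for a WordPress container – say one called wordpress-blog in the same docker-compose.yml – might look something like this (the domain, hostname, port, and paths here are all invented for the example):

blog.example.com {
    root * /var/www/html
    php_fastcgi wordpress-blog:9000
    file_server
}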