SmallOps Part 2: Quadlets
In Part 1, I walked through how I deploy rootless podman containers as systemd user units in my homelab using jinja2 templates. I also said I don’t recommend that approach for anything new and that I replaced it with Quadlets in some client deployments. This post is about that replacement. I hope to migrate my homelab to quadlets in the near future but inertia is a bear…
For the most part, my templates and systemd units in the homelab work just fine and there isn’t a lot of reason for me to rip them out at the moment. However, after reading about Quadlets, I think they’re a better approach because they lean on podman’s own tooling rather than external glue I have to maintain. When I started standing up small services in a couple of my on-prem client sites, I decided to use Quadlets instead of another pile of jinja2 templates.
Quadlets are podman’s native systemd integration. They landed in podman 4.4
and have been stable for a bit now. Instead of writing a systemd unit with
podman run stuffed into ExecStart, you write a declarative .container
file. Podman ships a systemd generator that reads those files and emits real
.service units at daemon-reload time. You don’t see the generated unit
unless you go looking for it.
For a rootless user, the generator picks up files from a few places. The one
I use is ~/.config/containers/systemd/. The full search path also includes
$XDG_RUNTIME_DIR/containers/systemd/ and some /etc/containers/systemd/
locations for root or per-UID system configs.
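The whole activation loop for a rootless quadlet is short. A minimal sketch, assuming podman 4.4 or newer; the systemctl lines are commented out because they only make sense on the target host:

```shell
# Create the search-path directory the generator reads from:
mkdir -p ~/.config/containers/systemd

# Drop a .container file in and let the generator emit the real unit
# (host-only steps, shown for shape):
#   cp pihole.container ~/.config/containers/systemd/
#   systemctl --user daemon-reload
#   systemctl --user start pihole.service
#   systemctl --user cat pihole.service   # inspect the generated unit

# Confirm the directory exists:
test -d ~/.config/containers/systemd && echo ok
```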
There are companion file types for related resources: .network, .volume,
.pod, .build, and .kube. You can describe a custom podman network in a
.network file and reference it from one or more .container files, which
ends up being cleaner than managing the network as a separate ansible task.
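As a sketch of what that pairing looks like (the file and network names here are hypothetical, not from my deployments): a `web.network` quadlet, plus the line a `.container` file would use to join it.

```ini
# ~/.config/containers/systemd/web.network
[Network]
# Options like Subnet= and Gateway= can go here;
# podman picks sane defaults if you leave it empty.

# Then, in any .container file that should join it:
# [Container]
# Network=web.network
```

Referencing the `.network` file by name also gives you ordering for free: systemd knows the network unit has to come up before the containers that use it.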
Before and After
Here’s the jinja2 template from Part 1 that renders a systemd unit for my pi-hole container:
[Unit]
Description=Pi-hole Container
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
Restart=always
RestartSec=10
ExecStartPre=-/usr/bin/podman rm -f pihole
ExecStart=/usr/bin/podman run --name pihole \
{% for port in containers.pihole.ports %}
-p {{ port }} \
{% endfor %}
{% for key, val in containers.pihole.env.items() %}
-e {{ key }}={{ val }} \
{% endfor %}
{% for vol in containers.pihole.volumes %}
-v {{ vol }} \
{% endfor %}
--replace \
{{ containers.pihole.image }}
ExecStop=/usr/bin/podman stop pihole
ExecStopPost=-/usr/bin/podman rm -f pihole
[Install]
WantedBy=default.target
And here is roughly the same thing as a Quadlet:
# ~/.config/containers/systemd/pihole.container
[Unit]
Description=Pi-hole

[Container]
Image=docker.io/pihole/pihole:latest
PublishPort=192.168.50.15:53:53/tcp
PublishPort=192.168.50.15:53:53/udp
PublishPort=192.168.50.15:8080:80/tcp
Volume=%h/containers/pihole/etc-pihole:/etc/pihole:Z
Volume=%h/containers/pihole/etc-dnsmasq.d:/etc/dnsmasq.d:Z
Environment=TZ=America/New_York
Environment=DNSMASQ_LISTENING=all
AutoUpdate=registry

[Service]
Restart=on-failure
RestartSec=30

[Install]
WantedBy=default.target
A few things to point out here. All of the ExecStart, ExecStop, and
ExecStopPost lines are gone. I don’t need the podman rm -f safety net
because the generator handles cleanup on its own. I don’t need the
ExecStartPre pull hack either because AutoUpdate=registry is the supported
way to do that with a companion podman-auto-update.timer (more on that in a
future post, maybe). %h expands to the user’s home directory, so I don’t
have to hardcode paths for a service account.
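For completeness, the timer side of auto-update is a one-liner on the host. This is a sketch of the usual setup, not something the playbooks below do:

```shell
# Enable podman's auto-update timer for the rootless user (run on the host):
#   systemctl --user enable --now podman-auto-update.timer
# Preview which containers with AutoUpdate=registry would be updated:
#   podman auto-update --dry-run
```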
I also don’t have to write any jinja2. The .container file is the source of
truth. Ansible just copies it into place.
Real Example: Reverse Proxy
Here is a lightly anonymized example from one of my client sites. The job is a reverse proxy in front of a couple of internal web services. It’s running rootless podman on a Fedora server under a dedicated service account. Nothing fancy.
The .container file lives in the ansible repo as files/caddy.container:
[Unit]
Description=Caddy reverse proxy

[Container]
ContainerName=caddy
Image=docker.io/library/caddy:2
Network=web
PublishPort=3443:3443
PublishPort=3444:3444
PublishPort=3445:3445
Exec=caddy run --config /etc/caddy/caddy.json
Volume=%h/caddy-config/caddy.json:/etc/caddy/caddy.json:Z
Volume=%h/caddy-data:/data:Z

[Install]
WantedBy=default.target
Network=web refers to a podman network I create once on the host. Every
service that needs to be reachable by the proxy joins it. I could describe
that network in a .network Quadlet file, but for these sites I create it
directly with the containers.podman.podman_network module because I was
already doing that before I moved to Quadlets and it still works fine.
The ansible that deploys it looks like this:
- name: Open reverse proxy ports in firewall
  become: true
  ansible.posix.firewalld:
    port: "{{ item }}"
    permanent: true
    immediate: true
    state: enabled
  loop:
    - 3443/tcp
    - 3444/tcp
    - 3445/tcp

- name: Create podman network for web services
  containers.podman.podman_network:
    name: web
    state: present
  become: false

- name: Create Caddy data and config directories
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
    mode: "0755"
  loop:
    - ~/caddy-data
    - ~/caddy-config
  become: false

- name: Deploy Caddy JSON config
  ansible.builtin.copy:
    src: files/caddy.json
    dest: ~/caddy-config/caddy.json
    mode: "0644"
  become: false
  register: caddy_config

- name: Create Quadlet directory
  ansible.builtin.file:
    path: ~/.config/containers/systemd
    state: directory
    mode: "0755"
  become: false

- name: Deploy Caddy Quadlet unit
  ansible.builtin.copy:
    src: files/caddy.container
    dest: ~/.config/containers/systemd/caddy.container
    mode: "0644"
  become: false
  register: caddy_quadlet

- name: Reload systemd user daemon
  ansible.builtin.systemd:
    daemon_reload: true
    scope: user
  become: false

- name: Enable and start Caddy service
  ansible.builtin.systemd:
    name: caddy
    scope: user
    enabled: true
    state: "{{ (caddy_quadlet.changed or caddy_config.changed) | ternary('restarted', 'started') }}"
  become: false
Compared with Part 1, the big differences are:
- `ansible.builtin.copy` instead of `ansible.builtin.template`, so there isn't a jinja2 rendering step and no per-container template file in `playbooks/templates/`.
- The service state is driven by whether the Quadlet file or its backing config changed. The `ternary` bit restarts only when something actually moved, which is the behavior I want for a long-lived proxy.
- I can `systemctl --user` the service by its plain name (caddy). The generator names the service after the `.container` file, so `caddy.container` becomes `caddy.service`.
- IPs and other host-specific values are hardcoded in the `.container` file now that jinja2 isn't in the loop, which isn't great. For a single host it's probably fine. For multiple hosts I'd either template the Quadlet file (back to `ansible.builtin.template`, just producing a `.container` instead of a `.service`) or lean harder on Quadlet's own substitutions like `%h` and drop site-specific bits into a `.env` file pulled in with `EnvironmentFile=`. I haven't had to do that yet because each client site is a single box, but it'll come up.
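If I do go the EnvironmentFile= route, the sketch would look something like this (file name and variable are hypothetical). One caveat: EnvironmentFile= feeds the container's environment, so it covers app-level config but not unit-level values like the address in a PublishPort= line.

```ini
# %h/caddy-config/site.env -- per-host values, deployed by ansible
# SITE_NAME=client-a

# In the .container file:
# [Container]
# EnvironmentFile=%h/caddy-config/site.env
```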
To show how minimal these can get, here is the Uptime Kuma instance I run next to the proxy:
[Unit]
Description=Uptime monitoring

[Container]
ContainerName=uptime-monitor
Image=docker.io/louislam/uptime-kuma:1
Network=web
Volume=%h/uptime-data:/app/data:Z

[Install]
WantedBy=default.target
It's pretty slim overall. The ansible play that deploys it is basically the same shape as the Caddy one with different file names. I like the shape of using quadlets better at this point. It's less to manage and it lets me use existing tools instead of cobbling my own together.
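One small thing worth internalizing: the generator derives unit names purely from file names, not from ContainerName=. A toy helper (hypothetical, just string surgery, not a podman command) makes the mapping explicit:

```shell
# Map a quadlet file name to the systemd unit the generator will emit.
unit_for() {
  printf '%s.service\n' "${1%.container}"
}

unit_for caddy.container    # -> caddy.service
unit_for pihole.container   # -> pihole.service
```

So the uptime-kuma container above is addressed by whatever its `.container` file is named, while `podman ps` shows it as uptime-monitor.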
#smallops #ansible #podman #quadlets #linux #devoops #devops #sysadmin #containers