SmallOps Part 1: Rootless Podman Containers as Systemd Units with jinja2 Templates
When I started writing this post, it grew substantially larger than intended. I initially set out to document how I’ve moved from deploying podman containers with jinja2 templates to using quadlets, but the writing exposed an idea that’s been in the back of my head for a bit: SmallOps. General sysadmin/devoops stuff that isn’t big enterprise and isn’t directly MSP-type management.
I’ve decided to break this information up into posts with some practical examples. I’m starting with how I deploy podman containers in my homelab using jinja2 templates. I’ll expand from there, probably moving beyond podman as a topic.
I am not a massive fan of systemd or k8s by any stretch of the imagination, but I am also relatively pragmatic. Although I’ve dealt with containers professionally for quite a while now, I did not heavily adopt them in my own infrastructure until recent years. While the world rode the serverless hype-wave, I took time to evaluate my options over those years.
A lot of folks jump directly to k8s for their projects when it’s often not needed. I tend to only reach for k8s whenever I know that I will need to make large scaling leaps quickly. For everything else, a few containers running on a VM or VPS is totally fine (especially in the homelab).
My infra-related client base falls into two groups at the moment:
- Cloud-dependent or cloud-native types
- On-prem traditional rack types
For the first group, I lean into k8s or various serverless options from the big cloud providers. Typically they either need scaling agility and/or they want more managed services and abstractions. For the second group, I lean into VMs and containers because they are usually running very small workloads. Both can be managed with OpenTofu and Ansible equally well. I like to consider the second group as “SmallOps” and it’s a different kind of management approach.
I know Docker is the king of containers these days and I interface with it daily. For my homelab and the second group of clients, I’ve reached for Podman instead of Docker for two main reasons: daemonless architecture and rootless operation. A third (and maybe weightier) reason is that Fedora and Red Hat servers have a high adoption rate among my clients. I would eventually like to look into k8s integrations with podman but that’s a post/exercise for another day.
Antiquated Deployments in the Homelab
A while back, my main home server shuffled off this mortal coil. I used the opportunity to completely start over and rebuild everything with cleaner playbooks. My container layout on the old server was chaotic at best and I wanted to get it under control.
I had conceded the systemd fight ages ago but knew I would have to deal with it in the homelab if I wanted to keep up with the rest of the world and have access to quality documentation. I had read about people creating systemd units for their containers and I thought this would be a useful thing to have in my environment.
At the time, I had not yet read about Quadlets, so I ended up using jinja2 templates in my playbooks that create systemd units for me. While there isn’t necessarily anything wrong with this process, I don’t recommend it for regular use. There are easier ways to handle this these days, but I’m writing it out for historical reference.
For this post, I’ll go over a high level look at what I have and then provide specific examples with my pi-hole container. My basic deployment pipeline in the homelab looks something like this:
- Container definitions live in yaml dicts in my ansible host_vars (one per host). Each container is defined with its image, ports, volumes, env vars, optional devices, and mount flags.
- Jinja2 templates in `playbooks/templates/` generate systemd `.service` files. Each container gets its own template (e.g., `container-pihole.service.j2`). These are not generic; each template is written for the specific container’s needs.
- Ansible deploys the rendered service files to `~/.config/systemd/user/` on the target host, then triggers `systemctl --user daemon-reload` and restarts the service via handlers.
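The `reload user systemd` handler referenced in that last step isn’t written out elsewhere in this post, so here is a sketch of what such a handler could look like. The env exports mirror what user-scope `systemctl` tends to need when run through `become`; the exact details here are an assumption, not a copy of my handlers file:

```yaml
# Sketch of a "reload user systemd" handler (assumed, not my exact handler)
handlers:
  - name: reload user systemd
    become: true
    become_user: "{{ primary_user }}"
    ansible.builtin.shell: |
      export XDG_RUNTIME_DIR=/run/user/$(id -u)
      export DBUS_SESSION_BUS_ADDRESS=unix:path=$XDG_RUNTIME_DIR/bus
      systemctl --user daemon-reload
```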
The systemd unit structure I have looks something like this for all of the containers:
[Unit]
Description=Podman container-<name>
Wants=network-online.target
After=network-online.target
[Service]
Type=simple
ExecStartPre=/usr/bin/podman pull <image>
ExecStart=/usr/bin/podman run --rm --name <name> \
  -p <host>:<container> \
  -v <host_path>:<container_path>:Z \
  -e KEY=VALUE \
  <image>
ExecStop=/usr/bin/podman stop -t 10 <name>
ExecStopPost=/usr/bin/podman rm -f <name>
Restart=on-failure
RestartSec=30
[Install]
WantedBy=default.target

A few things to note here: `ExecStartPre` pulls the latest image before starting (a hacky auto-update mechanism because I am lazy). The `--rm` flag cleans up the container on stop. `ExecStopPost` with `podman rm -f` is a safety net of sorts to avoid a stopped-but-not-removed state. `WantedBy=default.target` (not `multi-user.target`) because these are user units.
I won’t get into variations by container for now, but my pi-hole needs to bind to port 53. Unprivileged processes can’t bind ports below 1024 by default, so I have to allow users to do that. This is pretty sketchy from my perspective. I would like to find a better way to handle this at some point but the risk is tolerable in my homelab. Here’s the snippet in my `site.yml` file that sets that up:
- name: Allow unprivileged users to bind low ports
  become: true
  ansible.posix.sysctl:
    name: net.ipv4.ip_unprivileged_port_start
    value: "53"
    sysctl_set: true
    state: present
    reload: true
  tags: [containers, system]

Also, one other quirk I use is that I enable lingering for user services:
- name: Enable lingering for {{ primary_user }}
  become: true
  ansible.builtin.command:
    cmd: loginctl enable-linger {{ primary_user }}
  changed_when: false
  tags: [containers, system]

Going back to the deployment pipeline I mentioned above, here is my container
definition for my host_vars file for the server pi-hole lives on (and I need to fix that timezone…):
pihole:
  image: docker.io/pihole/pihole:latest
  ports:
    - "{{ bind_ip }}:53:53/tcp"
    - "{{ bind_ip }}:53:53/udp"
    - "127.0.0.1:53:53/tcp"
    - "127.0.0.1:53:53/udp"
    - "{{ bind_ip }}:8080:80/tcp"
  env:
    TZ: America/New_York
    DNSMASQ_LISTENING: all
  volumes:
    - "{{ containers_dir }}/pihole/etc-pihole:/etc/pihole:Z"
    - "{{ containers_dir }}/pihole/etc-dnsmasq.d:/etc/dnsmasq.d:Z"

Here is the jinja2 template I use for my pi-hole container:
[Unit]
Description=Pi-hole Container
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
Restart=always
RestartSec=10
ExecStartPre=-/usr/bin/podman rm -f pihole
ExecStart=/usr/bin/podman run --name pihole \
{% for port in containers.pihole.ports %}
-p {{ port }} \
{% endfor %}
{% for key, val in containers.pihole.env.items() %}
-e {{ key }}={{ val }} \
{% endfor %}
{% for vol in containers.pihole.volumes %}
-v {{ vol }} \
{% endfor %}
--replace \
{{ containers.pihole.image }}
ExecStop=/usr/bin/podman stop pihole
ExecStopPost=-/usr/bin/podman rm -f pihole
[Install]
WantedBy=default.target

Lastly, here’s the remaining setup bits in my site.yml file:
- name: Create container directories
  become: true
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
    owner: "{{ primary_user }}"
    group: "{{ primary_user }}"
    mode: "0755"
  loop:
    # other dir defs go here
    - "{{ containers_dir }}/pihole"
    - "{{ containers_dir }}/pihole/etc-pihole"
    - "{{ containers_dir }}/pihole/etc-dnsmasq.d"
    # other dir defs go here, too
  tags: [containers]

- name: Create user systemd directory
  become: true
  ansible.builtin.file:
    path: "{{ primary_home }}/.config/systemd/user"
    state: directory
    owner: "{{ primary_user }}"
    group: "{{ primary_user }}"
    mode: "0755"
  tags: [containers]

# install other services before this
- name: Install Pi-hole user service
  become: true
  become_user: "{{ primary_user }}"
  ansible.builtin.template:
    src: templates/container-pihole.service.j2
    dest: "{{ primary_home }}/.config/systemd/user/container-pihole.service"
    mode: "0644"
  notify: reload user systemd
  tags: [containers]

- name: Flush handlers before enabling services
  ansible.builtin.meta: flush_handlers

- name: Enable and start container services
  become: true
  become_user: "{{ primary_user }}"
  ansible.builtin.shell: |
    export XDG_RUNTIME_DIR=/run/user/$(id -u)
    export DBUS_SESSION_BUS_ADDRESS=unix:path=$XDG_RUNTIME_DIR/bus
    systemctl --user enable {{ item }}
    systemctl --user start {{ item }}
  args:
    executable: /bin/bash
  loop:
    # other services here
    - container-pihole.service
    # yet more services here
  when: not ansible_check_mode
  changed_when: true
  tags: [containers]

With all of that configuration out of the way, the final process looks something like this:
- I run the play with `ansible-playbook -i playbooks/inventory.ini playbooks/site.yml`
- Ansible picks up my inventory and vars
- Lingering is enabled so my containers don’t stop on logout
- The appropriate directories are created
- The template is rendered and the `reload user systemd` handler is notified
- Flush handlers is run so that systemd picks up the new/changed unit
- The container is enabled/started
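For reference, the rendered service file that lands on the host ends up looking roughly like this. The IP and paths below are placeholder assumptions (a `bind_ip` of 192.168.1.10 and a `containers_dir` of /home/user/containers), not my real values:

```ini
[Unit]
Description=Pi-hole Container
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
Restart=always
RestartSec=10
ExecStartPre=-/usr/bin/podman rm -f pihole
ExecStart=/usr/bin/podman run --name pihole \
  -p 192.168.1.10:53:53/tcp \
  -p 192.168.1.10:53:53/udp \
  -p 127.0.0.1:53:53/tcp \
  -p 127.0.0.1:53:53/udp \
  -p 192.168.1.10:8080:80/tcp \
  -e TZ=America/New_York \
  -e DNSMASQ_LISTENING=all \
  -v /home/user/containers/pihole/etc-pihole:/etc/pihole:Z \
  -v /home/user/containers/pihole/etc-dnsmasq.d:/etc/dnsmasq.d:Z \
  --replace \
  docker.io/pihole/pihole:latest
ExecStop=/usr/bin/podman stop pihole
ExecStopPost=-/usr/bin/podman rm -f pihole

[Install]
WantedBy=default.target
```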
So far this is idempotent and works pretty well. I might upload the playbooks to a public git forge in the future for some clarity. For the moment, feel free to reach out on the fediverse or via email if this is confusing.
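As an aside, the shell-based enable/start task could probably be replaced with the `systemd` module in user scope. I haven’t swapped this in myself (user-scope D-Bus through `become` can be finicky, which is why my shell version exports those variables), so treat this as an untested sketch:

```yaml
# Possible alternative to the shell-based enable/start task (untested sketch)
- name: Enable and start container services
  become: true
  become_user: "{{ primary_user }}"
  ansible.builtin.systemd:
    name: "{{ item }}"
    scope: user
    enabled: true
    state: started
  loop:
    - container-pihole.service
  tags: [containers]
```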
As I stated earlier in the post, I don’t recommend this approach. In the next post in this series, I’ll cover how I replaced all of this with quadlets when I began deploying these sorts of services in my clients’ networks.
#smallops #ansible #podman #linux #devoops #devops #sysadmin #containers