If you're running more than a handful of Docker containers in your homelab, you already know the pain. You've got Jellyfin, Vaultwarden, Gitea, a reverse proxy, monitoring stack, maybe a game server or two — and every single one of them needs updating at some point. Manually checking for new images, pulling them down, recreating containers, hoping nothing breaks... it gets old fast.
Worse still, if you forget about one for a few months, you might be running a version with known vulnerabilities and not even realise it. Not ideal.
So let's talk about Tugtainer — a tool that takes the hassle out of keeping your Docker containers up to date, and does it with a proper web UI so you can actually see what's going on.
What is Tugtainer?
Tugtainer is an open-source, self-hosted application built by Quenary for automating Docker container updates. It gives you a clean web interface where you can see all your running containers, check which ones have updates available, and either update them automatically on a schedule or do it manually with a click.
What sets it apart from other update tools is its hub and agent architecture. You run the main Tugtainer hub on one machine, and then deploy lightweight agents on any other Docker hosts you want to manage. Everything gets controlled from a single dashboard. If you've got containers spread across multiple servers — and let's be honest, most homelabs end up that way — this is a massive quality of life improvement.
Hub + Agent Architecture
The architecture is pretty straightforward once you wrap your head around it:
- The Hub is the main Tugtainer instance. It runs the web UI on port 9412 and is where you manage everything — container updates, schedules, notifications, the lot. The hub can also manage containers on the machine it's running on by mounting the Docker socket.
- Agents are lightweight containers that run on your remote Docker hosts. Each agent connects back to the hub and exposes the Docker environment on that host. They listen on port 8001 (mapped to whatever you like on the host side) and authenticate with a shared secret.
So if you've got three servers in your homelab — say a main box, a NAS, and a little Raspberry Pi running a few things — you run the hub on your main box and an agent on each of the other two. Then you manage all your containers across all three machines from one dashboard. No SSH-ing into each box, no remembering which containers live where. It's all right there.
Setting Up the Hub
Getting the hub running is dead simple. Create a volume for persistent data, then fire up the container:
```shell
docker volume create tugtainer_data

docker run -d -p 9412:80 \
  --name=tugtainer \
  --restart=unless-stopped \
  -v tugtainer_data:/tugtainer \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  ghcr.io/quenary/tugtainer:1
```
That's it. The hub will start up and you can access the web UI at http://your-server-ip:9412. One note on the socket mount: the :ro flag makes the socket file itself read-only, but the Docker API behind it still accepts the full range of calls — that's how Tugtainer is able to pull images and recreate containers. If you want tighter isolation, you can put a socket proxy in front of the Docker socket instead of mounting it directly.
From the web UI you'll be able to see all the containers running on the hub's host straight away. Have a poke around — it's pretty intuitive.
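If you'd rather manage the hub with Docker Compose, the same setup translates naturally. This is a sketch based on the docker run command above, not the project's canonical compose file — check the Tugtainer README for the official version:

```yaml
services:
  tugtainer:
    image: ghcr.io/quenary/tugtainer:1
    container_name: tugtainer
    restart: unless-stopped
    ports:
      - "9412:80"   # web UI
    volumes:
      - tugtainer_data:/tugtainer                       # persistent data
      - /var/run/docker.sock:/var/run/docker.sock:ro    # local Docker access

volumes:
  tugtainer_data:
```

Drop that in a docker-compose.yml and `docker compose up -d` gets you the same result as the run command.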
Setting Up an Agent on a Remote Host
For each additional Docker host you want to manage, you deploy an agent. SSH into the remote machine and run:
```shell
docker run -d -p 9413:8001 \
  --name=tugtainer-agent \
  --restart=unless-stopped \
  -e AGENT_SECRET="your-secret-here" \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  ghcr.io/quenary/tugtainer-agent:1
```
Replace your-secret-here with a decent password or passphrase. This secret is what the hub uses to authenticate with the agent, so make it something solid.
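If you're stuck for a secret, a quick way to generate a strong one is with openssl, which ships with virtually every Linux distro:

```shell
# Generate a random 32-byte secret (printed as 64 hex characters)
openssl rand -hex 32
```

Copy the output into the AGENT_SECRET environment variable when you start the agent, and keep it somewhere safe — you'll need to enter the same value in the hub.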
Once the agent is running, head back to the Tugtainer hub web UI and navigate to Menu > Hosts. Add the agent's IP address and port (9413 in this example), pop in the secret you set, and you're connected. The remote host's containers will show up in your dashboard alongside everything else.
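The agent also works nicely as a Compose service if that's how you run the rest of that host. Again, this is a sketch derived from the docker run command above rather than an official file:

```yaml
services:
  tugtainer-agent:
    image: ghcr.io/quenary/tugtainer-agent:1
    container_name: tugtainer-agent
    restart: unless-stopped
    ports:
      - "9413:8001"   # host port 9413 -> agent's internal port 8001
    environment:
      - AGENT_SECRET=your-secret-here   # must match the secret you enter in the hub
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
```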
Key Features
Tugtainer packs in a fair few features for a relatively young project:
- Web UI for managing all your containers in one place — No more terminal-only workflows. You can see at a glance which containers have updates available, which are up to date, and which need attention.
- Per-container configuration — You get granular control over each container. Want some to auto-update as soon as a new image drops? Done. Want others to just notify you that an update is available so you can do it manually? Also done. You choose what level of automation you're comfortable with for each container.
- Docker Compose awareness — This is a big one. Tugtainer understands Docker Compose project dependencies. When updating containers that are part of a Compose stack, it stops them in dependency order, most-dependent first, and starts them back up in reverse. No more broken stacks because a database container got yanked out from under your app.
- Crontab scheduling — Set up automated update checks on whatever schedule suits you. Daily at 3am? Weekly on a Sunday? Every six hours? Configure it with standard crontab syntax and let it do its thing.
- Private registry support — If you're pulling images from a private registry (your own GitLab, GitHub Container Registry with auth, whatever), Tugtainer handles that too.
- Notifications via Apprise — Supports a massive list of notification services including Discord, Telegram, Slack, email, and heaps more. More on this below.
Adding Remote Hosts
Once you've got agents running on your other machines, adding them to the hub is a doddle:
- In the Tugtainer web UI, navigate to Menu > Hosts.
- Click to add a new host.
- Enter the agent's IP address and port (e.g., 192.168.1.50:9413).
- Enter the agent secret you configured when you started the agent container.
- Save, and the remote host's containers will appear in your dashboard.
You can add as many hosts as you like. Each one shows up as a separate host in the UI, so you can easily see which containers are running where and manage them all from the one spot.
Notification Setup
Tugtainer uses Apprise for notifications, which is brilliant because Apprise supports an absolutely massive list of notification services. We're talking Discord, Telegram, Slack, Pushover, email (SMTP), Gotify, ntfy, Matrix, Microsoft Teams, and dozens more. If it can receive a notification, Apprise probably supports it.
You configure notifications in the Tugtainer settings panel. Just add your Apprise URL for whatever service you want to use and you're sorted.
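Apprise URLs follow a service://credentials pattern. A few common examples, with placeholder tokens and IDs you'd substitute with your own:

```
discord://webhook_id/webhook_token     # Discord webhook
tgram://bot_token/chat_id              # Telegram bot
mailtos://user:password@gmail.com      # email over TLS
ntfy://ntfy.sh/my_topic                # ntfy topic
```

The Apprise project wiki documents the exact URL format for every supported service, so that's the place to look up whichever one you use.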
Notification results tell you exactly what happened with each container:
- not_available — No update available, container is already on the latest image.
- available — A newer image exists but the container hasn't been updated (useful if you've set it to notify-only mode).
- updated — The container was successfully updated to the new image.
- rolled_back — The update was attempted but something went wrong, so it rolled back to the previous image.
- failed — The update failed and couldn't be rolled back. Time to investigate.
Having rollback built into the notification flow is a nice touch. If an update breaks a container, Tugtainer will try to put it back the way it was and let you know about it. Better than finding out at 2am when your media server stops working.
Limitations
No tool is perfect, and Tugtainer has a few things to be aware of:
- Cannot update itself. Tugtainer won't update the tugtainer, tugtainer-agent, or socket-proxy containers. You'll need to update these manually. Makes sense when you think about it — you don't want the updater to pull the rug out from under itself mid-update.
- Not production-ready (yet). The developer is upfront about this — Tugtainer is still in active development and isn't recommended for production workloads. For homelab use though? It works a treat. Just don't go deploying it to manage your company's container fleet without proper testing.
Tugtainer vs Watchtower
If you've been in the Docker homelab space for any amount of time, you've probably heard of Watchtower. It's been the go-to for automatic container updates for years. So how does Tugtainer stack up?
- Web UI: Tugtainer has one. Watchtower doesn't — it's entirely headless and configured through environment variables and command-line flags. If you want to see what's happening, you read the logs. Tugtainer gives you a proper dashboard.
- Multi-host management: Tugtainer supports managing containers across multiple Docker hosts via its agent architecture. Watchtower is single-host only — you need a separate Watchtower instance on each machine, each configured independently.
- Docker Compose awareness: Tugtainer understands Compose project dependencies and handles shutdown/startup order correctly. Watchtower doesn't have this awareness — it just updates containers individually.
- Maturity: Watchtower has been around for years, has a larger user base, and is more battle-tested. It's a known quantity. Tugtainer is the newer kid on the block with more features but less mileage.
- Simplicity: Watchtower is dead simple to deploy — one container, a few environment variables, done. Tugtainer has a bit more setup involved, especially if you're deploying agents, but you get a lot more control in return.
Honestly, both tools have their place. If you want something simple that just quietly updates your containers in the background and you only have one Docker host, Watchtower is still perfectly fine. But if you want visibility into what's happening, control over which containers get updated and when, and you're managing multiple servers — Tugtainer is well worth a look.
Wrapping Up
Keeping Docker containers updated is one of those chores that's easy to put off and easy to forget about. Tugtainer takes that problem off your plate and gives you a proper interface to manage it all. The hub and agent architecture means it scales across your whole homelab, and the Compose-aware updating means it's not going to make a mess of your stacks.
It's still a young project, but it's actively developed and already covers the features most homelabbers need. Give it a go — worst case, you spin it down and go back to manually pulling images at midnight. But I reckon once you've got that dashboard showing all your containers across all your hosts, you won't want to go back.
Peebee Software Solutions