If you've been building your own Docker images for your homelab or side projects, you've probably been pushing them to Docker Hub. And that works fine — until you hit the rate limits, or you realise you're uploading your custom configs to someone else's servers, or you just get sick of waiting for pulls to come back over the internet when the image is sitting on a machine two metres away.

The good news is, running your own private Docker Registry is dead simple. Chuck a web UI on top of it and you've got yourself a proper self-hosted container image store with a nice interface for browsing and managing your images. Let's get into it.

Why Host Your Own Registry?

There are a few solid reasons to run your own:

  • Keep your images private. Your custom images stay on your network. No third parties involved. If you're baking secrets or configs into images (you probably shouldn't, but we've all done it), at least they're not sitting on someone else's infrastructure.
  • Faster pulls on your local network. Pulling a 500MB image from Docker Hub takes a while. Pulling the same image from a registry on your LAN? Basically instant. If you're spinning up containers regularly or redeploying across multiple machines, the speed difference is noticeable.
  • No Docker Hub rate limits. Docker Hub's free tier has pull rate limits — 100 pulls per 6 hours for anonymous users, 200 for authenticated. If you've got a bunch of machines pulling images, you can chew through that pretty quick. Your own registry has no limits.
  • Full control. You decide how long images are kept, when garbage collection runs, who has access — the lot. It's your infrastructure, your rules.

What You'll Need

Not much, honestly. If you've already got a machine running Docker and Docker Compose, you're good to go. That's literally it. If you followed along with the HP EliteDesk homelab post, you've already got the perfect box for this.

Setting Up the Registry

Docker's official registry image makes this a one-service Compose file. Create a new directory for your registry project and add a docker-compose.yml:

version: "3.8"

services:
  registry:
    image: registry:2
    container_name: docker-registry
    restart: unless-stopped
    ports:
      - "5000:5000"
    volumes:
      - registry-data:/var/lib/registry
    environment:
      REGISTRY_STORAGE_DELETE_ENABLED: "true"

volumes:
  registry-data:

That's the whole thing. The registry:2 image is Docker's official registry. We're mapping port 5000, giving it a named volume so your images survive the container being removed or recreated, and enabling the delete API so we can clean up old images later. The unless-stopped restart policy means it'll come back up after a reboot automatically.

Fire it up:

docker compose up -d
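Before moving on, you can sanity-check that the registry is answering. The v2 API root returns an empty JSON body with a 200 status when everything's healthy (this assumes you're running it on the same machine; swap in your server's IP otherwise):

```shell
# The /v2/ endpoint is the registry API's health check of sorts.
# A 200 response with an empty JSON body means it's alive.
curl -i http://localhost:5000/v2/
```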

Testing It Works

Let's make sure the registry is accepting images. We'll pull a small public image, tag it for our registry, push it, then pull it back down.

# Pull a small test image from Docker Hub
docker pull alpine:latest

# Tag it for your local registry
# Replace "localhost" with your server's IP if you're on another machine
docker tag alpine:latest localhost:5000/my-alpine:latest

# Push it to your registry
docker push localhost:5000/my-alpine:latest

# Remove the local copies to prove the pull works
docker rmi alpine:latest
docker rmi localhost:5000/my-alpine:latest

# Pull it back from your registry
docker pull localhost:5000/my-alpine:latest

If that all works without errors, your registry is up and running. Bonzer.
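You can also poke the registry's HTTP API directly to see what's in there. These are standard registry API endpoints (again, swap localhost for your server's IP if you're on another machine):

```shell
# List all repositories the registry knows about
curl http://localhost:5000/v2/_catalog

# List the tags for a specific repository
curl http://localhost:5000/v2/my-alpine/tags/list
```

After the push above, the catalog should include my-alpine.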

Adding a Web UI

A registry on its own works fine, but it's a bit of a black box. You can't easily browse what images are in there, check tags, or delete old ones without hitting the API directly. That's where joxit/docker-registry-ui comes in — it gives you a clean web interface for your registry.

Let's add it to our Compose file. Here's the updated docker-compose.yml with both services:

version: "3.8"

services:
  registry:
    image: registry:2
    container_name: docker-registry
    restart: unless-stopped
    ports:
      - "5000:5000"
    volumes:
      - registry-data:/var/lib/registry
    environment:
      REGISTRY_STORAGE_DELETE_ENABLED: "true"
      REGISTRY_HTTP_HEADERS_Access-Control-Allow-Origin: '["*"]'
      REGISTRY_HTTP_HEADERS_Access-Control-Allow-Methods: '["HEAD", "GET", "OPTIONS", "DELETE"]'
      REGISTRY_HTTP_HEADERS_Access-Control-Allow-Headers: '["Authorization", "Accept", "Cache-Control"]'
      REGISTRY_HTTP_HEADERS_Access-Control-Expose-Headers: '["Docker-Content-Digest"]'

  registry-ui:
    image: joxit/docker-registry-ui:latest
    container_name: docker-registry-ui
    restart: unless-stopped
    ports:
      - "5080:80"
    environment:
      REGISTRY_TITLE: "My Docker Registry"
      NGINX_PROXY_PASS_URL: "http://registry:5000"
      DELETE_IMAGES: "true"
      SINGLE_REGISTRY: "true"
    depends_on:
      - registry

volumes:
  registry-data:

A few things to note here. The UI container runs on port 5080, so you'll access it at http://your-server-ip:5080 in your browser. The NGINX_PROXY_PASS_URL tells the UI where to find the registry — since they're on the same Docker network, we just use the service name. We've also added CORS headers to the registry service so the UI's browser-based requests don't get blocked.

The DELETE_IMAGES environment variable enables a delete button in the UI, which is handy for cleaning up old tags you don't need anymore.

Bring it all up:

docker compose up -d

Now open http://your-server-ip:5080 in your browser and you should see a clean interface showing your registry's contents. If you pushed that test Alpine image earlier, it'll be sitting right there.

Configuring Docker to Trust Your Registry

Since we're running this over plain HTTP on the local network (no TLS), Docker on other machines will refuse to talk to it by default. Docker expects registries to use HTTPS unless you explicitly tell it otherwise.

On every machine that needs to push or pull from your registry, you'll need to edit (or create) the Docker daemon config file at /etc/docker/daemon.json:

{
  "insecure-registries": ["your-server-ip:5000"]
}

Replace your-server-ip with the actual IP address of the machine running the registry — something like 192.168.1.50:5000.

Then restart Docker:

sudo systemctl restart docker

Note: This is fine for a home network where you trust all the devices. If you're exposing your registry beyond your LAN, you really should set up TLS with a proper certificate instead of using insecure registries. Let's Encrypt makes this free and fairly painless with a reverse proxy like Traefik or Caddy in front.
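To double-check the daemon actually picked up the setting, docker info lists the configured insecure registries. A rough check (the exact output layout can vary a little between Docker versions):

```shell
# Your registry's address should appear under "Insecure Registries"
docker info | grep -A 3 "Insecure Registries"
```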

Using It Day-to-Day

Once everything's running, the workflow is pretty straightforward. Say you've built a custom image on your dev machine:

# Build your image as normal
docker build -t my-cool-app:v1.0 .

# Tag it for your private registry
docker tag my-cool-app:v1.0 192.168.1.50:5000/my-cool-app:v1.0

# Push it
docker push 192.168.1.50:5000/my-cool-app:v1.0

Then on any other machine on your network (that has the insecure-registries config), you can pull it straight down:

# On another machine
docker pull 192.168.1.50:5000/my-cool-app:v1.0

You can also reference the image directly in your docker-compose.yml files on other machines:

services:
  my-app:
    image: 192.168.1.50:5000/my-cool-app:v1.0
    restart: unless-stopped
    ports:
      - "8080:8080"

No more messing about with Docker Hub logins or worrying about rate limits. It just works.
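If you find yourself doing the tag-then-push dance a lot, a small shell helper saves some typing. This is just a sketch; push_to_registry and the REGISTRY address are made-up names for illustration:

```shell
# Set this to your registry's address (example value)
REGISTRY="192.168.1.50:5000"

# Tag a local image for the private registry and push it in one step
push_to_registry() {
  local image="$1"                         # e.g. my-cool-app:v1.0
  docker tag "$image" "$REGISTRY/$image"   # retag for the private registry
  docker push "$REGISTRY/$image"           # push it up
}

# Usage:
# push_to_registry my-cool-app:v1.0
```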

Cleaning Up Old Images

Over time, your registry will accumulate old image layers that nothing references anymore. You can reclaim that space with the registry's built-in garbage collection.

First, you can delete specific tags through the web UI if you enabled DELETE_IMAGES. But that only removes the manifests; the underlying layer blobs stay on disk. To actually free up the space, you need to run garbage collection:

# Run garbage collection (dry run first to see what would be removed)
docker exec docker-registry bin/registry garbage-collect \
  /etc/docker/registry/config.yml --dry-run

# If that looks right, run it for real
docker exec docker-registry bin/registry garbage-collect \
  /etc/docker/registry/config.yml

It's worth running this every now and then, especially if you're pushing new versions of images frequently. One caveat: garbage collection assumes nothing is being written to the registry while it runs, so avoid pushing images at the same time (or stop the registry container first). You could even set up a cron job to run it weekly if you're keen.
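If you do go the cron route, a single crontab entry is all it takes. The schedule and log path here are just suggestions; add it with crontab -e on the registry host:

```shell
# m h dom mon dow  command
# Run garbage collection every Sunday at 3am, logging the output
0 3 * * 0 docker exec docker-registry bin/registry garbage-collect /etc/docker/registry/config.yml >> /var/log/registry-gc.log 2>&1
```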

Tips and Things to Keep in Mind

  • Back up the volume. Your images live in the registry-data Docker volume. If that drive dies, your images are gone. You can find where Docker stores volumes with docker volume inspect registry-data and back up that directory, or use docker run --rm -v registry-data:/data -v $(pwd):/backup alpine tar czf /backup/registry-backup.tar.gz /data to create a compressed backup.
  • Consider basic auth if you're exposing it. If your registry needs to be accessible beyond your LAN — say through a VPN or over the internet — you should add authentication. The registry supports htpasswd-based basic auth natively. Generate a password file with htpasswd and mount it into the container. The Docker docs have a good walkthrough on this.
  • Storage can creep up on you. Container images aren't small, especially if you're building big application images. Keep an eye on your disk usage and run garbage collection regularly.
  • Use meaningful tags. Don't just push everything as :latest. Use version numbers, git commit hashes, or date-based tags so you can actually tell your images apart. Your future self will thank you.
  • The UI is read-heavy. The joxit registry UI is great for browsing and the occasional delete, but it's not designed for heavy management tasks. For anything more complex, the registry's HTTP API is well-documented and easy to script against with curl.

Tip: If you're running multiple homelab machines, set up the insecure-registries config on all of them with an Ansible playbook or a simple bash script. Saves you doing it manually on each box.
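For the bash-script route, something like this gets the job done. The hostnames and registry address are placeholders, and note that it blindly overwrites any existing daemon.json, so check first if you've customised that file on any box:

```shell
# Push the insecure-registries config to each homelab box and restart Docker.
# HOSTS and REGISTRY are examples -- substitute your own.
HOSTS="box1 box2 box3"
REGISTRY="192.168.1.50:5000"

for host in $HOSTS; do
  ssh "$host" "echo '{\"insecure-registries\": [\"$REGISTRY\"]}' | sudo tee /etc/docker/daemon.json > /dev/null && sudo systemctl restart docker"
done
```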

Wrapping Up

That's it — you've got a private Docker Registry with a web UI running on your local network. It takes about five minutes to set up, costs nothing, and makes managing your custom container images a whole lot easier. No more Docker Hub rate limits, no more pushing private images to public infrastructure, and no more waiting for pulls to come over the internet when the image is sitting right there on your LAN.

It's one of those homelab services where, once it's running, you wonder how you ever got by without it. Give it a go.