Setting Up a Private Docker Registry for Your Homelab
Once you start running Docker containers regularly in your homelab, you'll quickly realize that repeatedly pulling images from Docker Hub gets old. Bandwidth costs money, builds take forever, and you're exposing your pull patterns to Docker Hub. I switched to running my own private registry six months ago, and it's transformed how I deploy applications. No more rate limits, no throttling, and complete control over what images live where.
In this tutorial, I'll walk you through standing up a private Docker registry on your homelab, securing it properly, and integrating it into your deployment pipeline. We'll use Docker Registry 2 (the official implementation) and Docker Compose to make it bulletproof.
Why Run Your Own Registry?
The public Docker Hub works fine for pulling official images, but it has real limitations for self-hosters. Free accounts hit rate limits after a certain number of pulls per hour. Docker also deprecated their free team organizations, which broke a lot of small self-hosting workflows. Plus, if you're building custom images for internal services—Ollama fine-tuned models, Nextcloud with custom plugins, patched versions of software—you need somewhere to store them.
A private registry lets you:
- Cache images locally, reducing bandwidth and speeding up deployments
- Mirror public registries (Docker Hub, GHCR, Quay) without rate limits
- Store proprietary or security-sensitive images completely offline
- Control exactly who can push and pull images on your network
- Avoid platform lock-in to cloud container services
The tradeoff is minimal. A private registry uses under 500MB of RAM and can live on the same hardware running your other services. I run mine on a used Lenovo ThinkCentre M93p (Intel i5, 8GB RAM) alongside Nextcloud, Gitea, and a dozen other containers, for a total cost of around $120 secondhand. If you'd rather not buy hardware, it runs just as well on a VPS you're already paying for.
Hardware and Network Requirements
You don't need much. Docker Registry 2 is lean. What matters is storage and network:
- CPU: 2+ cores. Registry is I/O bound, not CPU bound.
- RAM: 512MB minimum; 2GB if you're running other services.
- Storage: Depends on your image library. I store about 60 images and use 80GB. Start with 50GB as a baseline.
- Network: For your homelab, a gigabit connection is plenty. If you're pushing this to a VPS, that VPS needs decent upload bandwidth.
I recommend dedicating a small VM or cheap second machine. If you're already running a VPS for self-hosting—companies like RackNerd offer reliable VPS starting around $40/year with enough resources to run a registry comfortably—you can absolutely run your registry there alongside Nextcloud, Vaultwarden, and other services.
Setting Up the Private Registry with Docker Compose
The cleanest way to deploy Registry 2 is Docker Compose. You'll set up the registry itself, add authentication (basic auth via htpasswd), and optionally enable a web UI for browsing stored images.
First, create a directory structure for your registry:
mkdir -p /data/docker-registry/{data,auth,certs}
cd /data/docker-registry
Generate a basic auth file for authentication. The registry only honors bcrypt-hashed entries, which htpasswd's -B flag produces:
docker run --rm --entrypoint htpasswd httpd:2 -Bbc /dev/stdout myuser mysecurepassword > auth/htpasswd
(Recent registry:2 images no longer ship the htpasswd binary, so this uses the httpd:2 image, which does.) Replace myuser and mysecurepassword with your actual credentials. To add more users, run the command again with -Bbn instead of -Bbc, which prints the entry to stdout without creating a file, and append it to auth/htpasswd with >>.
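Before starting the registry, it's worth sanity-checking the file you generated. Here's a quick sketch that verifies an htpasswd line uses bcrypt (the hash below is a made-up placeholder; in practice you'd read lines from auth/htpasswd):

```shell
#!/bin/sh
# Check that an htpasswd entry uses bcrypt before mounting it into the
# registry. The hash here is a placeholder, not a real digest.
# bcrypt hashes start with $2y$, $2a$, or $2b$.
line='myuser:$2y$05$placeholderplaceholderplaceholderplace'
user=${line%%:*}   # everything before the first colon
hash=${line#*:}    # everything after the first colon
case $hash in
  '$2y$'*|'$2a$'*|'$2b$'*) echo "$user: bcrypt, OK" ;;
  *) echo "$user: not bcrypt; regenerate with htpasswd -B" ;;
esac
```

MD5 or plaintext entries won't produce an error at startup; authentication just fails, so catching this early saves a confusing debugging session.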
Now create your docker-compose.yml:
version: '3.8'

services:
  registry:
    image: registry:2
    container_name: docker-registry
    restart: always
    ports:
      - "5000:5000"
    environment:
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
      REGISTRY_STORAGE_DELETE_ENABLED: "true"
      REGISTRY_LOG_LEVEL: info
    volumes:
      - ./data:/var/lib/registry
      - ./auth:/auth
      - ./certs:/certs
    labels:
      - "com.example.description=Private Docker Registry"

  registry-ui:
    image: joxit/docker-registry-ui:latest
    container_name: registry-ui
    restart: always
    ports:
      - "5001:80"
    environment:
      SINGLE_REGISTRY: "true"
      REGISTRY_TITLE: "My Private Registry"
      REGISTRY_URL: http://registry:5000
      VERIFY_SSL: "false"
    depends_on:
      - registry

networks:
  default:
    name: registry-network
Start the services:
docker-compose up -d
Your registry is now running on port 5000, and the web UI is accessible at port 5001. Test authentication:
curl -u myuser:mysecurepassword http://localhost:5000/v2/
If you see {} returned, authentication is working.
Adding HTTPS and Remote Access
If you're accessing the registry only from your homelab LAN, HTTP is acceptable (though not ideal). If you want to access it from outside your network or push production images, TLS certificates are mandatory.
Generate a self-signed certificate for testing. Include a subjectAltName entry: modern Docker clients reject certificates that only set the common name:
cd /data/docker-registry/certs
openssl req -newkey rsa:4096 -nodes -sha256 -keyout domain.key -x509 -days 365 -out domain.crt \
  -subj "/C=US/ST=State/L=City/O=Org/CN=registry.example.local" \
  -addext "subjectAltName=DNS:registry.example.local"
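Before wiring a certificate into the registry, it's worth checking it carries a subjectAltName, since Docker clients reject CN-only certificates. A self-contained sketch (assuming the registry.example.local hostname used in this guide) that generates a throwaway cert in a temp directory and inspects it:

```shell
#!/bin/sh
# Generate a throwaway self-signed cert and confirm the subjectAltName
# extension names the registry host. Requires OpenSSL 1.1.1+ for -addext
# and -ext. The hostname is the one assumed throughout this guide.
tmp=$(mktemp -d)
openssl req -newkey rsa:2048 -nodes -sha256 \
  -keyout "$tmp/domain.key" -x509 -days 365 -out "$tmp/domain.crt" \
  -subj "/CN=registry.example.local" \
  -addext "subjectAltName=DNS:registry.example.local" 2>/dev/null
san=$(openssl x509 -in "$tmp/domain.crt" -noout -ext subjectAltName 2>/dev/null)
case $san in
  *registry.example.local*) echo "SAN present" ;;
  *) echo "SAN missing: Docker clients will reject this cert" ;;
esac
rm -rf "$tmp"
```

Run the same `openssl x509 -noout -ext subjectAltName` check against your real domain.crt before pointing clients at the registry.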
Update your docker-compose.yml to enable HTTPS:
services:
  registry:
    image: registry:2
    container_name: docker-registry
    restart: always
    ports:
      - "5000:5000"
    environment:
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
      REGISTRY_STORAGE_DELETE_ENABLED: "true"
      REGISTRY_LOG_LEVEL: info
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
      REGISTRY_HTTP_TLS_KEY: /certs/domain.key
    volumes:
      - ./data:/var/lib/registry
      - ./auth:/auth
      - ./certs:/certs
    labels:
      - "com.example.description=Private Docker Registry"
Recreate the registry container so the new TLS settings take effect (a plain restart reuses the old container configuration):
docker-compose up -d registry
For remote clients using self-signed certs, you'll need to either trust the certificate on each client or run your registry behind a reverse proxy (Caddy, Nginx, or Traefik) with Let's Encrypt certificates. I strongly recommend the reverse proxy approach for anything production-facing.
Configuring Docker Clients to Use Your Registry
On any machine where you want to push or pull images from your private registry, tell Docker about it. On Linux, edit /etc/docker/daemon.json; on Docker Desktop for Mac or Windows, paste the same JSON under Settings → Docker Engine:
{
  "insecure-registries": ["registry.example.local:5000"]
}
The insecure-registries entry is only needed while you're using HTTP or a self-signed certificate. Once the registry serves a certificate your clients trust (for example, Let's Encrypt behind a reverse proxy), remove the entry entirely. Restart Docker after making changes:
sudo systemctl restart docker
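A malformed daemon.json will stop the Docker daemon from starting at all, so it's worth validating before the restart. A sketch using Python's json module as a dependency-free checker; it writes a sample config to a temp file, but on a real host you'd point it at /etc/docker/daemon.json:

```shell
#!/bin/sh
# Validate daemon.json syntax before restarting Docker. A trailing comma
# or missing quote here prevents the daemon from coming back up.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
{
  "insecure-registries": ["registry.example.local:5000"]
}
EOF
if python3 -m json.tool "$CONF" >/dev/null 2>&1; then
  result=valid
else
  result=invalid
fi
echo "config is $result JSON"
rm -f "$CONF"
```

If the check reports invalid, fix the file before running systemctl restart; otherwise every container on the host stays down until you do.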
Now you can log in and push images. Omitting -p makes docker login prompt for the password instead of recording it in your shell history (use --password-stdin in scripts):
docker login -u myuser registry.example.local:5000
docker tag myapp:latest registry.example.local:5000/myapp:latest
docker push registry.example.local:5000/myapp:latest
Pull the image on any client in your network:
docker pull registry.example.local:5000/myapp:latest
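If you build several images, retagging each one by hand gets tedious. A sketch of a batch retag-and-push loop; the image names are illustrative, and the docker commands are commented out so you can dry-run it first:

```shell
#!/bin/sh
# Retag and push a batch of local images to the private registry.
# REGISTRY matches the host used throughout this guide; the image
# names below are hypothetical examples.
REGISTRY="registry.example.local:5000"
for ref in myapp:latest worker:1.2 nextcloud-custom:28; do
  target="$REGISTRY/$ref"
  echo "would push $target"
  # Uncomment to actually tag and push:
  # docker tag "$ref" "$target" && docker push "$target"
done
```

Uncomment the docker line once the printed targets look right; the same loop works in a CI job after docker login.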
Storage and Garbage Collection
Your registry stores images as layers in the ./data directory. Over time, untagged layers accumulate when you rebuild images or push new versions. Docker Registry 2 doesn't clean these up automatically (by design—they might be referenced elsewhere).
Enable garbage collection by adding a cron job or systemd timer. Schedule it for a quiet window: running garbage collection while images are being pushed can delete layers that are still mid-upload. On your registry host, create /etc/cron.daily/docker-registry-gc:
#!/bin/bash
set -e
docker exec docker-registry registry garbage-collect /etc/docker/registry/config.yml
Make it executable:
chmod +x /etc/cron.daily/docker-registry-gc
This runs daily and removes unreferenced layers. The output goes to the cron job's stdout rather than the container log, so redirect it to a file if you want a record. To preview what would be deleted without removing anything, add the --dry-run flag:
docker exec docker-registry registry garbage-collect --dry-run /etc/docker/registry/config.yml
Mirroring Docker Hub (Optional but Powerful)
One of the best features of a private registry is acting as a pull-through cache for Docker Hub. When you pull an image you don't have locally, the registry fetches it from Docker Hub, caches it, and serves subsequent pulls from cache—no rate limits, instant pulls.
A registry running in proxy mode can't accept pushes, so the mirror has to run as a second instance alongside your main registry. Add a new service to your compose file:
  registry-mirror:
    image: registry:2
    container_name: docker-registry-mirror
    restart: always
    ports:
      - "5002:5000"
    environment:
      REGISTRY_PROXY_REMOTEURL: https://registry-1.docker.io
      REGISTRY_LOG_LEVEL: info
    volumes:
      - ./mirror-data:/var/lib/registry
Then on your Docker clients, add the mirror as a registry mirror in /etc/docker/daemon.json:
{
  "registry-mirrors": ["http://registry.example.local:5002"]
}
Docker will now automatically use your mirror for any pull that doesn't name a custom registry. The first pull of an image runs at normal speed while the mirror caches it; every pull after that, from any client on your network, is served from the cache.
Monitoring and Maintenance
Check registry health periodically:
docker logs docker-registry | tail -20
List all images in your registry via the API (add -k to curl if you're still using the self-signed certificate):
curl -u myuser:mysecurepassword https://registry.example.local:5000/v2/_catalog
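The catalog endpoint returns a small JSON document. For hosts without jq, here's a sketch of pulling repository names out with plain POSIX text tools; the payload below is a stand-in for real curl output:

```shell
#!/bin/sh
# Extract repository names from a /v2/_catalog response without jq.
# The sample payload is illustrative; pipe real curl output instead.
response='{"repositories":["myapp","nextcloud-custom"]}'
repos=$(echo "$response" | tr -d '{}"[]' | sed 's/^repositories://' | tr ',' '\n')
echo "$repos"
```

This prints one repository per line, which is convenient for feeding into loops (for example, querying /v2/<name>/tags/list per repository).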
Monitor disk usage on your registry host—if you're approaching capacity, increase storage or enable more aggressive garbage collection.
Back up your ./data directory regularly if these images are important. I sync mine to a secondary NAS every week using rsync:
rsync -av /data/docker-registry/data/ nas:/backups/docker-registry/
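One caveat with a single mirror copy: a bad garbage-collection run or corrupted push syncs straight into your only backup. A sketch of dating each snapshot instead; the nas host and paths come from the rsync command above, and the rsync line is commented so the sketch runs anywhere:

```shell
#!/bin/sh
# Back up to a dated destination so older snapshots survive a bad sync.
# "nas" and the paths match the rsync example in this guide.
dest="nas:/backups/docker-registry/$(date +%Y-%m-%d)/"
echo "would sync to $dest"
# rsync -av /data/docker-registry/data/ "$dest"
```

If disk space on the NAS is a concern, rsync's --link-dest option hard-links unchanged files against the previous snapshot, so each dated directory only costs the space of what changed.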
Wrapping Up
A private Docker registry transforms your homelab from pulling everything from the internet on demand to running with local caches and full control. The setup takes 15 minutes, and the benefits compound over time—faster deployments, no rate limits, and peace of mind knowing your images are stored locally.
Start with the basic HTTP setup for your LAN. Once you're comfortable, add TLS and remote access. If you don't have spare hardware, a cheap VPS (RackNerd and similar providers offer reliable options around $40/year) works perfectly as a registry host, especially if you're already running other services there.
Next steps: integrate your registry into a CI/CD pipeline (Gitea with local builds), set up image scanning with Trivy, or explore registry replication across multiple machines for redundancy.