Hardening Docker Security: Best Practices for Self-Hosted Containers
When I first ran Docker on my homelab server, I treated it like a development playground—wide-open permissions, root-everything, no thought to isolation. Then I realized I was hosting services for family, keeping important data, and leaving myself exposed to privilege escalation attacks. Now I run every container with the assumption that something inside it might get compromised, and I design accordingly.
Most self-hosted Docker deployments I see are vulnerable not because Docker itself is weak, but because operators skip the hardening layer. This tutorial walks through real, practical security improvements you can apply today—from capability dropping to read-only filesystems to network policies. I'm sharing the exact configs I use in production.
Why Docker Security Matters in Your Homelab
Docker containers aren't lightweight virtual machines. They share the host kernel, and a root process inside a container is still a privileged process on your system. If an attacker breaks out of a container (via kernel vulnerability, misconfiguration, or privilege escalation), they have access to your entire host and every other container.
In a homelab context, that means:
- Your backup server becomes a target. If Nextcloud gets compromised, the attacker can delete your backups.
- Network segmentation fails. A poorly isolated container can reach your internal DNS, database, or management interfaces.
- Host resources are abused. A container with unlimited CPU/memory can DoS your entire system.
The good news: Docker has strong security primitives. Most attacks succeed because we don't use them.
Foundation: Never Run as Root
The single biggest win is running containers as non-root users. This isn't optional—it's the baseline.
When you specify a USER directive in your Dockerfile, processes run with that UID. If they escape the container, they're still that unprivileged user on the host—which closes off most of the common privilege escalation paths.
```dockerfile
FROM alpine:3.19

RUN addgroup -g 1000 appuser && \
    adduser -D -u 1000 -G appuser appuser

COPY --chown=appuser:appuser ./app /app
WORKDIR /app
USER appuser
CMD ["./myapp"]
```
If you're running third-party images (Nextcloud, Jellyfin, etc.), check the Dockerfile. If it doesn't have a USER directive, rebuild it with one or use a Docker Compose override.
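As a sketch, an override might look like this (the service name and UID/GID are illustrative—match them to your deployment, and note that some images do first-run setup as root and drop privileges themselves, so test after overriding):

```yaml
# docker-compose.override.yml — hypothetical example
services:
  nextcloud:
    user: "33:33"               # run as www-data instead of root
    security_opt:
      - no-new-privileges:true  # block setuid binaries from re-escalating
```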
Drop Linux Capabilities
Linux capabilities break root's privilege into granular permissions. Full root can do everything; a process holding only CAP_NET_BIND_SERVICE can bind ports below 1024 and nothing more. This is powerful.
By default, Docker grants containers a lot of dangerous capabilities. I drop almost everything and add back only what's needed.
```yaml
version: '3.8'

services:
  nextcloud:
    image: nextcloud:28-apache
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
      - CHOWN
      - DAC_OVERRIDE
      - SETFCAP
      - SETGID
      - SETUID
    user: "33:33"  # quoted—unquoted 33:33 parses as a base-60 integer in YAML
    volumes:
      - ./data:/var/www/html
    environment:
      NEXTCLOUD_ADMIN_USER: admin
      NEXTCLOUD_ADMIN_PASSWORD: changeme
    restart: unless-stopped

  vaultwarden:
    image: vaultwarden/server:latest
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    user: nobody
    volumes:
      - ./vw-data:/data
    environment:
      DOMAIN: https://vault.example.com
      ROCKET_PORT: 80
    restart: unless-stopped

  jellyfin:
    image: jellyfin/jellyfin:latest
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - DAC_OVERRIDE
      - SETGID
      - SETUID
    user: "1000:1000"
    volumes:
      - ./config:/config
      - ./media:/media
      - /etc/timezone:/etc/timezone:ro
    devices:
      - /dev/dri:/dev/dri
    restart: unless-stopped
```
The pattern is: cap_drop: [ALL] then cap_add only what the service actually needs. Most services need only 3-5 capabilities. Check the application's documentation or run it with dropped caps and watch the logs—it'll complain loudly if something's missing.
Common capabilities you might need:
- NET_BIND_SERVICE – Bind to ports below 1024.
- CHOWN – Change file ownership.
- DAC_OVERRIDE – Bypass file permission checks.
- SETUID / SETGID – Change process UID/GID.
- NET_ADMIN – Manage network interfaces (rarely needed).
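To verify what a process actually holds, you can read its capability bitmask from procfs—the same check works inside a running container via docker exec (the container name would be yours; this sketch demonstrates on the current shell's own process):

```shell
#!/bin/sh
# Print the effective capability bitmask of a process.
# Inside a container: docker exec <container> grep CapEff /proc/1/status
# Demonstrated here on the current process:
grep CapEff /proc/self/status

# To decode the hex mask into capability names (capsh ships with libcap):
#   capsh --decode=$(awk '/CapEff/{print $2}' /proc/self/status)
```

A fully dropped container shows a mask of all zeros except the capabilities you added back.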
Read-Only Root Filesystem
A container that can't write to its root filesystem is much harder to weaponize. If an attacker tries to drop a malicious binary, add a backdoor user, or modify the application code, they fail immediately.
Enable this with read_only: true in Docker Compose, then whitelist directories that need writes as tmpfs or volumes:
```yaml
version: '3.8'

services:
  adguardhome:
    image: adguard/adguardhome:latest
    read_only: true
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
      - NET_RAW
    user: nobody
    volumes:
      - ./work:/opt/adguardhome/work
      - ./conf:/opt/adguardhome/conf
      - /etc/timezone:/etc/timezone:ro
    tmpfs:
      - /tmp
      - /run
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "3000:3000/tcp"
    restart: unless-stopped

  gitea:
    image: gitea/gitea:1.21
    read_only: true
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - DAC_OVERRIDE
      - SETGID
      - SETUID
      - NET_BIND_SERVICE
    user: "1000:1000"
    volumes:
      - ./data:/data
      - /etc/timezone:/etc/timezone:ro
    tmpfs:
      - /tmp
      - /run
      - /app/gitea/temp
    environment:
      USER_UID: 1000
      USER_GID: 1000
    restart: unless-stopped
```
The tmpfs directive creates temporary in-memory filesystems for directories that need write access but don't need persistence (temp files, pid files, sockets). This is perfect for security.
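One caveat: tmpfs mounts live in RAM and grow on demand, so it's worth capping them. The long-form volume syntax lets you set a size limit (the 64 MiB figure below is an arbitrary example—tune it per service):

```yaml
services:
  adguardhome:
    # ... rest of the service definition ...
    volumes:
      - type: tmpfs
        target: /tmp
        tmpfs:
          size: 67108864  # 64 MiB cap so a runaway process can't eat host RAM
```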
Enable read_only first, then watch for read-only file system errors in the logs—mount those directories as tmpfs or volumes as needed.
Resource Limits and DoS Prevention
An uncontrolled container can consume all your host's CPU, memory, and disk I/O, crashing everything. Set hard limits:
```yaml
version: '3.8'

services:
  ollama:
    image: ollama/ollama:latest
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    user: nobody
    volumes:
      - ./models:/root/.ollama
    tmpfs:
      - /tmp
    deploy:
      resources:
        limits:
          cpus: '4'
          memory: 8G
        reservations:
          cpus: '2'
          memory: 4G
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
    restart: unless-stopped

  immich-server:
    image: ghcr.io/immich-app/immich-server:latest
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    user: "1000:1000"
    volumes:
      - ./upload:/usr/src/app/upload
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2G
        reservations:
          cpus: '1'
          memory: 1G
    environment:
      DB_HOSTNAME: db
      DB_USERNAME: immich
      DB_PASSWORD: changeme
      DB_DATABASE_NAME: immich
    restart: unless-stopped
```
limits are hard caps—a container that exceeds its memory limit gets OOM-killed, while exceeding the CPU limit just throttles it. reservations are soft guarantees used for scheduling. ulimits control per-process resource limits like open file descriptors.
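One more limit worth adding is a cap on process count: pids_limit bounds how many PIDs a container can spawn, which neuters fork bombs before they exhaust the host's process table (256 is a guess that suits most single-app containers—tune per service):

```yaml
services:
  ollama:
    # ... rest of the service definition ...
    pids_limit: 256  # a fork bomb dies here instead of taking down the host
```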
Network Isolation with Custom Networks
By default, all containers on the default bridge can reach each other. That's bad. Create explicit networks and attach only services that need to communicate:
```yaml
version: '3.8'

networks:
  frontend:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
  backend:
    driver: bridge
    ipam:
      config:
        - subnet: 172.21.0.0/16
  isolated:
    driver: bridge
    ipam:
      config:
        - subnet: 172.22.0.0/16

services:
  caddy:
    image: caddy:2-alpine
    networks:
      - frontend
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - ./caddy_data:/data
    restart: unless-stopped

  nextcloud:
    image: nextcloud:28-apache
    networks:
      - frontend
      - backend
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
      - CHOWN
      - DAC_OVERRIDE
      - SETFCAP
      - SETGID
      - SETUID
    user: "33:33"
    volumes:
      - ./nc-data:/var/www/html
    environment:
      MYSQL_HOST: db
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: changeme
    restart: unless-stopped
    depends_on:
      - db

  db:
    image: mariadb:11
    networks:
      - backend
      - isolated
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - DAC_OVERRIDE
      - SETGID
      - SETUID
    user: "999:999"
    volumes:
      - ./db-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: changeme
    restart: unless-stopped

  adminer:
    image: adminer:latest
    networks:
      - isolated
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    user: nobody
    restart: unless-stopped
```
In this setup:
- frontend: Reverse proxy and user-facing services.
- backend: Services that need database access.
- isolated: Only the database and admin tools—not exposed to the internet.
Adminer can't reach Nextcloud (different networks). The reverse proxy can reach Nextcloud but not the database directly. The database is only reachable from Nextcloud and Adminer. This is defense in depth.
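You can take the isolated network one step further: marking it internal blocks all traffic to and from outside the host, so even a compromised database can't phone home.

```yaml
networks:
  isolated:
    driver: bridge
    internal: true  # no route to the outside world from this network
    ipam:
      config:
        - subnet: 172.22.0.0/16
```

Containers on an internal network can still talk to each other, but published ports and outbound internet access stop working—ideal for databases that should only ever speak to their application.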
Image Scanning and Registry Security
Before pulling an image, know what's in it. I use trivy locally to scan for vulnerabilities:
```bash
#!/bin/bash
# Scan all images in docker-compose.yml for vulnerabilities
images=$(grep -oP '^\s+image:\s+\K[^ ]+' docker-compose.yml | sort -u)

for img in $images; do
  echo "Scanning $img..."
  trivy image --severity HIGH,CRITICAL "$img"
done
```
Install Trivy from https://github.com/aquasecurity/trivy, then run this before deploying. It won't stop everything bad, but it catches known CVEs in base images and dependencies.
Also: use specific image tags, never latest. latest changes unpredictably and breaks reproducibility. Pin to 3.19-alpine, 28-apache, 1.21, etc.—and note that several examples above use latest for brevity; pin those too before you deploy.
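A cheap guard you can wire into a deploy script is to refuse compose files that still reference latest. This is a sketch with a function name of my own invention, and a grep rather than a real YAML parser—it will miss exotic formatting but catches the common case:

```shell
#!/bin/sh
# check_no_latest FILE — succeed only if no image: line uses the :latest tag
check_no_latest() {
  ! grep -Eq '^[[:space:]]*image:[[:space:]]*[^[:space:]]+:latest[[:space:]]*$' "$1"
}

# Example gate in a deploy script:
#   check_no_latest docker-compose.yml || { echo "pin your tags"; exit 1; }
```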
Logging and Monitoring
A compromised container should be detectable. Configure Docker to log security events:
```bash
#!/bin/bash
# Monitor the Docker event stream for containers dying unexpectedly
docker events \
  --filter 'type=container' \
  --filter 'event=die' \
  --format '{{.Time}} {{.Actor.Attributes.name}} exited with code {{.Actor.Attributes.exitCode}}'
```