Docker Security Best Practices: Protecting Your Homelab from Container Vulnerabilities

When I first started self-hosting, I treated Docker like a magic sandbox—throw an app in a container and it's secure, right? Wrong. I learned the hard way that containers are only as secure as the images you pull, the networks you create, and the runtime policies you enforce. Over the past year, I've hardened my homelab against container vulnerabilities and I want to share what actually works. This isn't theoretical security theater; these are practices I implement on production systems running Nextcloud, Ollama, Jellyfin, and other critical services.

Image Scanning and Vulnerability Management

The first line of defense is knowing what's actually inside your images. I used to just pull from Docker Hub and assume official images were safe. That assumption cost me. Official doesn't mean vulnerability-free. Now I scan every image before deployment, and I recommend you do the same.

Trivy is my go-to tool—it's fast, accurate, and catches both OS-level and application vulnerabilities. I run it as part of my build pipeline. Here's how:

#!/bin/bash
# Install Trivy (one-time)
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin

# Scan an image before running it
trivy image --severity HIGH,CRITICAL nginx:latest

# Scan and fail the build if critical vulns found
trivy image --exit-code 1 --severity CRITICAL nginx:latest

When Trivy finds vulnerabilities, don't panic—understand them. A high-severity CVE in a package you never use is different from one in OpenSSL. I maintain a spreadsheet tracking which vulns are acceptable risk in my homelab. For production-grade setups, I use base images from projects that update frequently. Alpine Linux is smaller but sometimes slower to patch; Debian releases updates more reliably. I prefer Debian for security-critical services like Vaultwarden.

Tip: Set up a scheduled job to scan your running images weekly. Use Watchtower with notifications to alert you when base image updates are available, then redeploy: docker-compose pull for upstream images, or docker-compose build --no-cache for images you build yourself.
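That scheduled scan can be sketched as a small script (the function name and alert format are mine; it assumes docker and trivy are on the PATH):

```shell
#!/bin/bash
# Weekly scan sketch: check every image backing a running container.
set -uo pipefail

scan_running_images() {
  # De-duplicate the images of all running containers, then scan each one.
  docker ps --format '{{.Image}}' | sort -u | while read -r img; do
    echo "scanning: $img"
    # --exit-code 1 makes trivy fail when HIGH/CRITICAL findings exist
    trivy image --severity HIGH,CRITICAL --exit-code 1 "$img" >/dev/null \
      || echo "ALERT: HIGH/CRITICAL findings in $img" >&2
  done
}

# Call scan_running_images from a weekly cron entry or systemd timer.
```

A crontab line like `0 4 * * 0 /usr/local/bin/weekly-scan.sh` runs it every Sunday at 4am.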

Least Privilege: Users, Capabilities, and Rootless Mode

Running containers as root is one of the most serious risks in a container deployment. If an attacker compromises the application, root in the container means a single container escape or kernel exploit hands them the entire host. I never run containers as root unless absolutely necessary, and I've found that "absolutely necessary" applies to only a handful of self-hosted apps.

In your Dockerfile, create a non-root user:

FROM debian:bookworm-slim

# Install app dependencies
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

# Create unprivileged user
RUN useradd -m -s /sbin/nologin appuser

# Copy app and set ownership
COPY --chown=appuser:appuser ./app /app
WORKDIR /app

# Switch to non-root user
USER appuser

CMD ["./myapp"]

In your compose file, enforce additional hardening:

version: '3.8'
services:
  myapp:
    image: myapp:latest
    user: "1000:1000"  # Explicit UID:GID (not root)
    cap_drop:
      - ALL  # Drop all capabilities by default
    cap_add:
      - NET_BIND_SERVICE  # Add back only what's needed
    read_only: true  # Filesystem is read-only
    tmpfs:
      - /tmp  # Give writable /tmp only
      - /var/tmp
    security_opt:
      - no-new-privileges:true  # Prevent escalation
    networks:
      - internal
    expose:
      - 8080

networks:
  internal:
    driver: bridge

I also enable rootless Docker on my VPS. It's more complex to set up but eliminates the biggest attack surface—the Docker daemon itself running as root. On my Hetzner VM, I installed rootless Docker and configured systemd to start it per-user. It adds overhead but for a small homelab, it's worth the peace of mind.

Network Isolation and Segmentation

Docker's default bridge network lets every container on the host talk to every other container on it. I assume nothing is trustworthy. I create separate networks for different services and allow explicit service-to-service connections only where needed.

My homelab runs roughly this topology:

Caddy connects only to the frontend and internal networks. Ollama talks only to Nextcloud. Vaultwarden never touches anything else. If Nextcloud is compromised, the attacker can't immediately reach Vaultwarden. This takes minutes to implement and shuts down the easy lateral-movement paths.
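One way to express that kind of segmentation in compose (network names and images are illustrative; real services need their volumes and environment too):

```yaml
services:
  caddy:
    image: caddy:latest
    ports: ["443:443"]
    networks: [frontend, nextcloud_net, vault_net]
  nextcloud:
    image: nextcloud:latest
    networks: [nextcloud_net, ollama_net]
  ollama:
    image: ollama/ollama:latest
    networks: [ollama_net]        # reachable only from Nextcloud
  vaultwarden:
    image: vaultwarden/server:latest
    networks: [vault_net]         # shares a network only with the proxy

networks:
  frontend:
  nextcloud_net:
  ollama_net:
    internal: true                # no route to the outside world
  vault_net:
    internal: true
```

With internal: true, compose creates the network without external connectivity, so even a compromised Ollama container can't phone home.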

I also restrict egress. Most of my self-hosted apps don't need outbound internet access; blocking it prevents data exfiltration and command-and-control callbacks. Apply a default-deny policy to container-originated outbound traffic on the host firewall, then allow specific IPs or domains as needed.

Watch out: Docker writes its own iptables rules, so published container ports bypass UFW entirely. Put your filtering rules in the DOCKER-USER chain, which Docker reserves for user rules and evaluates before its own, or use a helper like ufw-docker. I also set "userland-proxy": false in /etc/docker/daemon.json so port publishing happens purely in iptables, but that alone does not make UFW rules apply.
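An egress lockdown via DOCKER-USER can be sketched like this (the bridge name, DNS address, and function name are assumptions; the commands need root):

```shell
#!/bin/bash
# Egress lockdown sketch for the DOCKER-USER chain. Rules are inserted at
# the top in reverse order, so they evaluate: established -> DNS -> drop.
lock_down_egress() {
  local bridge="${1:-docker0}"        # bridge carrying container traffic
  local dns="${2:-192.168.1.1}"       # local resolver containers may use
  # 3) Drop any other new outbound flow originating on the bridge
  iptables -I DOCKER-USER -i "$bridge" -j DROP
  # 2) Allow containers to reach the local DNS resolver
  iptables -I DOCKER-USER -i "$bridge" -d "$dns" -p udp --dport 53 -j RETURN
  # 1) Always allow replies to flows that are already established
  iptables -I DOCKER-USER -m conntrack --ctstate ESTABLISHED,RELATED -j RETURN
}

# Call lock_down_egress after the Docker daemon is up (e.g. from a unit file).
```

Matching on the bridge's in-interface means inbound traffic to published ports, which arrives on the WAN interface, is untouched.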

Secrets Management: Never Hardcode Credentials

Embedding database passwords or API keys in Dockerfiles or compose files is asking to be compromised. I use Docker secrets: they're a native feature on Swarm, and plain Compose supports file-backed secrets on a single host too; failing that, environment files with restricted permissions work. For homelabs, the simplest approach is file-backed secrets kept outside the repository with tight permissions.

Create a secrets directory on the host:

mkdir -p /etc/docker-secrets
chmod 700 /etc/docker-secrets
echo "my-super-secret-db-password" > /etc/docker-secrets/db_password
chmod 600 /etc/docker-secrets/db_password

Reference in compose:

version: '3.8'
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password
    volumes:
      - postgres_data:/var/lib/postgresql/data

secrets:
  db_password:
    file: /etc/docker-secrets/db_password

volumes:
  postgres_data:

Never commit secrets to git. I use .gitignore to exclude .env files and secret directories. For more sophisticated deployments, I've integrated Vaultwarden's API into my deployment pipeline so credentials are injected at runtime, never at rest on disk.
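Permissions drift over time (a restore from backup, a careless chmod), so it's worth auditing them too. A sketch (the function name is mine) that flags any secret file readable by group or other:

```shell
#!/bin/bash
# Report secret files whose permission bits allow group or other to read them.
check_secret_perms() {
  local dir="${1:-/etc/docker-secrets}"   # default path from above
  local bad
  # -perm /044 matches files with the group-read or other-read bit set
  bad=$(find "$dir" -type f -perm /044 2>/dev/null)
  if [ -n "$bad" ]; then
    echo "group/world-readable secrets found:" >&2
    echo "$bad" >&2
    return 1
  fi
}
```

Run it from the same weekly cron job as the image scans; a non-zero exit means something needs a chmod 600.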

Runtime Monitoring and Log Auditing

Security doesn't end at deployment. I monitor container behavior continuously. Prometheus scrapes Docker metrics, and I alert on anomalies: sudden memory spikes, unexpected network connections, or process spawning.
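The daemon can expose those metrics itself via the metrics-addr option; a fragment to merge into /etc/docker/daemon.json (older engine versions also require the experimental flag for this endpoint):

```json
{
  "metrics-addr": "127.0.0.1:9323"
}
```

Then point a Prometheus scrape job at 127.0.0.1:9323/metrics.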

I also centralize logs. Docker's json-file logging driver writes to disk; I use the splunk or syslog driver to ship logs to a central location. Nextcloud and Ollama are easier to audit when I can query 30 days of logs without SSH-ing to the host.

Harden your Docker daemon config (/etc/docker/daemon.json) and cap log growth so a noisy or compromised container can't fill the disk:

{
  "userland-proxy": false,
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "10"
  },
  "icc": false,
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 65536,
      "Soft": 65536
    }
  }
}

Restart the daemon and verify: docker info | grep -A5 "Logging Driver"
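One gotcha: a syntax error in daemon.json stops the daemon from starting at all, so validate the file before restarting. A sketch using Python's stdlib JSON parser (the function name is mine):

```shell
#!/bin/bash
# Return success only if the given file parses as valid JSON.
validate_daemon_json() {
  python3 -m json.tool "${1:-/etc/docker/daemon.json}" > /dev/null 2>&1
}

# Example: validate_daemon_json && sudo systemctl restart docker
```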

Image Updates and Patch Management

Vulnerability disclosures happen weekly. I automate patching with Watchtower and coordinate updates during maintenance windows. On my VPS (which I rent from RackNerd—great pricing for modest homelab compute), I run Watchtower with a 3am cron schedule. It checks for updated images, rebuilds containers, and restarts them automatically.
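My Watchtower service looks roughly like this (containrrr/watchtower is the upstream image; the schedule uses Watchtower's six-field cron syntax, and other settings are illustrative):

```yaml
services:
  watchtower:
    image: containrrr/watchtower:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      WATCHTOWER_SCHEDULE: "0 0 3 * * *"   # daily at 03:00
      WATCHTOWER_CLEANUP: "true"           # prune superseded images
    restart: unless-stopped
```

Mounting the Docker socket gives Watchtower full control of the daemon, so treat this container as privileged and keep it off any externally reachable network.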

For critical services, I test updates in staging first. For less critical services like Jellyfin, I tolerate brief downtime and update aggressively.

Conclusion: Security as Ongoing Process

Docker security in a homelab isn't a checklist you complete once. It's a continuous practice: scan images regularly, keep base images updated, minimize privileges, isolate networks, and audit logs. The steps I've covered (scanning, non-root users, network segmentation, secrets management) close off the vast majority of common container attack paths.

Start with image scanning using Trivy this week. Next month, refactor one critical service to run as a non-root user. By next quarter, your homelab will be significantly harder to compromise. That's how I've built my infrastructure, and it's proven reliable.
