Why Docker Alone Isn't Enough: Planning Your Self-Hosted Stack
When I first ran Nextcloud in a Docker container, I thought I'd solved everything. One command, one container, one app—problem solved. Then reality hit. The container crashed at 3 AM. My backups didn't exist. I had no way to access it from outside my home network. I realized Docker handles containerization, not infrastructure.
Docker is the foundation of modern self-hosting, but it's only one piece of the puzzle. A production-grade self-hosted stack needs reverse proxies, persistent storage, backup strategies, monitoring, networking infrastructure, and security hardening. This article walks you through the gaps Docker leaves and how to fill them.
Docker's Blind Spots
Docker containers are ephemeral by design. They're meant to be disposable, scalable, and isolated. But self-hosting is not cloud infrastructure—it's your data, your privacy, your responsibility.
What Docker handles well: Application isolation, versioning, reproducible deployments, multi-container orchestration with Docker Compose.
What Docker doesn't handle:
- Exposing services securely to the internet (you need a reverse proxy)
- Persistent data across container restarts (you need volume management strategy)
- Backup and recovery of your data (Docker has no built-in backup)
- Monitoring container health and alerting you when something fails (you need monitoring tools)
- Automatic certificate renewal and SSL/TLS termination (you need Caddy or Traefik)
- Network segmentation and firewall rules (you need host-level firewalling)
- Single sign-on across multiple services (you need an auth layer like Authelia)
- Automatic container updates (you need Watchtower or manual updates)
I've deployed containers that worked perfectly locally but failed in production because I forgot about these layers. The frustration pushed me to build a proper stack architecture.
The Complete Self-Hosted Stack: What You Actually Need
Think of a self-hosted infrastructure like a building. Docker is the apartment units, but the building needs a foundation, a roof, plumbing, electricity, security, and maintenance schedules.
Layer 1: Compute & Hosting
Start with reliable hardware. A reputable VPS provider works for most people. For around $40/year, providers like RackNerd offer a solid VPS with 2-4 vCPUs, 4-8GB RAM, and 100GB storage. Check their seasonal deals—I've deployed multiple services on their entry-tier boxes without issues.
Alternatively, run on home hardware: an old laptop, a Raspberry Pi cluster, or a small NUC. The tradeoff is electricity costs, uptime responsibility, and network complications. I prefer a cheap VPS for reliability.
Layer 2: Reverse Proxy & SSL/TLS
This is non-negotiable. You cannot safely expose services to the internet on bare container ports. I use Caddy because it handles SSL certificate renewal automatically and supports dynamic configuration; with the caddy-docker-proxy plugin, it can even build its routes from Docker labels.
Traefik and Nginx are equally valid choices. The job is the same: terminate SSL, route requests to containers based on hostname, and handle certificate lifecycle.
docker run -d \
  --name caddy \
  --restart unless-stopped \
  -p 80:80 \
  -p 443:443 \
  -v caddy-data:/data \
  -v /home/user/Caddyfile:/etc/caddy/Caddyfile:ro \
  caddy:2
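A matching Caddyfile can be very short. The sketch below assumes a hypothetical hostname (cloud.example.com) with DNS already pointing at your server, and assumes Caddy shares a Docker network with the Nextcloud container so the container name resolves:

```caddyfile
# Hostname is a placeholder — substitute your own domain.
# Caddy obtains and renews the TLS certificate automatically.
cloud.example.com {
    reverse_proxy nextcloud:80
}
```

Two lines of config buys you HTTPS, automatic renewal, and hostname-based routing.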
Without a reverse proxy, you're either:
- Exposing Docker directly to the internet (massive security risk)
- Manually managing SSL certificates with cron jobs (fragile)
- Running everything on alternate ports like 8080 (ugly and breaks some apps)
Layer 3: Persistent Storage & Backups
Docker volumes are not backups. They're just directories. Your data lives on the host filesystem, which can fail. I learned this the hard way when a drive died and I lost a month of configurations.
Implement a backup strategy before disaster strikes:
#!/bin/bash
# Daily backup script for self-hosted services
set -euo pipefail

BACKUP_DIR="/mnt/backups"
DATE=$(date +%Y-%m-%d)

# Back up Docker volumes
docker run --rm \
  -v nextcloud-data:/data \
  -v "$BACKUP_DIR":/backup \
  alpine tar czf "/backup/nextcloud-data-$DATE.tar.gz" -C /data .

# Back up the database (mysqldump needs credentials; here they come from
# the container's own MYSQL_ROOT_PASSWORD environment variable)
docker exec mariadb sh -c 'mysqldump --all-databases --single-transaction -uroot -p"$MYSQL_ROOT_PASSWORD"' > "$BACKUP_DIR/db-$DATE.sql"

# Remove backups older than 30 days
find "$BACKUP_DIR" -name "*.tar.gz" -mtime +30 -delete
find "$BACKUP_DIR" -name "*.sql" -mtime +30 -delete

echo "Backup completed at $(date)" >> /var/log/backup.log
I use a combination of daily incremental backups to a networked storage and monthly full backups to cloud storage. This costs almost nothing and has saved me multiple times.
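A backup you've never restored is a hope, not a backup. The same tar flags the script uses can be exercised round-trip on a scratch directory, no Docker required, to confirm the archive format actually restores what you put in (paths here are throwaway temp directories):

```shell
# Build a throwaway "volume" with one file in it
src=$(mktemp -d); dst=$(mktemp -d)
echo "hello" > "$src/config.php"

# Same flags as the backup script: archive the directory contents...
tar czf "$src.tar.gz" -C "$src" .

# ...then restore into a fresh directory and compare
tar xzf "$src.tar.gz" -C "$dst"
diff "$src/config.php" "$dst/config.php" && echo "restore OK"
```

Do the same periodically against a real archive: extract it somewhere disposable and spot-check a few files.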
Layer 4: Monitoring & Alerting
If a container crashes at 3 AM and you don't know, is it really running? Monitoring tells you when things go wrong before you discover it by accident.
Use Prometheus + Grafana for metrics, or simpler tools like Uptime Kuma for just checking "is it alive?" I prefer Uptime Kuma for self-hosted because it's lightweight, self-contained in Docker, and gives me instant alerts when services go down.
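If you go the Uptime Kuma route, deployment is a single container. This sketch uses the project's published image name, port, and data path — verify them against the Uptime Kuma README before relying on it, and route the web UI through your reverse proxy rather than exposing it:

```shell
docker run -d \
  --name uptime-kuma \
  --restart unless-stopped \
  -p 127.0.0.1:3001:3001 \
  -v uptime-kuma:/app/data \
  louislam/uptime-kuma:1
```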
Add health checks to your Docker Compose:
version: '3.8'

services:
  nextcloud:
    image: nextcloud:apache
    container_name: nextcloud
    restart: unless-stopped
    depends_on:
      - db
    ports:
      - "127.0.0.1:8080:80"
    volumes:
      - nextcloud-data:/var/www/html
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/status.php"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    environment:
      - MYSQL_HOST=db
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}

  db:
    image: mariadb:latest
    container_name: nextcloud-db
    restart: unless-stopped
    volumes:
      - db-data:/var/lib/mysql
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 30s
      timeout: 10s
      retries: 3
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}

volumes:
  nextcloud-data:
  db-data:
Layer 5: Networking & Security
Docker networks are isolated by default, which is good. But you still need host-level firewalling. I use UFW on Linux to block everything except SSH, HTTP, and HTTPS:
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp comment 'SSH'
sudo ufw allow 80/tcp comment 'HTTP'
sudo ufw allow 443/tcp comment 'HTTPS'
sudo ufw enable
Also consider network isolation between containers: put the database on an internal Docker network so only the web frontend can reach it. Be aware that Docker writes its own iptables rules when publishing ports, and those rules bypass UFW entirely—bind published ports to 127.0.0.1 (as the Compose example above does) so only the reverse proxy faces the internet. For remote access, use Tailscale or WireGuard instead of exposing services directly.
Layer 6: Secrets Management
Never put passwords in docker-compose.yml. Use Docker secrets or environment files with proper permissions:
# Create .env file
echo "MYSQL_PASSWORD=$(openssl rand -base64 32)" > .env
chmod 600 .env
# In docker-compose.yml:
# env_file: .env
Layer 7: Updates & Maintenance
Watchtower can automatically update your containers, but I prefer manual updates for stability. Set a weekly maintenance window:
# Weekly update script
docker-compose pull       # fetch newer images
docker-compose up -d      # recreate only containers whose image changed
docker system prune -f    # reclaim space from superseded layers
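To make the maintenance window stick, schedule the script with cron. The path and time below are illustrative — adjust both to your setup:

```shell
# crontab entry: run Mondays at 04:00, appending output to a log
0 4 * * 1  cd /home/user/services && ./update.sh >> logs/update.log 2>&1
```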
Bringing It Together: A Real Example
Here's my actual deployment structure for a self-hosted media server + file sync + password manager:
~/services/
├── docker-compose.yml # All containers defined here
├── .env # Secrets (gitignored)
├── Caddyfile # Reverse proxy config
├── backups/
│ └── daily-backup.sh # Backup script (cron'd)
├── monitoring/
│ ├── prometheus.yml
│ └── grafana-dashboards/
└── logs/
└── backup.log
This structure keeps everything organized. Deployments are reproducible. Backups happen automatically. Monitoring alerts wake me up only when needed.
The Economics: Is It Worth It?
A minimal self-hosted stack costs:
- VPS: ~$40/year (RackNerd, Hetzner, or equivalent)
- Domain name: ~$10/year
- Optional: NAS for backups (~$300-500 one-time)
- Time: 10-20 hours to set up properly
Compare that to SaaS subscriptions: managed Nextcloud hosting (around $99/year), Bitwarden Premium ($10/year), photo sync services ($120/year)—and Jellyfin is free either way, though the media library still needs storage. Self-hosting breaks even in the first year and pays dividends forever.
Common Mistakes I Made
Mistake 1: Running everything on one database. A crash in one service brought down everything. Use separate databases per major service.
Mistake 2: No health checks. Containers zombied silently. A health check marks a dead container as unhealthy so monitoring can alert you—pair it with a tool like willfarrell/autoheal if you want unhealthy containers restarted automatically, because Docker alone won't restart them.
Mistake 3: Storing secrets in git. I almost committed API keys. Use .gitignore strictly and consider git-crypt.
Mistake 4: Not documenting deployments. Six months later, I forgot why a service needed certain volume mounts. Keep a README in your services folder.
Mistake 5: Assuming Docker handles updates. Container images with :latest tags can break things. Use specific versions and test updates first.
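Pinning is a one-line change per service in Compose. The tags below are illustrative—check each image's published tags and release notes before adopting them:

```yaml
services:
  nextcloud:
    image: nextcloud:29-apache   # explicit version instead of :latest
  db:
    image: mariadb:11.4
```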
Next Steps
If you're running Docker without a reverse proxy, deploy Caddy this week. If you have no backups, create a backup script today. If you don't monitor anything, set up Uptime Kuma tomorrow.
Docker is the engine, but infrastructure is the car. Build it properly from the start, and you'll have something reliable that runs for years without drama.