Using Docker Compose to Deploy Multi-Container Applications on Your Homelab
When I first started self-hosting, I thought running containers one-by-one with docker run commands was fine. It wasn't. Fifteen separate terminal windows, manual volume mappings, broken networking when I rebooted, and forgetting which environment variables I'd set on each service—it was chaos. Docker Compose solved all of that for me in one declarative YAML file, and I've never looked back.
Docker Compose is the gateway drug to serious homelab infrastructure. It lets you define an entire application stack—database, web app, cache, reverse proxy, whatever—in a single file, then spin it all up or tear it down with one command. This tutorial walks you through real-world examples and the practices that actually work.
What Docker Compose Does (and Why You Need It)
Docker Compose is an orchestration tool that reads a YAML file (docker-compose.yml) and manages the full lifecycle of multiple containers as a single unit. Without it, you're running individual containers manually, managing networking yourself, and hoping your volumes survive a reboot.
I use Docker Compose for everything in my homelab—Nextcloud with PostgreSQL, Jellyfin with media volumes, monitoring stacks with Prometheus and Grafana, and reverse proxy setups with Caddy. Each application gets its own compose file (or they share one), and I can reproduce the entire stack on different hardware by just copying the file and running docker-compose up.
Key benefits:
- Declarative: Everything is in one file. No hidden configuration scattered across five shell scripts.
- Reproducible: Same YAML on your dev machine, your NAS, a $40/year VPS from RackNerd—everything works identically.
- Networking built-in: Containers on the same compose network can reach each other by service name. No manual bridge networks.
- Volume persistence: Define volumes once, Docker Compose manages them. Data survives container restarts.
- One command to rule them all: docker-compose up -d, and your entire stack boots. docker-compose down, and it's gone cleanly.
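To make that concrete, here's roughly the smallest useful compose file — a single nginx service (a hypothetical example; any image works the same way):

```yaml
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"   # host port 8080 -> container port 80
```

Save it as docker-compose.yml, run docker-compose up -d, and nginx is serving on port 8080; docker-compose down removes it again.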
Installation and First Steps
If you have Docker installed, you likely have Docker Compose already. Check:
docker-compose --version
If you're on a recent Debian or Ubuntu system and it's missing, install it (on some distros the package is named docker-compose-plugin instead):
sudo apt-get update && sudo apt-get install docker-compose
On Ubuntu 22.04 and newer, the Compose plugin (docker compose, no hyphen) is often bundled with Docker. Both syntaxes work for everything in this tutorial; I'll use the hyphenated docker-compose form throughout — if you only have the plugin, just substitute docker compose wherever you see it.
Create a working directory for your first compose application:
mkdir -p ~/homelab/nextcloud
cd ~/homelab/nextcloud
Building Your First Multi-Container Stack: Nextcloud with PostgreSQL
Nextcloud is a perfect real-world example. It needs a web container, a database container, and persistent storage. Here's a complete docker-compose.yml that I actually use:
version: '3.8'

services:
  nextcloud:
    image: nextcloud:latest
    container_name: nextcloud-app
    restart: always
    ports:
      - "8080:80"
    environment:
      - NEXTCLOUD_ADMIN_USER=admin
      - NEXTCLOUD_ADMIN_PASSWORD=changeme123
      - POSTGRES_HOST=postgres
      - POSTGRES_DB=nextcloud
      - POSTGRES_USER=nextcloud
      - POSTGRES_PASSWORD=securepass123
      - REDIS_HOST=redis   # point Nextcloud at the redis service below
    volumes:
      - nextcloud_data:/var/www/html
      - ./config:/var/www/html/config
    depends_on:
      - postgres
      - redis
    networks:
      - nextcloud_net
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/status.php"]
      interval: 30s
      timeout: 10s
      retries: 3

  postgres:
    image: postgres:15-alpine
    container_name: nextcloud-db
    restart: always
    environment:
      - POSTGRES_DB=nextcloud
      - POSTGRES_USER=nextcloud
      - POSTGRES_PASSWORD=securepass123
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - nextcloud_net
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U nextcloud"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    container_name: nextcloud-cache
    restart: always
    networks:
      - nextcloud_net
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 3

volumes:
  nextcloud_data:
    driver: local
  postgres_data:
    driver: local

networks:
  nextcloud_net:
    driver: bridge
Save this as docker-compose.yml in your nextcloud directory. Notice what's happening here:
- Three services: nextcloud (the app), postgres (the database), and redis (caching). They run as separate containers but act as one stack.
- The nextcloud service declares depends_on, so Compose starts postgres (and redis) before it. Note that depends_on only orders startup; by itself it doesn't wait for the database to actually be ready to accept connections.
- All three services are on the same nextcloud_net network. The nextcloud container can reach postgres by hostname "postgres" without any manual networking.
- Volumes persist the data even if containers are destroyed.
- Healthchecks let Docker know if services are actually ready, not just running.
- restart: always means if a container crashes, Docker restarts it automatically.
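Because postgres and redis define healthchecks, you can also make Compose wait for them before starting the app. This is a sketch using the long depends_on form; it's supported by the Compose v2 CLI and the compose specification, but the old docker-compose v1 tool with a version 3.x file won't honor it:

```yaml
services:
  nextcloud:
    depends_on:
      postgres:
        condition: service_healthy   # wait until pg_isready passes
      redis:
        condition: service_healthy   # wait until redis-cli ping passes
```

With this in place, the Nextcloud container won't start until both dependencies report healthy, which avoids "database connection refused" errors on first boot.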
Now deploy it:
docker-compose up -d
Check the logs:
docker-compose logs -f nextcloud
Wait 30-60 seconds for Nextcloud to initialize, then open http://localhost:8080 in your browser. You'll see the Nextcloud setup page. The database is already configured because the app and database containers are talking to each other over the compose network.
To see what's running:
docker-compose ps
To stop everything cleanly:
docker-compose down
To tear everything down including volumes (the nuclear option — this permanently deletes your data):
docker-compose down -v
Real-World Pattern: Multi-Stack Monitoring Setup
Most homelabs need monitoring. Here's a compose file for Prometheus, Grafana, and Node Exporter that I maintain across three machines:
version: '3.8'

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    restart: always
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
    networks:
      - monitoring

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    restart: always
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin123
      - GF_SECURITY_ADMIN_USER=admin
    volumes:
      - grafana_data:/var/lib/grafana
    depends_on:
      - prometheus
    networks:
      - monitoring

  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    restart: always
    ports:
      - "9100:9100"
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
    networks:
      - monitoring

volumes:
  prometheus_data:
    driver: local
  grafana_data:
    driver: local

networks:
  monitoring:
    driver: bridge
The beauty here is that Prometheus can scrape Node Exporter on port 9100 by hostname (they share the monitoring network, and prometheus.yml lists the target by service name), and Grafana connects to Prometheus at http://prometheus:9090. No hardcoded IPs, no manual networking.
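For reference, a minimal prometheus.yml matching this stack could look like the following — the job names and scrape interval are my choices, not requirements:

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node'
    static_configs:
      # Reachable by service name because both containers
      # sit on the same 'monitoring' compose network.
      - targets: ['node-exporter:9100']
```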
Environment Variables and .env Files
Never commit secrets to version control. Create a .env file in the same directory as docker-compose.yml:
# .env
NEXTCLOUD_ADMIN_PASSWORD=reallysecurepassword
POSTGRES_PASSWORD=anothersecurepassword
COMPOSE_PROJECT_NAME=nextcloud-prod
Then reference in docker-compose.yml:
environment:
  - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
  - NEXTCLOUD_ADMIN_PASSWORD=${NEXTCLOUD_ADMIN_PASSWORD}
Add .env to .gitignore if you're tracking this in git. Docker Compose reads .env automatically; no extra configuration needed.
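Compose's variable interpolation also supports defaults and required values, which helps catch a missing or incomplete .env early instead of starting with an empty password:

```yaml
environment:
  # Falls back to 'nextcloud' if POSTGRES_DB is unset in .env
  - POSTGRES_DB=${POSTGRES_DB:-nextcloud}
  # Aborts startup with this message if POSTGRES_PASSWORD is missing
  - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:?set it in .env}
```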
Updating Containers and Versioning
I prefer pinning image versions instead of using "latest." This prevents surprise breakage:
services:
  nextcloud:
    image: nextcloud:28.0.1      # specific version, not latest
  postgres:
    image: postgres:15.4-alpine  # specific version
When you want to upgrade, change the version and run:
docker-compose down
docker-compose pull
docker-compose up -d
For critical services, test upgrades on a spare machine or VPS first. A $40/year RackNerd VPS is perfect for staging your homelab changes before running them on production hardware.
Common Gotchas and Fixes
Port already in use: If port 8080 is taken, change the host side of the mapping: ports: - "8081:80". The first number is the host port; the second is the container port.
Containers can't reach each other: Make sure they're on the same network. Each service block needs the same entry under its networks: key, and that network must be defined in the top-level networks: section.
Volumes not persisting: Check ownership. If the app inside the container runs as a non-root user while the mounted files are owned by root, you'll hit permission errors. Use named volumes: and let Docker handle permissions, or fix ownership after first run: docker exec container_name chown -R appuser:appgroup /path
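Another way to sidestep ownership problems with bind mounts is the user: key, which runs the container process as your own UID/GID. This only works for images that don't insist on starting as root, so check the image docs first (the service and paths below are hypothetical):

```yaml
services:
  app:
    image: example/app:1.0   # hypothetical image
    user: "1000:1000"        # host UID:GID that owns the bind-mounted files
    volumes:
      - ./data:/data
```

Run id -u and id -g on the host to find the right values.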
Compose file syntax errors: YAML is picky about indentation. Use a linter: yamllint docker-compose.yml. Or just paste it into an online YAML validator.
Next Steps: Scale Your Homelab
Once you're comfortable with Docker Compose, you can:
- Use Portainer as a web UI to manage compose stacks visually.
- Add Watchtower to automatically update container images on a schedule.
- Organize multiple compose files in a git repo so your entire homelab is version-controlled and reproducible.
- Integrate with a reverse proxy (Caddy, Traefik) to expose services securely over HTTPS with a single domain.
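As a starting point for that last item, a Caddy service can join an existing stack like this — a sketch, where the Caddyfile contents and the network name are assumptions you'd adapt to your setup:

```yaml
services:
  caddy:
    image: caddy:latest
    container_name: caddy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data   # persists TLS certificates across restarts
    networks:
      - nextcloud_net

volumes:
  caddy_data:

networks:
  nextcloud_net:
    external: true   # join the network the Nextcloud stack already created
```

Inside the Caddyfile you can then proxy to services by name (e.g. reverse_proxy nextcloud-app:80) instead of publishing their ports on the host.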
Docker Compose is the foundation of everything I run. Once you write your first compose file, you'll never go back to managing containers individually. Keep your compose files simple, use version control, and your homelab becomes bulletproof.