Docker Compose for Multi-Container Applications: Complete Guide
Docker Compose transforms the chaos of managing multiple containers into something actually manageable. When I first started self-hosting, I'd spin up containers one by one with long command lines, environment variables scattered everywhere, and no way to reproducibly restart the whole stack. Docker Compose changed that. It's the difference between remembering to pass three network flags, two volume mounts, and four environment variables versus writing it once in YAML and never thinking about it again.
What Docker Compose Actually Does
Docker Compose isn't magic—it's a declarative orchestration tool that reads a YAML file and turns it into running containers. You define your entire stack—web app, database, cache, reverse proxy—in one file, and Compose handles networking, volume mounting, environment variables, and startup order. When something breaks, you're not hunting through bash history; you're reading the same file you wrote three months ago.
I prefer Docker Compose for homelab and small VPS deployments (you can get a decent VPS for around $40/year from providers like RackNerd, which gives you a real machine to host your stack). For anything larger, I'd eventually migrate to Kubernetes, but Compose scales well enough for most self-hosted workloads: a Nextcloud instance, Vaultwarden, Jellyfin, monitoring stack, reverse proxy—all orchestrated together.
Setting Up Your First Docker Compose File
The foundation is a docker-compose.yml file. You'll see a version field in many examples—modern Compose (v2) actually ignores it—but '3.8' is a safe choice if your tooling still expects one. Here's a real-world example: a Nextcloud stack with MariaDB, Redis cache, and Nginx reverse proxy.
```yaml
version: '3.8'

services:
  nextcloud:
    image: nextcloud:latest
    container_name: nextcloud
    restart: unless-stopped
    ports:
      - "8080:80"
    environment:
      - MYSQL_HOST=db
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=secure_password_here
      - MYSQL_DATABASE=nextcloud
      - REDIS_HOST=redis
    volumes:
      - nextcloud_data:/var/www/html
      - ./config:/var/www/html/config
    depends_on:
      - db
      - redis
    networks:
      - internal

  db:
    image: mariadb:latest
    container_name: nextcloud_db
    restart: unless-stopped
    environment:
      - MYSQL_ROOT_PASSWORD=root_password_here
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=secure_password_here
    volumes:
      - db_data:/var/lib/mysql
    networks:
      - internal

  redis:
    image: redis:7-alpine
    container_name: nextcloud_cache
    restart: unless-stopped
    networks:
      - internal

  nginx:
    image: nginx:alpine
    container_name: nextcloud_proxy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/etc/nginx/certs:ro
    depends_on:
      - nextcloud
    networks:
      - internal

volumes:
  nextcloud_data:
  db_data:

networks:
  internal:
    driver: bridge
```
One warning: don't hardcode passwords like secure_password_here in the compose file. Use a .env file instead: create a file called .env in the same directory, add DB_PASSWORD=your_secret, then reference it in the compose file as ${DB_PASSWORD}. Add .env to .gitignore if you're version-controlling this.
Networking: How Containers Talk to Each Other
One of Compose's best features is automatic networking. Every service on a shared network gets a DNS entry matching its service name, so Nextcloud can reach the database at hostname db—that's why the compose file sets MYSQL_HOST=db. I don't need to know the container's IP address or pass it manually. (Note that depends_on only controls startup order; it's the shared network that makes the name resolvable.)
In the example above, I created an explicit internal network and attached every service to it. For tighter isolation you can split the stack across two networks: put the reverse proxy on a public-facing frontend network and your database only on a private backend network—attackers can't reach the database even if they somehow compromise the proxy.
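A sketch of that frontend/backend split, with the rest of each service definition elided (the network names here are my own, not from the stack above):

```yaml
services:
  nginx:
    networks:       # the proxy bridges both networks
      - frontend
      - backend
  nextcloud:
    networks:
      - backend
  db:
    networks:       # no published ports, backend only
      - backend

networks:
  frontend:
  backend:
    internal: true  # Compose blocks external connectivity on internal networks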
The ports section publishes ports to your host machine. Notice Nextcloud runs on port 8080 but the Nginx proxy is on 80/443—that's intentional. You'd access Nextcloud through Nginx, not directly; in a hardened setup you'd drop Nextcloud's ports mapping entirely so it's reachable only over the internal network. This is how you layer security and reverse proxy logic onto your stack.
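The mounted nginx.conf for that proxy might look something like this—a minimal sketch assuming plain HTTP on port 80, with the TLS server block (using the mounted certs) omitted, and a hypothetical domain name:

```nginx
events {}

http {
    server {
        listen 80;
        server_name nextcloud.example.com;  # hypothetical domain

        location / {
            # "nextcloud" resolves via Compose's network DNS
            proxy_pass http://nextcloud:80;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
```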
Volumes: Persistence and Data Management
Containers are ephemeral. When a container stops, everything in its root filesystem goes away. Volumes persist data. In my example, I have two named volumes: nextcloud_data stores the actual files, and db_data stores the database. Compose creates these automatically on first run and reattaches them on subsequent runs.
I also use bind mounts for configuration files: ./config:/var/www/html/config mounts a local directory into the container. This lets me edit Nextcloud's configuration without entering the container or rebuilding images. That's critical for rapid iteration in a homelab.
When you run docker compose down, volumes persist by default. The data survives. This is what you want—you can destroy and recreate your containers without losing anything. But be careful: if you run docker compose down --volumes, those named volumes disappear forever. I learned that the hard way once.
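If you want insurance against that mistake, you can declare a critical volume as external—Compose never deletes external volumes, even with --volumes. You create it once yourself with docker volume create db_data:

```yaml
volumes:
  nextcloud_data:
  db_data:
    external: true  # created manually; survives `docker compose down --volumes`
```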
Environment Variables and Configuration
I already mentioned the .env file, but let me show the pattern more clearly:
```
# .env file
NEXTCLOUD_ADMIN_USER=admin
NEXTCLOUD_ADMIN_PASSWORD=super_secret_password
MYSQL_ROOT_PASSWORD=root_secret
MYSQL_PASSWORD=nextcloud_secret
REDIS_PASSWORD=redis_secret
TIMEZONE=America/Denver
DOMAIN=nextcloud.example.com
```
Then in your compose file, reference them:
```yaml
environment:
  - NEXTCLOUD_ADMIN_USER=${NEXTCLOUD_ADMIN_USER}
  - NEXTCLOUD_ADMIN_PASSWORD=${NEXTCLOUD_ADMIN_PASSWORD}
  - TZ=${TIMEZONE}
```
Run docker compose config to see the final resolved file with all variables substituted. This is invaluable for debugging "why isn't my container starting?"
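Compose's substitution syntax also supports defaults and required-variable errors, which lets you fail fast on missing secrets instead of starting with an empty password:

```yaml
environment:
  - TZ=${TIMEZONE:-UTC}                           # fall back to UTC if unset
  - MYSQL_PASSWORD=${MYSQL_PASSWORD:?must be set} # abort with an error if missing
```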
Startup Order and Dependencies
The depends_on directive ensures containers start in the right order. But here's the gotcha: Compose waits for the container to start, not for the service inside to be ready. Your Nextcloud container might start before MariaDB has actually initialized the database. The connection fails, and Nextcloud crashes.
The proper fix is to add a health check to your database and make dependent services wait for it. Or use a simple entrypoint script that retries the connection. I prefer the health check approach:
```yaml
db:
  image: mariadb:latest
  healthcheck:
    test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
    interval: 10s
    timeout: 5s
    retries: 5
    start_period: 40s
```
Then in the Nextcloud service:
```yaml
depends_on:
  db:
    condition: service_healthy
```
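The retry-script alternative mentioned above can be as small as one shell function—a sketch, where the commented-out mysqladmin line stands in for the real readiness check you'd run in an entrypoint:

```shell
#!/bin/sh
# wait_for: run a readiness check up to N times, pausing between attempts.
wait_for() {
  retries=$1; delay=$2; shift 2
  attempt=0
  until "$@"; do
    attempt=$((attempt + 1))
    if [ "$attempt" -ge "$retries" ]; then
      echo "gave up after $retries attempts" >&2
      return 1
    fi
    sleep "$delay"
  done
}

# In a real entrypoint (hypothetical check command):
# wait_for 30 2 mysqladmin ping -h db --silent && exec "$@"

# Demo with a check that succeeds immediately:
wait_for 3 0 true && echo "ready"
```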
Common Operations and Useful Commands
Once your compose file is solid, you'll run these commands repeatedly:
- `docker compose up -d` — Start all services in detached mode (background). The first time, it pulls images, creates volumes, and starts everything. Subsequent runs are near-instantaneous if images are cached.
- `docker compose ps` — Show running containers, their status, and mapped ports. When something isn't working, this is your first diagnostic step.
- `docker compose logs -f nextcloud` — Stream logs from the Nextcloud service. The `-f` flag follows new logs in real time. Invaluable for debugging startup issues or application errors.
- `docker compose exec db mysql -u nextcloud -p nextcloud` — Execute a command inside a running container. Here, I'm opening a MySQL shell. No need to shell into the container manually; Compose handles it.
- `docker compose down` — Stop and remove all containers (but preserve volumes). Use this when you're done or need to restart from a clean slate.
- `docker compose pull && docker compose up -d` — Update all images to their latest versions and restart. Run this periodically to pull security updates, though you might want to pin specific versions in production.
Scaling and Production Considerations
Docker Compose scales to about 10–20 services before it becomes unwieldy. Beyond that, you're looking at Kubernetes or other orchestrators. For a homelab, that's rarely a problem. Most people run 5–10 services: a reverse proxy, monitoring stack, file storage, maybe a media server, and a few small apps.
For production on a VPS, apply these patterns: always use explicit service names (no `container_name` in prod, let Compose name them), pin image versions (not `latest`), use separate compose files for different environments (docker-compose.prod.yml), and back up your volumes regularly. If your VPS goes down, you want to restore from backup, not rebuild everything from scratch.
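The environment-split pattern looks like this in practice—a sketch with hypothetical pinned tags:

```yaml
# docker-compose.prod.yml — merged on top of the base file
services:
  nextcloud:
    image: nextcloud:29.0.0  # hypothetical pinned tag
  db:
    image: mariadb:11.4      # hypothetical pinned tag
```

Run it with docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d; later files override matching keys in earlier ones.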
I also recommend using Watchtower to automate container updates, but that's a separate discussion. For now, know that docker compose pull is your manual update path.
Debugging and Troubleshooting
When a service won't start, run docker compose logs service_name. The error messages are usually clear: "connection refused" means the database isn't ready yet, "no such image" means the image tag doesn't exist, "port already in use" means something else is on that port.
If you need to test connectivity between containers, many images don't ship ping or curl, so running docker compose exec against the service itself often won't help. Instead, temporarily add a `debug` service that sleeps forever and carries basic tools:
```yaml
debug:
  image: busybox
  command: sleep infinity
  networks:
    - internal  # join the same network as the services you're testing
```
Then docker compose exec debug ping db to test network connectivity without modifying your actual services.
Next Steps: From Compose to Production
Once you're comfortable with Docker Compose, you're ready to deploy real services. Many of CompactHost's tutorials use Compose as the foundation—Nextcloud, Vaultwarden, Jellyfin, Immich, all deployed the same way. Master this pattern, and you've unlocked self-hosting.
Start with something simple: a reverse proxy, one app, and a database. Get comfortable with the commands, understand the networking, and practice recovering from accidental docker compose down --volumes. Then add complexity gradually. Your future self will thank you when you're debugging a production issue at 2 AM and can read your entire stack from a single well-organized file.