Docker Networking: Connecting Containers Securely in Your Homelab
When I first started running multiple containers in my homelab, I made a common mistake: I threw everything on the default bridge network and exposed ports everywhere. Containers could reach each other indiscriminately, and anyone with network access could probe my services. Then I learned about Docker's custom networking, and it changed how I think about container security. In this guide, I'll show you how to architect secure container networks, isolate sensitive workloads, and implement proper inter-container communication patterns.
Why Docker's Default Bridge Network Isn't Secure Enough
Docker creates a default bridge network automatically. All containers connected to it can reach each other by IP address, but name-based discovery doesn't work there: Docker's embedded DNS only serves user-defined networks, so on the default bridge you're left with hardcoded IPs or the deprecated --link flag. This matters in a homelab where you're running Nextcloud, Jellyfin, Vaultwarden, and a dozen other apps.
The default network has two real problems: first, there's no isolation between unrelated services, so a compromised web app can move laterally to your database. Second, every container gets a dynamic IP that can change on restart, making hardcoded connections fragile. I started using user-defined bridge networks, and suddenly I had both security and stability.
When you create your own bridge networks, Docker's embedded DNS server kicks in automatically. This means you can reference containers by name: an app configured with POSTGRES_HOST=postgres finds its database without any links configuration. Equally important, containers on different networks cannot reach each other at all unless you explicitly connect them.
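You can see this behavior in a quick experiment (the network and container names here are arbitrary placeholders):

```shell
# Create a user-defined bridge network
docker network create demo_net

# Start a container named "web" on it
docker run -d --rm --name web --network demo_net nginx:alpine

# A second container on the same network resolves "web" by name
# via Docker's embedded DNS (127.0.0.11 inside the container)
docker run --rm --network demo_net busybox nslookup web

# Clean up
docker stop web
docker network rm demo_net
```

Run the same nslookup against two containers on the default bridge and it fails, which is exactly the difference described above.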
Creating Segmented Networks for Your Homelab
I structure my homelab with three main networks: one for front-end services (reverse proxies, web apps), one for stateful backends (databases, caches), and one for utilities (backups, monitoring). This way, even if my web app is compromised, it cannot talk directly to my database.
Here's a practical docker-compose setup using multiple networks:
version: '3.8'

services:
  # Frontend tier
  caddy:
    image: caddy:2.7-alpine
    container_name: caddy_proxy
    networks:
      - frontend
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    restart: unless-stopped

  nextcloud:
    image: nextcloud:28-apache
    container_name: nextcloud_app
    networks:
      - frontend
      - backend
    depends_on:
      - postgres
    environment:
      POSTGRES_HOST: postgres
      POSTGRES_DB: nextcloud
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: ${DB_PASS}
    restart: unless-stopped

  # Backend tier
  postgres:
    image: postgres:16-alpine
    container_name: postgres_db
    networks:
      - backend
    environment:
      POSTGRES_PASSWORD: ${DB_PASS}
      POSTGRES_USER: nextcloud
      POSTGRES_DB: nextcloud
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    container_name: redis_cache
    networks:
      - backend
    restart: unless-stopped

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge

volumes:
  caddy_data:
  caddy_config:
  postgres_data:
In this setup, Caddy and Nextcloud live on the frontend network. Nextcloud also connects to backend so it can reach Postgres and Redis, but Caddy cannot. This means if your web app is exploited, the attacker cannot directly query your database—they'd need to compromise the app first and steal credentials.
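You can verify the segmentation directly once the stack is up (container names match the compose file; this assumes netcat is present in the images, which is true for Alpine-based ones but may need installing in Debian-based images like nextcloud):

```shell
# Caddy is only on "frontend", so it cannot reach postgres at all
docker exec caddy_proxy nc -zv -w 3 postgres 5432 || echo "blocked, as intended"

# Nextcloud sits on both networks, so the same check should succeed
docker exec nextcloud_app nc -zv -w 3 postgres 5432 || echo "install netcat in the image first"
```

The first command should fail with a bad-address or timeout error; that failure is your isolation working.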
Note the ${DB_PASS} variable: keep secrets like this in a .env file (never commit it to git) and reference it with docker-compose --env-file .env up -d. This prevents hardcoding secrets in your compose files.

Advanced: Overlay Networks and Host Access Control
For homelabs spanning multiple machines (or if you plan to scale later), overlay networks provide encrypted communication between hosts. I use this when running a cluster of services across my main server and a NUC in the corner.
First, initialize Docker Swarm mode on your primary host:
docker swarm init
# Output includes a join token. On secondary hosts, run:
docker swarm join --token SWMTKN-1-... 192.168.1.100:2377
Then create an overlay network:
docker network create \
  --driver overlay \
  --opt encrypted \
  --subnet 10.0.9.0/24 \
  secure_backend
# Verify it exists and is encrypted
docker network ls
docker network inspect secure_backend
The --opt encrypted flag enables IPSec encryption between nodes. Any service connected to this overlay network can communicate across your physical machines while remaining encrypted. In production homelabs, I use this for database replication and backup traffic.
DNS and Service Discovery Without Links
One breakthrough for me was realizing I didn't need the deprecated links syntax. Docker's embedded DNS resolver handles everything when containers are on the same custom network. Here's what works:
- A container with container_name: postgres on a custom network → accessible as postgres from any container on that network
- Service names in compose → accessible as the service name (e.g., redis if your service is named redis:)
- Network aliases → multiple DNS names for the same container
To give a container multiple DNS names, use aliases:
services:
  postgres:
    image: postgres:16-alpine
    networks:
      backend:
        aliases:
          - db
          - database
          - postgres-primary
Now your app can reach this container using db, database, or postgres-primary—all resolve to the same container without restarts.
Restricting Traffic: Inter-Container Communication and Host Firewalls
Custom networks give you lateral isolation, but by default, containers can still reach the outside world (and each other's exposed ports). To lock this down further, I use UFW on the host and a setting in the Docker daemon's configuration.

First, disable unrestricted inter-container communication on the default bridge. This is controlled by a daemon-level flag. Edit /etc/docker/daemon.json:
{
  "icc": false,
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 2048,
      "Soft": 1024
    }
  }
}
Setting "icc": false disables inter-container communication on the default bridge. Containers that share a user-defined network can still reach each other, which is exactly what the segmented setup above relies on. Reload Docker:
sudo systemctl restart docker
On the host level, use UFW to restrict container access. I allow only Caddy's ports (80, 443) inbound, and block everything else:
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
This ensures only Caddy's endpoints are reachable from outside. One caveat: Docker inserts its own iptables rules ahead of UFW's chains, so any port you publish with -p is reachable regardless of UFW. Keep backend services unpublished (or bind them to 127.0.0.1) so they're only accessible through the reverse proxy.
Monitoring Container Networking with netcat and tcpdump
When debugging connectivity, I use a few quick commands. First, verify DNS resolution inside a container:
docker exec -it nextcloud_app nslookup postgres
# Should return the postgres container's IP on the backend network
docker exec -it nextcloud_app ping -c 3 redis
# Should get replies if nextcloud is on the backend network
# (slim images may lack nslookup/ping; getent hosts postgres is a common fallback)
To check which networks a container is connected to:
docker inspect nextcloud_app | jq '.[0].NetworkSettings.Networks'
# docker inspect emits a JSON array, hence the .[0]
# Output shows all networks and their IP addresses
If a container can't reach another, check that they're on the same network, and verify no firewall rules are blocking it:
docker exec -it nextcloud_app nc -zv postgres 5432
# nc = netcat; -z = scan, -v = verbose
# Should say "succeeded" if the port is open
Best Practices I Follow in My Homelab
Principle of least privilege: Every container gets only the networks it needs. Caddy touches frontend and maybe a monitoring network. Databases touch only backend and never the internet.
Secrets management: Never hardcode API keys or passwords. Use environment files, Docker secrets (in Swarm mode), or a tool like Vault for serious setups.
Read-only filesystems where possible: Add read_only: true to your compose file for containers that don't need to write. This hardens them against injection attacks.
Non-root users: Most official images run as root. Use user: 1000:1000 in compose to drop privileges. This limits damage if a container is compromised.
Restart policies: Set restart: unless-stopped so your stack survives reboots, but you can still stop services manually for maintenance.
Next Steps
Now that your containers are isolated and communicating securely over custom networks, the natural next step is to implement a reverse proxy (Caddy or Traefik) to manage TLS certificates and route traffic. I've covered this in detail in my Traefik and SSL setup guide.
If you're running this on a VPS rather than a local homelab, check out RackNerd's KVM VPS options—they're affordable enough to host your entire Docker stack with room to experiment. I've tested them for self-hosting workloads, and the network isolation features I've described here work identically.
Finally, layer in monitoring: tools like Prometheus and Grafana (also containerized) give you visibility into container behavior and network usage. That's a conversation for another tutorial, but the foundation you build here makes monitoring trivial to add.