Docker Compose Networking Deep Dive: Bridging, Host, and Overlay Networks Explained
Docker networking is one of those topics that seems simple until you have four services refusing to talk to each other at 11pm. I've been there — Nginx returning 502s, a database that's "up" but unreachable, and a compose file that I swear worked yesterday. Once you actually understand what's happening under the hood with bridge, host, and overlay networks, everything clicks into place and those midnight debugging sessions become much shorter.
In this tutorial I'm going to walk through each Docker network driver that matters for self-hosting — custom bridge networks, host mode, and overlay for multi-host setups — with real compose file examples, the specific commands I use to inspect and debug them, and the gotchas that cost me hours before I figured them out.
The Default Bridge Network: Why You Should Stop Using It
First, a clarification that trips a lot of people up: the no-DNS limitation belongs to Docker's built-in default bridge — the network a plain docker run lands on when you pass no --network flag. On that network, containers cannot resolve each other by name; they can only communicate via IP address, which is dynamically assigned and changes on restart. Compose is smarter: docker compose up with no networks defined auto-creates a project-wide network that does provide DNS. But that single flat network means every service can reach every other service, with no segmentation and no control over subnets.
I used to work around the default bridge's limitations by hardcoding IPs or using the long-deprecated --link flag. Don't do that. The right answer is to define custom bridge networks in your compose file. Custom bridges give you automatic DNS resolution — a container named app can reach a container named db just by using the hostname db — plus explicit control over which services can see each other.
Here's a practical example: a web app with Nginx, a Python API, and a Postgres database, all on a named custom bridge:
networks:
  backend:
    driver: bridge
  frontend:
    driver: bridge

services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    networks:
      - frontend
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro

  api:
    image: myapp/api:latest
    networks:
      - frontend
      - backend
    environment:
      - DB_HOST=postgres
      - DB_PORT=5432
    depends_on:
      - postgres

  postgres:
    image: postgres:16
    networks:
      - backend
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=appdb
      - POSTGRES_USER=appuser
      - POSTGRES_PASSWORD_FILE=/run/secrets/db_password
    secrets:
      - db_password

volumes:
  pgdata:

secrets:
  db_password:
    file: ./secrets/db_password.txt
Notice that Nginx and the API share the frontend network, the API and Postgres share the backend network, and Postgres has no access to the frontend network at all. This is proper network segmentation — your database is completely unreachable from the Nginx layer. The API can reach Postgres at the hostname postgres because that's its service name. This is one of my favourite features of custom bridges.
Run docker network inspect <network_name> to see which containers are attached to a network, their assigned IPs, and the subnet in use. This is the first command I run when debugging connectivity issues. Pair it with docker compose exec api ping postgres to verify DNS resolution is working from inside a running container.
Configuring Subnet and Gateway for Custom Bridge Networks
By default Docker picks subnets from the 172.16.0.0/12 range. This usually works fine, but in a homelab with real VLANs or a VPN like Tailscale or WireGuard, you can end up with subnet collisions that kill routing. I hit this exact issue when my WireGuard tunnel started dropping traffic because Docker had claimed 172.20.0.0/16 — the same block my VPN used.
You can fix this by explicitly defining your subnets in the compose file:
networks:
  backend:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 10.10.1.0/24
          gateway: 10.10.1.1
  frontend:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 10.10.2.0/24
          gateway: 10.10.2.1
Pick subnets that don't overlap with your LAN (192.168.x.x typically), your VPN, or other Docker compose stacks. I've standardised on 10.10.x.0/24 blocks for my homelab Docker networks, incrementing the third octet per project. It makes docker network ls output much easier to read at a glance.
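If you want to sanity-check a subnet plan before deploying, Python's standard ipaddress module can flag collisions. Here's a small sketch I'd run locally — the specific networks in the dictionary are a hypothetical homelab layout, not anything Docker generates:

```python
# Check a set of planned Docker subnets against LAN/VPN ranges for overlaps.
# The specific networks below are hypothetical examples — swap in your own.
from ipaddress import ip_network
from itertools import combinations

planned = {
    "lan": ip_network("192.168.1.0/24"),
    "wireguard": ip_network("172.20.0.0/16"),
    "compose_backend": ip_network("10.10.1.0/24"),
    "compose_frontend": ip_network("10.10.2.0/24"),
}

def find_overlaps(nets):
    """Return pairs of names whose subnets overlap."""
    return [
        (a, b)
        for (a, na), (b, nb) in combinations(nets.items(), 2)
        if na.overlaps(nb)
    ]

if __name__ == "__main__":
    for a, b in find_overlaps(planned):
        print(f"collision: {a} overlaps {b}")
```

If Docker had claimed a block inside your VPN range — like the 172.20.0.0/16 collision described above — this prints it immediately, which beats discovering it when the tunnel drops traffic.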
You can also set internal: true on a network to prevent any external traffic from reaching containers on it at all — useful for pure backend networks that should never accept inbound connections:
networks:
  backend:
    driver: bridge
    internal: true
    ipam:
      driver: default
      config:
        - subnet: 10.10.1.0/24
With internal: true, containers on that network can still talk to each other, but they have no internet access and no external port mapping can reach them. I use this for Redis caches, internal message queues, and databases.
Host Network Mode: Maximum Performance, Minimum Isolation
The host network driver removes all network namespace isolation between the container and the host. The container shares the host's network stack directly — same IP, same ports, no NAT. This eliminates the performance overhead of bridge networking and is useful for specific workloads.
I use host mode in two situations: when I'm running something performance-sensitive that hammers the network (like Ollama serving a local LLM to multiple clients), or when a service needs to discover other hosts via broadcast/multicast (like some IoT or UPnP scenarios).
services:
  ollama:
    image: ollama/ollama:latest
    network_mode: host
    volumes:
      - ollama_models:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

volumes:
  ollama_models:
With host networking, Ollama listens on port 11434 directly on the host — no port mapping needed or possible. You access it at http://localhost:11434 or from your LAN at http://your-host-ip:11434.
Also be aware that with host mode, port conflicts become your problem. If you have anything else on the host listening on port 11434, your container will fail to bind. With bridge networking, Docker handles the NAT and you only need to worry about the published host port — a much smaller surface area.
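Before switching a service to host mode, it's worth confirming the port is actually free on the host. ss -tlnp will show you what's listening, but here's a quick cross-check as a sketch using Python's socket module — port_is_free is my own helper name, and 11434 is Ollama's default port:

```python
# Check whether a TCP port is free to bind on the host.
# port_is_free is a hypothetical helper; 11434 is Ollama's default port.
import socket

def port_is_free(port: int, host: str = "0.0.0.0") -> bool:
    """Attempt to bind the port; True means the bind succeeded (port is free)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

if __name__ == "__main__":
    print("11434 free:", port_is_free(11434))
```

If this reports the port as taken, find and stop the conflicting process before starting the container — with host networking there is no NAT layer to remap around it.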
Overlay Networks: Multi-Host Communication with Docker Swarm
Overlay networks span multiple Docker hosts. They're the networking layer for Docker Swarm and let containers on different physical or virtual machines communicate as if they were on the same network. The driver encapsulates traffic using VXLAN tunnels over port 4789/UDP, with Swarm management traffic on 2377/TCP and node discovery on 7946/TCP+UDP.
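If there's a firewall between your nodes, those ports must be open before the overlay will pass traffic — silent VXLAN drops are a classic "services deploy but can't talk" symptom. As a host-configuration sketch, assuming Ubuntu's ufw:

```shell
# Open Swarm's required ports between cluster nodes (ufw assumed).
sudo ufw allow 2377/tcp   # Swarm cluster management
sudo ufw allow 7946/tcp   # node discovery
sudo ufw allow 7946/udp   # node discovery
sudo ufw allow 4789/udp   # VXLAN overlay data plane
```

Ideally scope these rules to your node subnet rather than opening them to the world.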
Overlay networks require either Docker Swarm mode or an external key-value store (older approach). With Swarm, you initialise the manager node and then workers join. Your compose file becomes a stack file deployed with docker stack deploy:
# First, on your manager node:
# docker swarm init --advertise-addr YOUR_HOST_IP
# Then on worker nodes (use the token from swarm init output):
# docker swarm join --token SWMTKN-... MANAGER_IP:2377

# stack.yml — deploy with: docker stack deploy -c stack.yml myapp
version: "3.9"

networks:
  app_overlay:
    driver: overlay
    attachable: true
    ipam:
      config:
        - subnet: 10.20.0.0/24

services:
  web:
    image: nginx:alpine
    ports:
      - target: 80
        published: 80
        protocol: tcp
        mode: ingress
    networks:
      - app_overlay
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure

  api:
    image: myapp/api:latest
    networks:
      - app_overlay
    deploy:
      replicas: 2
    environment:
      - DB_HOST=postgres
      - DB_PORT=5432

  postgres:
    image: postgres:16
    networks:
      - app_overlay
    volumes:
      - pgdata:/var/lib/postgresql/data
    deploy:
      placement:
        constraints:
          - node.role == manager
    environment:
      - POSTGRES_DB=appdb
      - POSTGRES_PASSWORD_FILE=/run/secrets/db_password
    secrets:
      - db_password

volumes:
  pgdata:

secrets:
  db_password:
    external: true  # create first with: docker secret create db_password -
The attachable: true flag on the overlay network lets standalone containers (not just swarm services) attach to it, which is useful for one-off debugging containers. The mode: ingress on the web port uses Swarm's built-in load balancer — any node in the cluster will accept traffic on port 80 and route it to one of the three web replicas, regardless of which node they're running on.
For most single-server homelab setups, you won't need overlay networks. But if you're running two or three nodes — say, a Hetzner Dedicated server plus a couple of DigitalOcean Droplets for redundancy — overlay is the right tool. It genuinely is as simple as it looks once Swarm is initialised.
Connecting Services Across Separate Compose Stacks
One situation that trips people up is connecting services from two different compose projects — for example, a reverse proxy stack and an app stack. By default, compose projects are isolated. The fix is to create an external network that both stacks attach to.
# Create the shared network once (or have one stack create it):
# docker network create proxy_net

# In your Caddy/Traefik proxy stack (docker-compose.yml):
networks:
  proxy_net:
    external: true

services:
  caddy:
    image: caddy:alpine
    ports:
      - "80:80"
      - "443:443"
    networks:
      - proxy_net
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data

volumes:
  caddy_data:

---
# In your app stack (app/docker-compose.yml):
networks:
  proxy_net:
    external: true
  internal:
    driver: bridge
    internal: true

services:
  app:
    image: myapp:latest
    networks:
      - proxy_net
      - internal

  redis:
    image: redis:alpine
    networks:
      - internal
The app container is reachable by Caddy using its container name as the upstream hostname. Redis stays on the internal network only — completely invisible to the proxy. I use this pattern for everything: one long-running reverse proxy stack, and separate stacks per application that join the shared proxy network only for the frontend service.
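For completeness, here's what the Caddy side of that pattern might look like — a minimal Caddyfile sketch, assuming the app service listens on port 3000 and is served at a hypothetical app.example.com:

```caddyfile
# Hypothetical domain and upstream port — adjust to your app.
app.example.com {
    # "app" resolves via Docker DNS on the shared proxy_net network
    reverse_proxy app:3000
}
```

Because both containers sit on proxy_net, no ports need to be published on the app stack at all — only Caddy touches the host's 80 and 443.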
Quick Debugging Reference
Here are the commands I actually use when something stops working:
# List all networks
docker network ls
# Inspect a network — shows connected containers and their IPs
docker network inspect myproject_backend
# Test DNS resolution from inside a container
docker compose exec api nslookup postgres
docker compose exec api ping -c 3 postgres
# Check if a port is reachable from one container to another
docker compose exec api nc -zv postgres 5432
# See which networks a specific container is connected to
# (Compose v2 names containers like myproject-api-1; v1 used myproject_api_1)
docker inspect myproject-api-1 | grep -A 20 '"Networks"'
# Temporarily attach a debug container to a network
docker run --rm -it --network myproject_backend nicolaka/netshoot
The nicolaka/netshoot image is indispensable — it's a container packed with networking tools (curl, dig, nmap, tcpdump, iperf3) that you can drop onto any network for live debugging without modifying your production containers.
Putting It All Together
To summarise the decision tree I use: custom bridge networks for everything on a single host — they give you DNS resolution, isolation, and subnet control. Host mode only when you have a specific performance or broadcast requirement and understand the security trade-offs. Overlay when you're genuinely spanning multiple hosts with Docker Swarm.
The biggest improvement you can make right now if you're not already doing this: stop relying on the default bridge and start naming all your networks explicitly in every compose file. Add internal: true to any network that shouldn't have internet access. This single change will make your stack significantly more secure and easier to reason about.
Next steps: take one of your existing compose stacks and audit the network configuration. Add explicit named networks if you haven't, separate your frontend and backend tiers, and mark database networks as internal. Then explore multi-tier networking for separating IoT, services, and management traffic — the same principles apply at the VLAN level for full homelab segmentation.