Docker Networking Fundamentals: Container Communication and DNS Resolution
When I first started self-hosting services in Docker, I made a rookie mistake: I couldn't figure out why my application containers couldn't talk to each other. I'd hardcoded IP addresses, used `localhost`, and spent hours debugging what turned out to be a fundamental misunderstanding of how Docker networking actually works. Once I grasped how Docker DNS and bridge networks function, everything clicked. Today I'm going to walk you through the networking concepts that'll save you from the same frustration, whether you're running a homelab on an old laptop or spinning up services on a budget VPS like RackNerd's $40/year offerings.
Why Docker Networking Matters for Your Homelab
Docker networking is the plumbing that lets your containers find and communicate with each other. Without understanding it, you're left guessing why a web app can't reach its database, or why service names don't resolve. The good news: Docker's networking model is elegant once you understand the layers.
I prefer Docker's bridge networking with custom (user-defined) networks for small homelabs because it's simple, reliable, and requires minimal configuration. On a custom bridge network you get automatic DNS resolution by container name out of the box, which is incredibly powerful. If your web frontend is in a container called `app` and your database is called `db`, they can talk using `http://app:3000` or `postgres://db:5432` without any manual DNS records.
Docker's Three Core Network Drivers
Docker ships with three built-in network drivers you should know:
Bridge networks are the default. When you run a container, Docker connects it to a private bridge network where it can reach other containers on the same network by name. Each container gets its own IP address on the bridge, and Docker's embedded DNS server handles name resolution.
Host networks bypass Docker's networking entirely. A container on the host network shares the host's network namespace, meaning it uses the host's IP address directly. I use this rarely — mainly for performance-critical monitoring tools or when I need to bind to privileged ports below 1024.
Overlay networks are for Docker Swarm (which I don't use on homelabs — Kubernetes or simple Docker Compose is better). They're useful for multi-host clusters but overkill for most self-hosted setups.
For 99% of homelab work, you'll use bridge networks, either the default `bridge` or custom named networks you create yourself.
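You can see which networks and drivers exist on a host with a quick listing. A fresh install ships `bridge`, `host`, and `none` (the last simply disables networking for a container entirely):

```shell
# List all Docker networks and their drivers on this host
docker network ls
```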
Default Bridge vs. Custom Bridge Networks
Here's a critical distinction that tripped me up early: the default `bridge` network and custom bridge networks behave differently.
The default `bridge` network does NOT have automatic DNS resolution by service name. If you run two containers on the default bridge and try to reach one by hostname, it'll fail. You'd have to hardcode IP addresses, which is fragile because container IPs change on restart.
Custom bridge networks, on the other hand, have Docker's embedded DNS server built in. If you create a network called `myapp-net` and attach both containers to it, they can reach each other by container name automatically. This is what you want.
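Here's a minimal command-line sketch of the custom-network behavior (the network and container names are illustrative):

```shell
# Create a user-defined bridge network
docker network create myapp-net

# Attach two containers to it by name
docker run -d --name db --network myapp-net \
  -e POSTGRES_PASSWORD=example postgres:15-alpine
docker run -d --name app --network myapp-net alpine:latest sleep infinity

# Inside "app", the name "db" resolves via Docker's embedded DNS
docker exec app ping -c 1 db
```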
Container DNS Resolution in Action
When a container tries to reach another container by name, here's what happens:
1. The container's `/etc/resolv.conf` points to Docker's embedded DNS server (usually 127.0.0.11:53)
2. Docker's DNS server looks up the container name in its internal registry
3. If the container exists on the same network, Docker returns its IP address
4. The requesting container connects to that IP
This only works if both containers are on the same custom bridge network. If they're on different networks, they can't see each other by name. To bridge that gap, you can attach a container to both networks with `docker network connect`, or expose ports and route the traffic through the host.
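You can observe step 1 directly: inside any container on a custom bridge network, the resolver file points at the embedded DNS server (the container name here is illustrative):

```shell
# Show the resolver config Docker injects into the container
docker exec myapp cat /etc/resolv.conf
# On a custom bridge network this typically contains:
#   nameserver 127.0.0.11
```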
Practical Docker Compose Example with Networking
Let me show you a real-world Docker Compose setup I use for a small homelab stack with Nginx, a Node app, and PostgreSQL. All three will communicate by service name:
```yaml
version: '3.8'

services:
  nginx:
    image: nginx:alpine
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    networks:
      - homelab-net
    restart: unless-stopped

  app:
    image: node:18-alpine
    container_name: myapp
    environment:
      - DATABASE_URL=postgres://postgres:secretpassword@db:5432/appdb
      - NODE_ENV=production
    volumes:
      - ./app:/app
    working_dir: /app
    command: npm start
    networks:
      - homelab-net
    restart: unless-stopped
    depends_on:
      - db

  db:
    image: postgres:15-alpine
    container_name: mydb
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=secretpassword
      - POSTGRES_DB=appdb
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    networks:
      - homelab-net
    restart: unless-stopped

networks:
  homelab-net:
    driver: bridge
```
Notice several things here:
The `networks` key under each service tells Docker which networks to attach the container to. All three services attach to `homelab-net`, a custom bridge network defined at the bottom.
The app's `DATABASE_URL` uses `db:5432` — that's the service name, not an IP. Docker's DNS resolves this automatically. I don't need to know or care what IP address the db container gets.
The `depends_on` directive under `app` tells Docker Compose to start the `db` service before the `app` service. This doesn't guarantee the database is ready to accept connections, but it ensures the container starts first.
When I run `docker-compose up`, Docker creates the `homelab-net` network, starts all three containers attached to it, and they can communicate by service name immediately.
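A quick way to confirm the DNS wiring is to run a throwaway container on the same network and resolve a service name (this assumes the stack above is running; note that Compose may prefix the network name with the project name, so check `docker network ls` for the exact name):

```shell
# Resolve the "db" service name from a one-off container on homelab-net
docker run --rm --network homelab-net alpine:latest nslookup db
```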
Debugging Container-to-Container Communication
When networking breaks, here's my diagnostic workflow:
First, verify the container is running and on the right network:
```shell
docker network inspect homelab-net
```
This shows all containers attached to the network and their IP addresses. If your container isn't listed, it's not on the network. (Note: Docker Compose prefixes network names with the project name by default, so the actual network may be called something like `myproject_homelab-net`; run `docker network ls` to find the exact name.)
Next, test DNS resolution from inside a container:

```shell
docker exec myapp ping -c 2 db
```

This pings the `db` container from inside the `myapp` container. If it works, you get replies. If DNS is broken, you'll see "bad address" or "Name or service not known," depending on the image. (Minimal images sometimes ship without `ping`; on Alpine-based images, `nslookup db` is a common fallback.)
If DNS works but the connection fails, test the port:
```shell
docker exec myapp nc -zv db 5432
```
The `-zv` flags tell `nc` (netcat) to attempt a connection without sending data. If the database is listening on port 5432, you'll see "open" or "succeeded," depending on the netcat variant. If the port is closed, the database isn't running or isn't listening on that port.
Check the container logs:
```shell
docker logs myapp
docker logs db
```
Look for error messages about failed connections, binding issues, or configuration problems.
Network Isolation and Security Considerations
One benefit of custom bridge networks that I initially underestimated: they provide isolation. A container on `homelab-net` cannot reach a container on a different network by name, even though both run on the same host. You can create separate networks for different application stacks to enforce isolation.
For example, I run my media stack (Jellyfin, Radarr, etc.) on a separate network from my productivity stack (Nextcloud, Gitea). If a vulnerability in Jellyfin is exploited, the attacker can't easily pivot to my Nextcloud instance because they're not on the same network.
I can still connect services across networks by exposing ports, but that requires explicit configuration. It's a small but real security boundary.
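When a single service genuinely needs to straddle two stacks, attaching it to both networks is the explicit escape hatch (the container and network names here are illustrative):

```shell
# Attach a running container to a second network; it keeps its original
# attachment and gains DNS resolution on both networks
docker network connect productivity-net shared-service

# Detach it again when the access is no longer needed
docker network disconnect productivity-net shared-service
```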
Real Homelab Scenario: Multi-App Setup
Here's how I'd structure a homelab with multiple independent applications, each with its own network:
```yaml
version: '3.8'

services:
  # Media stack
  jellyfin:
    image: jellyfin/jellyfin:latest
    container_name: jellyfin
    ports:
      - "8096:8096"
    volumes:
      - ./jellyfin-config:/config
      - /mnt/media:/media:ro
    networks:
      - media-net
    restart: unless-stopped

  # Productivity stack
  nextcloud:
    image: nextcloud:latest
    container_name: nextcloud
    ports:
      - "8080:80"
    environment:
      - MYSQL_HOST=nextcloud-db
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=securepassword
      - MYSQL_DATABASE=nextcloud
    volumes:
      - ./nextcloud:/var/www/html
    networks:
      - productivity-net
    restart: unless-stopped
    depends_on:
      - nextcloud-db

  nextcloud-db:
    image: mariadb:latest
    container_name: nextcloud-db
    environment:
      - MYSQL_ROOT_PASSWORD=rootpassword
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=securepassword
      - MYSQL_DATABASE=nextcloud
    volumes:
      - ./nextcloud-db:/var/lib/mysql
    networks:
      - productivity-net
    restart: unless-stopped

networks:
  media-net:
    driver: bridge
  productivity-net:
    driver: bridge
```
Each stack lives on its own network. Jellyfin can't reach the Nextcloud database because they're not connected. If I need to share data between stacks, I'd use bind mounts to shared host directories, not network communication.
Port Exposure and External Access
A container attached to a bridge network can talk to other containers on the same network without any port exposure. But if you want external traffic (from the host or the internet) to reach a service, you must use the `ports` directive in Docker Compose.
`ports: "8080:80"` means "listen on the host's port 8080, forward traffic to the container's port 80." The container's port 80 is still only reachable from inside the container and other containers on its network. The host port 8080 is what the outside world sees.
This is why reverse proxies like Nginx or Caddy are so powerful in homelabs: you expose the reverse proxy's port to the internet (ports 80 and 443), and the reverse proxy talks to backend services over the bridge network. The backend services never touch the internet directly.
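As a sketch of that pattern, the `nginx.conf` mounted in the earlier Compose example might look like this. I'm assuming the Node app listens on port 3000, as in the `http://app:3000` example earlier:

```nginx
events {}

http {
    server {
        listen 80;

        location / {
            # "app" is the Compose service name, resolved by Docker's
            # embedded DNS on the homelab-net bridge network
            proxy_pass http://app:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```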
Troubleshooting DNS: Common Gotchas
Gotcha 1: Using `localhost` inside containers. If your Node app tries to connect to `localhost:5432`, it's looking for a database on the same container. It won't find the database container. Use the service name instead: `db:5432`.
Gotcha 2: Containers on the default bridge network. If you run `docker run -d myservice` without specifying a network, Docker attaches it to the default bridge, which doesn't have DNS resolution. Always use `--network custom-net` or use Docker Compose, which handles this automatically.
Gotcha 3: Case sensitivity in DNS. Docker's DNS is case-insensitive, but your application might not be. Use lowercase service names to be safe.
Gotcha 4: Changing network addresses on restart. A container's IP address on a bridge network is not guaranteed to stay the same across restarts. This is why service names matter — they're stable even if IPs change. Never hardcode container IPs.
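If you suspect a container landed on the default bridge (Gotcha 2), you can print exactly which networks it's attached to (the container name is illustrative):

```shell
# List the network names a container is attached to
docker inspect myapp \
  --format '{{range $name, $_ := .NetworkSettings.Networks}}{{$name}} {{end}}'
```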
Next Steps: From Understanding to Implementation
Now that you understand how Docker networking and DNS work, here's what I'd recommend:
Step 1: Review any existing Docker Compose files you have. Check if all services are on the same custom network. If not, add a `networks` section and attach them.
Step 2: Test inter-container communication. Use `docker exec` and `ping` or `nc` to verify services can reach each other by name.
Step 3: Set up a simple multi-container stack (web app + database) and verify they communicate without hardcoded IPs.
Step 4: Once you're comfortable, implement network isolation: run separate application stacks on separate networks and see how it improves your mental model of your infrastructure.
If you're running this on a VPS and want to avoid the complexity of setting up everything from scratch, consider starting with a provider like RackNerd that offers reliable infrastructure without breaking the bank. Good networking knowledge translates from homelab to VPS — the principles are identical.