Securing Docker Containers with Network Policies and Firewalls
I learned the hard way that running Docker containers without proper network isolation is like leaving your front door open and hoping nobody walks in. When I first deployed my homelab with Jellyfin, Nextcloud, and a few database containers all on the default bridge network, I realized any compromised container could reach every other service. This article walks you through the exact isolation strategies I now use—from Docker's native networking to host-level firewall rules—so you don't repeat my mistakes.
Understanding Docker's Default Network Behavior
By default, Docker containers on the same network can communicate freely with each other. The default bridge network allows all containers to reach each other on any port, which is convenient for development but terrible for security. I prefer to think of this as "trust by default," which has no place in a production homelab.
When you run a container, Docker creates a veth interface and connects it to a bridge. All containers on that bridge share a layer-2 domain, meaning they can ARP and talk directly. If your Jellyfin container gets compromised and someone runs a port scan, they'll discover your PostgreSQL container listening on port 5432 inside the network. Even if PostgreSQL isn't exposed to the internet, it's exposed to any container on that network.
The solution is simple: use user-defined bridge networks instead of the default one, and only expose ports you absolutely need. I also implement host-level firewall rules to backstop container-to-container access, because defense in depth matters.
Creating Isolated Networks with Docker Compose
My preferred approach is defining separate networks in Docker Compose. Each application gets its own isolated network, and I explicitly connect only the services that need to talk to each other. Let me show you the pattern I use for a typical homelab stack:
version: '3.8'

networks:
  frontend:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.enable_icc: 'false'
  database:
    driver: bridge
    # ICC stays enabled here: Nextcloud must reach Postgres on this network
  monitoring:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.enable_icc: 'false'

services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    ports:
      - "8096:8096"
    networks:
      - frontend
    volumes:
      - /data/jellyfin:/config
      - /media:/media:ro

  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    networks:
      - database
    volumes:
      - /data/postgres:/var/lib/postgresql/data
    expose:
      - "5432"

  nextcloud:
    image: nextcloud:latest
    ports:
      - "8080:80"
    networks:
      - frontend
      - database
    environment:
      POSTGRES_HOST: postgres
      POSTGRES_DB: nextcloud
    depends_on:
      - postgres
    volumes:
      - /data/nextcloud:/var/www/html

  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    networks:
      - monitoring
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - /data/prometheus:/prometheus
Notice the enable_icc: 'false' option on the frontend and monitoring networks. It disables inter-container communication on that bridge, so even containers sharing a network can't talk to each other directly. I deliberately leave it off the database network, because Nextcloud has to reach Postgres there, and disabling ICC would block that connection too. The bigger isolation win comes from the network layout itself: Jellyfin can't reach Postgres and Prometheus can't reach Nextcloud because they share no network, and only Nextcloud sits on both frontend and database.
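One refinement I find useful: pin each network to a predictable subnet with ipam, so host firewall rules can reference a specific container network by address. The 172.28.x ranges below are arbitrary picks for illustration, not Docker defaults:

```yaml
networks:
  frontend:
    driver: bridge
    ipam:
      config:
        - subnet: 172.28.1.0/24   # hypothetical range; choose unused space
  database:
    driver: bridge
    ipam:
      config:
        - subnet: 172.28.2.0/24
```

With fixed subnets, a UFW rule can target the database network alone (for example, denying 172.28.2.0/24 from the host) instead of treating all of Docker's address space identically.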
I use expose instead of ports for services that only need internal communication. Worth knowing: expose is mostly documentation, since containers on a shared network can reach any listening port regardless, but unlike ports it never binds anything to the host, so nothing leaks onto your LAN while you're testing.
Host-Level Firewall Rules with UFW
Network isolation between containers happens at layers 2 and 3, but an attacker could still exploit the Docker daemon itself or break out of a container and reach your host. I always pair container networks with host firewall rules. On my servers, I use UFW (Uncomplicated Firewall) because it's simpler than raw iptables, though Docker's own iptables handling needs watching, as we'll see.
Here's my typical UFW configuration for a homelab server:
#!/bin/bash
# Enable UFW
sudo ufw default deny incoming
sudo ufw default allow outgoing
# Allow SSH from trusted IPs only (adjust to your network)
sudo ufw allow from 192.168.1.0/24 to any port 22 proto tcp
# Allow Jellyfin (external port)
sudo ufw allow from any to any port 8096 proto tcp
# Allow Nextcloud (external port)
sudo ufw allow from any to any port 8080 proto tcp
# Allow Prometheus only from internal network
sudo ufw allow from 192.168.1.0/24 to any port 9090 proto tcp
# Deny Docker internal subnets from reaching host ports
# This is the key rule: Docker allocates container networks from 172.16.0.0/12
sudo ufw deny from 172.16.0.0/12
# Enable the firewall
sudo ufw enable
# Check status
sudo ufw status verbose
The critical rule is that deny: sudo ufw deny from 172.16.0.0/12. Docker's default bridge uses 172.17.0.0/16, and user-defined networks are typically carved from the surrounding 172.16.0.0/12 block (172.16.0.0 through 172.31.255.255). If someone breaks into a container and gains a shell, they shouldn't be able to reach services on your host machine. This rule blocks that entire range from reaching the host's listening ports.
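If the /12 looks odd, here is the arithmetic: a /12 mask keeps the first octet plus the top four bits of the second, so everything from 172.16 through 172.31 collapses into one block. A quick sanity check in plain shell, nothing Docker-specific:

```shell
# A /12 prefix covers 8 bits of the first octet plus 4 bits of the second.
# Masking the second octet with 240 (0xF0) keeps those 4 bits; a result of
# 16 means the address falls inside 172.16.0.0/12.
for octet in 15 16 17 31 32; do
  if [ $(( octet & 240 )) -eq 16 ]; then
    echo "172.$octet.0.0 is inside 172.16.0.0/12"
  else
    echo "172.$octet.0.0 is outside 172.16.0.0/12"
  fi
done
```

172.15 and 172.32 land outside; 172.16 through 172.31 land inside, which is exactly the span the deny rule needs to cover.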
You can inspect exactly what Docker has added with sudo iptables -t nat -L -n and sudo iptables -L -n. Some Docker installations require additional iptables tweaks.
Advanced: Using IPvlan and Macvlan Networks
For more advanced isolation, I occasionally use IPvlan or Macvlan networks instead of bridge networks. These connect containers directly to your physical network with their own IP addresses, bypassing Docker's bridge layer entirely. This is overkill for most homelabs, but it's worth understanding.
With IPvlan, each container gets an IP on your physical network, just like a real machine. This means you can apply your existing network firewall rules (pfSense or whatever you use) directly to container traffic. It's elegant, but the l2 mode shown below puts containers on your LAN's broadcast domain, so it requires careful planning:
docker network create -d ipvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o ipvlan_mode=l2 \
  -o parent=eth0 \
  physical_net

docker run -d \
  --name nextcloud \
  --network physical_net \
  --ip 192.168.1.50 \
  nextcloud:latest
Now Nextcloud has its own IP address on your LAN and doesn't hide behind Docker's NAT. Your router can firewall it, your monitoring can track it by IP, and it behaves like a real host. The downside: you lose some of Docker's convenience, and you need to manually manage IP allocation to avoid conflicts.
I prefer this for production setups where I want every container visible to the network as a first-class host, but for simpler homelabs, user-defined bridges with UFW are sufficient.
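The same ipvlan network can be declared in Compose. The ip_range below carves out a slice of the LAN for container IPs so they can't collide with DHCP leases; the specific ranges and the eth0 parent are assumptions to adjust for your network:

```yaml
networks:
  physical_net:
    driver: ipvlan
    driver_opts:
      ipvlan_mode: l2
      parent: eth0                     # assumption: your uplink interface
    ipam:
      config:
        - subnet: 192.168.1.0/24
          ip_range: 192.168.1.192/27   # containers draw IPs from .192-.223 only
          gateway: 192.168.1.1
```

One known quirk to plan around: with ipvlan (and macvlan), the Docker host itself cannot reach its own containers over the parent interface, so host-based monitoring needs another path.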
Restrict Container Capabilities
Even with perfect network isolation, a container running as root can do significant damage. I always drop dangerous capabilities and run containers as unprivileged users. Docker's capabilities system is Linux's mechanism for giving specific permissions without full root access.
In my Docker Compose files, I add something like this to every service:
services:
  nextcloud:
    image: nextcloud:latest
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
      - CHOWN
      - SETUID
      - SETGID
    user: "33:33" # www-data user for Nextcloud
    # ... rest of config
The cap_drop: ALL removes every Linux capability, then I re-add only the bare minimum. For Nextcloud, it needs to bind to port 80 (NET_BIND_SERVICE), change file ownership (CHOWN), and manage user contexts (SETUID/SETGID). Most services need far fewer capabilities than the default set.
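To confirm the drop took effect, compare the container's bounding set (grep CapBnd /proc/1/status inside the container; the bounding set reflects cap_drop/cap_add even when the process runs as a non-root user) against the expected bitmask. The four capabilities kept above sit at bit positions 0, 6, 7, and 10 in linux/capability.h:

```shell
# Build the expected capability bitmask for the Nextcloud service above.
# Bit positions from linux/capability.h:
# CAP_CHOWN=0, CAP_SETGID=6, CAP_SETUID=7, CAP_NET_BIND_SERVICE=10
mask=0
for bit in 0 6 7 10; do
  mask=$(( mask | (1 << bit) ))
done
# Print in the same zero-padded hex format /proc/<pid>/status uses
printf 'expected CapBnd: %016x\n' "$mask"
```

The printed value, 00000000000004c1, should match the CapBnd line; anything with more bits set means a capability slipped through.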
Monitoring and Auditing Network Traffic
The best firewall in the world is useless if you don't notice when something goes wrong. I run Prometheus and Grafana to monitor Docker container network traffic. cAdvisor (a separate container, not something built into Prometheus) exports per-container network I/O as metrics Prometheus can scrape, and I graph it over time.
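For reference, the scrape job in the prometheus.yml mounted earlier might look like this. The cadvisor target assumes a cAdvisor container joined to the monitoring network and listening on its default port 8080:

```yaml
scrape_configs:
  - job_name: 'cadvisor'
    scrape_interval: 15s
    static_configs:
      - targets: ['cadvisor:8080']
```

The per-container series worth graphing are container_network_receive_bytes_total and container_network_transmit_bytes_total, broken down by the name label.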
For real-time debugging, I use tcpdump inside and outside containers:
# Capture traffic on a specific container's network interface
# (assumes tcpdump is installed in the container image)
docker exec container_name tcpdump -i eth0 -n 'port 5432'
# Or capture on the host, filtered to Docker bridge traffic
sudo tcpdump -i docker0 -n 'host 172.17.0.2'
I also tune the daemon's logging in /etc/docker/daemon.json: rotated json-file logs for containers, plus debug mode when I want the daemon to log its API calls (it's verbose, so turn it back off afterwards):
# /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "debug": true
}
Combining container-level isolation, host firewall rules, capability restrictions, and monitoring gives me confidence that my homelab isn't becoming an accidental botnet. It's layered security—no single point of failure.
Testing Your Configuration
Before declaring victory, test your isolation. Spin up a test container on one network and verify it can't reach services on another:
# From inside a container on the frontend network
# (curl understands telnet:// for raw TCP checks; -m 3 caps the wait)
docker exec container_name_frontend curl -m 3 telnet://postgres:5432
# Should fail: the name won't even resolve across networks,
# and a reachable-but-blocked port times out or is refused
# From the host, verify UFW is actually blocking Docker subnet traffic
sudo iptables -L -n | grep '172\.'
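Minimal images often ship without curl or nc. Bash's /dev/tcp pseudo-device gives you a dependency-free probe; this sketch assumes bash and timeout exist in the container:

```shell
# Try to open a TCP connection within 2 seconds; report the outcome.
probe() {
  if timeout 2 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 reachable"
  else
    echo "$1:$2 blocked"
  fi
}
# From an isolated container this should report blocked
probe postgres 5432
```

Run it via docker exec from whichever container's isolation you want to test.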
If traffic is still flowing where it shouldn't, check Docker's iptables rules directly: sudo iptables-save shows the complete rule set. Docker inserts its own chains ahead of UFW's, and published ports are NATed through the FORWARD chain, so they bypass UFW's input rules entirely. Filtering that needs to survive Docker's rule management belongs in the DOCKER-USER chain, which Docker consults first and never flushes.
Next Steps
Start by auditing your current Docker setup. List your networks with docker network ls, then run docker network inspect bridge to see which containers are still sitting on the default bridge. If everything is on the default network, that's your first priority: create isolated user-defined networks in your Compose files.
Then enable UFW and add rules to deny Docker's subnet from reaching your host. Test with a simple curl inside a container to verify isolation. Finally, drop unnecessary capabilities from each service.
For homelabs running only trusted code, this setup is probably overkill. But if you're self-hosting anything accessible from the internet—a Vaultwarden instance, a Gitea server, anything—these layers of isolation will save you when (not if) you discover a vulnerability. I prefer setting it up correctly the first time rather than debugging a compromise later.