Zero-Trust Security Architecture for Your Home Server Infrastructure
When I first started self-hosting, I built a perimeter-based security model: lock down the firewall, forward a few ports, and assume everything inside was trusted. That works until it doesn't. A compromised container, a leaked SSH key, or an insider threat (your own curiosity) can blow that model to pieces. I've since migrated to zero-trust security architecture—and it's transformed how I think about infrastructure risk.
Zero-trust isn't about being paranoid; it's about being realistic. Every request, every device, every user is treated as a potential threat until proven otherwise. No implicit trust. No "inside the network is safe." Just continuous verification.
What Zero-Trust Actually Means
Zero-trust is a security framework built on three pillars: verify identity, validate device health, and enforce least-privilege access. Unlike traditional "trust the perimeter" models, zero-trust assumes the network is hostile and every connection must prove its legitimacy.
In practice, this means:
- Identity verification: Who are you? Biometric, MFA, certificate-based auth, or combinations thereof.
- Device trust: Is your device healthy? No malware? Updated? Valid credentials stored securely?
- Least privilege: You get exactly what you need, no more. A container running your media server doesn't need access to your password vault.
- Continuous monitoring: Trust is earned and revoked. One suspicious action and access is challenged again.
For homelabs, this prevents lateral movement. If an attacker compromises your Jellyfin instance, they can't pivot to Nextcloud just because both live on the same Docker network.
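A minimal Compose sketch of that isolation (service and network names are illustrative): each app sits on its own bridge network, so neither has a route to the other.

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    networks:
      - media     # reachable only by peers on "media"
  nextcloud:
    image: nextcloud:fpm
    networks:
      - files     # no route to anything on "media"

networks:
  media:
    driver: bridge
  files:
    driver: bridge
```

A container attached only to `files` cannot even resolve, let alone connect to, a container attached only to `media`.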
Identity Layer: Know Who You Are
The foundation of zero-trust is knowing exactly who's accessing what. In a homelab, this means moving beyond simple passwords.
I use Authelia as my identity provider. It sits in front of my applications, enforces MFA, manages sessions, and logs every authentication attempt. Combined with a reverse proxy (I use Caddy), every request to my internal services goes through Authelia first.
Here's a Caddy + Authelia setup that I've tested extensively:
version: '3.8'
services:
  authelia:
    image: authelia/authelia:latest
    container_name: authelia
    restart: unless-stopped
    environment:
      AUTHELIA_NOTIFIER_SMTP_PASSWORD: your_app_password
      # Compose won't run shell substitutions like $(openssl rand -hex 32) here;
      # generate the secrets once and let Compose substitute them from an env file.
      AUTHELIA_SESSION_SECRET: ${AUTHELIA_SESSION_SECRET}
      AUTHELIA_STORAGE_ENCRYPTION_KEY: ${AUTHELIA_STORAGE_ENCRYPTION_KEY}
    volumes:
      - ./authelia/configuration.yml:/config/configuration.yml:ro
      - ./authelia/users_database.yml:/config/users_database.yml
    ports:
      - "9091:9091"
    networks:
      - internal

  caddy:
    image: caddy:latest
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - internal
    depends_on:
      - authelia

  jellyfin:
    image: jellyfin/jellyfin:latest
    container_name: jellyfin
    restart: unless-stopped
    volumes:
      - ./jellyfin/config:/config
      - ./jellyfin/cache:/cache
      - /media:/media:ro
    networks:
      - internal

networks:
  internal:
    driver: bridge

volumes:
  caddy_data:
  caddy_config:
And the Caddyfile that ties it together:
jellyfin.home.local {
    # Caddy runs inside the "internal" Docker network, so upstreams are
    # addressed by container name, not 127.0.0.1.
    forward_auth authelia:9091 {
        uri /api/verify?rd=https://auth.home.local
        copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
    }
    reverse_proxy jellyfin:8096
}

auth.home.local {
    reverse_proxy authelia:9091
}

*.home.local {
    forward_auth authelia:9091 {
        uri /api/verify?rd=https://auth.home.local
        copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
    }
}
Generate those secrets with openssl rand -hex 32 and never commit them to version control. Use environment files or a secrets management system like Vault.

Every request now goes through Authelia. If the user isn't authenticated, they're redirected to the login page. TOTP, FIDO2 keys, or email one-time passwords can all be enforced. You've gained visibility and control.
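To keep those generated secrets out of the compose file itself, I load them from an env file. A sketch (the file name is my choice; point Compose at it via env_file: or a .env file):

```shell
# Generate secrets once and write them to an env file.
# Keep this file out of version control (add it to .gitignore).
cat > authelia.env <<EOF
AUTHELIA_SESSION_SECRET=$(openssl rand -hex 32)
AUTHELIA_STORAGE_ENCRYPTION_KEY=$(openssl rand -hex 32)
EOF
chmod 600 authelia.env   # readable by the owner only
```

Compose then substitutes `${AUTHELIA_SESSION_SECRET}` and friends at startup, and the values never appear in your repo.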
Device Trust: Verify Hardware and Software State
Identity alone isn't enough. A compromised laptop—even one with the right password—shouldn't be trusted.
For home infrastructure, I check device health in a few ways:
- SSH key management: Key-based authentication only; password logins are disabled. Keys rotate quarterly, and stale keys are revoked immediately.
- Certificate-based authentication: Internal services use mutual TLS (mTLS). Clients present a certificate; the server validates it. If revoked, access stops.
- Device MDM (for homelabs): This is overkill for most, but I use Tailscale's node key rotation and device tagging to track hardware state.
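The mTLS piece can be sketched with openssl (file names, CNs, and lifetimes here are mine, not a required layout):

```shell
# Minimal mTLS credential chain: one internal CA, one client certificate.

# Create the CA (self-signed, 10 years; adjust to taste)
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -subj "/CN=homelab-ca" -keyout ca.key -out ca.crt

# Client key and certificate signing request
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=laptop" -keyout client.key -out client.csr

# CA signs the client cert; short lifetime makes rotation routine
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 90 -out client.crt

# Servers validate presented certs against ca.crt
openssl verify -CAfile ca.crt client.crt
```

The server side trusts only ca.crt; a client without a cert signed by that CA is refused at the TLS handshake, before any application code runs.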
For containers, I enforce:
- Read-only root filesystems where possible
- No privilege escalation (no-new-privileges) and all capabilities dropped (--cap-drop=ALL)
- User namespaces isolated from the host
- Network policies restricting inter-container communication
A compromised container can't become root, can't write to system directories, and can't reach services it shouldn't access.
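As a plain docker run command, those controls look roughly like this (image and network names are illustrative; user-namespace remapping is a daemon-wide setting in /etc/docker/daemon.json rather than a per-container flag):

```shell
# Immutable root filesystem with RAM-backed scratch space; all capabilities
# dropped; setuid binaries can't escalate; only the "internal" network attached.
docker run -d --name media \
  --read-only --tmpfs /tmp \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --network internal \
  jellyfin/jellyfin:latest
```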
Least-Privilege Access: Grant Minimum Required Permissions
This is where zero-trust saves you when things go wrong. Each application gets a single, narrowly scoped role.
In Docker, I use read-only volumes and explicit capability dropping:
version: '3.8'
services:
  nextcloud:
    image: nextcloud:fpm
    container_name: nextcloud
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    read_only: true
    tmpfs:
      - /tmp
      - /var/tmp
    volumes:
      - ./nextcloud/config:/var/www/html/config
      - ./nextcloud/data:/var/www/html/data
      - ./nextcloud/themes:/var/www/html/themes
    environment:
      POSTGRES_HOST: postgres
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}  # supplied via env file, never hard-coded
    networks:
      - app
    depends_on:
      - postgres

  postgres:
    image: postgres:15-alpine
    container_name: postgres
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
    read_only: true
    tmpfs:
      - /var/run/postgresql
      - /tmp
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: nextcloud
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    networks:
      - app

networks:
  app:
    driver: bridge

volumes:
  postgres_data:
Notice: Nextcloud only has access to its config and data directories. It can't read your system files, can't escalate privileges, and can't bind arbitrary ports. PostgreSQL is locked down similarly. If Nextcloud is compromised, the blast radius is tiny.
Some applications break with read_only: true if they write to their own installation directory. Test thoroughly. If needed, mount only the writable paths as read-write and keep the rest read-only.

Network Segmentation and Micro-Segmentation
Zero-trust requires carving your network into trust zones. I segment mine into:
- Management: SSH, Portainer, monitoring. Only accessed from my laptop or a trusted bastion host.
- Internal services: Nextcloud, Jellyfin, email, etc. Behind Authelia. No external direct access.
- External-facing: Reverse proxy only. Anything else is denied by default.
- Untrusted: Guest WiFi, IoT devices. Isolated VLAN with no access to internal services.
For Docker, I create separate networks for each trust zone and use firewall rules to enforce boundaries:
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow from 192.168.1.0/24 to any port 22 # SSH from LAN only
sudo ufw allow to any port 80 # HTTP external
sudo ufw allow to any port 443 # HTTPS external
sudo ufw enable
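One caveat worth flagging: Docker writes its own iptables rules when it publishes a port, and those are evaluated before UFW's, so a published container port can be reachable even when UFW says otherwise. Docker provides the DOCKER-USER chain for your own restrictions (the interface name and subnet below are assumptions; match them to your setup):

```shell
# Block non-LAN traffic to a published container port (e.g. Authelia on 9091).
# Rules in DOCKER-USER run before Docker's own accept rules.
sudo iptables -I DOCKER-USER -i eth0 -p tcp --dport 9091 \
  ! -s 192.168.1.0/24 -j DROP
```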
Then, in Docker Compose, I isolate services by network:
networks:
  external:
    driver: bridge
  internal:
    driver: bridge
  management:
    driver: bridge
Services on the external network cannot reach internal services. A compromised external-facing app is trapped.
Continuous Monitoring and Verification
Zero-trust doesn't mean you verify once and trust forever. Continuous verification catches anomalies early.
I log authentication events to a centralized location and alert on:
- Failed authentication attempts (threshold: 5 in 10 minutes = lock)
- Successful auth from new locations or devices
- Privilege escalation attempts
- Unexpected port scans or connection attempts
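The failed-auth threshold is easy to prototype before wiring up real alerting. A toy sketch against a simplified log format (epoch-second timestamps; the format is mine, not Authelia's):

```shell
# Count failed logins per IP within a 10-minute window and flag offenders.
cat > auth.log <<'EOF'
1700000000 FAIL 203.0.113.7
1700000100 FAIL 203.0.113.7
1700000200 FAIL 203.0.113.7
1700000300 FAIL 203.0.113.7
1700000400 FAIL 203.0.113.7
1700000050 FAIL 198.51.100.9
EOF
now=1700000400
awk -v now="$now" 'now - $1 <= 600 && $2 == "FAIL" { n[$3]++ }
  END { for (ip in n) if (n[ip] >= 5) print "BLOCK", ip }' auth.log
# prints: BLOCK 203.0.113.7
```

Fail2ban implements exactly this sliding-window logic (findtime/maxretry) against real log formats, which is why it's the tool for the job.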
Fail2ban on the host catches brute-force attempts before they reach your applications:
# /etc/fail2ban/jail.local
[DEFAULT]
bantime = 3600
findtime = 600
maxretry = 5
[sshd]
enabled = true
port = ssh
logpath = /var/log/auth.log
[caddy-auth]
enabled = true
port = http,https
logpath = /var/log/caddy/access.log
filter = caddy-auth
maxretry = 5
After 5 failed logins in 10 minutes, the IP is blocked for an hour. This stops credential stuffing and password spray attacks.
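Note that the [caddy-auth] jail references a filter fail2ban doesn't ship by default, so you have to define it yourself. A minimal sketch, assuming Caddy's default JSON access log (current Caddy uses remote_ip; older releases used remote_addr):

```ini
# /etc/fail2ban/filter.d/caddy-auth.conf (sketch; tune to your log format)
[Definition]
# Match access-log lines where the client was refused authentication
failregex = "remote_ip":"<HOST>".*"status":401
ignoreregex =
```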
Adding External Infrastructure: VPS and Bastion Hosts
If you're extending your homelab to a VPS (many of us do), zero-trust applies there too. A cheap VPS—around $40/year from providers like RackNerd—can serve as a hardened bastion host: a single, monitored entry point to your home infrastructure via Tailscale or WireGuard.
Your VPS becomes a reverse proxy that tunnels back to your home server. Your home lab stays off the public internet entirely. Only the VPS is exposed, and it's heavily locked down.
The architecture looks like:
- VPS: Caddy reverse proxy + Authelia identity layer
- Home server: Behind bastion, accessible only via authenticated tunnel
- Your device: Connects to VPS, authenticated, routed back through tunnel
Attackers see only the bastion. Your actual infrastructure is hidden.
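A sketch of the wiring with Tailscale (hostnames are mine; WireGuard works just as well with more manual setup):

```shell
# On the VPS (bastion):
sudo tailscale up --hostname=bastion

# On the home server: join the tailnet, then accept only tunnel traffic
sudo tailscale up --hostname=homeserver
sudo ufw allow in on tailscale0
# No public ports are opened on the home server at all.

# In the VPS Caddyfile, proxy over the tunnel via the MagicDNS name:
#   reverse_proxy homeserver:8096
```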
Practical Next Steps
Start small. Pick one service—maybe your password manager or file server—and run it through Authelia with MFA. See how it feels. Then extend: add network segmentation with UFW rules, drop unnecessary capabilities in Docker, enable fail2ban.
Zero-trust isn't a binary state. It's a maturity model. Begin with identity verification, move to device trust and least privilege, then add continuous monitoring. Over weeks and months, your attack surface shrinks dramatically.
The goal isn't perfection; it's resilience. When something goes wrong—and in this space, something always does—zero-trust limits the damage and makes recovery faster.