Firewall Configuration and Port Forwarding for Secure Remote Access
Remote access to your homelab is essential, but it's also where most self-hosters get security wrong. I've spent weeks fixing broken firewall rules and dealing with exposed services—and I've learned that port forwarding without proper firewalling is like leaving your front door unlocked. In this guide, I'll walk you through configuring UFW (Uncomplicated Firewall), setting up intelligent port forwarding, and implementing layered security so you can access your services safely from anywhere.
Understanding Your Security Layers
Before touching a single firewall rule, understand what you're protecting. Your homelab has multiple entry points: SSH for remote administration, HTTP/HTTPS for web services, and potentially VPN tunnels. Each needs a different approach.
I prefer a belt-and-suspenders strategy: UFW handles basic traffic rules at the OS level, port forwarding happens at your router, and then I add a reverse proxy (like Caddy) on top for application-level security. This layering means even if one layer fails, you're not completely exposed.
The first thing I always do is disable all inbound traffic by default, then selectively open only what's needed. This is the principle of least privilege, and it's non-negotiable for remote access.
Setting Up UFW: The Foundation
UFW is included in most Debian/Ubuntu systems. It's a wrapper around iptables that actually makes sense to humans. Here's how I set it up on a fresh server or homelab machine:
# Install UFW (usually already present)
sudo apt update && sudo apt install ufw
# Start with everything blocked
sudo ufw default deny incoming
sudo ufw default allow outgoing
# Allow SSH first—critical to avoid locking yourself out
sudo ufw allow 22/tcp
# Enable the firewall
sudo ufw enable
# Check status
sudo ufw status verbose
At this point, only SSH inbound is allowed. Everything else is blocked. I test the SSH connection immediately before adding any other rules, because the worst feeling is being locked out of your own server.
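Depending on your needs, UFW can tighten that SSH rule further. If the machine only needs SSH from inside your LAN, scope the rule to your subnet; if it must stay internet-reachable, UFW's built-in rate limiting helps. A sketch (the 192.168.1.0/24 subnet is an example; substitute your own):

```shell
# Option A: allow SSH only from the local subnet
sudo ufw delete allow 22/tcp
sudo ufw allow from 192.168.1.0/24 to any port 22 proto tcp

# Option B: keep SSH open but rate-limited
# (ufw limit denies IPs that attempt 6+ connections within 30 seconds)
sudo ufw limit 22/tcp
```

Pick one or the other; both at once is redundant.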
Now I add rules for services I actually need. Let's say I'm running a web server on ports 80 and 443, a Jellyfin media server on 8096, and I want to access it remotely:
# HTTP and HTTPS
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
# Jellyfin (only if directly exposed—I don't recommend this)
# sudo ufw allow 8096/tcp
# Check the rules
sudo ufw status numbered
To confirm what's actually listening, I run sudo netstat -tlnp | grep LISTEN and check the bind addresses.
Port Forwarding at the Router Level
Port forwarding is where the complexity starts. Your router sits between your homelab and the internet. When someone from outside tries to reach your services, the router needs to know where to send that traffic.
I access my router admin panel (usually 192.168.1.1 or 192.168.0.1), log in, and navigate to Port Forwarding settings. Every router is different, but the concept is identical: external port → internal IP:internal port.
Here's where it's easy to make mistakes: I forward ports 80 and 443 to my reverse proxy's internal IP (let's say 192.168.1.10), then make sure that machine has UFW rules allowing those ports. I also ensure the reverse proxy is the only thing listening on those ports; the applications themselves bind to localhost or specific internal addresses.
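For Docker-based services, that localhost binding is set right in the port mapping. A sketch (the image and port are examples, not a prescription):

```shell
# Bind the container to the loopback interface only: the reverse proxy on
# the same host can reach it, but the LAN and the internet cannot.
docker run -d --name jellyfin -p 127.0.0.1:8096:8096 jellyfin/jellyfin
```

Without the 127.0.0.1 prefix, Docker publishes the port on all interfaces and can even punch through host firewall rules, so this one detail matters.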
Most importantly, I never forward to port 22 directly from the internet. SSH is constantly scanned and attacked. Instead, I either use a VPN to access SSH, or I forward a random high port (like 42222) to internal 22, then update my SSH client config:
# In ~/.ssh/config on my client machine
Host homelab
HostName your.external.ip
Port 42222
User ubuntu
IdentityFile ~/.ssh/id_rsa
This simple obfuscation filters out the vast majority of automated attacks. Determined attackers will scan all ports anyway, so I also disable password auth and rely on SSH keys.
Using a Reverse Proxy for Application-Level Security
This is the real game-changer. Instead of opening ports for each application, I run a single reverse proxy (Caddy) that listens on 80 and 443, then routes traffic internally. Here's a minimal Caddy config I use:
# /etc/caddy/Caddyfile
example.com {
reverse_proxy localhost:8096
encode gzip
tls you@example.com
}
jellyfin.example.com {
reverse_proxy localhost:8096
}
nextcloud.example.com {
reverse_proxy localhost:8080
tls you@example.com
}
Now my Docker containers and applications bind to localhost:8096, localhost:8080, etc. The firewall allows 80 and 443 in, Caddy terminates TLS, and routes the request internally. Applications never directly face the internet. This architecture is incredibly clean and secure.
If I need to access a service without DNS (useful during testing), I use SSH port forwarding instead of exposing it:
# Forward local 9090 to remote 9090 (e.g., for Cockpit admin panel)
ssh -L 9090:localhost:9090 your.server.ip
# Now visit localhost:9090 in browser
Securing Remote SSH Access
SSH is your gateway to everything. I harden it aggressively. Edit /etc/ssh/sshd_config:
# Disable password authentication
PasswordAuthentication no
PubkeyAuthentication yes
# Disable root login
PermitRootLogin no
# Use a non-standard port (obfuscation, not security, but helpful)
Port 42222
# Restrict key exchange and ciphers (optional, but recommended)
KexAlgorithms curve25519-sha256,curve25519-sha256@libssh.org
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com
# Restart SSH
sudo systemctl restart sshd
Then I add fail2ban to block brute-force attempts:
sudo apt install fail2ban
sudo systemctl enable fail2ban
# Edit /etc/fail2ban/jail.local
[DEFAULT]
bantime = 3600
maxretry = 3
[sshd]
enabled = true
port = 42222
Now anyone attempting multiple failed SSH logins gets blocked for an hour. Combined with key-only auth, this stops almost all SSH attacks.
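To confirm the jail is actually watching the right port, check its status after restarting fail2ban:

```shell
# Shows the sshd jail's currently banned IPs and failure counts
sudo fail2ban-client status sshd
```

If the jail doesn't appear, double-check that the port in jail.local matches the Port line in sshd_config.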
DNS and Dynamic IP Management
If your homelab sits behind a residential internet connection, your public IP probably changes. I use a dynamic DNS service (like No-IP or Duck DNS) to keep my domain pointed at my current IP. Most routers have built-in support for this.
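If your router lacks built-in support, a small script on the server works too. This sketch builds the Duck DNS update URL; the subdomain and token are placeholders, and the actual curl call is commented out because it needs your real token:

```shell
#!/bin/sh
# Duck DNS updater sketch. DOMAIN and TOKEN are placeholders, not real values.
DOMAIN="myhomelab"            # your Duck DNS subdomain
TOKEN="your-duckdns-token"    # from the Duck DNS dashboard
# Leaving ip= empty tells Duck DNS to use the caller's public IP.
URL="https://www.duckdns.org/update?domains=${DOMAIN}&token=${TOKEN}&ip="
echo "$URL"
# curl -fsS "$URL"   # uncomment with a real token; Duck DNS replies "OK" on success
```

Drop it in cron (every five minutes is plenty) and your DNS record follows your IP.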
Alternatively, I use Cloudflare Tunnel (formerly Argo Tunnel), which eliminates port forwarding entirely—the tunnel makes an outbound connection from the server side, and I access my services through Cloudflare's global network. For many people, this is the simplest approach:
# Install cloudflared (not in the default repos; add Cloudflare's apt
# repository first, or download the .deb from Cloudflare's site)
sudo apt install cloudflared
# Authenticate
cloudflared tunnel login
# Create tunnel
cloudflared tunnel create homelab
# Configure routing
# cloudflared tunnel route dns homelab example.com
# Start tunnel
cloudflared tunnel run homelab
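The tunnel maps hostnames to local services through a config file. A minimal sketch, where the UUID, user path, and hostname are placeholders:

```
# ~/.cloudflared/config.yml
tunnel: <TUNNEL-UUID>
credentials-file: /home/youruser/.cloudflared/<TUNNEL-UUID>.json

ingress:
  - hostname: jellyfin.example.com
    service: http://localhost:8096
  - service: http_status:404   # catch-all: reject anything unmatched
```

The catch-all rule is required; cloudflared refuses to start without one, and returning 404 for unknown hostnames is a sensible default.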
Testing and Monitoring
After everything's configured, I test from outside my network. I pull out my phone, disable WiFi, and try accessing my services over cellular. I also check what's exposed using Shodan (which indexes internet-facing services) or by running nmap from a VPS, to confirm only my intended ports are open.
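From a VPS or any other machine outside your network, a quick scan confirms the exposure (the IP and port list below are examples; scan the ports you actually forwarded):

```shell
# -Pn skips host discovery, since home routers often drop ping
nmap -Pn -p 80,443,42222 your.external.ip
```

Anything reported open that isn't on your list means a forwarding rule or firewall rule needs revisiting.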
For ongoing monitoring, I use simple UFW logging and check it regularly:
# Enable UFW logging
sudo ufw logging on
sudo ufw logging medium
# Check recent blocks
sudo tail -f /var/log/ufw.log
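Raw ufw.log gets noisy fast. A small helper like this summarizes the most frequent blocked source IPs; it's a sketch that assumes UFW's default log format, where each block line carries a SRC=<ip> field:

```shell
# Summarize top blocked source IPs from a UFW log.
# Reads the log path given as $1, or stdin if no path is supplied.
summarize_blocks() {
  grep 'UFW BLOCK' "${1:-/dev/stdin}" \
    | grep -oE 'SRC=[0-9.]+' | cut -d= -f2 \
    | sort | uniq -c | sort -rn | head
}
# Usage: summarize_blocks /var/log/ufw.log
```

The top entries are usually internet background noise, but a single IP hammering one port is worth a closer look.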
I also enable access logging in Caddy (or whatever reverse proxy you use) to catch suspicious requests at the application level. A sudden spike in 400-series errors can indicate someone probing your reverse proxy.
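In Caddy, that's a single directive per site block. A sketch, with the hostname and log path as examples:

```
jellyfin.example.com {
    reverse_proxy localhost:8096
    log {
        output file /var/log/caddy/jellyfin-access.log
        format json
    }
}
```

JSON format makes the log easy to filter later with jq or your log shipper of choice.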
Real-World Checklist
Before you consider remote access "secure," verify:
- UFW default deny incoming, default allow outgoing ✓
- SSH key authentication only, no passwords ✓
- SSH on non-standard port with fail2ban ✓
- All applications bound to localhost, not 0.0.0.0 ✓
- Reverse proxy (Caddy/Nginx) handling TLS termination ✓
- Only necessary ports forwarded at router ✓
- Regular log review for suspicious activity ✓
- HTTPS enforced everywhere (no plain HTTP to applications) ✓
Next Steps
Firewall and port forwarding are just the beginning. Once you've got this locked down, consider adding a VPN (WireGuard) for an additional security layer, or implement fail2ban rules for your reverse proxy to detect web-based attacks. If you're running multiple services, explore using Authelia for centralized authentication across your homelab—that's another layer of protection between the public internet and your applications.
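As a taste of that next step, here's what a minimal WireGuard server config looks like. The keys, addresses, and port are placeholders/defaults; generate real keys with wg genkey:

```
# /etc/wireguard/wg0.conf on the homelab server
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# laptop or phone client
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```

Open 51820/udp in UFW, forward it at the router, and SSH plus every admin panel can come off the public internet entirely.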
For those running a full VPS for remote access, consider checking out RackNerd's affordable VPS options, which offer excellent value for a dedicated public IP and full control over your firewall rules. Their KVM and Hybrid Dedicated products are particularly well-suited for homelab enthusiasts who want a reliable entry point into their infrastructure.
Start with the basics—UFW, port forwarding, and a reverse proxy. Get comfortable with those. Then layer on VPN, authentication, and monitoring. Security is incremental, not a single setting. You've got this.