Setting Up Nginx as a Reverse Proxy for Home Services

When I first started running multiple services on my homelab—Jellyfin for media, Nextcloud for files, Vaultwarden for passwords—I faced a frustrating problem: I had to remember different ports for each one. Jellyfin on 8096, Nextcloud on 8080, Vaultwarden on 80. Then came SSL certificates, and things got messy fast.

That's when I deployed Nginx as a reverse proxy, and it changed everything. Now all my services live behind a single domain with proper SSL, unified access control, and clean routing. In this guide, I'll show you exactly how I set it up.

Why Nginx Over Caddy or Traefik?

I prefer Nginx because it's lightweight, battle-tested, and requires minimal resources—crucial on a homelab where every CPU cycle counts. Caddy is easier to configure, sure, but Nginx gives me fine-grained control and it'll run smoothly even on a 2GB VPS from RackNerd. I've deployed hundreds of Nginx instances and the configuration syntax feels natural to me now.

Traefik is fantastic if you're orchestrating many Docker containers, but for a static homelab where services don't spin up and down constantly, Nginx wins on simplicity and overhead.

Prerequisites

You'll need:

- An Ubuntu server (a small VPS or a spare machine) with sudo access
- A domain name with DNS records pointing at that server
- Your services (Jellyfin, Nextcloud, etc.) already running and reachable on your LAN
- Ports 80 and 443 open on your firewall or router

Installing Nginx

Start with a fresh install and the official Nginx repository to stay current:

sudo apt update
sudo apt install curl gnupg2 ca-certificates lsb-release ubuntu-keyring

curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor | sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null

echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] http://nginx.org/packages/ubuntu `lsb_release -cs` nginx" | sudo tee /etc/apt/sources.list.d/nginx.list > /dev/null

sudo apt update
sudo apt install nginx certbot python3-certbot-nginx -y

sudo systemctl start nginx
sudo systemctl enable nginx

Verify the config syntax and that the service is up:

sudo nginx -t
sudo systemctl is-active nginx

You should see syntax is ok and test is successful from the first command, and active from the second.

Creating Your First Reverse Proxy Configuration

Now comes the real work. I keep my service configs in separate files for clarity. Here's my structure:

sudo mkdir -p /etc/nginx/conf.d/services

Create a new config file for Jellyfin (my media server). I prefer to keep each service in its own file:

sudo nano /etc/nginx/conf.d/services/jellyfin.conf

Paste this configuration:

upstream jellyfin_backend {
    server 192.168.1.50:8096;
    keepalive 32;
}

server {
    listen 80;
    server_name jellyfin.home.example.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    http2 on;
    server_name jellyfin.home.example.com;

    # SSL certificates (we'll generate these with certbot)
    ssl_certificate /etc/letsencrypt/live/jellyfin.home.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/jellyfin.home.example.com/privkey.pem;

    # SSL hardening
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    # Security headers
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;

    client_max_body_size 20M;

    # Reverse proxy configuration
    location / {
        proxy_pass http://jellyfin_backend;
        proxy_http_version 1.1;
        
        # Essential headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $server_name;

        # WebSocket support
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }
}

Important: Replace 192.168.1.50 with your actual Jellyfin server IP and home.example.com with your real domain.
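One refinement worth knowing: hardcoding Connection "upgrade" as above sends an upgrade header on every request, which defeats the upstream keepalive we configured. The standard Nginx pattern is a map at the http level (any file included there works; the filename below is just my choice) that only upgrades when the client actually asks for a WebSocket:

```nginx
# Place at the http level, e.g. /etc/nginx/conf.d/websocket-map.conf.
# $connection_upgrade becomes "upgrade" for WebSocket requests and ""
# (which keeps the upstream connection alive) for everything else.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      '';
}
```

Then, in the location block, change Connection "upgrade" to Connection $connection_upgrade.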

Adding More Services

Now add Nextcloud. Create another file:

sudo nano /etc/nginx/conf.d/services/nextcloud.conf

And paste:

upstream nextcloud_backend {
    server 192.168.1.51:8080;
    keepalive 32;
}

server {
    listen 80;
    server_name nextcloud.home.example.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    http2 on;
    server_name nextcloud.home.example.com;

    ssl_certificate /etc/letsencrypt/live/nextcloud.home.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/nextcloud.home.example.com/privkey.pem;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    add_header X-Content-Type-Options "nosniff" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-XSS-Protection "1; mode=block" always;

    client_max_body_size 512M;

    location / {
        proxy_pass http://nextcloud_backend;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $server_name;

        proxy_connect_timeout 60s;
        proxy_send_timeout 300s;
        proxy_read_timeout 300s;
    }
}

Notice I increased client_max_body_size to 512M for Nextcloud file uploads and extended timeouts since file operations take longer.
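The two configs also repeat the same SSL hardening and security headers. To keep them in sync, you can move the shared lines into a snippet (the path below is just my convention, not an Nginx default) and include it from every HTTPS server block:

```nginx
# /etc/nginx/snippets/tls-hardening.conf -- shared across all services
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;

add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
```

Each server block then replaces those lines with a single include /etc/nginx/snippets/tls-hardening.conf; directive.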

Generating SSL Certificates

One catch before testing: the HTTPS server blocks above reference certificate files that don't exist yet, so nginx -t will fail. Temporarily comment out the two listen 443 server blocks (leave the port-80 blocks in place), then test and reload:

sudo nginx -t
sudo systemctl reload nginx

Now generate certificates for both domains. I'm assuming your domain's DNS records already point to your Nginx server's IP:

sudo certbot certonly --nginx -d jellyfin.home.example.com -d nextcloud.home.example.com

Certbot will ask a few questions; sharing your email with the EFF is optional, but you do need to accept the terms. Once done, your certificates live in /etc/letsencrypt/live/.

Tip: Set up automatic renewal with sudo systemctl enable --now certbot.timer. Certbot will check twice a day and renew any certificate within 30 days of expiry.

Uncomment the HTTPS server blocks and reload Nginx to activate the SSL certificates:

sudo systemctl reload nginx

Security Hardening

Now that traffic is flowing, let's harden the setup. Edit the main Nginx config:

sudo nano /etc/nginx/nginx.conf

Find the http block and add these settings:

http {
    # ... existing config ...

    # Hide Nginx version
    server_tokens off;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css text/xml text/javascript application/json application/javascript application/xml+rss;

    # Rate limiting for brute-force protection
    limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;

    include /etc/nginx/conf.d/services/*.conf;
}

Then in your service configs, add rate limiting to sensitive locations:

location /login {
    limit_req zone=login burst=2 nodelay;
    proxy_pass http://nextcloud_backend;
    # ... rest of proxy config ...
}
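
The general zone we defined never gets used in the examples above. To apply it to a whole service, drop a limit_req line at the server level (burst lets short spikes exceed the 10 r/s rate before requests are rejected):

```nginx
server {
    # ... listen/ssl directives as before ...
    limit_req zone=general burst=20 nodelay;
    limit_req_status 429;  # return 429 Too Many Requests instead of the default 503
    # ...
}
```
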

Watch out: If Nginx sits behind another proxy or load balancer, it sees every request arriving from that box's IP, so rate limiting keys on the wrong address. In the main config, add set_real_ip_from with your front-end's address, plus real_ip_header X-Forwarded-For; and real_ip_recursive on; so $binary_remote_addr reflects the real client.
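
In config form, that fix looks like this (192.168.1.1 is a placeholder for your actual gateway or front-end proxy):

```nginx
# In the http block: trust X-Forwarded-For only from the front-end proxy
set_real_ip_from 192.168.1.1;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
```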

Monitoring and Troubleshooting

Check Nginx logs when things go wrong:

sudo tail -f /var/log/nginx/access.log
sudo tail -f /var/log/nginx/error.log
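
When skimming the access log, I find it helpful to tally status codes so a spike in 502s or 404s jumps out. Here's a self-contained sketch against a sample log; in practice, point the awk command at /var/log/nginx/access.log (in the default combined log format, the status code is field 9):

```shell
# Build a three-line sample log in the combined format for demonstration.
cat > /tmp/sample_access.log <<'EOF'
192.168.1.9 - - [10/Oct/2025:13:55:36 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/8.5"
192.168.1.9 - - [10/Oct/2025:13:55:40 +0000] "GET /web/ HTTP/1.1" 502 157 "-" "curl/8.5"
192.168.1.9 - - [10/Oct/2025:13:55:41 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/8.5"
EOF

# Tally status codes ($9 in the combined format), most frequent first.
awk '{ counts[$9]++ } END { for (code in counts) print code, counts[code] }' \
    /tmp/sample_access.log | sort -rn -k2
```

Running it prints 200 2 followed by 502 1.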

A 502 Bad Gateway usually means your upstream server is down or the IP is wrong. Verify connectivity:

curl http://192.168.1.50:8096/health

A 504 Gateway Timeout means the request took too long. Increase proxy timeouts in your config.

Making It Production-Ready

Before declaring victory, I always do a few final checks:

- sudo nginx -t passes cleanly and the server survives a full sudo systemctl restart nginx
- sudo certbot renew --dry-run completes without errors
- Each service loads over HTTPS from outside the LAN, and plain HTTP redirects to HTTPS
- The error log stays quiet: sudo tail /var/log/nginx/error.log

For a small homelab, Nginx uses about 10-15MB of RAM idle. Add another 2-3MB per active connection.

Next Steps

Once you've got Nginx running smoothly, consider adding authentication with Authelia for sensitive services, or experimenting with caching for static content. If you're running this on a VPS, RackNerd's KVM VPS offers solid performance for homelab reverse proxies at a fraction of the cost of dedicated hosting.

Your homelab now has a professional front door. Every service is accessible via clean URLs, protected by SSL, and routed intelligently. That's the foundation of a mature self-hosted infrastructure.
