Setting Up Nginx as a Reverse Proxy for Multiple Self-Hosted Applications
When you're running multiple self-hosted applications—whether it's Nextcloud, Jellyfin, Vaultwarden, or Open WebUI—you need a single entry point that routes traffic intelligently. I've been using Nginx as a reverse proxy for three years now, and it's genuinely the most reliable, lightweight way to manage dozens of services on a single domain or VPS. Unlike Docker-based proxies that add overhead, Nginx sits lean and mean, consuming barely 10MB of RAM while handling hundreds of concurrent connections.
This tutorial walks you through a production-ready Nginx reverse proxy setup for multiple applications. I'll show you exactly how I configure upstream blocks, SSL certificates, and common gotchas that trip up beginners.
Why Nginx Over Caddy or Traefik?
Before we dive in, let me be clear: I prefer Nginx because it's been battle-tested for 15 years, the configuration syntax is predictable, and you don't need Docker or external orchestration to get it working. Caddy auto-renews SSL certificates (which is nice), but Nginx with certbot does the same thing. Traefik is excellent if you're already deep in Docker Compose, but if you're running a mixed environment—some Docker, some systemd services, some bare metal applications—Nginx feels simpler to reason about.
That said, pick what works for your team. This isn't about being dogmatic.
Prerequisites and Installation
I'm assuming you're running this on a Linux VPS (Ubuntu 22.04 LTS or similar is what I recommend). For reference, you can grab a solid VPS for around $40/year from providers like RackNerd—enough horsepower for a reverse proxy, a couple of Docker services, and room to grow.
First, install Nginx and certbot:
sudo apt update
sudo apt install -y nginx certbot python3-certbot-nginx
sudo systemctl start nginx
sudo systemctl enable nginx
Verify it's running:
sudo systemctl status nginx
You should see active (running). Good. Now let's configure it.
Understanding Nginx Reverse Proxy Blocks
The core concept: Nginx listens on port 80 (and 443 for HTTPS), receives a request, and forwards it to a backend service listening on localhost on some other port. Your Nextcloud might be on port 8080, Jellyfin on 8096, Vaultwarden on 8000—Nginx unifies all of this behind a single domain.
We define upstream blocks (your backend services) and server blocks (the public-facing rules).
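Stripped to its essentials, the pattern looks like this (the backend port 3000 is a placeholder for whatever your app listens on):

```nginx
# Minimal sketch of the pattern: one upstream, one public route.
upstream myapp {
    server 127.0.0.1:3000;   # hypothetical backend port
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://myapp;        # forward to the upstream above
        proxy_set_header Host $host;    # preserve the original hostname
    }
}
```

Everything else in this tutorial is this pattern, repeated and hardened.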
A Complete Multi-Application Configuration
Here's a real configuration I use. I'm hosting three applications behind one domain:
sudo nano /etc/nginx/sites-available/multi-app
Paste this:
# Upstream blocks - define your backend services
upstream nextcloud {
server 127.0.0.1:8080;
keepalive 32;
}
upstream jellyfin {
server 127.0.0.1:8096;
keepalive 32;
}
upstream vaultwarden {
server 127.0.0.1:8000;
keepalive 32;
}
# Redirect HTTP to HTTPS
server {
listen 80;
listen [::]:80;
server_name example.com www.example.com;
location / {
return 301 https://$host$request_uri;
}
# Let certbot handle renewal
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
}
# HTTPS server - main domain
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name example.com www.example.com;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
# Modern SSL configuration
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
# Security headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
# Nextcloud at /cloud
location /cloud {
proxy_pass http://nextcloud;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffering off;
}
# Jellyfin at /media
location /media {
proxy_pass http://jellyfin;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_redirect off;
}
# Vaultwarden at /vault (password manager)
location /vault {
proxy_pass http://vaultwarden;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Root landing page
location / {
default_type text/plain;
return 200 "Services: /cloud (Nextcloud), /media (Jellyfin), /vault (Vaultwarden)\n";
}
}
Enable this configuration and test it:
sudo ln -s /etc/nginx/sites-available/multi-app /etc/nginx/sites-enabled/multi-app
sudo rm /etc/nginx/sites-enabled/default
sudo nginx -t
If you see syntax is ok, you're good. Reload Nginx:
sudo systemctl reload nginx
Getting an SSL Certificate
Use certbot to generate a Let's Encrypt certificate automatically:
sudo certbot certonly --nginx -d example.com -d www.example.com
Follow the prompts. Note that with certonly, certbot only obtains the certificate; it doesn't touch your Nginx config. That's fine here, because the server block above already points at the standard /etc/letsencrypt/live/ paths. Set up auto-renewal:
sudo systemctl enable certbot.timer
sudo systemctl start certbot.timer
Verify it's running:
sudo systemctl status certbot.timer
A dry run confirms the full renewal path works end to end:
sudo certbot renew --dry-run
Handling Websockets and Real-Time Features
WebSockets need the Upgrade and Connection headers in your proxy block. I've shown this in the Nextcloud section above, and it's essential for anything doing long-polling or streaming. For example, if you're proxying Open WebUI (a local LLM interface) that needs persistent connections:
location /ai {
proxy_pass http://127.0.0.1:8111;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffering off;
proxy_request_buffering off;
}
The proxy_buffering off directive is critical here: it stops Nginx from buffering responses before sending them to the client, and that buffering is what breaks streamed output like server-sent events and token-by-token LLM responses.
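One refinement worth knowing: hard-coding Connection "upgrade" sends an upgrade header on every proxied request, not just WebSocket handshakes. The standard Nginx pattern is a map in the http context (a sketch; drop it in /etc/nginx/conf.d/ or the http block of nginx.conf):

```nginx
# Set the Connection header conditionally: "upgrade" only when the
# client actually sent an Upgrade header, "close" otherwise.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
```

Then use proxy_set_header Connection $connection_upgrade; in your locations instead of the literal string.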
Common Pitfalls I've Hit
Path rewriting confusion: When you proxy to `/cloud`, your backend application receives requests starting with `/cloud`. If your Nextcloud is not configured to serve from `/cloud` (it expects to be at the root), you'll get 404s. Either configure your app to serve from that path, or strip it with rewrite ^/cloud/(.*)$ /$1 break; before the proxy_pass. I prefer configuring the app instead—it's cleaner.
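If you do decide to strip the prefix in Nginx, the trailing-slash form of proxy_pass does it without an explicit rewrite (a sketch, using the nextcloud upstream defined earlier):

```nginx
# A trailing slash on both location and proxy_pass makes Nginx replace
# the matched prefix: a request for /cloud/index.php reaches the
# backend as /index.php.
location /cloud/ {
    proxy_pass http://nextcloud/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

Just remember the backend will then generate links without the /cloud prefix, which is exactly why I prefer configuring the app instead.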
Missing Host header: Always include proxy_set_header Host $host;. Applications use this to generate links and cookies. Without it, Jellyfin might redirect you to http://127.0.0.1:8096 instead of your domain.
Forgetting X-Forwarded headers: Applications need to know the original client IP and that the connection is HTTPS. These headers tell them. Miss them and password resets, CORS, and security features break.
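A quick way to sanity-check these values is a throwaway debug location that echoes what Nginx would forward (the /proxy-debug path is my own invention; remove it once you're done):

```nginx
# Echoes the client address, scheme, and X-Forwarded-For value that
# the proxy_set_header lines above would send to a backend.
location /proxy-debug {
    default_type text/plain;
    return 200 "client=$remote_addr proto=$scheme xff=$proxy_add_x_forwarded_for\n";
}
```

Hit it with curl from another machine and confirm the client address and scheme are what your apps will see.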
Nginx inside Docker: if you run Nginx itself as a container alongside your apps, point upstreams at the container name, e.g. upstream nextcloud { server nextcloud:8080; }, and let Docker's internal DNS resolve it. Using 127.0.0.1 won't work because the container can't reach the host's loopback interface.
Testing Your Setup
Once everything is live, test it:
curl -I https://example.com/cloud
curl -I https://example.com/media
curl -I https://example.com/vault
You should get 200 responses (or redirects from your app). Monitor the logs for issues:
sudo tail -f /var/log/nginx/access.log
sudo tail -f /var/log/nginx/error.log
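When a service feels slow, it helps to know whether Nginx or the backend is at fault. A custom log format with upstream timing makes that visible (a sketch for the http context; the format name and log path are my own):

```nginx
# $request_time is the total time Nginx spent on the request;
# $upstream_response_time is how long the backend took. A large gap
# between the two points at Nginx, TLS, or the network rather than
# the application.
log_format proxy_timing '$remote_addr "$request" $status '
                        'req=$request_time up=$upstream_response_time '
                        'backend=$upstream_addr';
access_log /var/log/nginx/proxy_timing.log proxy_timing;
```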
Performance Tuning
By default, Nginx is already pretty lean. But if you're planning to scale beyond a few services, edit /etc/nginx/nginx.conf and tweak these values:
worker_processes auto; # Use all CPU cores
events {
worker_connections 2048; # Connections per worker (must live in events)
}
http {
# ...
keepalive_timeout 65;
client_max_body_size 1G; # Nextcloud, Immich, etc. need this
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m;
}
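Note that proxy_cache_path only defines the cache zone; nothing is cached until a location opts in. Here's a sketch for static assets (the /media/web/ path is illustrative, not a guaranteed Jellyfin route):

```nginx
# Opt a location into the my_cache zone defined above. Successful
# responses are reused for 10 minutes, sparing the backend on repeat
# requests for the same assets.
location /media/web/ {
    proxy_pass http://jellyfin;
    proxy_set_header Host $host;
    proxy_cache my_cache;
    proxy_cache_valid 200 10m;
}
```

Be careful not to cache authenticated or per-user responses; start with genuinely static paths.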
Then reload:
sudo systemctl reload nginx
Next Steps: Monitoring and Backups
Now that your reverse proxy is live, monitor it. Set up Uptime Kuma or a similar tool to ping your services every 5 minutes. If Jellyfin goes down, you want to know immediately, not when someone complains.
Also: back up your Nginx configs and your SSL certificates. I keep mine in a Git repo:
cd /etc/nginx && sudo git init && sudo git add -A && sudo git commit -m "Initial"
If you add more services or hit issues, you have history.
Conclusion
Nginx as a reverse proxy is genuinely the backbone of my self-hosting setup. It's simple, fast, and once configured, it just works. The config I've shown you here is battle-tested and can scale from 2 services to 20 without breaking a sweat.
The next logical step is adding authentication in front of all this—something like Authelia or basic HTTP auth—but that's a separate tutorial. For now, this setup gives you a solid foundation. Start with one or two applications, verify the proxy is working, then add more.