Deploying a Lightweight VPS Stack: Nginx Proxy Manager, Portainer, and Watchtower from Scratch
Every time I spin up a new VPS, I go through the same ritual: Docker first, then Nginx Proxy Manager, Portainer, and Watchtower. Those three services form the backbone of my self-hosting workflow, and after doing it enough times I've refined the setup into a repeatable process that takes under 30 minutes from a blank server. In this tutorial I'll walk you through the complete process, including the networking quirks that trip people up, how to get Portainer behind a proper SSL subdomain on day one, and why I let Watchtower handle container updates automatically (with some important caveats).
If you're looking for a solid VPS to run this on, I've been using DigitalOcean Droplets — the $6/month shared CPU Droplet with 1 GB RAM handles this whole stack comfortably, with headroom left for three or four lightweight apps.
What You Need Before We Start
This tutorial assumes you have a fresh Ubuntu 24.04 VPS, a domain name with DNS you can control, and you've already done the basics: created a non-root sudo user, disabled root SSH login, and opened ports 80 and 443 in your firewall (UFW: sudo ufw allow 80/tcp && sudo ufw allow 443/tcp). You should also open port 81, which is Nginx Proxy Manager's (NPM's) admin UI, but only temporarily — we'll lock it down behind SSL shortly.
If you haven't hardened your VPS yet, check out our guide on SSH keys, fail2ban, and UFW first. It's worth 20 minutes of your time.
Step 1: Install Docker and Docker Compose
I always use Docker's official install script rather than the Ubuntu repo version — the repo lags behind by months and I've been burned by compose plugin version mismatches before.
# Remove any old Docker packages
sudo apt remove -y docker docker-engine docker.io containerd runc
# Install prerequisites
sudo apt update && sudo apt install -y ca-certificates curl gnupg
# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add the Docker repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine and the Compose plugin
sudo apt update && sudo apt install -y \
docker-ce docker-ce-cli containerd.io \
docker-buildx-plugin docker-compose-plugin
# Add your user to the docker group so you don't need sudo every time
sudo usermod -aG docker $USER
newgrp docker
# Verify
docker compose version
You should see something like Docker Compose version v2.27.x. If you still see the old docker-compose binary (with a hyphen), that's fine, but the plugin syntax (docker compose with a space) is what this tutorial uses throughout.
Step 2: Create the Shared Docker Network
This is the step that confuses most people the first time. All three services — and every app you add later — need to be on the same Docker network so that Nginx Proxy Manager can reach the upstream containers by their service name. I create one network up front and reference it everywhere.
docker network create proxy
That's it. The network is called proxy. Every compose file in this tutorial will attach to it using external: true.
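Concretely, the stanza each compose file carries is just this (the same one you'll see in the files below):

```yaml
networks:
  proxy:
    external: true   # join the pre-created "proxy" network instead of creating a new one
```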
Step 3: Deploy the Core Stack
I put all three services in a single compose file. Create a working directory and drop the file in:
mkdir -p ~/stacks/core && cd ~/stacks/core
nano docker-compose.yml
Paste the following:
version: "3.9"

networks:
  proxy:
    external: true

services:
  # ── Nginx Proxy Manager ──────────────────────────────────────────
  nginx-proxy-manager:
    image: jc21/nginx-proxy-manager:latest
    container_name: npm
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "81:81"   # Admin UI — lock this down after first login
    volumes:
      - ./npm/data:/data
      - ./npm/letsencrypt:/etc/letsencrypt
    networks:
      - proxy

  # ── Portainer CE ─────────────────────────────────────────────────
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
    networks:
      - proxy
    # No ports exposed directly — NPM will proxy this

  # ── Watchtower ───────────────────────────────────────────────────
  watchtower:
    image: containrrr/watchtower:latest
    container_name: watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_CLEANUP=true           # Remove old images after update
      - WATCHTOWER_SCHEDULE=0 0 4 * * *   # Run at 4 AM daily (cron format)
      - WATCHTOWER_NOTIFICATIONS=shoutrrr
      # Replace the line below with your actual notification URL
      # - WATCHTOWER_NOTIFICATION_URL=generic://...
    networks:
      - proxy

volumes:
  portainer_data:
Start the stack:
docker compose up -d
docker compose ps # All three should show "running"
The default admin login is admin@example.com / changeme — you'll be forced to change both on first login, which is the right call.

Step 4: Add DNS Records and Issue SSL Certificates
Before you configure NPM, point two subdomains at your VPS IP in your DNS provider's dashboard:
- npm.yourdomain.com → your VPS IP (A record)
- portainer.yourdomain.com → your VPS IP (A record)
DNS propagation takes a few minutes. Once it resolves, open http://YOUR_VPS_IP:81 in a browser, log in, and go to Hosts → Proxy Hosts → Add Proxy Host.
For Portainer, fill in:
- Domain Names: portainer.yourdomain.com
- Scheme: http
- Forward Hostname: portainer (the container name on the proxy network)
- Forward Port: 9000
- Websockets Support: enabled (Portainer needs this)
- On the SSL tab: request a new Let's Encrypt certificate and toggle "Force SSL" on
Do the same for NPM itself (npm.yourdomain.com → forward to npm:81) so you can stop exposing port 81 to the world. Once the proxy host for NPM is working over HTTPS, block port 81 at the firewall:
sudo ufw deny 81/tcp
sudo ufw reload
Important: only run that deny command once the npm.yourdomain.com proxy host is working over HTTPS. If you lock yourself out before that, you'll need to temporarily re-allow the port or access the server console directly through your VPS provider's web UI.

Step 5: Understanding Watchtower's Schedule
Watchtower is running in the background and will check for image updates every night at 4 AM based on the schedule I set (0 0 4 * * * — note that Watchtower uses a six-field cron expression with a leading seconds field, not standard five-field crontab syntax). I like this approach for stable, well-maintained images like NPM, Portainer, and Jellyfin — they rarely ship breaking changes without warning.
That said, I do not let Watchtower touch containers that run databases (Nextcloud's MariaDB, Immich's Postgres) or anything where a major version bump could corrupt a volume. For those, I pin the image tag in the compose file and update manually after reading the changelog.
To exclude a specific container from Watchtower updates, add this label to it in your compose file:
labels:
  - "com.centurylinklabs.watchtower.enable=false"
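Putting both ideas together, a pinned database service might look something like this — a sketch only; the service name and image tag here are illustrative, not part of the stack we deployed above:

```yaml
services:
  nextcloud-db:
    image: mariadb:11.4          # pinned tag — bumped by hand after reading the changelog
    container_name: nextcloud-db
    restart: unless-stopped
    labels:
      - "com.centurylinklabs.watchtower.enable=false"   # Watchtower skips this container
    networks:
      - proxy
```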
Then set WATCHTOWER_LABEL_ENABLE=true in Watchtower's environment block if you want opt-in behaviour instead of opt-out.
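With opt-in mode switched on, the relevant fragments look like this (a sketch assuming the compose files from earlier steps; only labelled containers get updated):

```yaml
  watchtower:
    environment:
      - WATCHTOWER_LABEL_ENABLE=true   # ignore everything without an explicit opt-in label

  uptime-kuma:
    labels:
      - "com.centurylinklabs.watchtower.enable=true"   # opt this container in
```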
Step 6: Adding Your First App to the Stack
The beauty of this setup is that adding a new app — say, Uptime Kuma — takes about two minutes. Create a new directory, write a minimal compose file that attaches to the proxy network, then add a proxy host in NPM.
mkdir -p ~/stacks/uptime-kuma && cd ~/stacks/uptime-kuma
cat > docker-compose.yml << 'EOF'
version: "3.9"

networks:
  proxy:
    external: true

services:
  uptime-kuma:
    image: louislam/uptime-kuma:latest
    container_name: uptime-kuma
    restart: unless-stopped
    volumes:
      - uptime_kuma_data:/app/data
    networks:
      - proxy

volumes:
  uptime_kuma_data:
EOF
docker compose up -d
Then in NPM: new proxy host, kuma.yourdomain.com, forward to uptime-kuma:3001, request SSL. Done. Because the container joined the proxy network, NPM can resolve it by service name with zero extra configuration.
Keeping Everything Organised in Portainer
Once Portainer is up, I use it primarily for two things: watching container logs in real time (much faster than SSH + docker logs) and quickly restarting a misbehaving container without touching the terminal. I still write and maintain all my compose files on disk — I don't use Portainer's Stacks UI to deploy from the web, because I want my source of truth to be files I can commit to a private Gitea repo.
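Since the files on disk are the source of truth, I find a tiny sanity check useful before committing: every stack should attach to the shared proxy network, or NPM won't be able to reach it. Here's a sketch of such a check — the function name is my own, and it assumes the ~/stacks layout used throughout this tutorial:

```shell
#!/bin/sh
# check_proxy_network: flag compose files that don't attach to the shared
# "proxy" network (their containers would be unreachable from NPM).
check_proxy_network() {
  stacks_dir="${1:-$HOME/stacks}"
  missing=0
  for f in "$stacks_dir"/*/docker-compose.yml; do
    [ -f "$f" ] || continue
    if ! grep -q 'external: true' "$f"; then
      echo "WARNING: $f does not declare the external proxy network"
      missing=$((missing + 1))
    fi
  done
  return "$missing"   # non-zero if any stack is missing the network
}

check_proxy_network || true   # scan ~/stacks; don't abort the shell on warnings
```

It's a crude grep rather than a real YAML parse, but it catches the most common mistake (forgetting the networks block entirely) in a fraction of a second.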
If you're running this on a DigitalOcean Droplet, you can snapshot your Droplet at this point and use it as a base image for future deployments. Spin up a new Droplet from the snapshot, update the DNS records, and your entire stack is live again in under five minutes.
One habit worth keeping across all of these compose files: always set an explicit container_name. Without it, Docker generates names like core-nginx-proxy-manager-1, which work fine in NPM's forward hostname field but make reading logs and Portainer's UI noticeably more annoying.

Conclusion
With this three-service foundation — Nginx Proxy Manager for routing and SSL, Portainer for visibility, and Watchtower for automated updates — you have a robust management layer that scales with every new app you add. The shared proxy network is the key architectural decision here: get that right and adding new services stays easy indefinitely.
Next, I'd suggest deploying Vaultwarden as your first real app on top of this stack — it's lightweight, the Docker image is well-maintained, and it's genuinely useful within minutes of setup. After that, check out our guide on hardening your self-hosted apps with fail2ban and CrowdSec to add another layer of protection around your new public-facing services.