Securing Your Homelab with SSL/TLS Certificates and HTTPS
Running a homelab means exposing services to your network—and increasingly, to the internet. Without SSL/TLS certificates and HTTPS, your traffic is unencrypted and vulnerable. I've been securing my homelab for years, and the difference between HTTP and a properly configured HTTPS setup is night and day. In this guide, I'll walk you through obtaining certificates, configuring them across your services, and automating renewal so you never have to think about it again.
Why HTTPS Matters in Your Homelab
You might think: "My homelab is behind a firewall. Do I really need HTTPS?" The answer is absolutely yes. Here's why:
- Network traffic interception: Even on your local network, if you're accessing services over HTTP, anyone with network access can intercept credentials, session tokens, and sensitive data.
- Remote access security: If you expose any homelab service to the internet—through a reverse proxy, VPN tunnel, or Cloudflare—HTTPS is non-negotiable. It's the only thing standing between your credentials and attackers.
- Browser trust: Modern browsers mark HTTP sites as "not secure." If you're accessing your own services, you want the green lock icon, not a scary warning.
- API and third-party integrations: Many self-hosted apps (Nextcloud, Jellyfin, Home Assistant) integrate with mobile apps and third-party services that require HTTPS.
When I first set up my homelab, I ignored this and used HTTP everywhere internally. Within months, I had a credentials leak from someone sniffing traffic on my IoT network segment. That was the wake-up call I needed.
Understanding Certificate Types
There are three main approaches to certificates in homelab scenarios:
Let's Encrypt (Free, Public Domains)
If you own a domain and can expose port 80 or 443 to the internet, Let's Encrypt is your best option. It's free, automated, and trusted by all browsers. The downside: you need a public domain and must handle DNS or HTTP validation.
Self-Signed Certificates (No Cost, Internal Use)
Perfect for internal-only services. You generate your own certificate and trust it on each client machine. No external dependencies, no domain required. The catch: browsers will complain until you add the certificate to each device's trust store.
Private CA (Best for Complex Homelabs)
If you have many internal services, running your own certificate authority (CA) is elegant. You create one CA, sign certificates for all your internal domains, and distribute the CA once to all clients.
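The private CA workflow above can be sketched with openssl. File names and the 825-day leaf lifetime here are illustrative choices, not requirements:

```shell
# 1. Create the CA key and a long-lived, self-signed CA certificate
openssl genrsa -out ca.key 4096 2>/dev/null
openssl req -new -x509 -key ca.key -out ca.crt -days 3650 -subj "/CN=Homelab Root CA"

# 2. Create a key and signing request (CSR) for one internal service
openssl genrsa -out nextcloud.key 2048 2>/dev/null
openssl req -new -key nextcloud.key -out nextcloud.csr -subj "/CN=nextcloud.local"

# 3. Sign the CSR with the CA, producing the service certificate
openssl x509 -req -in nextcloud.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out nextcloud.crt -days 825 2>/dev/null

# 4. Verify the chain; prints "nextcloud.crt: OK"
openssl verify -CAfile ca.crt nextcloud.crt
```

Repeat steps 2-3 for each service; clients only ever need ca.crt in their trust store.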
For most homelabs, I recommend a hybrid approach: Let's Encrypt for anything exposed to the internet, and self-signed or private CA for internal services.
Let's Encrypt with Certbot: The Standard Approach
Certbot is the easiest way to obtain and renew Let's Encrypt certificates. I'll show you how to get a certificate for a public domain and automate renewal.
Installation and Basic Usage
# On Ubuntu/Debian
sudo apt-get update
sudo apt-get install certbot python3-certbot-dns-cloudflare -y
# Standalone mode (requires port 80 open to internet)
sudo certbot certonly --standalone -d example.com -d www.example.com
# DNS validation (works even if port 80 is blocked; requires a credentials
# file containing your Cloudflare API token)
sudo certbot certonly --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/certbot/cloudflare.ini \
  -d example.com -d www.example.com
When you run this for the first time, Certbot prompts you to agree to terms and provide an email. The certificate is stored in /etc/letsencrypt/live/example.com/.
Configuring Nginx with the Certificate
Once you have the certificate, configure your reverse proxy to use it. Here's a working Nginx config for a self-hosted service:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com;

    # Let's Encrypt certificate paths
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Strong SSL settings
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    # HSTS (strict transport security)
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    # Proxy to backend service
    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# Redirect HTTP to HTTPS
server {
    listen 80;
    listen [::]:80;
    server_name example.com;
    return 301 https://$server_name$request_uri;
}
Reload Nginx and test:
sudo nginx -t
sudo systemctl reload nginx
curl -I https://example.com
Automating Renewal
Certbot includes a systemd timer that automatically renews certificates 30 days before expiration. Check that it's enabled:
sudo systemctl status certbot.timer
sudo systemctl enable certbot.timer
# Test renewal manually (doesn't count against rate limits)
sudo certbot renew --dry-run
For services like Docker containers that need the certificate in a specific location, you can create a renewal hook:
sudo nano /etc/letsencrypt/renewal-hooks/post/restart-services.sh
#!/bin/bash
# Copy certificate to Docker volume and restart container
cp /etc/letsencrypt/live/example.com/fullchain.pem /var/lib/docker/volumes/certs/_data/
cp /etc/letsencrypt/live/example.com/privkey.pem /var/lib/docker/volumes/certs/_data/
docker restart my-app
Make it executable:
sudo chmod +x /etc/letsencrypt/renewal-hooks/post/restart-services.sh
Now when Certbot renews, it automatically updates your services.
If a renewal fails, check the logs:

sudo tail -f /var/log/letsencrypt/letsencrypt.log

I recommend setting up monitoring alerts for renewal failures using Uptime Kuma or similar.

Self-Signed Certificates for Internal Services
For services that never leave your network, self-signed certificates are faster and simpler. I use this for internal Docker services, local development, and management interfaces.
Generate a certificate valid for 10 years:
# Generate private key
openssl genrsa -out private.key 2048
# Generate self-signed certificate
openssl req -new -x509 -key private.key -out certificate.crt -days 3650 \
-subj "/C=US/ST=State/L=City/O=Organization/CN=homelab.local"
# Combine for services that need both
cat private.key certificate.crt > combined.pem
# Verify
openssl x509 -in certificate.crt -text -noout
For multiple internal domains (e.g., nextcloud.local, jellyfin.local), create a config file and a wildcard certificate:
cat > san.conf << EOF
[req]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = req_distinguished_name
req_extensions = v3_req
# "openssl req -x509" reads x509_extensions, not req_extensions; without
# this line the SAN would be silently dropped from the certificate
x509_extensions = v3_req
[req_distinguished_name]
C = US
ST = State
L = City
O = Organization
CN = *.homelab.local
[v3_req]
subjectAltName = DNS:*.homelab.local,DNS:homelab.local
EOF
openssl req -new -x509 -days 3650 -nodes -out /etc/ssl/certs/homelab.crt \
-keyout /etc/ssl/private/homelab.key -config san.conf
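Before deploying, it's worth confirming the SANs actually made it into the certificate. This sketch recreates the same config against throwaway paths under /tmp so it can be run anywhere:

```shell
# Recreate the wildcard cert against temporary paths, then print the
# subjectAltName extension to confirm both DNS entries are present
cat > /tmp/san.conf << 'EOF'
[req]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = req_distinguished_name
x509_extensions = v3_req
[req_distinguished_name]
CN = *.homelab.local
[v3_req]
subjectAltName = DNS:*.homelab.local,DNS:homelab.local
EOF

openssl req -new -x509 -days 1 -nodes -out /tmp/homelab.crt \
  -keyout /tmp/homelab.key -config /tmp/san.conf

# Should list both DNS:*.homelab.local and DNS:homelab.local
openssl x509 -in /tmp/homelab.crt -noout -ext subjectAltName
```

If the output is missing the subjectAltName block, browsers will reject the certificate for any hostname, since modern browsers ignore the CN entirely.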
Then use these in your Nginx config (same as the Let's Encrypt example, but point to the self-signed paths). After deployment, add the certificate to your trusted store on each client:
# On Linux
sudo cp homelab.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
# On macOS
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain homelab.crt
# On Windows (PowerShell as Admin)
Import-Certificate -FilePath "C:\path\to\homelab.crt" -CertStoreLocation Cert:\LocalMachine\Root
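Back in Nginx, the only change from the Let's Encrypt example is the pair of certificate paths. A fragment, assuming the output paths used above:

```nginx
ssl_certificate     /etc/ssl/certs/homelab.crt;
ssl_certificate_key /etc/ssl/private/homelab.key;
```

Everything else in the server block (protocols, ciphers, proxy headers) stays the same.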
Caddy: SSL Made Trivial
If you want to eliminate certificate management entirely, switch to Caddy. It handles SSL/TLS automatically for any domain you point to it, whether public or internal.
Here's a complete Docker Compose setup with Caddy:
version: '3.8'

services:
  caddy:
    image: caddy:latest
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - homelab

  # Example backend service
  nextcloud:
    image: nextcloud:latest
    container_name: nextcloud
    restart: unless-stopped
    environment:
      - NEXTCLOUD_TRUSTED_DOMAINS=nextcloud.example.com
    volumes:
      - nextcloud_data:/var/www/html
    networks:
      - homelab

volumes:
  caddy_data:
  caddy_config:
  nextcloud_data:

networks:
  homelab:
    driver: bridge
Create the Caddyfile:
cat > Caddyfile << EOF
# Public domain with automatic HTTPS
nextcloud.example.com {
    reverse_proxy nextcloud:80
}

# Internal domain with self-signed cert
nextcloud.local {
    reverse_proxy nextcloud:80
    tls internal
}

# Multiple services
example.com {
    reverse_proxy localhost:8080
}

api.example.com {
    reverse_proxy localhost:3000
}

# Catch-all for admin interface
localhost:8443 {
    tls internal
    reverse_proxy localhost:9090
}
EOF
Start it:
docker-compose up -d caddy
That's it. Caddy handles certificate acquisition, renewal, and deployment automatically. For public domains, it uses Let's Encrypt. For internal domains, it generates and manages self-signed certificates on the fly. This is why I prefer Caddy for new homelab builds—it removes an entire class of operational headaches.
Certificate Pinning and Validation
If you're running critical services or custom applications, consider certificate pinning—forcing clients to accept only a specific certificate. This prevents man-in-the-middle attacks even if a certificate authority is compromised.
In your application config, store the certificate's public key hash:
# Extract public key and hash it
openssl x509 -in certificate.crt -pubkey -noout | openssl pkey -pubin -outform DER | openssl dgst -sha256 -binary | base64
Use this hash in your application's pinning configuration (varies by framework, but most have a way to verify certificate pins on requests).
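To sanity-check that hash pipeline, you can verify that the pin derived from a certificate matches the pin derived from its private key. This sketch uses throwaway files and runs entirely locally:

```shell
# Generate a throwaway key and self-signed cert
openssl genrsa -out /tmp/pin.key 2048 2>/dev/null
openssl req -new -x509 -key /tmp/pin.key -out /tmp/pin.crt -days 1 -subj "/CN=pin-demo"

# Pin computed from the certificate's embedded public key
cert_pin=$(openssl x509 -in /tmp/pin.crt -pubkey -noout \
  | openssl pkey -pubin -outform DER | openssl dgst -sha256 -binary | base64)

# Pin computed directly from the private key's public half
key_pin=$(openssl pkey -in /tmp/pin.key -pubout -outform DER \
  | openssl dgst -sha256 -binary | base64)

# Both derivations hash the same SubjectPublicKeyInfo, so they must agree
[ "$cert_pin" = "$key_pin" ] && echo "pin OK"
```

Note that a pin survives certificate renewal only if the private key is reused, so pin the key you control and keep a backup pin for key rotation.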
Monitoring Certificate Expiration
Even with automation, certificates can fail to renew silently. I monitor all my certificates with a simple script:
#!/bin/bash
# Check certificate expiration (runs daily via cron)
for cert in /etc/letsencrypt/live/*/fullchain.pem /etc/ssl/certs/*.crt; do
    if [ -f "$cert" ]; then
        expiry=$(openssl x509 -in "$cert" -noout -enddate | cut -d= -f2)
        expiry_epoch=$(date -d "$expiry" +%s)
        now_epoch=$(date +%s)
        days_left=$(( (expiry_epoch - now_epoch) / 86400 ))
        # 14-day threshold is a reasonable default; adjust to taste
        if [ "$days_left" -lt 14 ]; then
            echo "WARNING: $cert expires in $days_left days"
        fi
    fi
done
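To run the check daily, a crontab entry like this works (the script path is illustrative; install the script wherever suits your setup):

```shell
# Run the expiry check every morning at 06:00 and send output to syslog
0 6 * * * /usr/local/bin/check-cert-expiry.sh 2>&1 | logger -t cert-check
```

Pair it with an alert rule on the syslog output, or have the script POST to an Uptime Kuma push monitor, and silent renewal failures stop being silent.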