How to Monitor Your Homelab with Uptime Kuma and Grafana Using Docker
Nothing stings like waking up to a dead Jellyfin instance or a Nextcloud that's been down for six hours while you slept. I learned that lesson the hard way before I finally built a proper monitoring stack. Today I'm going to walk you through deploying Uptime Kuma for uptime and alerting, paired with Grafana for rich time-series dashboards — all wired together with Docker Compose in under an hour.
Uptime Kuma handles the "is it up?" question brilliantly with a polished UI and dozens of notification integrations. Grafana answers the "how is it behaving?" question with metrics, graphs, and historical data. Together they cover every angle of homelab observability without costing you a cent in licensing fees. Let's build it.
Prerequisites
You'll need a host running Docker Engine 24+ and Docker Compose v2. I'm running this on a small Debian 12 VM, but it works equally well on a bare-metal Ubuntu box or even a Raspberry Pi 4. You should also have basic familiarity with editing YAML files. If you want to expose this stack publicly, you'll need a reverse proxy with TLS — I use Caddy, but that's a separate tutorial.
If you're running this on a VPS rather than local hardware, DigitalOcean Droplets are a solid choice — their $6/month basic Droplet handles this entire stack with room to spare, and the 99.99% SLA means your monitoring host is unlikely to be the thing that goes down.
Directory Structure
Before writing any compose files, I like to create a dedicated directory tree so data volumes stay organised and easy to back up:
mkdir -p ~/homelab-monitor/{uptime-kuma,grafana/{data,provisioning/{datasources,dashboards}}}
cd ~/homelab-monitor
The Docker Compose File
Here is the full docker-compose.yml I use. I keep Uptime Kuma and Grafana on the same custom bridge network so they can talk to each other by container name. I also include Prometheus as a metrics scraper and node-exporter to expose host-level metrics — Grafana needs a data source, and Prometheus is the standard choice for pulling container and host metrics.
cat <<'EOF' > docker-compose.yml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    restart: unless-stopped
    ports:
      - "3001:3001"
    volumes:
      - ./uptime-kuma:/app/data
    networks:
      - monitor-net

  prometheus:
    image: prom/prometheus:v2.51.2
    container_name: prometheus
    restart: unless-stopped
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus-data:/prometheus
    command:
      - "--config.file=/etc/prometheus/prometheus.yml"
      - "--storage.tsdb.retention.time=30d"
    networks:
      - monitor-net

  node-exporter:
    image: prom/node-exporter:v1.8.1
    container_name: node-exporter
    restart: unless-stopped
    pid: host
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - "--path.procfs=/host/proc"
      - "--path.sysfs=/host/sys"
      - "--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)"
    networks:
      - monitor-net

  grafana:
    image: grafana/grafana-oss:10.4.2
    container_name: grafana
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=changeme_now
      - GF_USERS_ALLOW_SIGN_UP=false
    volumes:
      - ./grafana/data:/var/lib/grafana
      - ./grafana/provisioning:/etc/grafana/provisioning
    networks:
      - monitor-net

volumes:
  prometheus-data:

networks:
  monitor-net:
    driver: bridge
EOF
Change GF_SECURITY_ADMIN_PASSWORD to something strong before you start the stack. A guessable admin password leaves Grafana wide open, and if you're port-forwarding 3000 to the internet even temporarily, you will get hit by scanners within minutes.
Configuring Prometheus
Prometheus needs to know what to scrape. Create prometheus.yml in the same directory:
cat <<'EOF' > prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: "node-exporter"
    static_configs:
      - targets: ["node-exporter:9100"]
EOF
The key thing here is using the container name node-exporter rather than localhost. Because all containers share the monitor-net bridge network, Docker's internal DNS resolves container names automatically. This is one of those small details that trips people up when they're new to custom bridge networks.
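If you want to confirm that Prometheus actually resolved and scraped both targets, you can hit its HTTP API from the host. A small sketch (it assumes curl is available and the port mapping from the compose file; the helper just counts "up" targets in the JSON response so you don't need jq):

```shell
# Count scrape targets that Prometheus' /api/v1/targets API reports healthy.
# Reads the JSON on stdin; each active target carries a "health" field.
count_up() {
  grep -o '"health":"up"' | wc -l | tr -d ' '
}

# On the live host (hostname/port assume the compose file above):
#   curl -s http://localhost:9090/api/v1/targets | count_up
# Expect 2 once both the prometheus and node-exporter jobs are being scraped.
```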
Bringing the Stack Up
With both files in place, spin everything up:
docker compose up -d
docker compose ps
After about 30 seconds you should see all four containers in the running state. Hit http://YOUR_HOST_IP:3001 for Uptime Kuma and http://YOUR_HOST_IP:3000 for Grafana.
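Thirty seconds is a guess, not a guarantee — on a Raspberry Pi the first Grafana boot can take noticeably longer. A small polling helper (a sketch; the curl target in the usage comment assumes the ports mapped above) saves you from refreshing the browser:

```shell
# Poll a check command once per second until it succeeds or we run out of tries.
wait_for() {
  cmd="$1"
  tries="${2:-30}"
  while [ "$tries" -gt 0 ]; do
    if sh -c "$cmd" >/dev/null 2>&1; then
      return 0
    fi
    tries=$((tries - 1))
    sleep 1
  done
  return 1
}

# Usage (assumes curl and the port mappings from the compose file):
#   wait_for 'curl -fs http://localhost:3001' && echo "Uptime Kuma is ready"
```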
Setting Up Uptime Kuma
On first launch, Uptime Kuma prompts you to create an admin account. Do that, then click Add New Monitor. I set up the following monitor types across my homelab:
- HTTP(S) — for every web-facing service (Jellyfin, Immich, Vaultwarden, Gitea)
- TCP Port — for things like my SSH bastion on port 22 or a Minecraft server
- DNS — to verify my AdGuard Home is resolving correctly
- Docker Container — Kuma can connect to the Docker socket to monitor container health directly
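The Docker Container monitor type only works if Kuma can reach the Docker socket. One way — a sketch; note that mounting the socket is effectively root-level access to the host, so only do this on a machine you trust — is to extend the uptime-kuma service's volumes in the compose file:

```yaml
  uptime-kuma:
    # ...existing settings unchanged...
    volumes:
      - ./uptime-kuma:/app/data
      # Read-only socket mount so Kuma can query container state
      - /var/run/docker.sock:/var/run/docker.sock:ro
```

After recreating the container, register the socket path under Settings → Docker Hosts before adding the monitor.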
For notifications, I prefer the Telegram integration because it's instant and free. Uptime Kuma also supports Discord, Slack, email via SMTP, PagerDuty, and about 90 others. Set your notification channel under Settings → Notifications first, then attach it to each monitor.
Connecting Prometheus to Grafana
Log into Grafana at port 3000 with the admin credentials you set in the compose file. Navigate to Connections → Data Sources → Add data source, choose Prometheus, and set the URL to http://prometheus:9090. Click Save & Test — you should see a green "Successfully queried the Prometheus API" message.
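Because the compose file already mounts ./grafana/provisioning, you can skip those clicks entirely and provision the data source declaratively. A minimal sketch (the filename is arbitrary — Grafana reads any YAML file in the datasources directory at startup):

```yaml
# ./grafana/provisioning/datasources/prometheus.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
```

With this in place before first boot, the data source is already there when you log in.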
Now import a dashboard. The easiest starting point is the official Node Exporter Full dashboard. Go to Dashboards → Import and enter dashboard ID 1860. Select your Prometheus data source and click Import. You'll immediately get CPU usage, memory, disk I/O, network throughput, and system load graphs for your host — all populated from live data.
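Once data is flowing, you can also query it directly in Grafana's Explore view. For example, a PromQL expression for root-filesystem usage (the metric names come from node-exporter; on a busier host you may want to filter out tmpfs and overlay fstypes):

```promql
# Percentage of space used on the root filesystem
100 * (
  1 - node_filesystem_avail_bytes{mountpoint="/"}
    / node_filesystem_size_bytes{mountpoint="/"}
)
```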
Persistent Volumes and Backups
Notice that Uptime Kuma stores its SQLite database in ./uptime-kuma/kuma.db and Grafana stores dashboards and users in ./grafana/data. I back these up nightly with a simple cron job that tarballs the entire ~/homelab-monitor directory and ships it to Backblaze B2. Losing your monitoring config is more painful than it sounds — rebuilding 40 monitors from scratch is not fun.
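My cron job boils down to something like the following sketch (the paths and date format are assumptions, and the B2 upload — which I do with rclone — is left out). Tarring a live SQLite file is usually fine for a low-write database like Kuma's, but stopping the container first, or using sqlite3's .backup command, is the safer route:

```shell
#!/bin/sh
# Nightly backup sketch: tarball the monitoring directory with a date stamp.
backup_monitor() {
  src="${1:-$HOME/homelab-monitor}"
  dest="${2:-$HOME/backups}"
  mkdir -p "$dest"
  # -C archives paths relative to the parent of $src so restores are tidy
  tar -czf "$dest/homelab-monitor-$(date +%F).tar.gz" \
    -C "$(dirname "$src")" "$(basename "$src")"
}

# Example crontab entry (hypothetical script path): run at 03:15 nightly
#   15 3 * * * /home/you/bin/backup-monitor.sh
```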
Exposing the Stack Safely
I only expose these services internally via Tailscale. Both Uptime Kuma (3001) and Grafana (3000) are blocked at the UFW level from the public internet:
# Allow access only via the Tailscale interface (Tailscale uses the
# 100.64.0.0/10 CGNAT range)
sudo ufw allow in on tailscale0 to any port 3000
sudo ufw allow in on tailscale0 to any port 3001
sudo ufw allow in on tailscale0 to any port 9090
# Then deny everything else on those ports — order matters, since UFW
# matches rules in the order they were added
sudo ufw deny 3000
sudo ufw deny 3001
sudo ufw deny 9090
If you don't use Tailscale, put Caddy or Nginx Proxy Manager in front and enforce authentication before letting anything through. Never leave Grafana or Uptime Kuma naked on the internet — the credential stuffing bots will find them.
What I Monitor and Why
After running this stack for several months, my Uptime Kuma dashboard has grown to 28 monitors. The ones that have actually caught real outages: my Cloudflare tunnel health check (it went down silently twice), my Vaultwarden HTTPS endpoint (a Let's Encrypt renewal failure), and my NAS SMB share availability (a ZFS scrub was starving I/O). Grafana's node-exporter dashboard caught a slow disk filling up on my Nextcloud host before it became a crisis.
The combination is genuinely powerful. Uptime Kuma tells you something is wrong right now. Grafana tells you why it went wrong and when the trend started. Both signals together mean you spend less time firefighting and more time actually building stuff.
Next Steps
With your monitoring stack running, I'd suggest two immediate next steps. First, add cAdvisor as an additional scrape target in Prometheus — it gives you per-container CPU and memory metrics, which is invaluable when a misbehaving container is eating your host alive. Second, explore Grafana Alerting (separate from Uptime Kuma) for threshold-based alerts — for example, fire an alert when disk usage crosses 85%. Together these two additions turn your stack from a dashboard into a true observability platform.
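As a starting point for the cAdvisor addition, here's a sketch of the extra service (the image tag is an assumption — check for the current release) plus the matching Prometheus scrape job:

```yaml
# Add under services: in docker-compose.yml
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:v0.49.1
    container_name: cadvisor
    restart: unless-stopped
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker:/var/lib/docker:ro
    networks:
      - monitor-net

# And a new job under scrape_configs: in prometheus.yml
#   - job_name: "cadvisor"
#     static_configs:
#       - targets: ["cadvisor:8080"]
```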
As mentioned in the prerequisites, if you're not running this on local hardware, DigitalOcean Droplets are a dependable, predictably priced home for the stack. Your monitoring host needs to be the most reliable machine in your fleet — it's hard to trust an uptime monitor that itself goes down.