Container Orchestration Beyond Docker: When to Consider Kubernetes for Self-Hosting

I spent three years running everything through Docker Compose before I finally asked myself the right question: was I actually hitting Docker Compose's limits, or just thinking I had to move to Kubernetes because it sounded more "enterprise"? The truth is messier. Kubernetes is powerful, but it's also heavy. For most homelabs, it's overkill. But for some setups, it's exactly what you need. Let me walk you through how to tell the difference.

The Case for Staying with Docker Compose

Docker Compose is genuinely excellent at what it does: orchestrating a bounded set of containers on a single machine or a small handful of machines. When I run 8–12 services (Nextcloud, Vaultwarden, Immich, Jellyfin, AdGuard Home, Prometheus, Grafana, a media downloader), Docker Compose keeps them organized, declarative, and reproducible. My docker-compose.yml is about 400 lines. I can version it in Git, disaster-recover it in under an hour, and modify it without learning a new abstraction layer.

The operational overhead is minimal. I ssh in, run docker compose up -d, and if something breaks, I read the logs with docker compose logs -f service-name. The learning curve is shallow. The failure modes are predictable.

Docker Compose scales well to about 15–20 services before the file becomes unwieldy and resource management starts to feel ad-hoc. I think of Docker Compose as the right tool when you own one or two machines and you're not running workloads that need automatic failover or geographic distribution.
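Before resource management gets truly ad-hoc, it's worth knowing Compose can pin limits per service. A minimal sketch (the service name and values here are illustrative, not from my actual stack):

```yaml
# docker-compose.yml fragment: explicit per-service resource limits.
# Service name, image, and values are placeholders for illustration.
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    restart: unless-stopped
    mem_limit: 2g        # hard memory cap for this container
    cpus: "1.5"          # at most 1.5 CPU cores
    volumes:
      - ./config/jellyfin:/config
```

If you find yourself hand-tuning numbers like these across 15+ services, that's one sign a real scheduler might earn its keep.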

The Real Kubernetes Triggers

Kubernetes becomes attractive (not just flashy) when at least one of these is true:

- You run services across three or more machines and need automatic failover when a node dies.
- You want workloads rebalanced across nodes without manual intervention.
- You need rolling updates with no downtime for public-facing services.
- Per-service resource management in Compose has become ad-hoc hand-tuning and you want a real scheduler.

For a hobbyist homelab with 2–3 machines? Kubernetes is probably overengineering. For a small lab backing 10+ services across 4+ nodes? Start thinking about it.

K3s: The Lightweight Path

If you do decide Kubernetes is worth learning, start with K3s, not full Kubernetes. K3s is a single-binary, production-grade Kubernetes distribution made by Rancher Labs. It's designed for resource-constrained environments: minimal memory footprint, no external database required (uses SQLite by default), and it installs in under five minutes.

I tested K3s on a Hetzner VPS (around $40/year entry level) and was surprised how smoothly it ran. A single 2-core, 4GB RAM node handled 8 containerized services with headroom to spare. That's the sweet spot: if you're renting a VPS instead of running hardware at home, K3s makes sense before full Kubernetes does.

Here's how I installed K3s on a fresh Ubuntu 22.04 VPS:

#!/bin/bash
# Install K3s (server node); the install script needs root
curl -sfL https://get.k3s.io | sh -

# Point kubectl at the bundled kubeconfig (readable by root only
# by default, so run this as root or adjust the file's permissions)
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes

# Verify the installation: all kube-system pods should reach Running
kubectl get pods -A

That's it. K3s runs as a systemd service, restarts automatically on reboot, and the kubeconfig lives at /etc/rancher/k3s/k3s.yaml. To join another node as an agent:

#!/bin/bash
# On the server node, read the join token
cat /var/lib/rancher/k3s/server/node-token

# On a second machine (agent), install K3s in agent mode.
# The install script picks up K3S_URL and K3S_TOKEN from the environment.
export K3S_URL=https://server-ip:6443   # replace with your server's address
export K3S_TOKEN=your-token-from-above
curl -sfL https://get.k3s.io | sh -

Within seconds, kubectl get nodes on the server shows both machines. New pods get scheduled across both nodes automatically; existing containers stay where they are until they're recreated.

Tip: K3s uses Traefik as its default ingress controller and includes local-path-provisioner for persistent volumes. You can replace both, but the defaults are solid and keep your cluster minimal.
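To see those defaults in action, here's roughly what deploying a small web service looks like. Everything below is a sketch: the names and hostname are placeholders I made up, though traefik/whoami is a real demo image.

```yaml
# deploy.yaml: a Deployment, a Service, and an Ingress that K3s's
# bundled Traefik controller picks up. Names and host are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 2
  selector:
    matchLabels: { app: whoami }
  template:
    metadata:
      labels: { app: whoami }
    spec:
      containers:
        - name: whoami
          image: traefik/whoami   # tiny demo web server
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  selector: { app: whoami }
  ports:
    - port: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami
spec:
  rules:
    - host: whoami.example.lan   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port: { number: 80 }
```

Apply it with kubectl apply -f deploy.yaml and Traefik starts routing the hostname to the pods, no extra ingress controller install required.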

Minikube vs K3s vs Full Kubernetes

The landscape is crowded. Here's how I think about them: Minikube is a local development tool that runs a single-node cluster inside a VM or container on your workstation; it's great for learning the API, but it isn't meant to host real services. K3s is a production-grade distribution trimmed down for small machines, which makes it the right fit for homelabs and VPSes. Full Kubernetes (via kubeadm or a managed offering) makes sense when you have many nodes, a team, or requirements the K3s defaults can't meet.

The Decision Matrix

Here's my framework for deciding, distilled:

| Your Setup | Recommendation |
| --- | --- |
| Single machine, 5–12 services | Docker Compose |
| Single machine, 15+ services, need scaling | Docker Compose (still) or K3s (learning) |
| 2–3 machines, services can tolerate brief downtime | Docker Compose + custom restart logic |
| 3+ machines, need automatic failover and rebalancing | K3s |
| 5+ machines, complex storage, networking, team environment | Full Kubernetes or K3s with heavy customization |

The Hidden Costs of Kubernetes

Everyone focuses on the power of Kubernetes and forgets to mention the operational tax. Here's what you actually pay:

Learning curve. Kubernetes introduces namespaces, Services, Ingresses, Deployments, StatefulSets, PersistentVolumeClaims, Operators, and Helm charts. That's not 2–3 new concepts; it's 10+. I spent three weeks before I could debug a pod crash without Googling.

Persistent storage is awkward. Docker Compose volumes are straightforward: a path on disk. Kubernetes persistent volumes require storage classes, claims, and often external storage backends (NFS, iSCSI, or CSI drivers). If you're running a homelab with a single NAS, Docker Compose's approach is cleaner.
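For comparison, here's the same "give this container a disk path" intent in Kubernetes terms. The claim name is a placeholder; the storage class assumes K3s's bundled local-path provisioner.

```yaml
# A PersistentVolumeClaim against K3s's default local-path storage
# class. The claim name and size are placeholders for illustration.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path   # K3s's bundled provisioner
  resources:
    requests:
      storage: 20Gi
```

That's the Kubernetes equivalent of a one-line bind mount in Compose, and you still have to reference the claim from every pod spec that needs it.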

Stateful services are messier. Running a database in Kubernetes isn't bad, but it's not trivial. StatefulSets, headless services, and persistent claims add friction. For a single-instance PostgreSQL or MySQL, Docker Compose is simpler and honestly just as reliable.
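For contrast, a single-instance PostgreSQL in Compose is a handful of lines. A minimal sketch (the password is obviously a placeholder):

```yaml
# docker-compose.yml fragment: single-instance PostgreSQL.
# Replace the placeholder password with a real secret.
services:
  db:
    image: postgres:16
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: change-me   # placeholder, do not ship this
    volumes:
      - ./pgdata:/var/lib/postgresql/data
```

The Kubernetes version of this needs a StatefulSet, a headless Service, and a PersistentVolumeClaim to express the same thing.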

Resource overhead. K3s is lightweight, but a minimal Kubernetes cluster still uses more RAM and CPU than Docker Compose alone. Every pod gets its own network namespace. The kubelet, scheduler, and controller-manager run in the background. For a 2GB RAM machine, this matters.

Watch out: If you migrate to Kubernetes for the sake of it, you'll spend months debugging network policies, DNS resolution, and volume mounting issues instead of running your actual services. Don't let architecture porn steal your weekend.

A Real Hybrid: Compose + K3s Coexistence

Here's what I'm doing now: Docker Compose for my homelab (Nextcloud, Immich, a few smaller services), K3s on a rented VPS for stateless workloads (a web service, a bot, a scraper). Docker Compose handles the stateful, low-scale stuff where I want simplicity. K3s handles the distributed, public-facing services where I need uptime and scaling.

This is the pragmatic middle ground. You don't force everything into Kubernetes. You don't refuse to use it. You pick the right tool for each problem.

Next Steps

If you're happy with Docker Compose today, keep using it. But if you're running 3+ machines and want them to feel like one coherent infrastructure, spend a weekend spinning up K3s on a Hetzner VPS (they have renewal deals around $40/year for a 2-core instance) or a spare Raspberry Pi. Deploy a simple stateless service—a web app, a bot, something you don't mind breaking. Learn how namespaces, services, and ingresses work. Then decide: does Kubernetes solve a problem you actually have, or is it just interesting to learn?

Either way, you'll know. That's worth the weekend.
