Tiny Bare-metal Kubernetes Cluster Setup


JR

2 minute read

A practical guide to building a low-cost, bare-metal Kubernetes cluster for hands-on learning with real-world tradeoffs and troubleshooting tips.

Why This Matters

Running Kubernetes on physical hardware teaches cluster lifecycle management, networking quirks, and storage pitfalls that cloud abstracts away. It’s invaluable for certs and debugging production-like failures.

Hardware Selection

Avoid Raspberry Pis for general learning: ARM compatibility issues (e.g., Fluent Bit, some CRDs) and non-standard kernel page sizes waste time. Instead:

  • Mini PCs: Target Intel NUCs, Crestron UC-Engines, or similar x86_64 devices ($40–60/unit).
  • Specs: 8 GB RAM minimum per node (one control-plane node plus two workers), 64-bit CPUs, SSD storage.
  • Why: Better driver support, wider container compatibility, and closer parity with cloud/x86 clusters.
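Before drafting a box into the cluster, it's worth booting a live USB and confirming it actually meets those specs; standard Linux tools are enough:

```shell
# Confirm the architecture -- want x86_64, not aarch64
uname -m

# Total RAM; want at least 8G per node
free -h | awk '/^Mem:/ {print $2}'

# ROTA=0 means a non-rotational disk, i.e. an SSD
lsblk -d -o NAME,ROTA,SIZE
```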

OS and Prerequisites

  1. Install a Kubernetes-supported OS (Ubuntu 22.04 LTS recommended).
  2. Disable swap:
    sudo swapoff -a && sudo sed -i '/swap/d' /etc/fstab  
    
  3. Enable kernel modules (and persist br_netfilter across reboots):
    sudo modprobe br_netfilter && sudo sysctl -w net.bridge.bridge-nf-call-iptables=1  
    echo 'br_netfilter' | sudo tee /etc/modules-load.d/k8s.conf  
    
  4. Install container runtime (Docker or containerd):
    # containerd example  
    sudo apt install -y containerd && sudo mkdir -p /etc/containerd  
    containerd config default | sudo tee /etc/containerd/config.toml  
    # kubelet expects the systemd cgroup driver on Ubuntu  
    sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml  
    sudo systemctl restart containerd  
    

Cluster Setup Workflow
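The steps below assume kubeadm, kubelet, and kubectl are already installed on every node. On Ubuntu the usual route is the pkgs.k8s.io apt repository; a sketch (v1.30 is just an example pin, substitute your target minor version):

```shell
# Prerequisites for adding the Kubernetes apt repo
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p /etc/apt/keyrings

# Add the pkgs.k8s.io signing key and repo for the chosen minor version
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key \
  | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install and pin the tools so routine apt upgrades don't skew versions
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```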

  1. Control Plane:

    sudo kubeadm init --pod-network-cidr=10.244.0.0/16  
    

    Follow the printed instructions to set up your kubeconfig, and save the kubeadm join command it outputs for step 3.
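The kubeconfig step kubeadm prints boils down to copying the admin credentials to your user so kubectl works without sudo:

```shell
# Copy the admin kubeconfig generated by kubeadm to your user account
mkdir -p "$HOME/.kube"
sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
```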

  2. Install CNI:

    kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml  
    
  3. Join Workers:
    Use the kubeadm join command from control plane output. Verify nodes:

    kubectl get nodes  
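If you run kubeadm join more than 24 hours after init, the original token will have expired; regenerate a complete join command on the control plane:

```shell
# Prints a ready-to-paste "kubeadm join <ip>:6443 --token ... --discovery-token-ca-cert-hash ..." line
sudo kubeadm token create --print-join-command
```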
    

Tooling

  • kubeadm: Cluster bootstrapping (simplest path).
  • kops: Advanced cluster management (overkill for tiny clusters).
  • Helm: Package management for testing workloads.
  • Prometheus/Grafana: Observe cluster health (deploy via Helm).
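For the Prometheus/Grafana bullet, the usual Helm route is the kube-prometheus-stack chart from the prometheus-community repo; a sketch (the release name monitoring is arbitrary):

```shell
# Add the community chart repo and install the combined Prometheus + Grafana stack
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace

# Grafana runs in-cluster; port-forward to reach the UI at localhost:3000
kubectl -n monitoring port-forward svc/monitoring-grafana 3000:80
```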

Policy Example

Maintenance Policy:

  • Rotate certs before their one-year expiry: kubeadm certs check-expiration to audit, kubeadm certs renew all to renew (cluster upgrades also renew them automatically).
  • Reboot nodes monthly to catch boot config drift.
  • Backup etcd weekly (in a kubeadm cluster the pod is named etcd-<node-name> and etcdctl needs the client certs):
    kubectl -n kube-system exec etcd-<control-plane-node-name> -- etcdctl snapshot save /var/lib/etcd/snapshot.db \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key  
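To make "weekly" automatic, the snapshot command can be wrapped in a script and driven by cron; a sketch (the script path, schedule, and retention count are assumptions, adapt to your setup):

```shell
# /etc/cron.d/etcd-backup (hypothetical file) -- run Sundays at 03:00 as root:
#   0 3 * * 0 root /usr/local/bin/etcd-snapshot.sh

# /usr/local/bin/etcd-snapshot.sh (hypothetical path)
#!/bin/sh
set -eu
DEST="/var/backups/etcd-$(date +%Y%m%d).db"   # dated snapshot file
# ... run the etcdctl snapshot command from the bullet above, saving to "$DEST" ...
# Prune: keep only the four newest snapshots
ls -1t /var/backups/etcd-*.db | tail -n +5 | xargs -r rm --
```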
    

Tradeoffs

  • x86 vs. ARM: Mini PCs avoid ARM’s niche issues but use more power.
  • Cost vs. Scale: 3 nodes are enough for learning but won’t handle large workloads.
  • Fragility: Physical hardware risks (e.g., SD card corruption on Pis) require backups.

Troubleshooting

  • Nodes Not Ready:
    kubectl describe node <node-name>  
    journalctl -u containerd --since "10 minutes ago"  
    
  • Networking Issues:
    ip a && ip route && kubectl get pods -n kube-system  
    
  • CNI Failures:
    kubectl describe pod -n kube-system <cni-pod-name>  
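The checks above can be rolled into one triage script to run first whenever something looks off; a sketch assuming a working kubeconfig on the machine running it:

```shell
#!/bin/sh
# Quick cluster triage: node status, unhealthy system pods, recent runtime logs
set -u

echo "== Nodes =="
kubectl get nodes -o wide

echo "== kube-system pods not in Running phase =="
kubectl get pods -n kube-system --field-selector=status.phase!=Running

echo "== containerd logs (last 10 minutes) =="
journalctl -u containerd --since "10 minutes ago" --no-pager | tail -n 50
```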
    

Conclusion

Start small, break things, fix them. A bare-metal cluster forces you to learn the hard way—exactly what you need for certs and real-world resilience. Prioritize x86 hardware, automate backups, and document every failure.

Source thread: Building a Tiny Bare-Metal K8S cluster for self learning?
