Everyone’s talking about Kubernetes lately. Google open-sourced it a couple years ago, and it’s been gaining momentum. I spent the last two weeks setting up a test cluster to see what the fuss is about.

What Is Kubernetes Anyway?

It’s a container orchestration platform. Think of it as a way to manage Docker containers across multiple machines. You tell Kubernetes “I want 5 instances of this container running,” and it makes it happen. If a container dies, Kubernetes restarts it. If a node dies, Kubernetes moves the containers to healthy nodes.

At least, that’s the theory.

Setting Up a Cluster

I started with kubeadm, which is supposed to make setup easy. Spoiler: it’s not that easy.

# On the master node
kubeadm init --pod-network-cidr=10.244.0.0/16

# On each worker node, using the token that kubeadm init prints
kubeadm join --token <token> <master-ip>:6443

Sounds simple, right? But then you need to:

  • Set up a pod network (I used Flannel)
  • Configure kubectl on the master (both shown in the snippet below)
  • Deal with RBAC permissions
  • Figure out why nodes are showing as “NotReady”
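
For the record, here's roughly what those first two bullets came down to on my cluster. The Flannel manifest URL is the one from the Flannel docs at the time, so double-check it before copying; the kubectl config commands are the ones kubeadm init prints at the end.

# Pod network: apply the Flannel manifest (URL from the Flannel docs of the day)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Configure kubectl on the master (kubeadm init prints these for you)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config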

Took me a full day to get a 3-node cluster running. Docker Swarm was way easier to set up, just saying.

Deploying an Application

Once the cluster was up, deploying was actually pretty straightforward:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0
        ports:
        - containerPort: 8080

Create it, then put a Service in front of it:

kubectl create -f deployment.yaml
kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080

And boom, you have 3 replicas of your app running with a load balancer in front. That’s actually pretty cool.
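
A quick way to sanity-check that everything actually came up (pod names will be different on your cluster):

kubectl get deployment my-app
kubectl get pods -l app=my-app
kubectl get service my-app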

The Good Parts

Self-Healing

I killed a pod to test self-healing:

kubectl delete pod my-app-12345

Kubernetes immediately spun up a new one. I didn't even notice any downtime.
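
If you leave a watch running in another terminal while you delete the pod, you can see the ReplicaSet schedule the replacement almost immediately:

# watch pod status change as the deleted pod is replaced
kubectl get pods -l app=my-app -w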

Rolling Updates

Updating to a new version is smooth:

kubectl set image deployment/my-app my-app=my-app:2.0

It gradually replaces old pods with new ones. Zero downtime deployments out of the box.
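
Two companion commands worth knowing: you can follow the rollout as it progresses, and roll back if the new image turns out to be broken.

# watch the rollout until it finishes
kubectl rollout status deployment/my-app

# roll back to the previous revision if something is wrong
kubectl rollout undo deployment/my-app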

Declarative Configuration

Everything is YAML. You describe what you want, not how to do it. Version control your infrastructure.
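
In practice that means an edit-and-apply loop: change the YAML in version control, apply it, and let Kubernetes reconcile the difference. Scaling up, for example, is just an edit and a re-apply:

# after changing replicas: 3 to replicas: 5 in deployment.yaml
kubectl apply -f deployment.yaml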

The Pain Points

Complexity

Kubernetes has a steep learning curve. Pods, ReplicaSets, Deployments, Services, Ingress, ConfigMaps, Secrets… there’s a lot to learn.

I’m still not 100% sure when to use a Deployment vs a StatefulSet vs a DaemonSet.

Networking

Pod networking is confusing. I spent hours debugging why pods couldn’t talk to each other. Turned out to be a Flannel configuration issue.

Also, getting external traffic into the cluster is more complicated than it should be. LoadBalancer-type Services only get an external IP on cloud providers. On bare metal you're left with NodePort Services or setting up an Ingress controller yourself.
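
For what it's worth, the Ingress resource itself is short; the real work is running a controller (nginx-ingress is a common choice) to act on it. A sketch, with a placeholder hostname and the Service from earlier:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - host: my-app.example.com        # placeholder hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app       # the Service created by kubectl expose
          servicePort: 80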

Resource Usage

Kubernetes itself uses a fair amount of resources. My 3-node cluster (each node has 2GB RAM) is using about 1GB just for Kubernetes components. That’s before running any actual applications.

Documentation

The docs are… okay. They cover the basics, but a lot of advanced topics are poorly documented. I’ve learned more from blog posts and Stack Overflow than the official docs.

Is It Production Ready?

Maybe? It depends on your use case.

Use Kubernetes if:

  • You’re running on a cloud provider (GKE, AWS, Azure)
  • You have complex microservices that need orchestration
  • You have a team that can dedicate time to learning it
  • You need advanced features like auto-scaling, rolling updates, etc.

Don’t use Kubernetes if:

  • You have a simple monolithic app
  • You’re running on bare metal without a dedicated ops team
  • You just want to run a few Docker containers (use Docker Swarm instead)

For us, I think it’s too early. We’re a small team and don’t have the bandwidth to become Kubernetes experts. Docker Swarm gives us 80% of what we need with 20% of the complexity.

But I can see why big companies are adopting it. If you have dozens of microservices and need sophisticated orchestration, Kubernetes makes sense.

What I’m Doing Next

I’m going to keep the test cluster running and experiment with:

  • Helm for package management
  • Persistent volumes
  • Horizontal pod autoscaling (quick example below)
  • Monitoring with Prometheus
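
As a taste of the autoscaling bullet, the imperative version is a one-liner; the thresholds below are just the values I plan to start with, and it needs a metrics source (Heapster, on this version) before it will actually scale anything:

# scale my-app between 2 and 5 replicas based on CPU usage
kubectl autoscale deployment my-app --min=2 --max=5 --cpu-percent=80

# check what the autoscaler is doing
kubectl get hpa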

Maybe in 6 months we’ll be ready to use it in production. Or maybe we’ll stick with Swarm. Time will tell.

Resources

If you want to learn Kubernetes:

Anyone else playing with Kubernetes? What’s your experience been?