# Helm: The Kubernetes Package Manager We Needed
Managing Kubernetes YAML files was getting painful. Helm solved that problem.
## The Problem
We had 20+ microservices, each with:
- Deployment
- Service
- Ingress
- ConfigMap
- Secret
That’s 100+ YAML files. Copy-paste everywhere. Nightmare to maintain.
## What is Helm?
Helm is a package manager for Kubernetes. Think apt/yum for Kubernetes.
A Helm “chart” is a package containing:
- Templates for Kubernetes resources
- Default values
- Documentation
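The chart metadata lives in `Chart.yaml`. A minimal Helm 3 example (the name and version numbers here are illustrative) looks like this:

```yaml
# Chart.yaml
apiVersion: v2          # v2 = Helm 3 chart format
name: mychart           # chart name (illustrative)
description: A sample service chart
version: 0.1.0          # chart version (SemVer)
appVersion: "1.0"       # version of the application being deployed
```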
### Basic Structure
```
mychart/
├── Chart.yaml       # Chart metadata
├── values.yaml      # Default values
├── templates/
│   ├── deployment.yaml
│   ├── service.yaml
│   └── ingress.yaml
└── README.md
```
## Templating
Instead of hardcoded YAML:
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:1.0
```
Use templates:
```yaml
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      containers:
        - name: {{ .Values.name }}
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
```
With values:
```yaml
# values.yaml
name: myapp
replicaCount: 3
image:
  repository: myapp
  tag: "1.0"
```
## Installing a Chart
```bash
# Install from local chart
helm install myapp ./mychart

# Install with custom values
helm install myapp ./mychart --set replicaCount=5

# Install with values file
helm install myapp ./mychart -f production-values.yaml
```
## Upgrading
```bash
# Upgrade to new version
helm upgrade myapp ./mychart --set image.tag=2.0

# Rollback if something breaks
helm rollback myapp 1
```
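Before rolling back, it helps to see which revision you want. `helm history` lists a release's revisions (using the `myapp` release from above):

```bash
# List the revisions of a release, with status and chart version
helm history myapp

# With no revision argument, rollback targets the previous revision
helm rollback myapp
```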
## Our Chart Structure
We created a base chart for all services:
```
base-service/
├── Chart.yaml
├── values.yaml
└── templates/
    ├── deployment.yaml
    ├── service.yaml
    ├── ingress.yaml
    ├── hpa.yaml              # Horizontal Pod Autoscaler
    └── servicemonitor.yaml   # Prometheus monitoring
```
Each service has its own values:
```yaml
# api-service/values.yaml
name: api
replicaCount: 3
image:
  repository: registry.example.com/api
  tag: "1.5.0"
resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"
autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```
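For illustration, here is one way the base chart's `hpa.yaml` template might consume the `autoscaling` block above — a sketch, not our exact template (the HPA API version you need depends on your cluster version):

```yaml
# templates/hpa.yaml (sketch)
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Values.name }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Values.name }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
```

The `if .Values.autoscaling.enabled` guard means services that don't want autoscaling simply omit the block, and no HPA is rendered.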
## Environment-Specific Values
We have different values for each environment:
```bash
# Development
helm install api ./base-service -f api/values.yaml -f api/dev-values.yaml

# Staging
helm install api ./base-service -f api/values.yaml -f api/staging-values.yaml

# Production
helm install api ./base-service -f api/values.yaml -f api/prod-values.yaml
```
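Because later `-f` files override earlier ones, the environment files only need to contain the differences; everything else falls through to the shared values. A hypothetical `prod-values.yaml` can be as small as:

```yaml
# api/prod-values.yaml (hypothetical overrides)
replicaCount: 5
autoscaling:
  maxReplicas: 20
```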
## Helm Hooks
Run jobs before/after deployment:
```yaml
# templates/db-migration.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Values.name }}-migration
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "0"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      containers:
        - name: migration
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          command: ["./migrate"]
      restartPolicy: Never
```
This runs database migrations before upgrading the app. Note that `pre-upgrade` hooks fire only on `helm upgrade`; use `"helm.sh/hook": pre-install,pre-upgrade` if migrations should also run on the first install.
## Dependencies
Charts can depend on other charts:
```yaml
# Chart.yaml
dependencies:
  - name: postgresql
    version: 8.6.4
    repository: https://charts.bitnami.com/bitnami
  - name: redis
    version: 10.5.7
    repository: https://charts.bitnami.com/bitnami
```
```bash
# Fetch the declared dependencies into charts/
helm dependency update

# Install the chart together with its dependencies
helm install myapp ./mychart
```
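Dependencies can also be made optional with the `condition` field, which is handy when, say, production uses an external managed database instead of the bundled one:

```yaml
# Chart.yaml
dependencies:
  - name: postgresql
    version: 8.6.4
    repository: https://charts.bitnami.com/bitnami
    condition: postgresql.enabled  # installed only when values set postgresql.enabled=true
```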
## Testing Charts
```bash
# Dry run - see what would be deployed
helm install myapp ./mychart --dry-run --debug

# Template - render templates locally
helm template myapp ./mychart

# Lint - check for errors
helm lint ./mychart
```
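`helm template` also combines well with server-side validation when you have cluster access — the API server checks the rendered manifests against its schemas without creating anything:

```bash
# Requires kubectl pointed at a cluster
helm template myapp ./mychart | kubectl apply --dry-run=server -f -
```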
## Helm Repositories
We run a private Helm repository:
```bash
# Add repository
helm repo add mycompany https://charts.mycompany.com

# Search charts
helm search repo mycompany

# Install from repository
helm install api mycompany/base-service
```
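Publishing to the repository depends on the backend. For a ChartMuseum-style repository (assumed here; the filename matches whatever chart version you packaged):

```bash
# Produces base-service-<version>.tgz in the current directory
helm package ./base-service

# Upload to a ChartMuseum-backed repo (hypothetical version number)
curl --data-binary "@base-service-1.0.0.tgz" https://charts.mycompany.com/api/charts
```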
## CI/CD Integration
Our Jenkins pipeline:
```groovy
stage('Deploy to Staging') {
    steps {
        sh """
            helm upgrade --install api ./charts/base-service \
                -f api/values.yaml \
                -f api/staging-values.yaml \
                --set image.tag=${BUILD_NUMBER} \
                --namespace staging \
                --wait
        """
    }
}
```
## Lessons Learned
### 1. Use Semantic Versioning
Version your charts properly:
- Major: Breaking changes
- Minor: New features
- Patch: Bug fixes
### 2. Document Values
Add comments to values.yaml:
```yaml
# Number of replicas
replicaCount: 3

# Docker image configuration
image:
  # Image repository
  repository: myapp
  # Image tag
  tag: "1.0"
```
### 3. Use Helpers
Create reusable templates:
```yaml
# templates/_helpers.tpl
{{- define "mychart.labels" -}}
app: {{ .Values.name }}
version: {{ .Values.image.tag }}
{{- end -}}
```

```yaml
# templates/deployment.yaml
metadata:
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
```
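Helpers also work well for naming. The pattern that `helm create` scaffolds (adapted here, names illustrative) truncates to 63 characters to satisfy Kubernetes name-length limits:

```yaml
# templates/_helpers.tpl
{{- define "mychart.fullname" -}}
{{- printf "%s-%s" .Release.Name .Values.name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
```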
### 4. Validate Values
Use JSON Schema to validate values:
```json
{
  "type": "object",
  "required": ["name", "image"],
  "properties": {
    "name": {
      "type": "string"
    },
    "replicaCount": {
      "type": "integer",
      "minimum": 1
    }
  }
}
```
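With a `values.schema.json` in the chart, Helm validates values automatically on `install`, `upgrade`, `lint`, and `template`, so bad values fail fast:

```bash
# Rejected before anything reaches the cluster:
# replicaCount is below the schema's minimum of 1
helm lint ./mychart --set replicaCount=0
```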
## The Results
Before Helm:
- 100+ YAML files
- Copy-paste errors
- Hard to maintain
- Deployments took 30 minutes
After Helm:
- 1 base chart
- 20 values files
- Easy to maintain
- Deployments take 5 minutes
## Would We Use Helm Again?
Absolutely. Helm has made managing Kubernetes deployments much easier.
## Alternatives
- Kustomize: Built into kubectl, simpler but less powerful
- Jsonnet: More flexible but steeper learning curve
We chose Helm because it’s the most popular and has the best ecosystem.
Questions? Ask away!