Deploying to Kubernetes from Jenkins Pipeline
We recently set up automated deployments to Kubernetes from Jenkins. Here’s how we did it.
The Goal
Push code → Jenkins builds → Docker image → Deploy to Kubernetes → Profit
Prerequisites
- Jenkins with the Pipeline plugin
- A Kubernetes cluster
- A Docker registry the cluster can pull from
- kubectl configured on the Jenkins agent (a quick sanity check follows below)
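Before writing the pipeline, it's worth confirming the Jenkins agent can actually reach each of these pieces. A minimal sanity check run from the agent's shell (the registry host matches the example below; adjust to yours):

```bash
# Run on the Jenkins agent that will execute the pipeline.
kubectl version                                        # can we reach the cluster API?
kubectl auth can-i update deployments -n production    # does our identity allow deploys?
docker version                                         # is the Docker daemon available?
docker login registry.example.com                      # do the registry credentials work?
helm version                                           # only needed for the Helm stage later
```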
The Jenkinsfile
```groovy
pipeline {
    agent any

    environment {
        DOCKER_REGISTRY = 'registry.example.com'
        IMAGE_NAME      = 'myapp'
        K8S_NAMESPACE   = 'production'
    }

    stages {
        stage('Build') {
            steps {
                sh 'mvn clean package'
            }
        }

        stage('Docker Build') {
            steps {
                script {
                    def imageTag = "${env.BUILD_NUMBER}"
                    sh """
                        docker build -t ${DOCKER_REGISTRY}/${IMAGE_NAME}:${imageTag} .
                        docker push ${DOCKER_REGISTRY}/${IMAGE_NAME}:${imageTag}
                    """
                }
            }
        }

        stage('Deploy to Kubernetes') {
            steps {
                script {
                    def imageTag = "${env.BUILD_NUMBER}"
                    sh """
                        kubectl set image deployment/myapp \
                            myapp=${DOCKER_REGISTRY}/${IMAGE_NAME}:${imageTag} \
                            -n ${K8S_NAMESPACE}
                        kubectl rollout status deployment/myapp -n ${K8S_NAMESPACE}
                    """
                }
            }
        }
    }

    post {
        success {
            slackSend color: 'good', message: "Deployed ${env.BUILD_NUMBER} to production"
        }
        failure {
            slackSend color: 'danger', message: "Deployment ${env.BUILD_NUMBER} failed"
        }
    }
}
```
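One thing the Jenkinsfile glosses over is registry authentication: `docker push` only succeeds if the agent is already logged in to registry.example.com. A hedged sketch of handling that inside the stage, assuming a hypothetical Jenkins username/password credential with the ID `docker-registry-creds`:

```groovy
stage('Docker Build') {
    steps {
        script {
            def imageTag = "${env.BUILD_NUMBER}"
            // 'docker-registry-creds' is a hypothetical Username/Password credential
            // stored in Jenkins; adjust the ID to whatever you actually created.
            withCredentials([usernamePassword(credentialsId: 'docker-registry-creds',
                                              usernameVariable: 'REG_USER',
                                              passwordVariable: 'REG_PASS')]) {
                sh """
                    echo "\$REG_PASS" | docker login ${DOCKER_REGISTRY} -u "\$REG_USER" --password-stdin
                    docker build -t ${DOCKER_REGISTRY}/${IMAGE_NAME}:${imageTag} .
                    docker push ${DOCKER_REGISTRY}/${IMAGE_NAME}:${imageTag}
                """
            }
        }
    }
}
```

Referencing the credentials as shell variables rather than Groovy-interpolated strings keeps the secret out of the rendered command and the build log.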
Setting Up kubectl in Jenkins
Create a Kubernetes service account for Jenkins:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: default
```
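Binding `cluster-admin` gets things working quickly, but it gives the pipeline far more power than it needs. If you want to tighten it later, a namespace-scoped Role covering Deployments is enough for the `kubectl set image` / `rollout` commands used here (the Helm and blue-green stages would need extra rules for Services and `kubectl apply`). A sketch:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins-deployer
  namespace: production
rules:
# Enough for 'kubectl set image', 'rollout status' and 'rollout undo'.
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "update", "patch"]
# 'rollout status' and 'rollout undo' also read the ReplicaSet history.
- apiGroups: ["apps"]
  resources: ["replicasets"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-deployer
  namespace: production
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins-deployer
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: default
```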
Get the token:
```bash
kubectl get secret $(kubectl get sa jenkins -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 --decode
```
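Note: on Kubernetes 1.24 and later, service accounts no longer get a token Secret created automatically, so the command above comes back empty. On those versions you can mint a token directly instead:

```bash
# Short-lived by default; --duration requests a longer expiry.
kubectl create token jenkins --namespace default --duration=24h
```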
Configure kubectl in Jenkins (you will also want to pass the cluster's CA certificate via `--certificate-authority`, or the connection will fail TLS verification):
```bash
kubectl config set-cluster k8s --server=https://k8s-api:6443
kubectl config set-credentials jenkins --token=<token>
kubectl config set-context jenkins --cluster=k8s --user=jenkins
kubectl config use-context jenkins
```
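Rather than leaving this kubeconfig sitting on the agent, it can live in Jenkins as a Secret file credential and be bound only where it's needed. A sketch, assuming a hypothetical file credential with the ID `k8s-kubeconfig`:

```groovy
stage('Deploy to Kubernetes') {
    steps {
        // 'k8s-kubeconfig' is a hypothetical Secret file credential containing the
        // kubeconfig built above; kubectl picks it up via the KUBECONFIG variable.
        withCredentials([file(credentialsId: 'k8s-kubeconfig', variable: 'KUBECONFIG')]) {
            sh "kubectl get deployments -n ${K8S_NAMESPACE}"
        }
    }
}
```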
Better: Using Helm
For more complex deployments, use Helm:
```groovy
stage('Deploy with Helm') {
    steps {
        script {
            def imageTag = "${env.BUILD_NUMBER}"
            sh """
                helm upgrade --install myapp ./helm/myapp \
                    --set image.tag=${imageTag} \
                    --namespace ${K8S_NAMESPACE} \
                    --wait
            """
        }
    }
}
```
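This assumes the chart in `./helm/myapp` actually wires `image.tag` into the Deployment, roughly like the hypothetical layout below. The `--wait` flag makes Helm block until the release's resources are ready, so a separate `kubectl rollout status` isn't needed here.

```yaml
# helm/myapp/values.yaml (assumed layout)
image:
  repository: registry.example.com/myapp
  tag: latest        # overridden per build with --set image.tag=<BUILD_NUMBER>

# ...and helm/myapp/templates/deployment.yaml would reference it as:
#   image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```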
Rollback on Failure
Add automatic rollback:
```groovy
stage('Deploy') {
    steps {
        script {
            try {
                sh "kubectl set image deployment/myapp ..."
                sh "kubectl rollout status deployment/myapp"
            } catch (Exception e) {
                sh "kubectl rollout undo deployment/myapp"
                throw e
            }
        }
    }
}
```
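`kubectl rollout undo` reverts the Deployment to its previous ReplicaSet revision. When a deploy fails it's also useful to see which revisions exist and, if needed, jump back to a specific one:

```bash
# List the recorded revisions for the deployment
kubectl rollout history deployment/myapp -n production

# Roll back to a specific revision (the number here is just an example)
kubectl rollout undo deployment/myapp -n production --to-revision=3
```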
Blue-Green Deployment
For zero-downtime deployments:
```groovy
stage('Blue-Green Deploy') {
    steps {
        script {
            def imageTag = "${env.BUILD_NUMBER}"
            // Deploy to green
            sh """
                kubectl apply -f k8s/deployment-green.yaml
                kubectl set image deployment/myapp-green \
                    myapp=${DOCKER_REGISTRY}/${IMAGE_NAME}:${imageTag}
                kubectl rollout status deployment/myapp-green
            """
            // Switch traffic
            sh "kubectl patch service myapp -p '{\"spec\":{\"selector\":{\"version\":\"green\"}}}'"
            // Delete blue
            sh "kubectl delete deployment myapp-blue"
        }
    }
}
```
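The `kubectl patch` step only does what we want if the Service selects pods by a `version` label and the green Deployment's pod template carries it. A sketch of the assumed Service and labels:

```yaml
# Assumed Service: traffic follows whichever color the selector points at.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    version: blue      # the pipeline patches this to "green"
  ports:
  - port: 80
    targetPort: 8080   # example container port

# deployment-green.yaml would label its pod template to match:
#   labels:
#     app: myapp
#     version: green
```

Deleting blue immediately after the switch removes the quick rollback path; keeping it scaled down for a short window is a common variation.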
Lessons Learned
- Always wait for the rollout: use `kubectl rollout status`
- Tag images with the build number: never deploy `latest` (see the sketch below)
- Implement rollback: things will fail
- Use namespaces: separate dev/staging/prod
- Monitor deployments: set up alerts
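For the tags themselves, the build number alone is enough for uniqueness, but pairing it with the commit hash makes an image traceable back to source. A small variation, assuming the Git plugin exposes `GIT_COMMIT` in the build environment (it does for `checkout scm` / multibranch jobs):

```groovy
script {
    // BUILD_NUMBER guarantees uniqueness; the short commit hash adds traceability.
    def shortCommit = env.GIT_COMMIT ? env.GIT_COMMIT.take(7) : 'unknown'
    def imageTag = "${env.BUILD_NUMBER}-${shortCommit}"
    sh "docker build -t ${DOCKER_REGISTRY}/${IMAGE_NAME}:${imageTag} ."
}
```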
What’s Next
We’re exploring:
- GitOps with Flux
- Canary deployments with Istio
- Automated testing in staging before prod
This setup has been solid for us. Deployments are fast, reliable, and fully automated.
Questions? Let me know!