Kubernetes Deployment Guide

Complete guide to deploying and managing applications on Kubernetes clusters

Updated Jan 20, 2024
15 min read
advanced
devops

This guide covers deploying and managing applications on Kubernetes clusters, from basic concepts through advanced deployment strategies, scaling, monitoring, and troubleshooting.

Table of Contents

  • Prerequisites
  • Cluster Setup
  • Application Deployment
  • Service Management
  • Scaling and Updates
  • Monitoring and Logging
  • Troubleshooting
  • Performance Optimization
  • Best Practices
  • Advanced Topics
  • Conclusion

Prerequisites

Before diving into Kubernetes deployment, ensure you have:

  • Basic understanding of containerization (Docker)
  • Familiarity with YAML configuration files
  • Access to a Kubernetes cluster (local or cloud)
  • kubectl command-line tool installed

Installing kubectl

# macOS
brew install kubectl

# Linux
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

# Windows
choco install kubernetes-cli
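
Verify the installation:

# Check the installed client version
kubectl version --client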

Cluster Setup

Local Development with Minikube

# Install Minikube
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# Start Minikube
minikube start

# Verify cluster status
kubectl cluster-info

Cloud Provider Setup

Amazon EKS

# Create EKS cluster
eksctl create cluster --name my-cluster --region us-west-2 --nodegroup-name workers --node-type t3.medium --nodes 3

# Update kubeconfig
aws eks update-kubeconfig --region us-west-2 --name my-cluster

Google GKE

# Create GKE cluster
gcloud container clusters create my-cluster --zone us-central1-a --num-nodes 3

# Get credentials
gcloud container clusters get-credentials my-cluster --zone us-central1-a

Application Deployment

Basic Pod Deployment

Create a simple Pod manifest:

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.21
    ports:
    - containerPort: 80

Deploy the pod:

kubectl apply -f pod.yaml
kubectl get pods

Deployment with ReplicaSet

For production applications, use a Deployment, which manages ReplicaSets for you and supports rolling updates and rollbacks:

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
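
Apply the manifest and confirm that all three replicas become available:

kubectl apply -f deployment.yaml
kubectl get deployments
kubectl get pods -l app=nginx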

Rolling Updates

Update your application with zero downtime:

# Update image
kubectl set image deployment/nginx-deployment nginx=nginx:1.22

# Check rollout status
kubectl rollout status deployment/nginx-deployment

# Rollback if needed
kubectl rollout undo deployment/nginx-deployment
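
To see which revisions you can roll back to, inspect the rollout history:

# List recorded revisions
kubectl rollout history deployment/nginx-deployment

# Roll back to a specific revision (for example, revision 2)
kubectl rollout undo deployment/nginx-deployment --to-revision=2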

Service Management

ClusterIP Service

# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
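
A ClusterIP service is only reachable from inside the cluster. Apply it and test it locally with a port-forward:

kubectl apply -f service.yaml
kubectl get service nginx-service

# Forward a local port for a quick check, then browse http://localhost:8080
kubectl port-forward service/nginx-service 8080:80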

LoadBalancer Service

# loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
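
On cloud providers the external IP is provisioned by the provider's load balancer; on local clusters it may stay in Pending (on Minikube, run minikube tunnel in a separate terminal). Watch for the address with:

kubectl get service nginx-loadbalancer --watch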

Ingress Controller

# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
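
An Ingress resource only takes effect if an ingress controller is running in the cluster. Assuming you use the NGINX ingress controller (matching the annotation above), Minikube ships it as an addon:

# Enable the NGINX ingress controller addon (Minikube)
minikube addons enable ingress

# Confirm the controller pods are running (namespace may vary by version)
kubectl get pods -n ingress-nginx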

Scaling and Updates

Horizontal Pod Autoscaler

# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
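
The HPA relies on the metrics-server (or another metrics source) to read CPU utilization. One way to install it, assuming the upstream release manifest suits your cluster:

# Install metrics-server from the upstream release manifest
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Apply the HPA and watch current vs. target utilization
kubectl apply -f hpa.yaml
kubectl get hpa nginx-hpa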

Vertical Pod Autoscaler

# vpa.yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: nginx-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  updatePolicy:
    updateMode: "Auto"
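
Unlike the HPA, the Vertical Pod Autoscaler is not part of core Kubernetes: its CRDs and controllers must be installed separately (for example from the kubernetes/autoscaler project) before this manifest will apply. Once it is running, inspect its recommendations with:

kubectl describe verticalpodautoscaler nginx-vpa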

Monitoring and Logging

Prometheus Monitoring

# prometheus-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
    scrape_configs:
    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
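
A minimal sketch of running Prometheus with this configuration, assuming the public prom/prometheus image and mounting the ConfigMap at the default config path (in a real cluster the Prometheus service account also needs RBAC permissions for pod discovery):

# prometheus-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - name: prometheus
        image: prom/prometheus:latest
        args:
        - --config.file=/etc/prometheus/prometheus.yml
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: config
          mountPath: /etc/prometheus
      volumes:
      - name: config
        configMap:
          name: prometheus-config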

Grafana Dashboard

# grafana-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:latest
        ports:
        - containerPort: 3000
        volumeMounts:
        - name: grafana-storage
          mountPath: /var/lib/grafana
      volumes:
      - name: grafana-storage
        emptyDir: {}
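
To reach the Grafana UI, expose the Deployment with a Service; a minimal sketch (the service name and type are assumptions):

# grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  selector:
    app: grafana
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
  type: ClusterIP

With the service in place, kubectl port-forward service/grafana 3000:3000 makes the UI available at http://localhost:3000.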

Troubleshooting

Common Commands

# Get pod logs
kubectl logs <pod-name>

# Describe pod for events
kubectl describe pod <pod-name>

# Get pod status
kubectl get pods -o wide

# Check node resources
kubectl top nodes

# Check pod resources
kubectl top pods

# Debug pod
kubectl exec -it <pod-name> -- /bin/bash
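
# If the image does not include bash, fall back to sh
kubectl exec -it <pod-name> -- /bin/sh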

Debugging Network Issues

# Check service endpoints
kubectl get endpoints

# Test service connectivity
kubectl run debug --image=busybox -it --rm -- nslookup nginx-service

# Check ingress status
kubectl describe ingress nginx-ingress

Performance Optimization

  1. Resource Requests and Limits: Always set appropriate resource requests and limits
  2. Node Affinity: Use node affinity to place pods on specific nodes
  3. Pod Disruption Budgets: Protect critical workloads during maintenance (see the sketch after this list)
  4. Network Policies: Implement network policies for security
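
A minimal PodDisruptionBudget sketch for the nginx Deployment above; the minAvailable value is an assumption to tune for your workload:

# pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: nginx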

Best Practices

Security

  • Use RBAC for access control
  • Enable Pod Security Standards
  • Scan container images for vulnerabilities
  • Use secrets for sensitive data (see the sketch after this list)
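
A minimal Secret sketch; the name, key, and value are placeholders:

# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials
type: Opaque
stringData:
  DB_PASSWORD: change-me

In a container spec, reference it with env[].valueFrom.secretKeyRef, using the secret name app-credentials and key DB_PASSWORD, rather than hard-coding the value in the manifest.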

Resource Management

  • Set resource requests and limits
  • Use Quality of Service classes
  • Implement proper monitoring
  • Plan for capacity management

Deployment Strategies

  • Use rolling updates for zero downtime (see the strategy sketch after this list)
  • Implement canary deployments for testing
  • Use blue-green deployments for critical updates
  • Plan rollback strategies
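
Rolling update behaviour is tuned on the Deployment itself; a sketch of the relevant fields for the nginx-deployment above (the surge and unavailable values are assumptions):

# Add under spec in deployment.yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod during the update
      maxUnavailable: 0    # never drop below the desired replica count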

Advanced Topics

Custom Resource Definitions (CRDs)

# crd.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: myresources.example.com
spec:
  group: example.com
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                minimum: 1
                maximum: 10
  scope: Namespaced
  names:
    plural: myresources
    singular: myresource
    kind: MyResource
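
Once the CRD is registered, instances of the new kind are created like any other resource; a minimal example object for the definition above:

# myresource.yaml
apiVersion: example.com/v1
kind: MyResource
metadata:
  name: example-resource
spec:
  replicas: 3

Apply it with kubectl apply -f myresource.yaml and list instances with kubectl get myresources.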

Operators

# operator-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-operator
  template:
    metadata:
      labels:
        app: my-operator
    spec:
      containers:
      - name: operator
        image: my-operator:latest
        env:
        - name: WATCH_NAMESPACE
          value: ""
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
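
With WATCH_NAMESPACE set to an empty string the operator watches all namespaces, so it needs cluster-wide RBAC for whatever it manages. A minimal sketch, assuming it reconciles the MyResource CRD from the previous section and runs under a service account named my-operator in the default namespace:

# operator-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-operator
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: my-operator
rules:
- apiGroups: ["example.com"]
  resources: ["myresources"]
  verbs: ["get", "list", "watch", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-operator
subjects:
- kind: ServiceAccount
  name: my-operator
  namespace: default
roleRef:
  kind: ClusterRole
  name: my-operator
  apiGroup: rbac.authorization.k8s.io

Set serviceAccountName: my-operator in the operator Deployment's pod spec so the binding takes effect.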

Conclusion

Kubernetes provides a powerful platform for deploying and managing containerized applications at scale. By following the practices outlined in this guide, you can build robust, scalable, and maintainable applications on Kubernetes.

For more advanced topics and real-world examples, check out our additional resources and community contributions.
