Chapter 1
Prerequisites and Target Architecture
Required Tools
Before starting, make sure you have installed and configured the following tools:
kubectl ≥ 1.28
Docker ≥ 24
Helm ≥ 3.12
k9s (optional)
kubeconfig configured
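Before moving on, a quick shell loop can confirm that the required CLIs are on your PATH (a sketch — extend the list or add version checks to suit your environment):

```shell
# Report which of the required CLIs are installed and where they live.
for tool in kubectl docker helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "OK:      $tool -> $(command -v "$tool")"
  else
    echo "MISSING: $tool" >&2
  fi
done
```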
Target Architecture
The diagram below represents the architecture we will build:
# Target architecture
Internet
│
▼
[ LoadBalancer / Ingress Controller (Nginx) ]
│
├── /api → Service: app-service → Pod(s): app-deployment
├── / → Service: front-service → Pod(s): front-deployment
│
[ PersistentVolumeClaim ] → PostgreSQL StatefulSet
[ ConfigMap + Secret ] → environment variables
[ HPA ] → automatic pod scaling
[ Prometheus + Grafana ] → monitoring
Namespace Initialization
# Create a dedicated namespace for your application
kubectl create namespace my-app

# Set this namespace as default
kubectl config set-context --current --namespace=my-app
Chapter 2
Containerization with Docker
Optimized Dockerfile (multi-stage)
Use a multi-stage build to reduce the final image size and avoid including build tools in production.
# ── Stage 1: build ───────────────────────────────
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
# Install ALL dependencies: the build step usually needs devDependencies
RUN npm ci
COPY . .
RUN npm run build

# ── Stage 2: production ──────────────────────────
FROM node:20-alpine AS production
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
# Production dependencies only, to keep the final image small
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
EXPOSE 3000
USER node
CMD ["node", "dist/main.js"]
💡 Best practice: Always specify USER node to avoid running the container as root.
Build and Push to Registry
# Build the image
docker build -t registry.example.com/my-app:v1.0.0 .

# Push to your registry (Docker Hub, GHCR, ECR...)
docker push registry.example.com/my-app:v1.0.0
.dockerignore
node_modules
.git
.env
.env.*
*.log
dist
coverage
README.md
Chapter 3
Kubernetes Deployment
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-app
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:v1.0.0
          ports:
            - containerPort: 3000
          envFrom:
            - configMapRef:
                name: my-app-config
            - secretRef:
                name: my-app-secrets
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 15
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
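The USER node instruction from Chapter 2 can also be enforced cluster-side with a securityContext, so the pod is rejected if the image ever regresses to root. A minimal sketch to merge into the pod template above (UID 1000 is the node user in the official Node images; readOnlyRootFilesystem assumes your app does not write to local disk):

```yaml
spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000        # UID of the "node" user in node:20-alpine
      containers:
        - name: my-app
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
```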
Service and Ingress
# Service
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
  labels:
    app: my-app
spec:
  selector:
    app: my-app
  ports:
    - name: http        # named port, referenced by the ServiceMonitor in Chapter 6
      port: 80
      targetPort: 3000
---
# Ingress (Nginx Controller)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts: [app.example.com]
      secretName: my-app-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80
Apply Manifests
# Apply all YAML files
kubectl apply -f k8s/

# Verify deployment
kubectl get pods -n my-app
kubectl get ingress -n my-app
kubectl describe deployment my-app -n my-app
Chapter 4
Secrets and ConfigMaps Management
ConfigMap (non-sensitive variables)
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  APP_ENV: "production"
  APP_PORT: "3000"
  DB_HOST: "postgres-service"
  REDIS_HOST: "redis-service"
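The Deployment in Chapter 3 injects every key via envFrom. When a container only needs a subset of keys, they can be referenced individually instead, which keeps the container's environment explicit. A sketch:

```yaml
containers:
  - name: my-app
    env:
      - name: DB_HOST
        valueFrom:
          configMapKeyRef:
            name: my-app-config
            key: DB_HOST
```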
Secret (sensitive variables)
⚠️ Never commit Secrets in plain text in your Git repository. Use Sealed Secrets or Vault.
# Create a Secret via kubectl (values are base64-encoded, not encrypted)
kubectl create secret generic my-app-secrets \
  --from-literal=DB_PASSWORD='myPassword' \
  --from-literal=APP_KEY='base64:xxx...' \
  --from-literal=JWT_SECRET='myJWTSecret' \
  -n my-app

# Verify
kubectl get secret my-app-secrets -o yaml -n my-app
Sealed Secrets (recommended in production)
# Install Sealed Secrets Controller
helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm install sealed-secrets sealed-secrets/sealed-secrets -n kube-system

# Create a SealedSecret (encrypted, safe to commit)
kubectl create secret generic my-secret --dry-run=client \
  --from-literal=DB_PASSWORD='myPassword' -n my-app -o yaml | \
  kubeseal --format yaml > k8s/sealed-secret.yaml
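The generated k8s/sealed-secret.yaml has roughly this shape (the ciphertext below is a placeholder). Only the controller running in the cluster holds the private key to decrypt it, which is why the file is safe to commit:

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: my-secret
  namespace: my-app
spec:
  encryptedData:
    DB_PASSWORD: AgBx...   # placeholder ciphertext produced by kubeseal
```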
Chapter 5
Scaling and High Availability
Horizontal Pod Autoscaler (HPA)
The HPA automatically adjusts the number of pods based on CPU/memory load.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
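Under the hood, the HPA computes the desired replica count as ceil(currentReplicas * currentUtilization / targetUtilization). A quick sketch of the arithmetic, using the 70 % CPU target above and a hypothetical observed load of 90 %:

```shell
# HPA formula: desired = ceil(currentReplicas * currentUtilization / target)
current_replicas=3
current_cpu=90    # observed average CPU utilization (%), hypothetical
target_cpu=70     # target from the HPA spec

# integer ceiling division
desired=$(( (current_replicas * current_cpu + target_cpu - 1) / target_cpu ))
echo "desired replicas: $desired"    # → desired replicas: 4
```

So a 90 % average load on 3 pods scales the Deployment to 4 pods, bringing the average back under the 70 % target.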
PodDisruptionBudget
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app
💡 The PDB guarantees that at least 2 pods remain available during a cluster update or maintenance.
Chapter 6
Monitoring with Prometheus & Grafana
Installation via Helm (kube-prometheus-stack)
# Add the repo
helm repo add prometheus-community \
  https://prometheus-community.github.io/helm-charts
helm repo update

# Install the complete stack (Prometheus + Grafana + AlertManager)
helm install kube-prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --create-namespace \
  --set grafana.adminPassword='YourPassword' \
  --set prometheus.prometheusSpec.retention=15d
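The --set flags can alternatively live in a values file, which is easier to review and version than long command lines (a sketch covering the same two settings):

```yaml
# values.yaml, passed with: helm install ... -f values.yaml
grafana:
  adminPassword: "YourPassword"
prometheus:
  prometheusSpec:
    retention: 15d
```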
Access Grafana
# Port-forward (development)
kubectl port-forward svc/kube-prometheus-grafana 3000:80 -n monitoring

# Default credentials: admin / YourPassword
# Recommended dashboard: ID 15661 (Kubernetes Cluster)
ServiceMonitor to Scrape Your Application
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app-monitor
  labels:
    release: kube-prometheus
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
    - port: http
      path: /metrics
      interval: 30s
Chapter 7
CI/CD Pipeline — GitHub Actions to Kubernetes
Complete Workflow (.github/workflows/deploy.yml)
name: Build & Deploy to Kubernetes

on:
  push:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ github.sha }}

  deploy:
    needs: build-and-push
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Configure kubectl
        uses: azure/k8s-set-context@v3
        with:
          kubeconfig: ${{ secrets.KUBECONFIG }}

      - name: Update image tag & apply
        run: |
          sed -i "s|IMAGE_TAG|${{ github.sha }}|g" k8s/deployment.yaml
          kubectl apply -f k8s/ -n my-app
          kubectl rollout status deployment/my-app -n my-app
💡 Store your KUBECONFIG as an encrypted GitHub secret. Never commit it.
Chapter 8
Production Readiness Checklist
Before Deploying
- Docker image built with multi-stage, specific tag (not :latest)
- Environment variables in ConfigMap/Secret — never hardcoded in the image
- requests and limits defined on all containers
- livenessProbe and readinessProbe configured
- Secrets managed via Sealed Secrets or Vault
- TLS enabled via cert-manager (Let's Encrypt)
- HPA configured with min ≥ 2 replicas
- PodDisruptionBudget created
- Prometheus monitoring + alerts configured
- Rollback tested: kubectl rollout undo deployment/my-app
Post-Deployment Verification Commands
# Pod status
kubectl get pods -n my-app -w

# Real-time logs
kubectl logs -f deployment/my-app -n my-app

# Cluster events
kubectl get events -n my-app --sort-by='.lastTimestamp'

# Rollback if issue
kubectl rollout undo deployment/my-app -n my-app

# Check HPA
kubectl get hpa -n my-app
