Docker Containerization: From Basics to Production-Ready Images

Complete guide to Docker containerization with best practices for building optimized, secure images and running containers in production.

Hari Prasad
October 06, 2024
5 min read

Docker has revolutionized application deployment by standardizing how we package, distribute, and run applications. This comprehensive guide covers everything from Docker basics to advanced production patterns used by leading tech companies.

Why Docker?

Docker solves critical deployment challenges:

  • Consistency: “Works on my machine” becomes “works everywhere”
  • Isolation: Applications run in isolated environments
  • Portability: Run anywhere—laptop, data center, cloud
  • Efficiency: Lightweight compared to virtual machines
  • Scalability: Easy horizontal scaling
  • Developer Productivity: Fast local development environments

Installation

# Install Docker on Ubuntu
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER  # log out and back in for the group change to take effect

# Install Docker on macOS
brew install --cask docker

# Verify installation
docker version
docker run hello-world

Docker Basics

Your First Container

# Run a container
docker run nginx

# Run in detached mode
docker run -d --name my-nginx nginx

# Run with port mapping
docker run -d -p 8080:80 --name web nginx

# Run with environment variables
docker run -d -e MYSQL_ROOT_PASSWORD=secret mysql:8.0

# Run with volume mount (mysql still needs a root password to start)
docker run -d -e MYSQL_ROOT_PASSWORD=secret -v $(pwd)/data:/var/lib/mysql mysql:8.0

# Run with interactive terminal
docker run -it ubuntu:22.04 /bin/bash

# Execute command in running container
docker exec -it my-nginx bash

# View container logs
docker logs my-nginx
docker logs -f my-nginx  # Follow logs

# Stop and remove container
docker stop my-nginx
docker rm my-nginx

# List containers
docker ps          # Running only
docker ps -a       # All containers

Building Docker Images

Simple Dockerfile

# Dockerfile
FROM ubuntu:22.04

# Set metadata
LABEL maintainer="devops@example.com"
LABEL version="1.0"
LABEL description="My first Docker image"

# Install packages
RUN apt-get update && \
    apt-get install -y nginx && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Copy files
COPY index.html /var/www/html/

# Expose port
EXPOSE 80

# Start nginx
CMD ["nginx", "-g", "daemon off;"]

# Build image
docker build -t my-nginx:1.0 .

# Build with build arguments
docker build --build-arg VERSION=1.0 -t my-nginx:1.0 .

# List images
docker images

# Remove image
docker rmi my-nginx:1.0

# Tag image
docker tag my-nginx:1.0 myregistry.com/my-nginx:1.0

# Push to registry
docker push myregistry.com/my-nginx:1.0
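
The `COPY index.html` step in the Dockerfile above assumes a page sitting next to it; any minimal file will do — a placeholder sketch:

```html
<!DOCTYPE html>
<html>
  <head><title>Hello from Docker</title></head>
  <body><h1>Hello from Docker</h1></body>
</html>
```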

Multi-Stage Builds

Optimize image size with multi-stage builds:

# Dockerfile for Node.js application
# Stage 1: Build
FROM node:18-alpine AS builder

WORKDIR /app

# Copy package files
COPY package*.json ./

# Install all dependencies (the build step below needs devDependencies)
RUN npm ci

# Copy source code
COPY . .

# Build, then strip devDependencies before the production stage copies node_modules
RUN npm run build && npm prune --omit=dev

# Stage 2: Production
FROM node:18-alpine AS production

WORKDIR /app

# Copy only necessary files from builder
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package.json ./

# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001

USER nodejs

EXPOSE 3000

CMD ["node", "dist/index.js"]

Go Application Example

# Dockerfile for Go application
# Stage 1: Build
FROM golang:1.21-alpine AS builder

WORKDIR /app

# Copy go mod files
COPY go.mod go.sum ./
RUN go mod download

# Copy source code
COPY . .

# Build binary
RUN CGO_ENABLED=0 GOOS=linux go build -o main .

# Stage 2: Production
FROM scratch

# Copy CA certificates for HTTPS
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/

# Copy binary
COPY --from=builder /app/main /main

EXPOSE 8080

ENTRYPOINT ["/main"]

Python Application Example

# Dockerfile for Python application
FROM python:3.11-slim AS base

# Set environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=1 \
    PIP_DISABLE_PIP_VERSION_CHECK=1

WORKDIR /app

# Stage 1: Dependencies
FROM base AS dependencies

# Install system dependencies
RUN apt-get update && \
    apt-get install -y --no-install-recommends gcc && \
    rm -rf /var/lib/apt/lists/*

# Copy requirements
COPY requirements.txt .

# Install Python dependencies
RUN pip install --user -r requirements.txt

# Stage 2: Production
FROM base AS production

# Create the non-root user first so its home can hold the packages
# (/root would be unreadable once we drop privileges)
RUN useradd -m -u 1000 appuser

# Copy Python dependencies from the dependencies stage
COPY --from=dependencies --chown=appuser:appuser /root/.local /home/appuser/.local

# Update PATH
ENV PATH=/home/appuser/.local/bin:$PATH

# Copy application code
COPY --chown=appuser:appuser . .

USER appuser

EXPOSE 8000

CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
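
The gunicorn CMD above targets an `app:app` callable. As a hedged illustration (not the article's actual application), the smallest thing that satisfies that target is a plain WSGI function:

```python
# app.py — a minimal WSGI callable satisfying the gunicorn target "app:app".
# Purely illustrative; a real service would typically use Flask or FastAPI here.

def app(environ, start_response):
    """Answer 200 OK to every request, including the /health probe."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok\n"]
```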

Docker Compose

Manage multi-container applications:

# docker-compose.yml
version: '3.8'

services:
  web:
    build:
      context: ./web
      dockerfile: Dockerfile
    image: myapp-web:latest
    container_name: myapp-web
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DB_HOST=db
      - DB_PORT=5432
      - DB_NAME=myapp
      - DB_USER=postgres
      - DB_PASSWORD_FILE=/run/secrets/db_password
      - REDIS_URL=redis://redis:6379
    volumes:
      - ./web/uploads:/app/uploads
      - web-logs:/app/logs
    secrets:
      - db_password
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
    restart: unless-stopped
    networks:
      - frontend
      - backend
    healthcheck:
      test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  api:
    build:
      context: ./api
      dockerfile: Dockerfile
    image: myapp-api:latest
    container_name: myapp-api
    ports:
      - "8000:8000"
    environment:
      - DATABASE_URL=postgresql://postgres:${DB_PASSWORD}@db:5432/myapp
      - REDIS_URL=redis://redis:6379
      - JWT_SECRET_FILE=/run/secrets/jwt_secret
    secrets:
      - jwt_secret
      - db_password
    depends_on:
      db:
        condition: service_healthy
    restart: unless-stopped
    networks:
      - backend
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 2G
        reservations:
          cpus: '1.0'
          memory: 1G

  db:
    image: postgres:15-alpine
    container_name: myapp-db
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD_FILE=/run/secrets/db_password
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./db/init:/docker-entrypoint-initdb.d:ro
    secrets:
      - db_password
    restart: unless-stopped
    networks:
      - backend
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    container_name: myapp-redis
    command: redis-server --appendonly yes --requirepass ${REDIS_PASSWORD}
    volumes:
      - redis-data:/data
    restart: unless-stopped
    networks:
      - backend
    healthcheck:
      test: ["CMD-SHELL", "redis-cli --no-auth-warning -a ${REDIS_PASSWORD} ping | grep PONG"]
      interval: 10s
      timeout: 3s
      retries: 5

  nginx:
    image: nginx:alpine
    container_name: myapp-nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
      - web-static:/usr/share/nginx/html:ro
    depends_on:
      - web
      - api
    restart: unless-stopped
    networks:
      - frontend
      - backend

volumes:
  postgres-data:
    driver: local
  redis-data:
    driver: local
  web-logs:
    driver: local
  web-static:
    driver: local

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true

secrets:
  db_password:
    file: ./secrets/db_password.txt
  jwt_secret:
    file: ./secrets/jwt_secret.txt
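
The `${DB_PASSWORD}` and `${REDIS_PASSWORD}` references in the file are interpolated by Compose from the shell environment or an `.env` file beside the compose file — a sketch with placeholder values (never commit real ones):

```shell
# .env — placeholder values only; keep this file out of version control
DB_PASSWORD=changeme
REDIS_PASSWORD=changeme
```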

Docker Compose Commands

# Start services
docker-compose up -d

# View logs
docker-compose logs -f
docker-compose logs -f web

# Scale services (requires removing the fixed container_name from that service)
docker-compose up -d --scale api=3

# Stop services
docker-compose stop

# Stop and remove containers
docker-compose down

# Stop and remove everything including volumes
docker-compose down -v

# Rebuild images
docker-compose build
docker-compose build --no-cache

# Execute command in service
docker-compose exec web bash
docker-compose exec db psql -U postgres

# View running services
docker-compose ps

Best Practices for Production

1. Use Specific Base Image Tags

# ❌ Bad - floating tag, contents change underneath you
FROM node:alpine

# ✅ Good - specific version
FROM node:18.17.1-alpine3.18

2. Minimize Layers

# ❌ Bad - multiple layers
RUN apt-get update
RUN apt-get install -y nginx
RUN apt-get install -y curl
RUN apt-get clean

# ✅ Good - single layer
RUN apt-get update && \
    apt-get install -y \
        nginx \
        curl && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

3. Use .dockerignore

# .dockerignore
node_modules
npm-debug.log
.git
.gitignore
.env
.env.local
*.md
Dockerfile
docker-compose.yml
.dockerignore
coverage
.vscode
.idea
dist
build
*.log

4. Run as Non-Root User

# Create and use non-root user
RUN addgroup -g 1001 -S appgroup && \
    adduser -S appuser -u 1001 -G appgroup

# Set ownership
RUN chown -R appuser:appgroup /app

# Switch to non-root user
USER appuser

5. Use Health Checks

# Requires curl inside the image; swap in wget or a bundled script otherwise
HEALTHCHECK --interval=30s --timeout=3s --start-period=40s --retries=3 \
  CMD curl -f http://localhost:8000/health || exit 1

6. Optimize Image Size

# Use slim or alpine variants
FROM python:3.11-slim

# Use multi-stage builds
FROM node:18-alpine AS builder
# ... build steps
FROM node:18-alpine AS production
COPY --from=builder /app/dist ./dist

# Remove unnecessary files
RUN rm -rf /var/lib/apt/lists/* \
           /tmp/* \
           /var/tmp/*

7. Leverage Build Cache

# Copy dependency files first
COPY package*.json ./
RUN npm ci

# Copy source code last (changes more frequently)
COPY . .

Security Best Practices

Scan Images for Vulnerabilities

# Using Trivy
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
  aquasec/trivy:latest image myapp:latest

# Using Snyk
snyk container test myapp:latest

# Using Docker Scout
docker scout cves myapp:latest

Sign Images

# Enable Docker Content Trust
export DOCKER_CONTENT_TRUST=1

# Sign and push image
docker push myregistry.com/myapp:1.0

# Pull signed image
docker pull myregistry.com/myapp:1.0

Use Secret Management

# ❌ Bad - secrets in ENV
ENV DB_PASSWORD=supersecret

# ✅ Good - use Docker secrets or external secret managers
ENV DB_PASSWORD_FILE=/run/secrets/db_password
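
The application (or an entrypoint wrapper) then reads the secret from that file path. A minimal entrypoint sketch, modeled on the `*_FILE` convention the official postgres and mysql images use — the variable names here are illustrative:

```shell
#!/bin/sh
# entrypoint.sh — if DB_PASSWORD_FILE points at a readable secret file,
# load its contents into DB_PASSWORD before starting the main process.
file_env() {
  var="$1"
  file_var="${var}_FILE"
  eval "file_path=\"\${$file_var:-}\""
  if [ -n "$file_path" ] && [ -r "$file_path" ]; then
    val="$(cat "$file_path")"
    eval "export $var=\"\$val\""
  fi
}

file_env DB_PASSWORD

# Hand control to the container's CMD so it runs as PID 1
exec "$@"
```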

Monitoring and Logging

Container Metrics

# View container stats
docker stats

# View container resource usage
docker stats --no-stream

# Inspect container
docker inspect my-container

# View container processes
docker top my-container

Centralized Logging

# docker-compose.yml with logging
services:
  web:
    image: myapp:latest
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  # Or use syslog
  api:
    image: myapi:latest
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://192.168.0.42:514"

Docker Registry

Private Registry Setup

# Run private registry
docker run -d \
  -p 5000:5000 \
  --name registry \
  -v /mnt/registry:/var/lib/registry \
  registry:2

# Tag image for private registry
docker tag myapp:latest localhost:5000/myapp:latest

# Push to private registry
docker push localhost:5000/myapp:latest

# Pull from private registry
docker pull localhost:5000/myapp:latest

Harbor Registry

# docker-compose.yml for Harbor
version: '3.8'

services:
  registry:
    image: goharbor/harbor-registryctl:v2.9.0
    container_name: registry
    restart: always
    volumes:
      - /data/registry:/storage
    environment:
      - REGISTRY_HTTP_SECRET=secret123

  harbor-core:
    image: goharbor/harbor-core:v2.9.0
    container_name: harbor-core
    depends_on:
      - registry
    restart: always
    volumes:
      - /data/ca_download:/etc/core/ca

Docker Networking

# Create custom network
docker network create my-network

# Create network with specific subnet
docker network create --subnet=172.18.0.0/16 my-network

# Connect container to network
docker network connect my-network my-container

# Disconnect container from network
docker network disconnect my-network my-container

# Inspect network
docker network inspect my-network

# List networks
docker network ls

# Remove network
docker network rm my-network
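
In Compose the same idea is declarative; a minimal sketch with an internal-only network (service and image names are placeholders):

```yaml
services:
  app:
    image: myapp:latest
    networks:
      - internal
  db:
    image: postgres:15-alpine
    networks:
      - internal

networks:
  internal:
    driver: bridge
    internal: true   # containers can reach each other but not the internet
```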

Troubleshooting

# View container logs
docker logs -f --tail 100 my-container

# Execute shell in running container
docker exec -it my-container sh

# Copy files from container
docker cp my-container:/app/logs/app.log ./app.log

# Copy files to container
docker cp ./config.yml my-container:/app/config.yml

# View container filesystem changes
docker diff my-container

# Save container as image
docker commit my-container my-image:debug

# Export container filesystem
docker export my-container > container.tar

# Import container filesystem
docker import container.tar my-image:imported

# Prune unused resources
docker system prune -a
docker volume prune
docker network prune

CI/CD Integration

GitHub Actions

# .github/workflows/docker-build.yml
name: Build and Push Docker Image

on:
  push:
    branches: [main]
    tags: ['v*']

jobs:
  build:
    runs-on: ubuntu-latest
    
    steps:
    - uses: actions/checkout@v3
    
    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v2
    
    - name: Log in to Docker Hub
      uses: docker/login-action@v2
      with:
        username: ${{ secrets.DOCKERHUB_USERNAME }}
        password: ${{ secrets.DOCKERHUB_TOKEN }}
    
    - name: Extract metadata
      id: meta
      uses: docker/metadata-action@v4
      with:
        images: myuser/myapp
        tags: |
          type=ref,event=branch
          type=semver,pattern={{version}}
          type=semver,pattern={{major}}.{{minor}}
    
    - name: Build and push
      uses: docker/build-push-action@v4
      with:
        context: .
        push: true
        tags: ${{ steps.meta.outputs.tags }}
        labels: ${{ steps.meta.outputs.labels }}
        cache-from: type=registry,ref=myuser/myapp:buildcache
        cache-to: type=registry,ref=myuser/myapp:buildcache,mode=max

Performance Optimization

Build Optimization

# Use BuildKit features
# syntax=docker/dockerfile:1.4

FROM node:18-alpine

WORKDIR /app

# Cache mount for npm
RUN --mount=type=cache,target=/root/.npm \
    npm install -g npm@latest

# Bind mount package files
RUN --mount=type=bind,source=package.json,target=package.json \
    --mount=type=bind,source=package-lock.json,target=package-lock.json \
    --mount=type=cache,target=/root/.npm \
    npm ci --omit=dev

COPY . .

CMD ["node", "server.js"]

Runtime Optimization

# Limit container resources
docker run -d \
  --cpus="1.5" \
  --memory="512m" \
  --memory-swap="1g" \
  --name myapp \
  myapp:latest

# Set restart policy
docker run -d \
  --restart=unless-stopped \
  --name myapp \
  myapp:latest

Checklist for Production-Ready Images

✅ Use specific base image versions
✅ Implement multi-stage builds
✅ Run as non-root user
✅ Add health checks
✅ Optimize layer caching
✅ Use .dockerignore
✅ Scan for vulnerabilities
✅ Keep images small (<500MB ideally)
✅ Use appropriate logging drivers
✅ Implement proper signal handling
✅ Document with LABEL instructions
✅ Version images with semantic tags
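
The signal-handling item deserves a note: a shell-form CMD wraps the app in `/bin/sh`, which does not forward SIGTERM, so `docker stop` waits out the grace period and then kills the process. A hedged sketch using exec form plus tini as a minimal init (file names are illustrative):

```dockerfile
FROM node:18-alpine

# tini reaps zombies and forwards signals to its child process
RUN apk add --no-cache tini

ENTRYPOINT ["/sbin/tini", "--"]

# Exec form: node is the direct child of tini and receives SIGTERM
CMD ["node", "server.js"]
```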

Conclusion

Docker containerization is essential for modern application deployment. By following these best practices for building optimized, secure images and properly managing containers, you’ll create robust, production-ready containerized applications that scale efficiently.

Questions about Docker? Share your experiences in the comments!

About the Author

Hari Prasad

Seasoned DevOps Lead with 11+ years of expertise in cloud infrastructure, CI/CD automation, and infrastructure as code. Proven track record in designing scalable, secure systems on AWS using Terraform, Kubernetes, Jenkins, and Ansible. Strong leadership in mentoring teams and implementing cost-effective cloud solutions.
