Introduction
Docker has revolutionized how we develop, deploy, and manage applications. Whether you’re a complete beginner or looking to solidify your containerization knowledge, this comprehensive guide will take you from zero to production-ready Docker skills.
By the end of this tutorial, you’ll understand containers, create your own Docker images, orchestrate multi-container applications, and deploy to production environments.
Table of Contents
- What is Docker?
- Installation and Setup
- Docker Fundamentals
- Working with Images
- Container Management
- Docker Compose
- Production Best Practices
- Real-World Examples
What is Docker?
Understanding Containerization
Docker is a containerization platform that packages applications and their dependencies into lightweight, portable containers. Think of containers as shipping containers for software - they ensure your application runs consistently anywhere.
Benefits of Docker
- Consistency: “It works on my machine” becomes “It works everywhere”
- Isolation: Applications run independently without conflicts
- Efficiency: Containers are lighter than virtual machines
- Scalability: Easy horizontal scaling and orchestration
- Development Speed: Faster setup and deployment cycles
Docker vs Virtual Machines
Traditional VMs:
┌─────────────┬─────────────┬─────────────┐
│    App A    │    App B    │    App C    │
├─────────────┼─────────────┼─────────────┤
│  Guest OS   │  Guest OS   │  Guest OS   │
├─────────────┴─────────────┴─────────────┤
│                Hypervisor               │
├─────────────────────────────────────────┤
│                 Host OS                 │
└─────────────────────────────────────────┘
Docker Containers:
┌─────────────┬─────────────┬─────────────┐
│    App A    │    App B    │    App C    │
├─────────────┴─────────────┴─────────────┤
│              Docker Engine              │
├─────────────────────────────────────────┤
│                 Host OS                 │
└─────────────────────────────────────────┘
Installation and Setup
Windows Installation
Download Docker Desktop:
- Visit docker.com
- Download Docker Desktop for Windows
- Requires 64-bit Windows 10/11; Home edition works with the WSL 2 backend, while the Hyper-V backend needs Pro, Enterprise, or Education
Enable WSL 2 (recommended):
# In PowerShell as Administrator
wsl --install
wsl --set-default-version 2
Install Docker Desktop and verify:
# Verify installation
docker --version
docker-compose --version
# Test with hello world
docker run hello-world
macOS Installation
# Using Homebrew
brew install --cask docker
# Or download from docker.com
# Verify installation
docker --version
# Test installation
docker run hello-world
Linux Installation (Ubuntu)
# Update package index
sudo apt-get update
# Install dependencies
sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release
# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# Add repository
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
# Add user to docker group
sudo usermod -aG docker $USER
newgrp docker
# Verify installation
docker --version
docker run hello-world
Docker Fundamentals
Core Concepts
- Images: Read-only templates for creating containers
- Containers: Running instances of images
- Dockerfile: Instructions for building images
- Registry: Storage for Docker images (Docker Hub, ECR, etc.)
Basic Commands
# Check Docker version
docker --version
docker version
# View system information
docker info
# List running containers
docker ps
# List all containers (including stopped)
docker ps -a
# List images
docker images
# Download an image
docker pull nginx
# Run a container
docker run nginx
# Run container in background
docker run -d nginx
# Run with port mapping
docker run -d -p 8080:80 nginx
# Run with name
docker run -d -p 8080:80 --name my-nginx nginx
# Stop a container
docker stop my-nginx
# Start a stopped container
docker start my-nginx
# Remove a container
docker rm my-nginx
# Remove an image
docker rmi nginx
Understanding Docker Architecture
Docker Client (CLI) ←→ Docker Daemon (dockerd)
                              ↓
                       Docker Images  ←─ pull/push ─→  Docker Registry
                              ↓
                      Docker Containers
The client sends commands to the daemon; the daemon pulls images from a registry (or builds them locally) and runs containers from those images.
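Because the client and daemon are separate processes, the same CLI can drive a remote engine. A quick illustration using Docker contexts (the remote-host name and SSH user are placeholders):

# See which daemon the CLI is currently talking to
docker context ls

# Point a new context at a remote engine over SSH (remote-host is a placeholder)
docker context create remote --docker "host=ssh://user@remote-host"

# Run any command against that remote daemon
docker --context remote ps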
Working with Images
Finding Images
# Search for images
docker search nginx
docker search postgres
# Pull specific versions
docker pull node:18
docker pull node:18-alpine
docker pull postgres:14
# List local images
docker images
# Inspect an image
docker inspect nginx
Creating Your First Dockerfile
# Dockerfile for a simple Node.js app
FROM node:18-alpine
# Set working directory
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install dependencies
RUN npm ci --only=production
# Copy application code
COPY . .
# Install curl so the HEALTHCHECK below can run (it is not included in alpine-based images)
RUN apk add --no-cache curl
# Create non-root user
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
# Change ownership
RUN chown -R nextjs:nodejs /app
USER nextjs
# Expose port
EXPOSE 3000
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD curl -f http://localhost:3000/health || exit 1
# Start the application
CMD ["npm", "start"]
Building Images
# Build image from Dockerfile
docker build -t my-app .
# Build with specific tag
docker build -t my-app:v1.0 .
# Build with build arguments
docker build --build-arg NODE_ENV=production -t my-app .
# Build with no cache
docker build --no-cache -t my-app .
Multi-Stage Build Example
# Multi-stage build for React app
# Stage 1: Build
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Production
FROM nginx:alpine
# Copy built assets
COPY --from=builder /app/dist /usr/share/nginx/html
# Copy nginx config
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Container Management
Running Containers
# Basic run
docker run ubuntu echo "Hello World"
# Interactive mode
docker run -it ubuntu bash
# Detached mode with port mapping
docker run -d -p 8080:80 --name web-server nginx
# With environment variables
docker run -d -e NODE_ENV=production -e PORT=3000 my-app
# With volume mounts
docker run -d -v /host/data:/container/data my-app
# With resource limits
docker run -d --memory=512m --cpus=1 my-app
Container Networking
# List networks
docker network ls
# Create custom network
docker network create my-network
# Run container on custom network
docker run -d --network my-network --name app1 my-app
docker run -d --network my-network --name app2 my-app
# Connect running container to network
docker network connect my-network existing-container
# Inspect network
docker network inspect my-network
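On a user-defined network, Docker's embedded DNS resolves container names, so app1 can reach app2 simply as http://app2:<port>. A quick check, assuming the image is alpine-based (so BusyBox wget is available) and the app listens on port 3000 (an example port):

# Call app2 by name from inside app1
docker exec -it app1 wget -qO- http://app2:3000

# See which containers are attached to the network and their internal IPs
docker network inspect my-network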
Volume Management
# List volumes
docker volume ls
# Create volume
docker volume create my-data
# Use named volume
docker run -d -v my-data:/data my-app
# Use bind mount
docker run -d -v /host/path:/container/path my-app
# Read-only mount
docker run -d -v /host/path:/container/path:ro my-app
# Inspect volume
docker volume inspect my-data
# Remove unused volumes
docker volume prune
Container Logs and Debugging
# View logs
docker logs my-container
# Follow logs
docker logs -f my-container
# Last 100 lines
docker logs --tail 100 my-container
# Logs since timestamp
docker logs --since 2025-01-01T10:00:00 my-container
# Execute command in running container
docker exec -it my-container bash
# Copy files to/from container
docker cp file.txt my-container:/path/
docker cp my-container:/path/file.txt ./
Docker Compose
Basic Compose File
# docker-compose.yml
version: '3.8'

services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
    volumes:
      - .:/app
      - /app/node_modules
    depends_on:
      - db
      - redis

  db:
    image: postgres:14
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  redis:
    image: redis:alpine
    ports:
      - "6379:6379"

volumes:
  postgres_data:
Compose Commands
# Start services
docker-compose up
# Start in background
docker-compose up -d
# Build and start
docker-compose up --build
# Stop services
docker-compose down
# Stop and remove volumes
docker-compose down -v
# View logs
docker-compose logs
# Follow logs for specific service
docker-compose logs -f web
# Scale services
docker-compose up --scale web=3
# Execute command in service
docker-compose exec web bash
Advanced Compose Example
# Production-ready compose file
version: '3.8'

services:
  # Load balancer
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/nginx/ssl
    depends_on:
      - web
    restart: unless-stopped

  # Web application
  web:
    build:
      context: .
      dockerfile: Dockerfile.prod
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://user:password@db:5432/myapp
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis
    restart: unless-stopped
    deploy:
      replicas: 3

  # Database
  db:
    image: postgres:14
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    secrets:
      - db_password
    restart: unless-stopped

  # Cache
  redis:
    image: redis:alpine
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data
    restart: unless-stopped

  # Monitoring
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus

volumes:
  postgres_data:
  redis_data:
  prometheus_data:

secrets:
  db_password:
    file: ./secrets/db_password.txt
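The db_password secret is read from a local file, so that file must exist before the stack starts. One way to create it (the openssl command is just one way of generating a random password):

# Create the secrets file referenced above
mkdir -p secrets
openssl rand -base64 24 > secrets/db_password.txt
chmod 600 secrets/db_password.txt

# Postgres reads the password from /run/secrets/db_password inside the container
docker-compose up -d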
Production Best Practices
Security Best Practices
# Use specific, minimal base images
FROM node:18-alpine
# Create non-root user
RUN addgroup -g 1001 -S appgroup && \
adduser -S appuser -u 1001 -G appgroup
# Set working directory
WORKDIR /app
# Copy and install dependencies first (better caching)
COPY package*.json ./
RUN npm ci --only=production && npm cache clean --force
# Copy application code
COPY --chown=appuser:appgroup . .
# Switch to non-root user
USER appuser
# Use COPY instead of ADD
# Scan for vulnerabilities
# docker scout cves my-image
# Use .dockerignore
.dockerignore Example
# .dockerignore
node_modules
npm-debug.log
.git
.gitignore
README.md
.env
.nyc_output
coverage
.DS_Store
.vscode
Dockerfile
docker-compose.yml
Multi-Environment Setup
# docker-compose.override.yml (development)
version: '3.8'

services:
  web:
    build:
      target: development
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      - NODE_ENV=development
    command: npm run dev
# docker-compose.prod.yml (production)
version: '3.8'

services:
  web:
    build:
      target: production
    restart: unless-stopped
    environment:
      - NODE_ENV=production
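Compose merges docker-compose.yml with docker-compose.override.yml automatically, while the production file has to be passed explicitly:

# Development: base file + docker-compose.override.yml are merged automatically
docker-compose up -d

# Production: combine the base file with the production overrides explicitly
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d --build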
Health Checks and Monitoring
# Add a health check to the Dockerfile
# (curl must exist in the image; on alpine-based images install it with: RUN apk add --no-cache curl)
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
  CMD curl -f http://localhost:3000/health || exit 1
# Health check in compose
services:
  web:
    image: my-app
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
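Once a health check is defined, the container reports a state of starting, healthy, or unhealthy, which you can query directly:

# The STATUS column includes the health state, e.g. "Up 2 minutes (healthy)"
docker ps

# Query just the health status of one container
docker inspect --format '{{.State.Health.Status}}' my-container

# Dump the results of the most recent probes
docker inspect --format '{{json .State.Health}}' my-container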
Resource Management
services:
  web:
    image: my-app
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
Real-World Examples
Full-Stack MERN Application
# docker-compose.yml for MERN stack
version: '3.8'

services:
  # MongoDB
  mongo:
    image: mongo:6
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: password
    volumes:
      - mongo_data:/data/db
    ports:
      - "27017:27017"

  # Express.js Backend
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    environment:
      - NODE_ENV=development
      - MONGODB_URI=mongodb://admin:password@mongo:27017/myapp?authSource=admin
      - JWT_SECRET=your-secret-key
    ports:
      - "5000:5000"
    depends_on:
      - mongo
    volumes:
      - ./backend:/app
      - /app/node_modules

  # React Frontend
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    environment:
      - REACT_APP_API_URL=http://localhost:5000
    ports:
      - "3000:3000"
    depends_on:
      - backend
    volumes:
      - ./frontend:/app
      - /app/node_modules

  # Nginx Reverse Proxy
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - frontend
      - backend

volumes:
  mongo_data:
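With this file in the project root (and backend/ and frontend/ each containing the Dockerfiles shown below), the whole stack comes up with one command:

# Build the images and start MongoDB, the API, the React dev server, and nginx
docker-compose up --build -d

# Follow the API logs while developing
docker-compose logs -f backend

# Tear everything down (add -v to also remove the mongo_data volume)
docker-compose down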
Backend Dockerfile
# backend/Dockerfile
FROM node:18-alpine
WORKDIR /app
# Install dependencies
COPY package*.json ./
RUN npm ci
# Copy source code
COPY . .
# Create non-root user
RUN addgroup -g 1001 -S nodejs
RUN adduser -S backend -u 1001
USER backend
EXPOSE 5000
CMD ["npm", "run", "dev"]
Frontend Dockerfile
# frontend/Dockerfile
FROM node:18-alpine
WORKDIR /app
# Install dependencies
COPY package*.json ./
RUN npm ci
# Copy source code
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Nginx Configuration
# nginx.conf
events {
  worker_connections 1024;
}

http {
  upstream backend {
    server backend:5000;
  }

  upstream frontend {
    server frontend:3000;
  }

  server {
    listen 80;

    # Frontend routes
    location / {
      proxy_pass http://frontend;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
    }

    # API routes
    location /api/ {
      proxy_pass http://backend;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
    }

    # WebSocket support
    location /socket.io/ {
      proxy_pass http://backend;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
    }
  }
}
Microservices Example
# Microservices architecture
version: '3.8'

services:
  # API Gateway
  gateway:
    build: ./gateway
    ports:
      - "80:80"
    environment:
      - USER_SERVICE_URL=http://user-service:3001
      - ORDER_SERVICE_URL=http://order-service:3002
      - PRODUCT_SERVICE_URL=http://product-service:3003

  # User Service
  user-service:
    build: ./services/user
    environment:
      - DATABASE_URL=postgresql://user:password@user-db:5432/users
    depends_on:
      - user-db

  user-db:
    image: postgres:14
    environment:
      POSTGRES_DB: users
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - user_data:/var/lib/postgresql/data

  # Order Service
  order-service:
    build: ./services/order
    environment:
      - DATABASE_URL=postgresql://user:password@order-db:5432/orders
      - REDIS_URL=redis://redis:6379
    depends_on:
      - order-db
      - redis

  order-db:
    image: postgres:14
    environment:
      POSTGRES_DB: orders
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - order_data:/var/lib/postgresql/data

  # Product Service
  product-service:
    build: ./services/product
    environment:
      - MONGODB_URI=mongodb://mongo:27017/products
    depends_on:
      - mongo

  mongo:
    image: mongo:6
    volumes:
      - mongo_data:/data/db

  # Shared Redis Cache
  redis:
    image: redis:alpine
    volumes:
      - redis_data:/data

volumes:
  user_data:
  order_data:
  mongo_data:
  redis_data:
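Since only the gateway publishes a host port, the stateless services behind it can be scaled independently (the gateway still needs to balance requests across replicas, for example by resolving the service name on each request):

# Start the whole stack
docker-compose up -d

# Run three replicas of the order service
docker-compose up -d --scale order-service=3

# Check that all replicas are running
docker-compose ps order-service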
CI/CD Pipeline Example
# .github/workflows/docker.yml
name: Docker Build and Deploy

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Login to Container Registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=ref,event=branch
            type=ref,event=pr
            type=sha

      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  deploy:
    needs: build
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - name: Deploy to production
        run: |
          echo "Deploying to production server..."
          # Add your deployment commands here
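The deploy job above is deliberately a stub; what belongs there depends on your infrastructure. One common minimal pattern, sketched here with placeholder host and paths, is to SSH into the server and pull the freshly pushed image:

# Example body for the deploy step (host name and path are placeholders)
ssh deploy@your-server.example.com <<'EOF'
  cd /srv/my-app
  docker compose pull
  docker compose up -d
  docker image prune -f
EOF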
Docker Optimization Techniques
Image Size Optimization
# Before: Large image
FROM node:18
WORKDIR /app
COPY . .
RUN npm install
CMD ["npm", "start"]
# After: Optimized image
FROM node:18-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production && npm cache clean --force
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM node:18-alpine AS runner
WORKDIR /app
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
COPY --from=deps /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
USER nextjs
CMD ["node", "dist/index.js"]
Caching Strategies
# Leverage Docker layer caching
FROM node:18-alpine
WORKDIR /app
# Copy package files first (changes less frequently)
COPY package*.json ./
# Install dependencies (cached if package files unchanged)
RUN npm ci --only=production
# Copy source code (changes more frequently)
COPY . .
# Build application
RUN npm run build
CMD ["npm", "start"]
Troubleshooting Common Issues
Container Won’t Start
# Check container logs
docker logs my-container
# Check container exit code
docker ps -a
# Run container interactively for debugging
docker run -it my-image bash
# Override entrypoint for debugging
docker run -it --entrypoint bash my-image
Port Conflicts
# Check what's using port 8080
sudo lsof -i :8080
# Use different port mapping
docker run -p 8081:80 nginx
# Let Docker assign random port
docker run -P nginx
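When Docker assigns the port for you with -P, use docker port to find out which host port was chosen:

# Find the randomly assigned host port
docker run -d -P --name random-nginx nginx
docker port random-nginx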
Permission Issues
# Create user with specific UID/GID
RUN groupadd -r appgroup -g 1001 && \
useradd -r -g appgroup -u 1001 appuser
# Fix ownership
COPY --chown=appuser:appgroup . /app
USER appuser
Network Connectivity
# Inspect container network
docker exec -it my-container cat /etc/hosts
# Test connectivity between containers
# (ping may not be installed in minimal images; install it first or curl/wget the service port instead)
docker exec -it container1 ping container2
# Check DNS resolution
docker exec -it my-container nslookup google.com
Performance Monitoring
Container Resource Usage
# Monitor container stats
docker stats
# Get specific container stats
docker stats my-container
# Export stats to file
docker stats --no-stream > container-stats.log
Health Monitoring Setup
# docker-compose.yml with monitoring
services:
  app:
    image: my-app
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  # Prometheus monitoring
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml

  # Grafana dashboards
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
Production Deployment Strategies
Blue-Green Deployment
# Current production (blue)
docker-compose -f docker-compose.blue.yml up -d
# Deploy new version (green)
docker-compose -f docker-compose.green.yml up -d
# Switch traffic to green
# Update load balancer configuration
# Stop blue environment
docker-compose -f docker-compose.blue.yml down
Rolling Updates
# Update a Swarm service with zero downtime (requires Docker Swarm mode)
docker service update --image my-app:v2 my-service
# Monitor rollout
docker service ps my-service
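If the new version misbehaves during the rollout, Swarm can revert the service to its previous specification:

# Roll the service back to the previously deployed version
docker service rollback my-service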
Container Orchestration
# Docker Swarm example
version: '3.8'

services:
  web:
    image: my-app:latest
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: rollback
      restart_policy:
        condition: on-failure
        max_attempts: 3
    ports:
      - "80:3000"
Conclusion
Docker has become an essential tool for modern software development and deployment. This comprehensive guide covered:
- Fundamentals: Understanding containers and their benefits
- Practical Skills: Building images, managing containers, and orchestration
- Production Best Practices: Security, monitoring, and deployment strategies
- Real-World Examples: Complete application setups and CI/CD integration
Key Takeaways
- Start Simple: Begin with basic containers and gradually add complexity
- Security First: Always use non-root users and minimal base images
- Optimize Images: Use multi-stage builds and layer caching
- Monitor Everything: Implement health checks and resource monitoring
- Automate Deployment: Use CI/CD pipelines for consistent deployments
Next Steps
- Practice with the examples in this guide
- Explore Kubernetes for large-scale orchestration
- Learn about container security scanning
- Implement monitoring and logging solutions
- Experiment with advanced networking and storage options
Start containerizing your applications today and join the millions of developers who rely on Docker for modern software delivery!
Additional Resources
- Docker Official Documentation
- Docker Hub Registry
- Docker Best Practices
- Kubernetes Documentation
- Docker Security Guidelines