
Docker Interview Questions

April 7, 2026 · 11 min read · DB

🟢 Basic

Docker is an open-source platform that enables developers to build, package, ship, and run applications inside containers. Containers are lightweight, portable, and isolated environments that include everything needed to run an application: code, runtime, libraries, and dependencies.

Why it's used:

  • Eliminates “works on my machine” issues
  • Faster deployments and consistent environments
  • Efficient resource usage compared to VMs
  • Easy scaling and orchestration

An image is a read-only template used to create containers; a container is a running instance of an image.

|  | Image | Container |
| --- | --- | --- |
| Definition | Read-only blueprint/template | Running instance of an image |
| State | Static (immutable) | Dynamic (can be started/stopped/deleted) |
| Storage | Stored on disk | Lives in memory + writable layer |

# Pull an image
docker pull nginx

# Run a container from the image
docker run -d --name my-nginx nginx

# List running containers
docker ps

# List images
docker images

A Dockerfile is a plain-text script containing instructions that Docker uses to build an image automatically.

Common instructions:

FROM node:18-alpine          # Base image
WORKDIR /app                 # Set working directory
COPY package*.json ./        # Copy files into image
RUN npm install              # Execute command during build
COPY . .                     # Copy remaining source code
EXPOSE 3000                  # Document which port the app uses
ENV NODE_ENV=production      # Set environment variable
CMD ["node", "server.js"]    # Default command to run

Build an image from a Dockerfile, then run a container from it:

# Build image (tag it with -t)
docker build -t my-app:1.0 .

# Run a container
docker run -d \
  --name my-app-container \
  -p 8080:3000 \
  my-app:1.0

# -d    = detached (background)
# -p    = host_port:container_port
# --name = assign a name

Docker Hub is the default public registry for Docker images. It hosts official images (like nginx, postgres, node) and allows users to publish their own images.

# Login to Docker Hub
docker login

# Tag your image for Docker Hub
docker tag my-app:1.0 yourusername/my-app:1.0

# Push to Docker Hub
docker push yourusername/my-app:1.0

# Pull someone else's image
docker pull yourusername/my-app:1.0
# --- Containers ---
docker ps                     # List running containers
docker ps -a                  # List all containers (including stopped)
docker stop <name|id>         # Gracefully stop a container
docker kill <name|id>         # Force stop a container
docker rm <name|id>           # Remove a stopped container
docker rm -f <name|id>        # Force remove (even if running)

# --- Images ---
docker images                 # List all images
docker rmi <image_id>         # Remove an image
docker image prune            # Remove dangling images (add -a for all unused)

# --- Cleanup everything ---
docker system prune -a        # Remove stopped containers, unused images/networks, build cache

CMD and ENTRYPOINT both define the command that runs when a container starts, but they behave differently:

|  | CMD | ENTRYPOINT |
| --- | --- | --- |
| Purpose | Default arguments (can be overridden) | Fixed executable (cannot be overridden easily) |
| Override | `docker run image <new-cmd>` replaces it | `docker run image arg` appends to it |

# CMD - easily overridden
CMD ["node", "server.js"]

# ENTRYPOINT - fixed executable
ENTRYPOINT ["node"]
CMD ["server.js"]   # default arg, can be overridden

# Run: docker run my-app debug.js
# → executes: node debug.js

Best practice: Use ENTRYPOINT for the main executable and CMD for default arguments.
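
A common extension of this pattern is a small entrypoint script that prepares the environment (waiting for a database, running migrations) before handing off to the CMD. A minimal sketch, assuming a hypothetical `docker-entrypoint.sh` in the build context:

```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]   # always runs first
CMD ["node", "server.js"]             # passed to the script as "$@"
```

Inside the script, finish with `exec "$@"` so the CMD replaces the shell as PID 1 and receives stop signals correctly.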


🟡 Intermediate

Docker Compose is a tool for defining and running multi-container applications using a single YAML file (docker-compose.yml). It's ideal for local development and testing.

version: "3.9"   # optional: Compose v2 ignores the top-level version key

services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/mydb
    depends_on:
      - db

  db:
    image: postgres:15
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: mydb
    volumes:
      - pg_data:/var/lib/postgresql/data

volumes:
  pg_data:
docker compose up -d       # Start all services
docker compose down        # Stop and remove containers
docker compose logs -f     # Follow logs
docker compose ps          # Status of services
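
One caveat with the file above: `depends_on` only controls start order; it does not wait for Postgres to actually accept connections. A sketch of the healthcheck-based variant (Compose v2 long syntax):

```yaml
services:
  app:
    build: .
    depends_on:
      db:
        condition: service_healthy   # wait for the healthcheck, not just the start

  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d mydb"]
      interval: 5s
      timeout: 3s
      retries: 5
```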

Volumes and bind mounts are both mechanisms for persisting data outside a container's lifecycle, but they differ in where data is stored and who manages it.

|  | Volume | Bind Mount |
| --- | --- | --- |
| Managed by | Docker | Host OS |
| Path | `/var/lib/docker/volumes/` | Any path on host |
| Portability | High | Low |
| Use case | Production, databases | Local dev, live reload |

# --- Volumes ---
docker volume create my_data
docker run -v my_data:/app/data my-app

# --- Bind Mounts ---
docker run -v $(pwd)/src:/app/src my-app
# or with --mount flag (explicit)
docker run --mount type=bind,source=$(pwd)/src,target=/app/src my-app

Tip: Prefer volumes in production; use bind mounts in development for live code reload.
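
Both forms can be combined in a single Compose service; a sketch (names and paths are illustrative):

```yaml
services:
  app:
    image: my-app
    volumes:
      - app_data:/app/data    # named volume, Docker-managed
      - ./src:/app/src        # bind mount for live code reload

volumes:
  app_data:
```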

Docker provides several network drivers:

| Driver | Description | Use Case |
| --- | --- | --- |
| bridge | Default isolated network on a single host | Most containers |
| host | Container shares host's network stack | Performance-critical apps |
| none | No networking | Fully isolated tasks |
| overlay | Multi-host networking | Docker Swarm / Kubernetes |

# Create a custom bridge network
docker network create my-network

# Connect containers to the same network
docker run -d --network my-network --name api my-api
docker run -d --network my-network --name db postgres

# Containers on the same custom network can resolve each other by name
# Inside 'api' container: ping db  → works!

# List networks
docker network ls

# Inspect a network
docker network inspect my-network

Multi-stage builds use multiple FROM statements in a single Dockerfile, allowing you to copy only the artifacts you need from earlier stages. This drastically reduces final image size.

# --- Stage 1: Build ---
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build          # Produces /app/dist

# --- Stage 2: Production ---
FROM node:18-alpine AS production
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev        # production deps only (replaces deprecated --only=production)
COPY --from=builder /app/dist ./dist   # Only copy build output
EXPOSE 3000
CMD ["node", "dist/server.js"]

Result: The final image contains no dev dependencies, source code, or build tools; only what's needed to run.

docker build --target production -t my-app:prod .

There are multiple ways to inject environment variables:

# 1. Inline with -e
docker run -e NODE_ENV=production -e PORT=3000 my-app

# 2. From a .env file
docker run --env-file .env my-app

# 3. In docker-compose.yml
# docker-compose.yml
services:
  app:
    image: my-app
    environment:
      NODE_ENV: production
      PORT: 3000
    env_file:
      - .env.production
# 4. Baked into the image (not recommended for secrets)
ENV NODE_ENV=production

⚠️ Never bake secrets (passwords, API keys) into images. Use .env files, Docker secrets, or a secrets manager like Vault.

COPY and ADD both copy files into the image, but ADD has extra functionality:

| Feature | COPY | ADD |
| --- | --- | --- |
| Copy local files | ✅ | ✅ |
| Auto-extract .tar archives | ❌ | ✅ |
| Fetch remote URLs | ❌ | ✅ |
| Recommended for | General use | Archive extraction only |

# Preferred - explicit and predictable
COPY ./src /app/src
COPY package.json /app/

# Use ADD only when you need tar extraction
ADD app.tar.gz /app/

# Avoid ADD for URLs - use curl/wget in RUN instead
RUN curl -o /tmp/file.zip https://example.com/file.zip

Best practice: Always prefer COPY unless you specifically need ADD's extra features.

# Get a shell inside a running container
docker exec -it <container> /bin/sh     # Alpine/minimal images
docker exec -it <container> /bin/bash   # Debian/Ubuntu images

# View real-time logs
docker logs -f <container>
docker logs --tail 100 <container>

# Inspect container config, network, mounts
docker inspect <container>

# Resource usage (CPU, memory, I/O)
docker stats <container>

# Copy files out of a container
docker cp <container>:/app/logs/error.log ./error.log

# View processes inside container
docker top <container>

# View filesystem changes since container started
docker diff <container>

🔴 Advanced

Docker builds images in layers: each instruction in a Dockerfile creates one layer. Layers are cached and reused if the instruction and its context haven't changed.

Cache invalidation rule: Once a layer changes, all subsequent layers are rebuilt.

# ❌ BAD - COPY . . early invalidates cache on every code change
FROM node:18-alpine
WORKDIR /app
COPY . .                    # Invalidated on ANY file change
RUN npm install             # Reinstalls everything each time!

# ✅ GOOD - Copy dependency files first, leverage caching
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./       # Only changes when deps change
RUN npm install             # Cached until package.json changes
COPY . .                    # App code changes don't bust npm cache
RUN npm run build

Other optimization tips:

  • Combine RUN commands with && to reduce layers
  • Use .dockerignore to exclude node_modules, .git, build artifacts
  • Order instructions from least-changing to most-changing
  • Use --mount=type=cache (BuildKit) for package manager caches
# BuildKit cache mount for faster builds
RUN --mount=type=cache,target=/root/.npm \
    npm ci
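
A `.dockerignore` keeps large or irrelevant paths out of the build context, which both speeds up builds and protects the layer cache. A typical starting point for a Node project (entries are illustrative; tailor to your repo):

```
node_modules
.git
dist
*.log
.env
Dockerfile
.dockerignore
```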

Docker Swarm and Kubernetes are both container orchestration platforms, but they differ in complexity and capability:

| Feature | Docker Swarm | Kubernetes |
| --- | --- | --- |
| Setup complexity | Simple | Complex |
| Learning curve | Low | High |
| Scalability | Moderate | Enterprise-grade |
| Auto-healing | Basic | Advanced |
| Ecosystem | Docker-native | Massive (CNCF) |
| Use case | Small/medium teams | Large-scale production |

# Initialize a Swarm cluster
docker swarm init --advertise-addr <manager-ip>

# Deploy a stack (like docker-compose for Swarm)
docker stack deploy -c docker-compose.yml myapp

# Scale a service
docker service scale myapp_web=5

# List services
docker service ls

# Rolling update
docker service update --image my-app:2.0 myapp_web

When to choose what:

  • Swarm → simpler apps, small teams, already using Docker Compose
  • Kubernetes → complex microservices, multi-cloud, large engineering teams

Never store secrets as environment variables in images or plain docker-compose.yml. Use proper secrets management:

Option 1: Docker Secrets (Swarm mode)

# Create a secret
echo "super_secret_password" | docker secret create db_password -

# Use in a service
docker service create \
  --name myapp \
  --secret db_password \
  my-app:latest

Inside the container, secrets are available at /run/secrets/db_password.

Option 2: Docker Compose secrets (for dev)

services:
  app:
    image: my-app
    secrets:
      - db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt  # Never commit this file!

Option 3: External secrets managers (production)

# HashiCorp Vault, AWS Secrets Manager, Azure Key Vault
# Inject at runtime via init containers or sidecar patterns

Best practices:

  • Add secret files to .gitignore
  • Rotate secrets regularly
  • Use least-privilege access
  • Audit secret access in production
# 1. Use minimal base images
FROM alpine:3.19                     # Small attack surface
FROM gcr.io/distroless/nodejs18-debian12   # Distroless (no shell!)

# 2. Run as non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

# 3. Drop Linux capabilities
# In docker-compose.yml:
# cap_drop: [ALL]
# cap_add: [NET_BIND_SERVICE]

# 4. Read-only filesystem
# docker run --read-only --tmpfs /tmp my-app

# 5. Avoid storing secrets in ENV or layers
# Use Docker secrets or external vaults

# 6. Pin base image versions
FROM node:18.19.0-alpine3.19         # Pinned, not :latest
# Scan image for vulnerabilities
docker scout cves my-app:latest
# or
trivy image my-app:latest

# Run with security options
docker run \
  --read-only \
  --security-opt no-new-privileges \
  --cap-drop ALL \
  my-app:latest
# Docker applies its default seccomp profile automatically;
# use --security-opt seccomp=/path/to/profile.json to customize it

Docker is not a VM; it uses Linux kernel features to create isolated processes:

Namespaces provide isolation of system resources:

| Namespace | Isolates |
| --- | --- |
| pid | Process IDs (container can't see host processes) |
| net | Network interfaces, IP tables, ports |
| mnt | Filesystem mount points |
| uts | Hostname and domain name |
| ipc | Inter-process communication |
| user | User and group IDs |

Control Groups (cgroups) limit and account for resource usage:

# Limit container to 512MB RAM and 1 CPU
docker run \
  --memory="512m" \
  --memory-swap="512m" \
  --cpus="1.0" \
  my-app

# View cgroup info for a container (cgroup v1 path shown;
# on cgroup v2 hosts, look for memory.max under /sys/fs/cgroup)
cat /sys/fs/cgroup/memory/docker/<container-id>/memory.limit_in_bytes
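
The same limits can be expressed in Compose; a sketch using `deploy.resources`, which Compose v2 also honors outside Swarm:

```yaml
services:
  app:
    image: my-app
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
```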

Union Filesystem (OverlayFS) layers images efficiently:

  • Each Dockerfile instruction creates a new read-only layer
  • Running container gets a thin writable layer on top
  • Layers are shared between containers using the same image, which saves disk space
# See the layers of an image
docker history my-app:latest

# Inspect overlay filesystem
docker inspect my-app | grep -A5 GraphDriver

Zero-downtime deployments require a strategy to transition traffic from old containers to new ones without service interruption.

Strategy 1: Docker Swarm rolling update

docker service update \
  --image my-app:2.0 \
  --update-parallelism 1 \
  --update-delay 10s \
  --update-failure-action rollback \
  myapp_web

Strategy 2: Blue-Green with Nginx + Compose

# docker-compose.blue-green.yml
services:
  app-blue:
    image: my-app:1.0
    networks: [proxy]

  app-green:
    image: my-app:2.0
    networks: [proxy]

  nginx:
    image: nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80"
    networks: [proxy]
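
A matching `nginx.conf` might pin the upstream to one color; the cut-over in step 3 below is editing this file and reloading (a sketch: service names follow the Compose file above, and the app is assumed to listen on port 3000):

```nginx
events {}

http {
  upstream app {
    server app-blue:3000;   # switch to app-green:3000, then reload nginx
  }

  server {
    listen 80;
    location / {
      proxy_pass http://app;
    }
  }
}
```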
# 1. Start green alongside blue
docker compose up -d app-green

# 2. Health check green
curl http://localhost/health

# 3. Switch Nginx upstream to green (update config + reload)
docker exec nginx nginx -s reload

# 4. Remove blue after confirming green is stable
docker compose stop app-blue
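
The manual health check in step 2 can also be baked into the image with a `HEALTHCHECK` instruction, letting Docker itself report container health. A sketch, assuming the app exposes a `/health` endpoint on port 3000:

```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY . .
# Mark the container unhealthy if /health stops answering
HEALTHCHECK --interval=10s --timeout=3s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1
CMD ["node", "server.js"]
```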

Strategy 3: Kubernetes (Deployment rollout)

kubectl set image deployment/my-app my-app=my-app:2.0
kubectl rollout status deployment/my-app
kubectl rollout undo deployment/my-app   # Rollback if needed
