EngineeringNotes

Docker Fundamentals

The core concepts, history, architecture, and basic commands.

What are Containers?

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.

Virtual Machines (VMs)

  • Heavyweight
  • Includes full OS (Guest OS)
  • Slow boot time (minutes)
  • High resource usage

Docker Containers

  • Lightweight
  • Shares Host OS Kernel
  • Fast boot time (milliseconds)
  • Low resource usage
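The boot-time difference is easy to see for yourself (assuming Docker is installed; the alpine tag below is just a small example image):

```shell
# A full Alpine Linux userland starts, runs a command, and exits
# in well under a second -- compare that to booting a VM.
time docker run --rm alpine echo "hello from a container"
```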

History of Docker

Docker started as an internal project at dotCloud, a PaaS company, by Solomon Hykes and his team. It launched as an open-source project in 2013.

  • YC Backed: dotCloud, the company behind Docker, was a Y Combinator backed startup.
  • The Vision: They envisioned a world where containers would become mainstream and the standard unit of deployment.
  • Today: That vision is reality. Countless projects on GitHub ship with a Dockerfile.
"Build once, Run anywhere" became the mantra that changed DevOps forever.

Why Docker?

1. Isolation

Run multiple apps on the same machine without them fighting over dependencies (e.g., Python 3.8 vs 3.10).

2. Local Setup

Start databases (Mongo, Postgres, Redis) in seconds without installing them on your actual OS.
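As a sketch, spinning up a disposable Postgres might look like this (the container name and password are placeholders):

```shell
# Start a throwaway Postgres in seconds -- nothing installed on your OS.
# POSTGRES_PASSWORD is required by the official postgres image.
docker run -d --rm --name dev-postgres \
  -e POSTGRES_PASSWORD=devsecret \
  -p 5432:5432 \
  postgres

# Done experimenting? --rm deletes the container when it stops.
docker stop dev-postgres
```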

3. K8s Ready

Mastering Docker is the first step to mastering Kubernetes and modern Container Orchestration.

Installation Guide

Windows

  • Download Docker Desktop for Windows.
  • Install and ensure WSL 2 (Windows Subsystem for Linux) is selected.
  • Restart your machine.

Mac

  • Choose Apple Silicon (M1/M2/M3) or Intel chip.
  • Drag Docker icon to Applications folder.
  • Open Docker Desktop to start the engine.

Ubuntu / Linux

# 1. Update
sudo apt update
# 2. Install
sudo apt install docker.io
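After installing, a quick sanity check (hello-world is Docker's official test image):

```shell
# Verify the installation works end to end
sudo docker run hello-world

# Optional: run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER
```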

Inside Docker: The Architecture

[Diagram: the Docker CLI ("The Voice") sends a request, e.g. docker run ..., over the REST API / socket to the Docker Engine ("The Brain") on the host; the Engine pulls images from a remote Registry (Docker Hub).]
1. Docker CLI

The command line tool (docker) you use. It sends requests to the Docker Engine.

2. Docker Engine

The background process (Daemon) that does the actual work. It builds, runs, and manages your containers. It creates the isolated environment.

3. Docker Registry

Essentially, it works like GitHub, but for Images instead of source code.

  • Docker Hub: The default public registry.
  • ECR/GCR: Private cloud registries.

🤔 How is it different from GitHub?

GitHub (Source Control)

  • Stores Code (Text).
  • "The Recipe" 📜
  • git push

Registry (Artifact Storage)

  • Stores Images (Binaries).
  • "The Meal" 🍱
  • docker push

docker run mongo: What actually happens?

When you run this command, the Docker Engine first checks if you have the mongo image locally.

  • If Missing: The Engine automatically goes to the Registry (Docker Hub), downloads (pulls) the image, and then runs it.
  • If Found: It skips the download and starts the container immediately.
  • Alternative: You can separate these steps by running docker pull mongo first to just download it, and then docker run mongo later.

💡 Did you know?

The CLI talks to the Engine via a REST API. You can actually manage Docker using curl or Postman!
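For example, on Linux or Mac the Engine listens on a Unix socket (the API version in the path below is illustrative and depends on your Docker version):

```shell
# Same information as `docker ps`, straight from the Engine's REST API
curl --unix-socket /var/run/docker.sock \
  http://localhost/v1.41/containers/json
```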

Images vs Containers

Docker Image

A Docker image is a lightweight, standalone, executable package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and config files.

💡 Mental Model

"Your codebase on GitHub"

Docker Container

A container is a running instance of an image. It encapsulates the application or service and its dependencies, running in an isolated environment.

💡 Mental Model

"When you run node index.js on your machine"
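To see the distinction in practice: one image can back many containers (the names web1/web2 are placeholders):

```shell
# Two independent containers, both created from the same image
docker run -d --name web1 nginx
docker run -d --name web2 nginx

docker ps        # lists two running containers...
docker images    # ...backed by a single nginx image
```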

The Setup Nightmare: Manual vs Docker

❌ The Manual Way

Imagine joining a new Open Source project or team. You open the README.md and see:

  • "Install Node.js v14" (But you have v18 installed...)
  • "Install MongoDB & PostgreSQL" (Conflicts with your local versions)
  • "Configure Environment Variables" (Hours of debugging)
  • "Run migration scripts manually"
Result: 4 hours later, you still haven't run the app.

✅ The Docker Way

With Docker, the entire environment (Node, Mongo, Postgres, Configs) is described in code.

You clone the repo and run one single command:

docker-compose up
Result: App runs in 3 minutes. Coffee time. ☕
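A minimal sketch of what such a docker-compose.yml might describe (service names, ports, and image tags are illustrative):

```yaml
# docker-compose.yml -- the whole environment as code
services:
  app:
    build: .            # built from the repo's Dockerfile
    ports:
      - "3000:3000"
    depends_on:
      - mongo
  mongo:
    image: mongo:7
    ports:
      - "27017:27017"
```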

The "It Works on My Machine" Incident

The Incident: A developer builds a feature. It runs perfectly on their MacBook. They push it to production.

🚨 Production CRASHES immediately.

Developer:
"But it works on my machine!"
Manager:
"We can't ship your machine to the data center."
Docker:
"Actually... yes we can."

The Real Magic: Docker containers are essentially "shipping the machine." The exact OS, libraries, and code that ran on your laptop are packaged up and sent to the server.

Cross-OS Consistency: docker run commands work exactly the same on Windows, Mac, and Linux. No more OS-specific installation guides!

Port Mapping

By default, containers are isolated. If you run docker run mongo, the database starts on port 27017 inside the container, but it is completely inaccessible from your laptop (localhost).

🛑 The "Closed Door" Problem: The container listens on port 27017, but the "door" to your host machine is closed. You cannot connect to it.

Visualizing Port Mapping

[Diagram: Container A runs Mongo on internal port 27017, mapped to localhost:27017 on the host; Container B also runs Mongo on internal port 27017, but it is mapped to localhost:27018.]

The Solution: -p Flag

To access the container, you must map a port on your machine (Host) to a port inside the container.

# Syntax: docker run -p host_port:container_port image
docker run -p 27017:27017 mongo
  • Host Port (Left): The port on YOUR laptop. You choose this.
  • Container Port (Right): The port the app listens on (fixed by the image).
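Applying this to the two-container scenario above (container names are placeholders):

```shell
# Both containers listen on 27017 internally -- no conflict,
# because each is mapped to a different HOST port.
docker run -d --name mongo-a -p 27017:27017 mongo
docker run -d --name mongo-b -p 27018:27017 mongo

# From your laptop:
#   mongodb://localhost:27017  -> mongo-a
#   mongodb://localhost:27018  -> mongo-b
```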

💡 Why is this powerful?

You can run multiple versions of the same app simultaneously without port conflicts!

Notice in the diagram above: Both containers are running standard Mongo on port 27017 internally. But to your machine, one is on 27017 and the other is on 27018. No conflict!

Basic Docker Commands

Here are the most common commands you'll use every day:
  • docker run hello-world: Downloads a test image and runs it in a container.
  • docker ps: Lists all currently running containers.
  • docker ps -a: Lists ALL containers (including stopped ones).
  • docker logs <container_id>: View the logs (stdout) of a container. Vital for debugging!
  • docker exec -it <container_id> sh: Go INSIDE a running container (open a shell).
  • docker images: Lists all images downloaded on your machine.
  • docker build -t <name> .: Builds a new image from the Dockerfile in the current directory.
  • docker start <container_id>: Starts a stopped container. Differs from 'run', which creates a NEW container.
  • docker stop <container_id>: Stops a running container gracefully (SIGTERM).
  • docker kill <container_id>: Kills a container immediately (SIGKILL). Use only if it's stuck.
  • docker rmi <image_id>: Removes (deletes) an image from your disk to save space.
  • docker rm <container_id>: Removes (deletes) a stopped container.
  • docker system prune: Nuclear option: cleans up all stopped containers, unused networks, and dangling images.
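A typical session stringing these commands together might look like this (the name, image, and ports are just examples):

```shell
docker run -d --name web -p 8080:80 nginx   # create + start a container
docker ps                                   # confirm it's running
docker logs web                             # inspect its stdout
docker stop web                             # graceful shutdown (SIGTERM)
docker start web                            # restart the SAME container
docker stop web && docker rm web            # stop, then delete it
```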

🕵️‍♂️ Deep Dive: The Magic of `exec`

Often, running a container isn't enough. You need to see what's happening inside. Maybe you need to verify if a file was created or check database records directly.

The Command

docker exec -it <container_id> sh

-i (Interactive): Keep STDIN open even if not attached.

-t (tty): Allocate a pseudo-TTY (makes it feel like a real terminal).

Use Case: Debugging Postgres
~ docker run -d --name my-postgres -e POSTGRES_PASSWORD=secret postgres
# Container started... now let's go inside!
~ docker exec -it my-postgres bash
root@my-postgres:/# # We are now INSIDE the container!
root@my-postgres:/# psql -U postgres
postgres=# \dt
Did not find any relations.

🚨 Common Mistakes & Pitfalls

1. Port Conflicts

Error: Bind for 0.0.0.0:3000 failed: port is already allocated

Why: You likely have another service (like a local Node app) running on port 3000.

Fix: Change the HOST port: -p 3001:3000

2. Name Conflicts

Conflict. The container name "/mongo" is already in use

Why: You stopped a container but didn't remove it. It still exists in docker ps -a.

Fix: docker rm mongo or use --rm flag when running.

3. Stop vs Kill

Do not use docker kill unless absolutely necessary.

  • docker stop: Sends SIGTERM. App shuts down, saves data, closes connections. (Good)
  • docker kill: Sends SIGKILL. App dies instantly. Data corruption possible. (Bad)
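A graceful stop only works if your app actually handles SIGTERM; docker stop waits 10 seconds by default before escalating to SIGKILL. A minimal sketch of a shell entrypoint that traps the signal (the filename and messages are illustrative):

```shell
#!/bin/sh
# entrypoint.sh -- exit cleanly when `docker stop` sends SIGTERM

cleanup() {
  echo "SIGTERM received, shutting down cleanly..."
  # flush buffers, close connections, etc. would go here
  exit 0
}
trap cleanup TERM INT

echo "App running (PID $$)"
while true; do
  sleep 1
done
```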

4. Dangling Images

If you build images often, you will see many images tagged <none> (dangling images). These take up space!

Clean them up:

docker image prune

(or docker system prune for a broader cleanup)