
Getting Started with Kubernetes

From zero to your first deployed Pod.

1. Creating a k8s Cluster

Before we can run any Pods, we need a place to run them. A cluster is simply a set of machines (nodes) running Kubernetes.

Using kind

🪟 Install on Windows (PowerShell)

1. Download & Move

PowerShell
curl.exe -Lo kind-windows-amd64.exe https://kind.sigs.k8s.io/dl/v0.31.0/kind-windows-amd64
mkdir C:\kind
Move-Item .\kind-windows-amd64.exe C:\kind\kind.exe

2. Add to Path

  • Run sysdm.cpl → Advanced → Environment Variables
  • System variables → Path → Edit → New → C:\kind

3. Verify

Terminal
bash
kind --version

Single node setup

  • Create a 1-node cluster

    kind create cluster --name local
  • Check the Docker containers you have running

    docker ps
  • You will notice a single container running (the control-plane)

  • Delete the cluster

    kind delete cluster -n local

Multi node setup

  • Create a clusters.yml file

    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
    - role: control-plane
    - role: worker
    - role: worker
  • Create the node setup

    kind create cluster --config clusters.yml --name local
  • Check the Docker containers

    docker ps
    CONTAINER ID   IMAGE             COMMAND                    CREATED     STATUS     PORTS            NAMES
    cd04...        kindest/node...   "/usr/local/bin/entr..."   2 min ago   Up 2 min   127.0.0.1:5433   local-control-plane
    43f7...        kindest/node...   "/usr/local/bin/entr..."   2 min ago   Up 2 min                    local-worker2
    ee0b...        kindest/node...   "/usr/local/bin/entr..."   2 min ago   Up 2 min                    local-worker

    local-control-plane is your control-plane (master) node. Note the published port on it (127.0.0.1:5433) — this is the network endpoint through which you communicate with the cluster.

Now you have a multi-node cluster running locally

💡

A single-node setup works but is not ideal: you don't want your control plane to also act as a worker and run application containers.

2. Kubernetes API

The master node (control plane) exposes an API that a developer can use to start Pods.

Try the API

  • Run docker ps to find where the control plane is running
    Terminal
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
    a3e949b85e91 kindest/node:v1.30.0 "/usr/local/bin/entr..." 6 minutes ago Up 6 minutes 127.0.0.1:50949->6443/tcp
  • Try hitting various endpoints on the API server -
    https://127.0.0.1:50949/api/v1/namespaces/default/pods
    {
      "kind": "Status",
      "apiVersion": "v1",
      "metadata": {},
      "status": "Failure",
      "message": "pods is forbidden: User \"system:anonymous\" cannot list resource \"pods\" in API group \"\" in the namespace \"default\"",
      "reason": "Forbidden",
      "details": {
        "kind": "pods"
      },
      "code": 403
    }

The Kubernetes API server performs authentication checks and prevents anonymous requests from getting in.

All of your authentication credentials are stored by kind in ~/.kube/config

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ...
    server: https://127.0.0.1:6443
  name: ultr-cluster
- cluster:
    certificate-authority-data: ...
    server: https://127.0.0.1:50949
  name: kind-local
contexts:
...
current-context: kind-local
kind: Config
preferences:
users:
...

3. kubectl

kubectl is a command-line tool for interacting with Kubernetes clusters. It provides a way to communicate with the Kubernetes API server and manage Kubernetes resources.

Install kubectl

https://kubernetes.io/docs/tasks/tools/#kubectl

Ensure kubectl works fine

Terminal
bash
kubectl get nodes
kubectl get pods

• Under the hood, this is just making HTTP requests to the API server with your credentials present in ~/.kube/config
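Since kubectl reads ~/.kube/config, you can inspect and switch the cluster it currently talks to. These are standard kubectl config subcommands; the context names shown will match whatever is in your own kubeconfig:

```shell
# List all contexts kubectl knows about; the active one is marked with *
kubectl config get-contexts

# Print just the active context, e.g. kind-local
kubectl config current-context

# Switch to another context if you have multiple clusters
kubectl config use-context kind-local
```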

If you want to see the exact HTTP request that goes out to the API server, you can add the --v=8 flag

Terminal
bash
kubectl get nodes --v=8
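
You can also let kubectl handle authentication while you craft the HTTP requests yourself. kubectl proxy (a standard subcommand) opens a local port and forwards requests to the API server with the credentials from your kubeconfig attached, which is why the endpoint that returned 403 earlier now succeeds:

```shell
# Open an authenticated tunnel to the API server on localhost:8001
kubectl proxy --port=8001 &

# Same endpoint as before, but now with credentials injected by the proxy
curl http://127.0.0.1:8001/api/v1/namespaces/default/pods
```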

4. Creating a Pod

So far we have learnt 5 key terms:

  1. Cluster
  2. Nodes
  3. Images
  4. Containers
  5. Pods

We have created a cluster of 3 nodes

How can we deploy a single container from an image inside a pod?

Finding a good image

Let's try to start this image locally - https://hub.docker.com/_/nginx

Starting using docker

Terminal
bash
docker run -p 3005:80 nginx
Terminal Output
docker run -p 3005:80 nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
...
Digest: sha256:0f04e4f646a3f14bf31d8bc8d885b6c951fdcf42589d06845f64d18aec6a3c4d
Status: Downloaded newer image for nginx:latest
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
...
2024/06/01 00:33:21 [notice] 1#1: start worker process 29
2024/06/01 00:33:21 [notice] 1#1: start worker process 30

Try visiting localhost:3005

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
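
When you're done experimenting, stop the container with Ctrl+C in its terminal, or from another terminal using standard docker commands (the actual container ID comes from your own docker ps output):

```shell
# Find the running nginx container, then stop it by ID
docker ps --filter ancestor=nginx
docker stop <container-id>   # substitute the ID printed above
```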

Starting a pod using k8s

ℹ️

Crucial Concept: In Kubernetes, we don't start a "container" directly. We start a Pod, and inside it, the Docker container (like our Nginx one) runs. The Pod is the smallest deployable unit we can create and manage in k8s.

Data Persistence: Similar to Docker containers, if we run a database (like MongoDB) inside a Pod and the Pod stops or is deleted, all data inside it is lost. In Docker, we use volumes to mount and preserve data. Kubernetes has its own robust solutions for handling persistent data, which we will learn about later.

  • Start a pod

    Terminal
    bash
    kubectl run nginx-pod --image=nginx --port=80

    In place of --image=nginx you can also use your own Docker image name.

  • Check the status of the pod

    Terminal
    bash
    kubectl get pods
  • Check the logs

    Terminal
    bash
    kubectl logs nginx-pod
    /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty...
    ...
    2024/06/01 00:32:23 [notice] 1#1: start worker process 32
    2024/06/01 00:32:23 [notice] 1#1: start worker process 33
  • Describe the pod to see more details

    kubectl describe pod nginx-pod
    Name: nginx-pod
    Namespace: default
    Priority: 0
    Node: local-worker/172.20.0.3 ← worker 1
    Start Time: ...
    Labels: run=nginx
    Status: Running
    IP: 10.244.1.2
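
A quicker way to see where each pod landed, without the full describe output, is the standard -o wide flag:

```shell
# Adds the pod IP and the node it is scheduled on to the listing
kubectl get pods -o wide
```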

What our system looks like right now

    Cluster
    ├── control-plane
    ├── worker 1
    │   └── pod
    │       └── nginx container
    └── worker 2

Answers to Common Questions

1. How can I stop a pod?

You can stop and remove a pod using the delete command.

Terminal
bash
kubectl delete pod nginx-pod

2. How can I visit the pod? Which port is it available on?

By default, pods are isolated and only accessible within the Kubernetes cluster. To visit the pod from your local machine, you can use port-forwarding to map a local port to the pod's exposed port (e.g., port 80 for Nginx).

Terminal
bash
kubectl port-forward pod/nginx-pod 3005:80

After running this command, you can visit localhost:3005 in your browser.
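
You can also verify the forward from a second terminal while kubectl port-forward keeps running in the first:

```shell
# Fetch the nginx welcome page through the forwarded port
curl -s http://localhost:3005
```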

3. How many pods can I start?

The limit depends entirely on the physical resources (CPU, Memory) of your nodes and the limits configured in your cluster. Kubernetes will continue to schedule new pods on available worker nodes until the cluster runs out of allocatable resources. For a standard local setup, you can typically run dozens of lightweight pods without running into issues.
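
Kubernetes also lets you bound what a single Pod may consume by declaring resource requests and limits per container in its manifest (manifests are covered in the next section). A minimal sketch; the values here are arbitrary examples, not recommendations:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-limited
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:      # what the scheduler reserves for this container
        cpu: 100m
        memory: 64Mi
      limits:        # hard cap enforced at runtime
        cpu: 250m
        memory: 128Mi
```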

5. Kubernetes manifest

A manifest defines the desired state for Kubernetes resources, such as Pods, Deployments, Services, etc., in a declarative manner.

💡

Note: We previously ran the raw command kubectl run nginx... to start the pod. This is an imperative approach which is okay for testing and learning, but it is not the standard production-grade approach. In production, we always use declarative manifests to define and apply our resources.

Original command

Terminal
bash
kubectl run nginx --image=nginx --port=80

Manifest

yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80

Breaking down the manifest

apiVersion: v1            # API version under which `Pod` was introduced as a k8s construct
kind: Pod                 # what we're starting
metadata:
  name: nginx             # name for the pod
spec:
  containers:             # first (and only) container to start inside the pod
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80   # port on which the container listens for incoming requests

Applying the manifest

Terminal
bash
kubectl apply -f manifest.yml
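
Two related invocations (standard kubectl flags) that are handy while iterating on a manifest:

```shell
# Validate and render the manifest locally without sending it to the cluster
kubectl apply -f manifest.yml --dry-run=client

# Remove every resource defined in the manifest
kubectl delete -f manifest.yml
```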

6. Managing the Cluster

Deleting the Cluster

To completely remove the cluster and all its resources, you can use the Kind CLI to delete it by specifying its name.

Terminal
bash
kind delete cluster --name local
  • This will stop and remove all Docker containers associated with the local cluster.
  • It frees up the resources allocated on your host machine.

Editing the Cluster (e.g., Adding Worker Nodes)

Can we edit an existing Kind cluster dynamically? No. Tools like kind are designed for immutable infrastructure. You cannot easily attach a new worker node to an already running Kind cluster from the CLI.

If you need more worker nodes or a different topology, the standard declarative workflow is to:

  1. Delete the existing cluster
    Tear down the current setup to free resources.
  2. Update your configuration
    Edit your clusters.yml file to add more - role: worker entries.
  3. Create a new cluster
    Recreate the cluster using the updated configuration file.
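
For example, going from two workers to three, the updated clusters.yml would look like this before you recreate the cluster:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker   # newly added worker node
```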

🤔 The Next Big Question: Reliability

Our Pod is now running. But what happens if the application inside the Pod crashes? Or what if the worker node hosting our Pod suddenly goes offline?

If we just create a single, standalone Pod (like we just did), the kubelet will restart a crashed container in place (per the Pod's restartPolicy), but if the Pod itself is deleted or its node goes offline, Kubernetes will not recreate it anywhere else. The Pod will simply be gone, and our application will be down.

To achieve true production-grade reliability and high availability, we need a way to tell Kubernetes: "Please ensure that there is ALWAYS exactly 1 (or more) instances of my Pod running at all times."

This leads us to our next Kubernetes construct:

ReplicaSet