From zero to your first deployed Pod.
Before we can run any Pods, we need a place to run them. A cluster is simply a set of machines (nodes) running Kubernetes.
A single-node setup works, but it is not ideal: you don't want your control plane to also act as a worker and run application containers.
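With kind, a multi-node cluster is described declaratively in a config file. A sketch (assuming the file is named clusters.yml, matching the three-node cluster used in this walkthrough):

```yaml
# clusters.yml -- a kind cluster config sketch:
# one control-plane node plus two workers (three nodes total).
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: local
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

Create the cluster from it with: kind create cluster --config clusters.yml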
The master node (control plane) exposes an API that a developer can use to start Pods.
Run docker ps to find where the control plane is running. If you try to query the API server directly without credentials, it responds with:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "pods is forbidden: User \"system:anonymous\" cannot list resource \"pods\" in API group \"\" in the namespace \"default\"",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },
  "code": 403
}
The Kubernetes API server performs authentication checks and prevents you from getting in.
All of your authentication credentials are stored by kind in ~/.kube/config
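That file has a simple shape; an abridged sketch (server address and certificate values are illustrative placeholders):

```yaml
# ~/.kube/config (abridged): clusters, users, and contexts that
# tie a user's credentials to a cluster's API server address.
apiVersion: v1
kind: Config
current-context: kind-local
clusters:
  - name: kind-local
    cluster:
      server: https://127.0.0.1:6443
      certificate-authority-data: <base64 CA cert>
users:
  - name: kind-local
    user:
      client-certificate-data: <base64 client cert>
      client-key-data: <base64 client key>
contexts:
  - name: kind-local
    context:
      cluster: kind-local
      user: kind-local
```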
kubectl is a command-line tool for interacting with Kubernetes clusters. It provides a way to communicate with the Kubernetes API server and manage Kubernetes resources.
kubectl get nodes
kubectl get pods
• Under the hood, this is just making HTTP requests to the API server with your credentials present in ~/.kube/config
If you want to see the exact HTTP request that goes out to the API server, you can add the --v=8 flag
kubectl get nodes --v=8
We have created a cluster of 3 nodes.
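What kubectl does under the hood can be sketched in a few lines: read the API server address from the kubeconfig, then build the REST URL for the requested resource. This is a simplified sketch — the kubeconfig is modeled as a plain dict, and TLS plus auth headers are omitted; only the URL construction is shown.

```python
# Sketch of what `kubectl get pods` / `kubectl get nodes` requests.
# Assumption: the parsed kubeconfig is modeled as a plain dict, and
# the server address below is an illustrative placeholder.
kubeconfig = {
    "current-context": "kind-local",
    "clusters": [
        {"name": "kind-local",
         "cluster": {"server": "https://127.0.0.1:6443"}},
    ],
}

def api_url(config: dict, resource: str, namespace: str = "default") -> str:
    """Build the core/v1 REST URL kubectl would GET for a resource."""
    server = config["clusters"][0]["cluster"]["server"]
    # Cluster-scoped resources such as nodes have no namespace segment.
    if resource in ("nodes", "namespaces"):
        return f"{server}/api/v1/{resource}"
    return f"{server}/api/v1/namespaces/{namespace}/{resource}"

print(api_url(kubeconfig, "pods"))   # what `kubectl get pods` requests
print(api_url(kubeconfig, "nodes"))  # what `kubectl get nodes` requests
```

Running `kubectl get nodes --v=8` shows these same URLs in the logged request lines.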
How can we deploy a single container from an image inside a Pod?
Let's try to start this image locally - https://hub.docker.com/_/nginx
docker run -p 3005:80 nginx
You should see nginx's default welcome page: "If you see this page, the nginx web server is successfully installed and working. Further configuration is required."
Crucial Concept: In Kubernetes, we don't start a "container" directly. We start a Pod, and inside it, the Docker container (like our Nginx one) runs. The Pod is the smallest deployable unit we can create and manage in k8s.
Data Persistence: Similar to Docker containers, if we run a database (like MongoDB) inside a Pod and the Pod stops or is deleted, all data inside it is lost. In Docker, we use volumes to mount and preserve data. Kubernetes has its own robust solutions for handling persistent data, which we will learn about later.
• Start a pod
kubectl run nginx-pod --image=nginx --port=80
In place of --image=nginx, we can use our own Docker image name as well.
• Check the status of the pod
kubectl get pods
• Check the logs
kubectl logs nginx-pod
• Describe the pod to see more details
kubectl describe pod nginx-pod
You can stop and remove a pod using the delete command.
kubectl delete pod nginx-pod
By default, pods are isolated and only accessible within the Kubernetes cluster. To reach the pod from your local machine, you can use port-forwarding to map a local port to the pod's exposed port (e.g., port 80 for Nginx).
kubectl port-forward pod/nginx-pod 3005:80
After running this command, you can visit localhost:3005 in your browser.
How many pods can we run? The limit depends entirely on the physical resources (CPU, memory) of your nodes and the limits configured in your cluster. Kubernetes will continue to schedule new pods on available worker nodes until the cluster runs out of allocatable resources. For a standard local setup, you can typically run dozens of lightweight pods without running into issues.
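This is why Pod specs usually declare resource requests and limits: the scheduler uses requests to decide which node has room, and limits cap what a container may consume. A hedged fragment showing the relevant fields (all values are illustrative):

```yaml
# Pod spec fragment (a sketch): requests guide scheduling,
# limits cap the container's actual consumption.
spec:
  containers:
    - name: nginx
      image: nginx
      resources:
        requests:
          cpu: "100m"      # a tenth of a CPU core
          memory: "64Mi"
        limits:
          cpu: "250m"
          memory: "128Mi"
```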
A manifest defines the desired state for Kubernetes resources, such as Pods, Deployments, Services, etc., in a declarative manner.
Note: We previously ran the raw command kubectl run nginx... to start the pod. This is an imperative approach which is okay for testing and learning, but it is not the standard production-grade approach. In production, we always use declarative manifests to define and apply our resources.
kubectl run nginx --image=nginx --port=80
The equivalent declarative manifest:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
Apply it with:
kubectl apply -f manifest.yml
To completely remove the cluster and all its resources, you can use the Kind CLI to delete it by specifying its name.
kind delete cluster --name local
This deletes the local cluster.
Can we edit an existing Kind cluster dynamically? No. Tools like kind are designed for immutable infrastructure. You cannot easily attach a new worker node to an already running Kind cluster from the CLI.
If you need more worker nodes or a different topology, the standard declarative workflow is to:
• Delete the existing cluster.
• Edit your clusters.yml file to add more - role: worker entries.
• Recreate the cluster from the updated config file.
Our Pod is now running. But what happens if the application inside the Pod crashes? Or what if the worker node hosting our Pod suddenly goes offline?
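The edit itself is a one-line change to the nodes list; a sketch of clusters.yml with one extra worker added:

```yaml
# clusters.yml with an additional worker (a sketch; after editing,
# recreate the cluster with: kind create cluster --config clusters.yml)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: local
nodes:
  - role: control-plane
  - role: worker
  - role: worker
  - role: worker   # newly added worker entry
```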
If we just create a single, standalone Pod (like we just did), Kubernetes will not automatically restart or recreate it if it fails. The Pod will simply die, and our application will be down.
To achieve true production-grade reliability and high availability, we need a way to tell Kubernetes: "Please ensure that there is ALWAYS exactly 1 (or more) instances of my Pod running at all times."
This leads us to our next Kubernetes construct:
ReplicaSet
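As a preview, a minimal ReplicaSet manifest sketch that tells Kubernetes to keep exactly one nginx Pod alive at all times (names and labels are illustrative):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
spec:
  replicas: 1            # "always keep exactly 1 Pod running"
  selector:
    matchLabels:
      app: nginx         # which Pods this ReplicaSet owns
  template:              # Pod template used to (re)create Pods
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```

If the Pod crashes or is deleted, the ReplicaSet controller notices the count dropped below replicas and creates a replacement from the template.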