EngineeringNotes

Services

Connecting networking for your Pods and ensuring applications are reachable.

1. Services

🔌 The Networking Problem

Our Pods are now running via Deployments, but can we actually reach our application?

Simply put: No, we cannot.

Why? Because each Pod gets its own isolated internal IP address. Furthermore, Pods are ephemeral. If a Pod crashes, the Deployment replaces it with a brand new Pod, which gets a completely new IP address. You cannot safely configure a frontend application to talk to 10.1.2.3, because tomorrow that Pod might die and the new one might be at 10.1.2.4.
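You can watch this IP churn yourself. A quick sketch, assuming a Deployment with Pods labelled app: nginx already exists (substitute your actual Pod name):

```shell
# List Pods with their current (ephemeral) IPs
kubectl get pods -l app=nginx -o wide

# Delete one Pod; the Deployment immediately creates a replacement...
kubectl delete pod <pod-name>

# ...and the replacement almost always comes up with a different IP
kubectl get pods -l app=nginx -o wide
```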

The Solution: Services

How do we fix this issue? A Service! It acts as a single, stable entry point (a static IP or DNS name) that sits in front of your fluid, ever-changing fleet of Pods. When traffic hits the Service, it automatically load-balances the request to one of the healthy Pods behind it.

In Kubernetes, a "Service" is an abstraction that defines a logical set of Pods and a policy by which to access them. Kubernetes Services provide a way to expose applications running on a set of Pods as network services. Here are the key points about Services in Kubernetes:

Key concepts

1. Pod Selector:

Services use labels to select the Pods they target. A label selector identifies a set of Pods based on their labels.
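You can use the exact same label-selector syntax from the command line to preview which Pods a Service would target. A sketch, assuming Pods labelled app: nginx exist in your cluster:

```shell
# Pods matching what a Service with `selector: app: nginx` would pick
kubectl get pods -l app=nginx

# The -l flag also accepts set-based expressions
kubectl get pods -l 'app in (nginx, httpd)'
```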

2. Service Types:

  • ClusterIP: Exposes the Service on an internal IP in the cluster. This is the default ServiceType. The Service is only accessible within the cluster.

  • NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You can contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.

  • LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.

3. Endpoints:

These are automatically created and updated by Kubernetes when the Pods selected by a Service's selector change.
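You can inspect this object directly. A sketch, assuming a Service named nginx-service exists (newer clusters also expose the same data as EndpointSlices):

```shell
# The Pod IP:port pairs currently backing the Service
kubectl get endpoints nginx-service

# The same information in the newer EndpointSlice form
kubectl get endpointslices -l kubernetes.io/service-name=nginx-service
```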

Create service.yml

yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30007 # This port can be any valid port within the NodePort range (30000-32767)
  type: NodePort

The Local Cluster Problem (kind)

When using a local Kubernetes tool like kind (Kubernetes IN Docker), there's a catch. kind runs your Kubernetes nodes as Docker containers on your host machine (e.g., your Mac or Windows PC). Even though you just exposed NodePort 30007 on the Node, that Node is running inside a Docker container with its own internal isolated IP!

⚠️To physically reach the NodePort from your host machine's web browser (localhost), you must map your host machine's port to the kind Docker container's port at the time of creating the cluster.

[Diagram: the kind cluster runs inside a Docker container; the master and both worker nodes each expose NodePort 30007, routing to the nginx Pod.]

Restart the cluster with extra ports exposed (create kind.yml)

yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30007
    hostPort: 30007
- role: worker
- role: worker

Recreate cluster, apply deployment and service

Terminal
bash
kind create cluster --config kind.yml
kubectl apply -f deployment.yml
kubectl apply -f service.yml
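Once the objects are applied, a quick way to verify everything is wired up (resource names assume the manifests above):

```shell
# Confirm the Service has the expected NodePort
kubectl get svc nginx-service

# Confirm it found healthy Pods (an empty ENDPOINTS column means a selector mismatch)
kubectl get endpoints nginx-service

# Hit the mapped host port directly
curl -s http://localhost:30007 | head -n 5
```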

Visit localhost:30007

🔒 localhost:30007

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

🧠 Internal Routing: What Happens Internally?

🏗️ Example Setup

  • Pod (nginx): worker-2
  • Pod IP: 10.244.2.5
  • Container Port: 80
  • Service Type: NodePort
  • NodePort: 30007
  • ClusterIP: 10.96.15.20 (auto-created)

💻 Node IPs

  • master: 192.168.1.10
  • worker-1: 192.168.1.11
  • worker-2: 192.168.1.12

🌍 You hit: 192.168.1.11:30007
(That is worker-1, NOT worker-2)

🔁 The Step-by-Step Flow

  1. 💻 Client (Browser)
  2. N1: worker-1:30007 (NodePort hit)
  3. KP: kube-proxy (running on worker-1; its iptables rules forward the traffic)
  4. SV: Service (ClusterIP: 10.96.15.20)
  5. EP: Endpoints → routes to 10.244.2.5:80 (Pod IP)
  6. CNI: Cluster Network (Flannel/Calico figures out the physical node)
  7. N2: worker-2 (packet crosses the physical network)
  8. 📦 nginx container:80 (the response goes back the exact same path!)

1️⃣ NodePort Opens Port on ALL Nodes

Even if NO pod is running on a specific node, the port (e.g., 30007) is open on every single node in the cluster. Why? Because kube-proxy configures iptables rules globally on every node to intercept that port.
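With kind you can verify this by peeking inside any node container, since every node carries the same rules. A sketch: container names depend on your cluster name (kind is the default), and the KUBE-NODEPORTS chain assumes kube-proxy's iptables mode:

```shell
# kind nodes are just Docker containers on your host
docker ps --filter name=kind

# The KUBE-NODEPORTS chain exists on EVERY node, even ones running no nginx Pod
docker exec kind-worker iptables -t nat -L KUBE-NODEPORTS -n | grep 30007
```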

2️⃣ How does it know the Pod location?

The Service doesn't route traffic directly. Kubernetes creates a hidden Endpoints object (kubectl get endpoints) that actively tracks the exact IP addresses (e.g., 10.244.2.5:80) of all healthy Pods bearing the matching label.

3️⃣ How Does worker-1 Reach worker-2?

Thanks to the CNI Plugin (like Flannel, Calico). It creates a flat virtual network. Every Pod IP is reachable from every node, meaning worker-1 can directly send traffic to 10.244.2.5 without hitting external routers.

🎯 Why targetPort: 80?

The Service routes traffic to the container port, not the node. It simply looks up the pod IP and forwards the packet to targetPort: 80, which perfectly matches the containerPort: 80.

🧩 What if the Pod Moves?

If the nginx Pod dies and the scheduler places its replacement on worker-1 instead: the Pod IP changes ➔ the Endpoints object updates automatically ➔ kube-proxy refreshes the iptables rules on every node ➔ traffic flows directly to the new location. Zero manual changes needed!
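You can watch this self-healing live. In one terminal, watch the Endpoints; in another, kill the Pod (a sketch, assuming the nginx-service and Deployment from above):

```shell
# Terminal 1: watch the backing Pod IPs change in real time
kubectl get endpoints nginx-service --watch

# Terminal 2: delete the Pod; the Deployment replaces it, and the
# Endpoints list flips to the new Pod IP within seconds
kubectl delete pod -l app=nginx
```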

🏁 Final Mental Model

  • NodePort = Door open on every house (Node)
  • Service = Reception desk
  • Endpoints = List of room numbers
  • CNI = Road network between houses
  • Pod = The actual room

Even if you knock on the completely wrong house (worker-1), the reception desk immediately looks at their list and forwards you through the internal road network to the correct house (worker-2).

⚖️ Selectors & Round Robin

How does a Service actually know which Pods to send traffic to? It uses Labels and a Selector. Kubernetes doesn't care what container image is running inside the Pod; it only cares if the Pod's labels match the Service's selector.

master node
30007
worker node 1
30007
nginx
80
worker node 2
30007
httpd
80
Pod 1
➜ ~ cat pod1.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx

spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
Pod 2
➜ ~ cat pod2.yml
apiVersion: v1
kind: Pod
metadata:
  name: httpd
  labels:
    app: nginx

spec:
  containers:
  - name: httpd
    image: httpd
    ports:
    - containerPort: 80
Service
➜ ~ cat service.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx

  ports:
  - port: 80
    targetPort: 80
    nodePort: 30007
  type: NodePort

Notice the trick?

Both Pods have the exact same label app: nginx, even though the second pod is actually running an Apache httpd container.

Because the Service's selector matches that label, the Service will load balance across both of them. (With kube-proxy's default iptables mode the choice per request is effectively random rather than strict round-robin, but over many requests the split is roughly even.) Half your traffic will see the Nginx welcome screen, and half will see the Apache welcome screen.
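kube-proxy does this bookkeeping with iptables rules, but the idea can be sketched in plain shell. This toy function (the IPs are made up) picks an endpoint per request the way a strict round-robin balancer would:

```shell
#!/usr/bin/env bash
# Toy round-robin picker; NOT kube-proxy itself, just the idea.
# The two entries stand in for the nginx and httpd Pod IPs (hypothetical).
ENDPOINTS=("10.244.2.5:80" "10.244.1.7:80")

pick_endpoint() {
  local n=$1
  # Request n is sent to endpoint (n mod number-of-endpoints)
  echo "${ENDPOINTS[$(( n % ${#ENDPOINTS[@]} ))]}"
}

# Four requests alternate between the two Pods
for i in 0 1 2 3; do
  echo "request $i -> $(pick_endpoint "$i")"
done
```

When a Pod is added or removed, only the ENDPOINTS list changes; the routing logic stays identical, which is exactly why the Endpoints object can update without touching the Service.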

⚠️ The Problems with NodePort

NodePort is great for local development, but it is rarely used in production. Here is why:

🕸️

Ugly Domain & Ports

Users don't want to type http://my-api-backend:30007. They expect traffic on standard clean ports like 80 (HTTP) or 443 (HTTPS).

🛡️

Security Vulnerabilities

To use NodePort, users directly hit your worker node's IP address. Exposing your internal infrastructure IPs directly to the internet is a serious security risk, leaving your nodes open to attacks such as Distributed Denial of Service (DDoS).

2. LoadBalancer Service

In Kubernetes, a LoadBalancer service type is a way to expose a service to external clients. When you create a Service of type LoadBalancer, Kubernetes will automatically provision an external load balancer from your cloud provider (e.g., AWS, Google Cloud, Azure) to route traffic to your Kubernetes service.

☁️

How it differs from NodePort

Unlike NodePort, the LoadBalancer does not live inside the cluster. It is a separate, cloud-managed component sitting outside your cluster architecture. It acts as the single secure entry point, shielding your worker node IP addresses from the public internet.

[Diagram: app.100xdevs.com resolves to 100.11.22.13, the External LB, which forwards traffic to the master and worker nodes; worker node 1 runs nginx on port 80, worker node 2 runs httpd on port 80.]

Create service-lb.yml

yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer

Apply the service

Terminal
bash
kubectl apply -f service-lb.yml

View the assigned External IP

Once applied, your cloud provider will asynchronously provision the load balancer. You can check its status and retrieve the public URL using the get svc command:

Terminal
bash
kubectl get svc

Note: The EXTERNAL-IP column might say <pending> for a few minutes while the cloud provider allocates the hardware. Eventually, it will populate with an actual DNS name or public IP address.
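To script against the address once it is assigned, you can pull just that field with jsonpath. A sketch; whether .ip or .hostname is populated depends on the provider:

```shell
# Public IP (e.g. on GCP)...
kubectl get svc nginx-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# ...or a DNS hostname (e.g. an AWS ELB)
kubectl get svc nginx-service -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```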

💀 Crucial Warning: The Hidden Cost Trap!

One of the biggest mistakes beginners make is deleting their Kubernetes cluster without deleting the LoadBalancer service first.

Because the LoadBalancer is an independent cloud resource living outside your cluster, tearing down the cluster does not automatically tear down the LoadBalancer with most providers. It will remain active in your cloud account, quietly accumulating charges.

ALWAYS explicitly delete the service before tearing down your cluster:
Terminal
bash
kubectl delete svc nginx-service