Configuring networking for your Pods and ensuring your applications are reachable.
Now our Pods are running via Deployments, but are we able to reach our application?
Not reliably. Each Pod gets its own isolated, cluster-internal IP address. Furthermore, Pods are ephemeral: if a Pod crashes, the Deployment replaces it with a brand-new Pod, which gets a completely new IP address. You cannot safely configure a frontend application to talk to 10.1.2.3, because tomorrow that Pod might die and its replacement might live at 10.1.2.4.
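You can see this for yourself by listing Pods with wide output, which shows each Pod's internal IP (illustrative names and IPs below; yours will differ):

kubectl get pods -o wide

NAME                     READY   STATUS    IP           NODE
nginx-6d8f4c9b7-abcde    1/1     Running   10.244.1.3   kind-worker
nginx-6d8f4c9b7-fghij    1/1     Running   10.244.2.5   kind-worker2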
How do we fix this issue? A Service! It acts as a single, stable entry point (a static IP or DNS name) that sits in front of your fluid, ever-changing fleet of Pods. When traffic hits the Service, it automatically load-balances the request to one of the healthy Pods behind it.
In Kubernetes, a "Service" is an abstraction that defines a logical set of Pods and a policy by which to access them. Kubernetes Services provide a way to expose applications running on a set of Pods as network services. Here are the key points about Services in Kubernetes:
Services use labels to select the Pods they target. A label selector identifies a set of Pods based on their labels.
ClusterIP: Exposes the Service on an internal IP in the cluster. This is the default ServiceType. The Service is only accessible within the cluster.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You can contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
Endpoints: For each Service, Kubernetes automatically creates and updates an Endpoints object that tracks the Pods currently matched by the Service's selector as they come and go.
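To make the types concrete, here is what a minimal ClusterIP Service might look like (the name is illustrative, and since ClusterIP is the default, the type line could be omitted entirely):

apiVersion: v1
kind: Service
metadata:
  name: nginx-internal
spec:
  type: ClusterIP    # The default; only reachable from inside the cluster
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80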
Now let's create a NodePort Service for our nginx Pods.

service.yml:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30007 # This port can be any valid port within the NodePort range (30000-32767)
  type: NodePort

When using a local Kubernetes tool like kind (Kubernetes IN Docker), there's a catch. kind runs your Kubernetes nodes as Docker containers on your host machine (e.g., your Mac or Windows PC). Even though you just exposed NodePort 30007 on the Node, that Node is running inside a Docker container with its own isolated internal IP!
⚠️ To reach the NodePort from your host machine's web browser (localhost), you must map your host machine's port to the kind node container's port when creating the cluster.
kind.yml:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30007
        hostPort: 30007
  - role: worker
  - role: worker

Create the cluster and deploy everything:

kind create cluster --config kind.yml
kubectl apply -f deployment.yml
kubectl apply -f service.yml

Now open localhost:30007 in your browser. Even if NO Pod is running on a specific node, the port (e.g., 30007) is open on every single node in the cluster. Why? Because kube-proxy configures iptables rules globally on every node to intercept that port.
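You can verify this yourself. Since each kind node is just a Docker container, exec into the nodes and look for the rule (node container names assume the default cluster name, e.g. kind-worker and kind-worker2):

docker exec kind-worker iptables-save | grep 30007
docker exec kind-worker2 iptables-save | grep 30007
# Both nodes print NAT rules for port 30007, even if neither is hosting an nginx Pod.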
The Service doesn't route traffic directly. Kubernetes creates a hidden Endpoints object (kubectl get endpoints) that actively tracks the exact IP addresses (e.g., 10.244.2.5:80) of all healthy Pods bearing the matching label.
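You can inspect it directly (illustrative output; your Pod IPs will differ):

kubectl get endpoints nginx-service

NAME            ENDPOINTS                     AGE
nginx-service   10.244.1.3:80,10.244.2.5:80   5m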
How does any node reach any Pod IP? Thanks to the CNI plugin (like Flannel or Calico), which creates a flat virtual network. Every Pod IP is reachable from every node, meaning worker-1 can send traffic directly to 10.244.2.5 without hitting external routers.
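A quick way to see the flat network in action is to fetch a Pod IP from a throwaway Pod, which may well land on a different node (the IP below is an example; substitute one from kubectl get pods -o wide):

kubectl run nettest --rm -it --image=busybox --restart=Never -- wget -qO- 10.244.2.5:80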
The Service routes traffic to the container port, not to a node port. It simply looks up the Pod IP and forwards the packet to targetPort: 80, which matches the containerPort: 80 on the Pod.
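For reference, here is a sketch of what the deployment.yml applied earlier might contain (the names and image are assumptions); the key parts are the app: nginx label matching the Service's selector, and containerPort: 80 where targetPort: 80 lands:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx            # The Service's selector matches this label
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80   # The Service's targetPort: 80 lands here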
If the nginx Pod dies and the scheduler places its replacement on worker-1 instead: the Pod IP changes ➔ the Endpoints object updates automatically ➔ kube-proxy updates its iptables rules on every node ➔ traffic flows directly to the new location. Zero manual changes needed!
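You can watch this happen live. In one terminal, watch the Endpoints object; in another, delete a Pod (the Pod name below is assumed; copy a real one from kubectl get pods):

kubectl get endpoints nginx-service -w
# In a second terminal:
kubectl delete pod nginx-6d8f4c9b7-abcde
# The watch shows the old Pod IP disappear and the replacement's IP appear within seconds.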
Even if you knock on the completely wrong house (worker-1), the reception desk immediately checks its list and forwards you through the internal road network to the correct house (worker-2).
How does a Service actually know which Pods to send traffic to? It uses Labels and a Selector. Kubernetes doesn't care what container image is running inside the Pod; it only cares if the Pod's labels match the Service's selector.
Both Pods have the exact same label app: nginx, even though the second pod is actually running an Apache httpd container.
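Here is a sketch of that pair of Pods (names assumed):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: httpd-pod
  labels:
    app: nginx          # Same label, but this Pod runs Apache httpd!
spec:
  containers:
    - name: httpd
      image: httpd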
Because the Service's selector matches that label, the Service will happily load balance between both of them! With kube-proxy's default iptables mode, each request is sent to a random backend, so roughly half your traffic will see the Nginx welcome screen and half will see the Apache welcome screen.
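With the kind NodePort setup from earlier, you can see the split yourself with a quick illustrative loop:

for i in $(seq 1 10); do curl -s localhost:30007 | grep -oE '<title>.*</title>|<h1>.*</h1>'; done
# Some responses show "Welcome to nginx!", others "It works!" (Apache's default page).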
NodePort is great for local development, but it is rarely used in production. Here is why:
Users don't want to type http://my-api-backend:30007. They expect traffic on standard clean ports like 80 (HTTP) or 443 (HTTPS).
With NodePort, users hit your worker nodes' IP addresses directly. Exposing your internal infrastructure IP addresses to the internet is a massive security risk, leaving your nodes vulnerable to Distributed Denial of Service (DDoS) attacks.
In Kubernetes, a LoadBalancer service type is a way to expose a service to external clients. When you create a Service of type LoadBalancer, Kubernetes will automatically provision an external load balancer from your cloud provider (e.g., AWS, Google Cloud, Azure) to route traffic to your Kubernetes service.
Unlike a NodePort, the load balancer doesn't live inside the cluster. It is an independent piece of infrastructure sitting outside your cluster architecture, acting as the single, secure entry point and shielding your worker nodes' IP addresses from the public internet.
service-lb.yml:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer

kubectl apply -f service-lb.yml

Once applied, your cloud provider will asynchronously provision the load balancer. You can check its status and retrieve the public URL using the get svc command:

kubectl get svc

Note: The EXTERNAL-IP column might say <pending> for a few minutes while the cloud provider allocates the hardware. Eventually, it will populate with an actual DNS name or public IP address.
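Illustrative output while provisioning (your names, IPs, and ports will differ):

NAME            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-service   LoadBalancer   10.96.145.12   <pending>     80:31234/TCP   30s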
One of the biggest mistakes beginners make is deleting their Kubernetes cluster without deleting the LoadBalancer service first.
Because the LoadBalancer is a totally independent piece of hardware living outside your cluster, tearing down the cluster does not automatically tear down the LoadBalancer in most cloud providers. It will remain active in your cloud account, quietly accumulating massive charges.
Always delete the Service first, so Kubernetes deprovisions the cloud load balancer:

kubectl delete svc nginx-service