Kubernetes Pods and Deployments

Desired State of Apps in K8S

Kubernetes likes to manage applications declaratively. This is a pattern where you describe what you want in a set of YAML files, post them to K8S, and sit back while it happens. K8S watches the running state and makes sure it doesn’t stray from what you asked for.

You pass the YAML file to the API server using the kubectl command-line tool. The file describes the desired state of the application, such as which image to use, how many replicas to run, which network ports to listen on, and how to perform updates.

  1. Declare the desired state in a YAML manifest file
  2. Post it to the API server
  3. K8S stores it in the cluster store as the desired state and implements it in the cluster
  4. A controller makes sure the current state does not vary from the desired state
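The workflow above can be sketched with a minimal Deployment manifest. The names and image here are illustrative placeholders, not from a real application:

```yaml
# Desired state: which image, how many replicas, which port, how to update
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy            # hypothetical name
spec:
  replicas: 3                 # desired number of pod replicas
  strategy:
    type: RollingUpdate       # how to perform updates
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25     # which image to use
        ports:
        - containerPort: 80   # which network port to listen on
```

Posting this to the API server is typically done with `kubectl apply -f deploy.yml`; the Deployment controller then keeps the observed state converged on this spec.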

NOTE: One of the major advantages of K8S is that it can run on-prem in your data center or on any cloud provider, which means we are not tied to any cloud provider. As long as our app runs on K8S, we can lift and shift it to any cloud, or even back on-prem, wherever we have a K8S cluster. K8S abstracts the differences in the underlying platform, so we can use the same manifest files everywhere.

DNS service

Every K8S cluster has an internal DNS service that is vital to service discovery.
The cluster’s DNS service has a static IP address that is hard-coded into every Pod in the cluster. This ensures every container and Pod can locate it and use it for discovery. Service registration is also automatic, which means apps don’t need to be coded with the intelligence to register with K8S service discovery.

Apps for Kubernetes

  1. packaged as a container
  2. wrapped in a pod
  3. deployed via declarative YAML

containerized – write the application in the language of your choice and build a Docker image

a pod is just a wrapper that lets a containerized app run on K8S

pods are deployed via a high-level controller, such as a Deployment.

Pods and Containers

The simplest model is to run a single container in a pod.
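That simplest case can be sketched as a minimal single-container Pod manifest (the name and image are illustrative):

```yaml
# A bare Pod wrapping one container
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod   # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.25
```

In practice you would rarely create a bare Pod like this; you would let a Deployment manage Pods for you.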

use cases for running multiple containers in a pod:
1. service meshes
2. web container supported by a helper container pulling updated content
3. containers with a tightly coupled log scraper

Pod is a construct for running one or more containers.

Pods don’t run applications; applications always run inside containers. A Pod is just a sandbox for running one or more containers.

Multiple containers in a pod all share the same Pod environment. This includes the network stack, volumes, IPC namespace, shared memory, and more. This means all containers in the same pod share the same IP address. If they want to communicate with each other, they can use the localhost interface. Multi-container pods are useful when you have tightly coupled containers.
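As a sketch of the "helper container" use case above, here is a hypothetical two-container Pod in which a sidecar refreshes content for the main web container via a shared volume (all names are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar        # hypothetical name
spec:
  volumes:
  - name: html                  # shared volume, visible to both containers
    emptyDir: {}
  containers:
  - name: web                   # main container serving the content
    image: nginx:1.25
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  - name: content-puller        # helper container pulling/refreshing content
    image: alpine:3.19
    command: ["sh", "-c", "while true; do date > /work/index.html; sleep 30; done"]
    volumeMounts:
    - name: html
      mountPath: /work
```

Because both containers share the Pod's network namespace, the helper could also reach the web server simply at `localhost:80` without any Service in between.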

Consider using a service mesh to secure traffic between pods and application services.

Pods are the unit of scaling. You don’t scale containers; you scale pods.

Pod deployment is an atomic operation: the entire pod either comes into service or it fails. Also, a single pod runs on exactly one node; a pod is never scheduled across multiple nodes.

If a pod goes down unexpectedly, K8S brings up a new pod to replace it.

Pods are immutable. You don’t change the configuration of a running pod; if you want to change it, you replace the pod with a new one.

Deployments, DaemonSets, and StatefulSets are high-level controllers used for deployment. They constantly watch the cluster and make sure the current state matches the desired state.

When pods are scaled up or down, rolled out, or replaced, new IP addresses are assigned to them. This is where Services come into play: a Service provides reliable networking, with a stable name and IP, and load-balances traffic across the pods behind it.

A Service is an object in the K8S API. It has a front end consisting of a stable DNS name, IP address, and port, while the back end load-balances traffic across a dynamic set of pods. As pods come and go, the Service automatically updates itself and continues to provide a stable networking endpoint. It is a stable networking abstraction that provides TCP and UDP load-balancing across a dynamic set of pods.
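A minimal Service manifest selecting pods by a hypothetical `app: web` label might look like this (names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc        # stable DNS name within the cluster
spec:
  selector:
    app: web           # back end: any healthy pod carrying this label
  ports:
  - port: 80           # stable port on the Service's cluster IP
    targetPort: 80     # port the containers actually listen on
```

The label selector is what makes the back end dynamic: as pods with `app: web` come and go, the Service's endpoint list updates automatically while its name and IP stay fixed.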

Since a Service operates at the TCP and UDP layer, it cannot provide application-layer intelligence, such as host- and path-based routing. For that we need an Ingress, which operates at the HTTP layer and can route on application-layer host and path.
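A sketch of an Ingress doing host- and path-based routing to a back-end Service (the hostname, ingress class, and Service name are all assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx      # assumes an NGINX ingress controller is installed
  rules:
  - host: shop.example.com     # host-based routing
    http:
      paths:
      - path: /cart            # path-based routing
        pathType: Prefix
        backend:
          service:
            name: web-svc      # hypothetical Service name
            port:
              number: 80
```

Note that an Ingress only describes the routing rules; an ingress controller running in the cluster is what actually implements them.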

