Learn Kubernetes Concepts: Visualize It Like an Airport

In this post, we will learn Kubernetes (K8s) concepts using the analogy of an airport. By thinking of and visualizing a Kubernetes cluster as an airport, we will master the core concepts of Kubernetes.

Note: I will be referring to Kubernetes as K8s.

Core Components

Fundamentally, Kubernetes (K8s) consists of two core components: the Control Plane and the Worker Nodes.
The Control Plane, in simple terms, acts as the brain of the system—it manages the entire cluster, making decisions and directing operations.
The Worker Nodes, on the other hand, are like the muscles—they carry out the actual work by executing user applications, following the instructions from the Control Plane, and reporting back their status.

Analogy: Think of the Control Plane as an Air Traffic Control (ATC) unit, the central command of an airport. The Worker Nodes are like airplanes – responsible for the real work of flying. Just as pilots listen to and follow instructions from ATC to ensure safe and efficient flights, Worker Nodes follow directives from the Control Plane to execute tasks and maintain cluster operations.

[Illustration: an airport with airplanes representing Kubernetes Worker Nodes and an air traffic control tower symbolizing the Control Plane, highlighting how flight operations are orchestrated much like Kubernetes manages containerized applications.]

Expanding on the Control Plane

The Control Plane is essentially a server running a collection of system services that serve as the brain of the Kubernetes cluster. It exposes the Kubernetes API, includes a scheduler to assign workloads, records the state of the cluster and applications in a persistent store, and implements features such as auto-scaling and zero-downtime rolling updates. In short, the Control Plane is the intelligence of the K8s cluster.

Components of the Control Plane:

The Control Plane is composed of the following key services:

  • API Server
  • Cluster Store
  • Controller Manager and Controllers
  • Scheduler
  • Cloud Controller Manager

Let’s look at each of these in more detail. Understanding them is essential to grasp how Kubernetes functions. Don’t worry if the purpose of each component isn’t immediately clear—we’ll revisit and expand on them as we go.

API Server

The API Server is the central communication hub of the Control Plane. Every operation or communication within the cluster goes through it. Whether it’s an internal component or an external user interacting with the system, all requests and responses flow through the API Server. It’s the front door to your Kubernetes cluster.
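
To see the API Server in this front-door role, here is a minimal sketch using the official Kubernetes Python client (an assumption of this example, along with a working kubeconfig). Even listing the cluster's nodes is just an authenticated request to the API Server:

```python
from kubernetes import client, config

# Load credentials and the API Server address from ~/.kube/config.
config.load_kube_config()

# Even a simple "list the nodes" request is just an authenticated
# HTTPS call to the API Server, the cluster's front door.
v1 = client.CoreV1Api()
for node in v1.list_node().items:
    print(node.metadata.name)
```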

Cluster Store

The Cluster Store is the stateful part of the Control Plane—it persistently stores the cluster’s entire configuration and state. It’s typically backed by etcd, a distributed key-value store. This store acts as the single source of truth for the cluster. It’s strongly recommended to configure etcd in a highly available setup to ensure reliability and fault tolerance.
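
You rarely talk to etcd directly; the API Server mediates every read and write to the store. One visible trace of it is the resourceVersion carried by every object, which the API Server derives from the store's revision counter. A small sketch, under the same Python-client assumption as above:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Every object persisted in the cluster store carries a resourceVersion
# that changes on each write; it backs Kubernetes' optimistic concurrency.
ns = v1.read_namespace(name="default")
print(ns.metadata.name, ns.metadata.resource_version)
```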

Controller Manager and Controllers

Controllers such as the Deployment Controller, StatefulSet Controller, and ReplicaSet Controller are responsible for maintaining the desired state of the cluster. They continuously watch the API Server for changes and act to reconcile the current state with the desired state.

The Controller Manager oversees all these controllers. It’s essentially the manager of the individual, specialized controllers—spawning them and monitoring their activity.
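
All of these controllers share the same underlying pattern: a reconciliation loop. The sketch below is purely illustrative (observe, desired_state, and converge are hypothetical names, not real Kubernetes APIs), but it captures the watch-compare-act cycle every controller runs:

```python
import time

def reconcile(observe, desired_state, converge):
    """Generic control loop: watch the actual state, compare it with
    the desired state, and act until the two match."""
    while True:
        actual = observe()             # e.g., pods currently running
        desired = desired_state()      # e.g., replicas declared in the spec
        if actual != desired:
            converge(actual, desired)  # e.g., create or delete pods
        time.sleep(5)                  # real controllers use watches, not polling
```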

Scheduler

The Scheduler watches the API Server for newly created pods that don’t yet have a node assigned. It evaluates the health and capabilities of available Worker Nodes and assigns the pod to the most appropriate one. The Scheduler doesn’t run workloads itself—it simply makes intelligent placement decisions.
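
Conceptually, scheduling is a filter-then-score decision. The toy function below uses made-up node and pod dictionaries (nothing from the real scheduler's code) to show the idea: discard nodes that cannot fit the pod, then pick the best of the remainder:

```python
def pick_node(pod, nodes):
    """Toy version of the Scheduler's filter-then-score decision."""
    # Filter: keep only nodes with enough free CPU and memory for the pod.
    feasible = [n for n in nodes
                if n["free_cpu"] >= pod["cpu"] and n["free_mem"] >= pod["mem"]]
    if not feasible:
        return None  # the pod stays Pending until a node frees up
    # Score: prefer the node with the most CPU headroom after placement.
    return max(feasible, key=lambda n: n["free_cpu"] - pod["cpu"])
```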

Cloud Controller Manager

When running a Kubernetes cluster on a supported public cloud (e.g., AWS, GCP, Azure), the Cloud Controller Manager integrates cloud-specific functionalities. It communicates with the cloud provider’s API to manage resources such as virtual machines, load balancers, and persistent storage.
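
You usually trigger the Cloud Controller Manager indirectly. For example, creating a Service of type LoadBalancer causes it to call the cloud provider's API and provision an external load balancer. A minimal sketch with the Python client (the Service name and selector here are made up for illustration):

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# On a supported cloud, type=LoadBalancer prompts the Cloud Controller
# Manager to provision an external load balancer for this Service.
svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="web-lb"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "web"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
v1.create_namespaced_service(namespace="default", body=svc)
```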

Analogy: The Control Plane as an Air Traffic Control (ATC) Unit

After exploring these services, you can begin to see how the Control Plane resembles an Air Traffic Control (ATC) system at an airport:

  • Controllers are like specialized ATC staff. One may specialize in flight scheduling, another in communication, etc. And their Controller Manager is their supervisor – ensuring they all work in sync.
  • Scheduler is the flight coordinator – it carefully decides which plane will fly from one point to another and what route it will take, based on safety measures and other procedures, but it doesn't fly the plane itself.
  • API Server is like the ATC tower – all communication flows through it. Just as pilots communicate only through ATC, all components in the cluster go through the API Server.
  • Cluster Store is like the arrival/departure board at the airport, showing the current status of all flights. This status is also stored and updated in a persistent database—just like etcd.

This analogy helps visualize how the Kubernetes Control Plane orchestrates its components with the same precision and coordination as an airport.

Worker Nodes

Put simply, Worker Nodes are the physical servers, VMs, or containers where user applications actually run. They are the “workers” in a Kubernetes (K8s) cluster and are primarily responsible for:

  1. Watching the API Server for new work assignments (see the sketch after this list)
  2. Executing those assignments (i.e., running workloads)
  3. Reporting the status of the work back to the Control Plane via the API Server
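
The watching in step 1 is an event stream from the API Server. Here is a minimal sketch of that watch pattern using the Python client (the real kubelet is written in Go, but the pattern is the same):

```python
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

# Stream ADDED / MODIFIED / DELETED events from the API Server -- the
# same watch pattern the kubelet uses to learn about newly assigned pods.
w = watch.Watch()
for event in w.stream(v1.list_pod_for_all_namespaces, timeout_seconds=30):
    pod = event["object"]
    print(event["type"], pod.metadata.namespace, pod.metadata.name)
```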

Each Worker Node runs several critical services:

Kubelet

The kubelet is the main Kubernetes agent that runs on every node. When a node joins the cluster, the kubelet is installed and registers the node’s CPU, memory, and storage with the overall cluster. It continuously watches the API Server for new tasks (pods) to run.

The kubelet doesn’t decide what to run or where—it only manages the containers assigned to its node. If it’s unable to run a task, it simply informs the Control Plane, rather than trying to reassign it elsewhere.

Container Runtime

The kubelet relies on a container runtime to handle all container-related tasks, such as pulling container images and starting or stopping containers. Kubernetes defines the Container Runtime Interface (CRI), which allows different third-party container runtimes (like containerd or CRI-O) to plug into the system.

Kube-proxy

The kube-proxy runs on every node and manages Service networking for the cluster. It doesn't assign pod IP addresses (that's handled by the network plugin); instead, it programs local routing rules using iptables or IPVS so that traffic sent to a Service is properly routed and load-balanced across the pods behind it.

Analogy: Worker Node as an Airplane

To visualize how a Worker Node operates, think of it like an airplane:

The node (server) is the airplane itself—it does the actual work of flying (i.e., running applications).

Kubelet is the pilot. Just as each airplane has a pilot who communicates with Air Traffic Control (ATC), each node has a kubelet that communicates with the Control Plane. When the pilot powers up and tests the plane, they relay the aircraft's details and check in with ATC, just as the kubelet registers the node with the cluster.

Kube-proxy is comparable to the flight number and gate assignments. Each plane gets a unique flight number (e.g., UA 123 or AI 234) that determines where it parks and how passengers are routed to it. Similarly, kube-proxy ensures that traffic addressed to a Service reaches the right pods for communication and load balancing.

The container runtime is like the cabin crew and support staff. Just as trained flight staff handle passenger service, boarding, and cleaning, container runtimes handle container operations such as launching, stopping, and maintaining containers.

Kubernetes as an Application Orchestrator

Now that we understand the components of a Kubernetes cluster, it’s easier to see why Kubernetes (K8s) is called an application orchestrator.

But before we dive into what application orchestration means, let’s first understand the types of applications that typically run on Kubernetes. We’ll break it down step by step:

What is a Containerized Application?

Kubernetes orchestrates containerized applications—that is, applications that run inside containers. Historically, applications have run on physical servers, then virtual machines (VMs), and now, increasingly, in containers. While Kubernetes can technically manage workloads in all those environments, its most common and powerful use case is orchestrating containerized workloads.

What is a Cloud-Native Application?

A cloud-native application is designed to meet the dynamic demands of modern cloud environments—such as auto-scaling, self-healing, rolling updates, and instant rollbacks.
It’s not just about where the app runs (e.g., in the cloud), but how it’s built and how it behaves. A cloud-native app is resilient, adaptable, and architected to thrive in distributed, scalable systems.

What is a Microservices Application?

A microservices application is composed of many small, independent, and specialized services working together to form a complete application. For example, an e-commerce app might include a front-end service, a product catalog service, an order service, and a logging service.
Each microservice is self-contained, runs in its own container, and can be independently deployed, scaled, updated, or patched without affecting the others.

What Does “Orchestrator” Mean?

An orchestrator is a system that automates the deployment and management of applications. It doesn't just deploy software; it continuously responds to system events and keeps everything healthy and responsive, managing the application across its entire lifecycle.

Typical orchestration activities include:

  1. Scaling up to meet increased demand
  2. Scaling down during low usage periods
  3. Self-healing by replacing failed containers
  4. Zero-downtime rolling updates and automatic rollbacks

In this way, Kubernetes manages containerized applications that are built to scale, heal themselves, and evolve with minimal disruption—meeting the modern standards of cloud-native infrastructure.
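
Scaling, for instance, is declarative: you change the desired replica count and let the controllers converge on it. A minimal sketch with the Python client (the Deployment name web is assumed for illustration):

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Declare a new desired state; the Deployment and ReplicaSet controllers
# create or remove pods until reality matches it.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```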

Analogy: Orchestration as Running an Airport

Think of orchestration like managing a busy airport. The goal is to keep everything running smoothly—even when things go wrong.

For instance:

  • When there’s a surge in passengers during holidays, more flights (or bigger planes) are deployed. This is similar to scaling up applications in Kubernetes.
  • When demand drops, fewer planes are scheduled – akin to scaling down.
  • If a plane fails a pre-flight check, another is swapped in immediately to avoid delays—just like self-healing in Kubernetes.

Orchestration is about adapting in real-time, keeping the system running without passengers (or users) noticing the complexity behind the scenes.

Kubernetes and Docker

Any discussion about Kubernetes (K8s) would be incomplete without mentioning Docker.

Docker and Kubernetes complement each other perfectly. Docker is responsible for starting and stopping containers, while Kubernetes performs higher-level orchestration, giving commands to Docker to start or stop containers as part of scaling, healing, and rolling updates.

However, the relationship evolved with the introduction of the Container Runtime Interface (CRI) by Kubernetes. CRI acts as an abstraction layer that allows Kubernetes to work with any container runtime, not just Docker. This added flexibility enables Kubernetes to integrate with various container runtimes developed by third parties.

As a result, Kubernetes deprecated Docker as a runtime, and the dockershim integration was removed in Kubernetes 1.24. Docker-built images continue to work seamlessly, but many Kubernetes clusters now use containerd as the default runtime, a lightweight, stripped-down core extracted from Docker's own container runtime functionality.

Thanks to Google for Kubernetes

Kubernetes was donated to the world by none other than Google. Long before Docker popularized containers, Google had extensive experience running containerized applications, managing services like Search and Gmail inside containers. To orchestrate these applications, Google developed its own homegrown proprietary systems, Borg and Omega.

Building on this deep expertise, Google created Kubernetes, open-sourced it in 2014, and later donated it to the Cloud Native Computing Foundation (CNCF). The two major benefits Kubernetes introduced were:

  1. It abstracted away the underlying infrastructure, whether it’s AWS, Azure, or on-premises hardware.
  2. It made it easy to move applications seamlessly across different cloud environments.

Additionally, Kubernetes is developed using the Go programming language, which contributes to its performance and scalability.

Visit the main Kubernetes site “kubernetes.io” to learn more.
