Expert Hub – Kubernetes

Hub owner: GELBER MAHECHA - Last updated on: 3rd June 2021


Kubernetes is an open-source, portable, and extensible platform for managing containerized applications, originally developed by Google and open-sourced in 2014. Its name originates from Greek, meaning helmsman or pilot.

Kubernetes builds on many years of experience running production workloads. Looking back at how deployment technology evolved helps explain the characteristics that make Kubernetes what it is today.

  • Traditional deployment era: Applications ran directly on physical servers, with no way to limit the resources available to each of them. One or more applications could take up most of a server's resources, degrading the performance of the others. The solution was to add extra servers and balance the workload among them, resulting in operational difficulties and expensive deployments.
  • Virtualized deployment era: Multiple virtual machines (VMs) ran on the same physical server, allowing fine-grained control of the resources allocated to each virtual instance. Virtualization also improves security, as isolated applications cannot freely access another application's information, whether intentionally or not.
  • Container deployment era: Similar to VMs, but with relaxed isolation properties. A container is considered lightweight because it shares the host's operating system kernel. Nevertheless, it has its own filesystem, share of CPU, memory, and process space. Even more importantly, a container is built to be decoupled from the underlying infrastructure and is therefore portable across environments.

A production environment is typically composed of multiple containers, which shapes the architecture, capacity, and resilience of the end product. Kubernetes provides a platform to run such distributed systems resiliently: it takes care of scaling and of restarting or replacing applications after hardware or software failures.


A Kubernetes deployment is a cluster composed of worker nodes and control plane components. Worker nodes run the containerized applications, and every Kubernetes cluster must have at least one worker node. A worker node hosts Pods, which hold the containers and are the smallest deployable unit of work in Kubernetes. The control plane manages the worker nodes and the Pods; in production deployments, the control plane is usually distributed across multiple servers for fault tolerance and high availability.
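As an illustration of a Pod holding a container, a minimal Pod manifest might look like the following sketch (the names and the nginx image are illustrative placeholders, not from the original text):

```yaml
# A minimal single-container Pod.
# "hello-web", "web", and the image choice are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: hello-web
  labels:
    app: hello-web
spec:
  containers:
    - name: web
      image: nginx:1.21        # any container image would do
      ports:
        - containerPort: 80
      resources:               # resource limits, unlike the "traditional" era
        limits:
          cpu: "250m"
          memory: "128Mi"
```

Such a manifest would typically be submitted to the cluster with `kubectl apply -f pod.yaml`.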

  1. API Server: The front end for the Kubernetes control plane. It exposes the Kubernetes API and is designed to scale horizontally in order to accommodate the cluster's own message flow.
  2. Scheduler: Detects newly created Pods with no assigned node and selects a node for them to run on. The Scheduler weighs several factors when selecting a worker node, including hardware/software and policy constraints, affinity specifications, data locality, inter-workload interference, and deadlines.
  3. Persistent storage (etcd): A consistent and highly available key-value store used by Kubernetes as the backing store for all cluster data.
  4. Controller Manager: Runs controller processes; logically each controller is a separate process, but they are compiled into a single binary. These controllers include the node, replication, endpoints, service account, and token controllers.
  5. Cloud Controller Manager: Embeds cloud-specific logic to link the cluster to the cloud provider's API. It only runs controllers specific to your cloud provider; therefore, if you are running Kubernetes on your own premises, this component is not required.
  6. Kubelet: An agent that runs on each worker node and makes sure that containers are running in a Pod. The kubelet takes a set of specifications (described in PodSpecs) and ensures that the containers described there are running and healthy. It only manages containers that belong to a Pod.
  7. Kube-proxy: Runs on each node, implementing a network proxy based on a set of rules. These rules allow network communication to the Pods from sessions inside or outside the cluster.
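To see these components cooperate, consider a Deployment: the API server validates and stores the object in etcd, the replication machinery in the Controller Manager creates the desired number of Pods, the Scheduler assigns each Pod to a node, and the kubelet on that node starts the containers. A hedged sketch of such a Deployment (names and image are placeholders):

```yaml
# Deployment: the Controller Manager keeps "replicas" Pods running
# from the template below. All names here are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3                  # desired Pod count, reconciled by controllers
  selector:
    matchLabels:
      app: hello-web
  template:                    # Pod template; each Pod is placed by the Scheduler
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.21
          ports:
            - containerPort: 80
```

If a node fails, the controllers notice the missing Pods and the Scheduler places replacements on healthy nodes, which is the self-healing behavior described above.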

A key concept behind the operation of Kubernetes is the container runtime, which is responsible for running the containers. Kubernetes supports several runtimes, such as Docker, containerd, and CRI-O, or any other implementation that adheres to the Kubernetes CRI (Container Runtime Interface).
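When a cluster has more than one runtime handler installed, a Pod can request a specific one through a RuntimeClass object. A sketch, assuming (hypothetically) that a gVisor handler named "runsc" has been configured on the nodes' CRI runtime:

```yaml
# RuntimeClass maps a cluster-wide name to a runtime handler on the nodes.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor              # name Pods refer to
handler: runsc              # must match a handler configured in the CRI runtime
---
# A Pod opting into that runtime; names and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: gvisor  # run this Pod's containers with the gvisor handler
  containers:
    - name: app
      image: nginx:1.21
```

The default runtime is used whenever `runtimeClassName` is omitted, so this mechanism is only needed for clusters that deliberately mix runtimes.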