Kubernetes (often abbreviated K8s) is a powerful system, originally developed by Google, for managing containerized applications in a clustered environment.
The idea behind Kubernetes was to build an open-source system designed to run enterprise-class, cloud-enabled, web-scalable IT workloads. Google built Kubernetes on the basis of more than 15 years of experience running containerized applications, and later donated the project to the Cloud Native Computing Foundation.
For beginners, before you delve into the what, why, and how of Kubernetes, you need some prior knowledge of what containers are and how they work, and of what Docker and managed VMs are. Since the basic aim of this article is to explain Kubernetes in detail, the main terminologies will be briefly explained before we jump into Kubernetes and its components. If you’re interested in learning more about Docker, virtual machines, and containers, you can check out terrific articles here, here, and here. Now for the shorter basic versions:
What are managed VMs?
A virtual machine app creates a virtualized environment—called, simply enough, a virtual machine—that behaves like a separate computer system, complete with virtual hardware devices. The VM runs as a process in a window on your current operating system.
Source of definition taken from How-to Geek
What are containers?
A container is a standardized unit of software. It is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings. Available for both Linux and Windows based apps, containerized software will always run the same, regardless of the environment.
Source of definition taken from docker.com
What is Docker?
Docker is an open source project to pack, ship and run any application as a lightweight container. The idea is to provide a comprehensive abstraction layer that allows developers to “containerize” or “package” any application and have it run on any infrastructure.
Source of definition taken from Scott’s Weblog
Now back to what Kubernetes is. Since you now know some of the basic terminology, let’s return to the definition. Kubernetes is essentially a cluster management tool for Docker containers. It helps schedule and deploy large numbers of container replicas onto a cluster of nodes. Since it is open source, Kubernetes can run almost anywhere, and all the major public cloud providers offer easy ways to use it. Private clouds based on OpenStack or Mesos can also run Kubernetes, and bare-metal servers can be leveraged as its worker nodes.
The Kubernetes Architecture
As with most distributed computing platforms, a Kubernetes cluster consists of at least one master and multiple compute nodes.
The master node is responsible for managing the Kubernetes cluster and is the main entry point for all administrative tasks. The master node, also known as the control plane, manages the worker nodes, where the actual services run.
A master node is made up of the following components:
- API Server
API Server is the main management point of the entire cluster, as it allows a user to configure many of Kubernetes’ workloads and organizational units. The API server is also the entry point for all the REST commands used to control the cluster. That means several different tools and libraries can easily communicate with it.
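Since the API server exposes everything over REST, any tool that can build an HTTP request can talk to the cluster. The sketch below shows how a client might construct URLs for core-API resources; the `/api/v1/namespaces/...` path layout is the real convention, but the helper function and server address are illustrative, not part of any official client library.

```python
# Minimal sketch of how a client addresses resources on the API server.
# The /api/v1 path layout is the real core-API convention; the helper
# function name and the server address here are illustrative only.

def resource_url(server: str, namespace: str, kind: str, name: str = "") -> str:
    """Build a REST URL for a namespaced core-API resource."""
    url = f"{server}/api/v1/namespaces/{namespace}/{kind}"
    return f"{url}/{name}" if name else url

# Listing all pods in "default", or fetching one pod by name:
list_pods = resource_url("https://master:6443", "default", "pods")
one_pod = resource_url("https://master:6443", "default", "pods", "web-0")
print(list_pods)
print(one_pod)
```

Because the scheme is this uniform, tools like `kubectl`, dashboards, and client libraries in many languages can all drive the same endpoints.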
- etcd storage
etcd is a simple, lightweight key-value store that can be distributed across multiple nodes. It was developed by the CoreOS team mainly for shared configuration and service discovery. Kubernetes uses etcd to store configuration data that can be read by each of the nodes in the cluster.
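To make the key-value idea concrete, here is a toy in-memory store mimicking how Kubernetes keeps cluster state in etcd. Real etcd is distributed and supports watches and leases; the `/registry/...` key layout shown is the convention Kubernetes actually uses, but `ToyStore` itself is just an illustration.

```python
# Toy in-memory key-value store mimicking how Kubernetes keeps cluster
# state in etcd. Real etcd is distributed and watchable; only the
# "/registry/..." key layout here reflects actual Kubernetes usage.

class ToyStore:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

    def list_prefix(self, prefix):
        """Return all keys under a prefix, like an etcd range read."""
        return sorted(k for k in self._data if k.startswith(prefix))

store = ToyStore()
store.put("/registry/pods/default/web-0", {"phase": "Running"})
store.put("/registry/pods/default/web-1", {"phase": "Pending"})
print(store.list_prefix("/registry/pods/default/"))
```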
- Scheduler
The scheduler assigns pods and services to nodes. It is also responsible for tracking resource utilization on each host to make sure that workloads are not scheduled in excess of the available resources.
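A drastically simplified version of that placement decision can be sketched as follows. The real scheduler weighs many more factors (affinity, taints, spreading); this sketch only keeps the core rule from above, that a pod must never be placed where it would exceed available resources. All names and numbers are illustrative.

```python
# Toy version of the scheduler's placement decision: pick the node with
# the most free CPU that can still fit the pod's request. The real
# scheduler considers far more signals; names here are illustrative.

def schedule(pod_cpu, nodes):
    """nodes: {name: free_cpu}. Return the chosen node name, or None."""
    fitting = {n: free for n, free in nodes.items() if free >= pod_cpu}
    if not fitting:
        return None  # workload would exceed the available resources
    return max(fitting, key=fitting.get)

nodes = {"node-a": 2.0, "node-b": 3.5, "node-c": 0.5}
print(schedule(1.0, nodes))  # node with the most free CPU that fits
print(schedule(4.0, nodes))  # nothing fits
```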
- Controller manager
The controller manager is a general service responsible for the controllers that regulate the state of the cluster and perform routine tasks. One example is the replication controller, which ensures that the number of replicas defined for a service matches the number currently deployed on the cluster. The details of these operations are written to etcd, and the controller manager watches for changes through the API server.
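The heart of a controller is a reconciliation loop: compare desired state with observed state and compute the action that closes the gap. A hedged sketch of the replication controller's decision, with invented function and label names:

```python
# Sketch of the replication controller's reconciliation logic: compare
# the desired replica count against what is actually running and decide
# how many pods to create or delete. Function name is illustrative.

def reconcile(desired: int, running: int):
    """Return ('create' | 'delete' | 'noop', count)."""
    if running < desired:
        return ("create", desired - running)
    if running > desired:
        return ("delete", running - desired)
    return ("noop", 0)

print(reconcile(3, 1))  # two replicas missing, so create two
print(reconcile(3, 5))  # two too many, so delete two
print(reconcile(3, 3))  # converged, nothing to do
```

Running this comparison continuously, rather than once, is what lets the cluster heal itself when pods die.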
Nodes are the servers that perform work in Kubernetes. A node, previously known as a minion, can be a virtual machine or a physical machine, depending on the cluster. Every node runs the services necessary to host pods and is managed by the master components. The services on a node include:
- Docker
Docker is responsible for downloading images and starting containers. It runs the encapsulated application containers in a lightweight operating environment. Each unit of work is implemented as a series of containers to be deployed.
- kubelet
The kubelet gets the configuration of a pod from the API server and ensures that the described containers are up and running. This is the worker service responsible for communicating with the master node: it relays information to and from the control plane services, as well as interacting with the etcd store to read configuration details or write new values.
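"Ensures that the described containers are up and running" boils down to a diff between the pod spec and what the node is actually running. A minimal sketch of that comparison, with hypothetical container names:

```python
# Sketch of the kubelet's core job: given the pod spec received from the
# API server and the containers actually running on the node, work out
# which containers to start and which to stop. Names are illustrative.

def sync_pod(desired_containers, running_containers):
    desired, running = set(desired_containers), set(running_containers)
    to_start = sorted(desired - running)
    to_stop = sorted(running - desired)
    return to_start, to_stop

start, stop = sync_pod(["app", "sidecar"], ["app", "old-logger"])
print(start, stop)
```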
- Kube-proxy
Kube-proxy runs on each node to deal with individual-host subnetting and to ensure that services are available to external parties. It serves as a network proxy and load balancer for services on a single worker node and manages the network routing for TCP and UDP packets.
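The load-balancing half of that job can be sketched in a few lines: a service presents one stable address, and traffic is spread across the pod endpoints behind it. The round-robin below is a simplification (real kube-proxy programs iptables/IPVS rules rather than proxying in user space), and the endpoint addresses are made up.

```python
# Sketch of kube-proxy's load-balancing role: one stable service front,
# traffic spread across the pod endpoints behind it. Real kube-proxy
# programs kernel rules; this user-space round-robin just shows the idea.
import itertools

def round_robin(endpoints):
    """Yield backend pod addresses in round-robin order."""
    return itertools.cycle(endpoints)

backends = round_robin(["10.0.1.5:8080", "10.0.2.7:8080"])
picks = [next(backends) for _ in range(4)]
print(picks)
```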
The must-know Kubernetes terminologies are:
- Pods– Pods are collections of one or more containers. The pod is Kubernetes’ core unit of management and sets the logical boundary for containers sharing the same context and resources.
- Labels– Labels are arbitrary tags that can be placed on the above work units to mark them as a part of a group. These can then be selected for management purposes and action targeting.
- Services– A service is a unit that acts as a basic load balancer and ambassador for other containers. A service groups together logical collections of pods that perform the same function, presenting them as a single entity.
- Replication Controller– A more complex version of a pod is a replicated pod. Replicated pods are handled by a type of work unit known as a replication controller, which makes sure that a specific number of pod replicas are running at any one time.
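Labels are what tie these units together: a service finds its pods by matching labels against a selector. The sketch below shows that matching with manifests as plain Python dicts; the field names (`metadata`, `labels`, `selector`) mirror real Kubernetes objects, but the pod names and label values are invented for illustration.

```python
# How labels tie the pieces together: a service selects every pod whose
# labels contain all of the service's selector pairs. Field names mirror
# real Kubernetes manifests; the concrete values are illustrative.

pods = [
    {"metadata": {"name": "web-0", "labels": {"app": "web", "tier": "frontend"}}},
    {"metadata": {"name": "web-1", "labels": {"app": "web", "tier": "frontend"}}},
    {"metadata": {"name": "db-0", "labels": {"app": "db", "tier": "backend"}}},
]

service = {"metadata": {"name": "web-svc"},
           "spec": {"selector": {"app": "web"}}}

def select(pods, selector):
    """Return names of pods whose labels contain every selector pair."""
    return [p["metadata"]["name"] for p in pods
            if all(p["metadata"]["labels"].get(k) == v
                   for k, v in selector.items())]

print(select(pods, service["spec"]["selector"]))
```

Because selection is by label rather than by name, pods can come and go (for example, under a replication controller) while the service keeps routing to whatever currently matches.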
Benefits of Kubernetes
Kubernetes is designed to provide scalability, availability, security, and portability. Reliability is another major benefit: Kubernetes can keep a failure from impacting the availability or performance of an application. It also lets users respond efficiently to customer demand by scaling applications or rolling out new features. Kubernetes offers freedom of choice in operating systems, container runtimes, processor architectures, cloud platforms, and PaaS, and it reduces infrastructure cost by dividing workloads effectively across the available resources. While other technologies do a commendable job of handling the clustering aspect, Kubernetes provides a better management system.
You can read all about Kubernetes and know everything that it involves in our all-encompassing post here!