Kubernetes Overview And Concepts

Before you can understand how Platform9 Managed Kubernetes (PMK) works, you need a basic understanding of Kubernetes, an open-source orchestration system for managing containerized workloads and services.

This article describes the Kubernetes concepts you need to understand before working with PMK.

About Kubernetes

When you run distributed, microservices-based applications in containers in a production environment, you need to meet business SLAs for uptime and availability. If the number of users of your application grows, the application must scale to meet the additional demand. If a container in your application fails, it must be replaced with another container.

Kubernetes provides a framework for running these distributed, microservices-based applications resiliently. It handles container scaling, failover, deployment patterns, and more. This framework has many benefits, including:

  1. Scalability - Kubernetes enables you to build complex containerized applications and deploy them globally across a cluster of servers, optimizing resources according to your desired state. Kubernetes scales your containerized applications horizontally by monitoring container health and triggering application scaling based on demand and container resource utilization.

  2. Portability - Kubernetes lets you orchestrate containerized workloads consistently in different environments - across on-premises infrastructure and public clouds. This means you can move workloads seamlessly from local machines to a data center or cloud.

  3. Open-source model and extensibility - Kubernetes is an open-source platform that developers can use and extend without concerns about vendor lock-in. As a Kubernetes user, you have access to a wide and ever-growing collection of extensions and plugins created by the developers and companies that make up the Kubernetes community.
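The scaling behavior described above can be sketched with two kubectl commands. This is a minimal illustration, not a PMK-specific procedure: the Deployment name "web" and the thresholds are hypothetical, and because the commands need a live cluster, the sketch only builds and prints them.

```shell
# Hypothetical Deployment name; adjust for your own workload.
DEPLOYMENT=web

# Scale out by hand to five replicas:
SCALE_CMD="kubectl scale deployment/$DEPLOYMENT --replicas=5"
echo "$SCALE_CMD"

# Or let Kubernetes add and remove replicas automatically, based on
# CPU utilization, between a floor of 2 and a ceiling of 10:
AUTOSCALE_CMD="kubectl autoscale deployment/$DEPLOYMENT --cpu-percent=70 --min=2 --max=10"
echo "$AUTOSCALE_CMD"
```

Running the second command against a real cluster creates a HorizontalPodAutoscaler, which is the mechanism Kubernetes uses to trigger scaling from observed resource utilization.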

Refer to the Kubernetes official documentation for more information about Kubernetes architecture and benefits.

About Docker

Docker is a container packaging and runtime standard that enables the creation and use of Linux containers. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.
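As an illustration of "everything needed to run an application" packaged into an image, a container image is typically described by a Dockerfile. The one below is a hypothetical sketch for a small Python web app; the file names and base image are assumptions, not part of PMK.

```dockerfile
# Hypothetical Dockerfile for a small Python web app.
FROM python:3.11-slim

WORKDIR /app

# Install the application's library dependencies first,
# so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and define how the container starts.
COPY . .
CMD ["python", "app.py"]
```

Building it with `docker build -t myapp:latest .` produces a self-contained image that runs identically wherever a container runtime is available.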

About Docker Registries

A Docker registry is a storage and content delivery system that holds named Docker images, available in different tagged versions. A Docker registry is organized into Docker repositories, where a repository holds all the versions of a specific image. Users interact with a registry using the docker push and docker pull commands.

A public registry such as the one on Docker Hub is hugely popular and helpful for publicly available and open-source Docker images. However, for your company’s proprietary images, you will likely want to have a private registry.
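The push/pull workflow against a private registry can be sketched as follows. The registry host, image name, and tag are hypothetical; the docker commands themselves require a running Docker daemon and registry, so they appear as comments while the sketch assembles and prints the fully qualified image reference.

```shell
# Hypothetical private registry and image coordinates.
REGISTRY=registry.example.com:5000
IMAGE=myapp
TAG=v1.2.0

# A private-registry image reference is host[:port]/repository:tag.
REF="$REGISTRY/$IMAGE:$TAG"
echo "$REF"

# docker tag "$IMAGE:$TAG" "$REF"   # re-name the local image for the registry
# docker push "$REF"                # upload it to the private registry
# docker pull "$REF"                # retrieve it from any other machine
```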

What Are Kubernetes Clusters?

A cluster is the foundation of Kubernetes. A Kubernetes cluster consists of at least one cluster master and multiple worker machines called ‘nodes’. These master and worker nodes run the Kubernetes cluster orchestration system. The Kubernetes objects that represent your containerized applications all run on top of a cluster.

Cluster Master Nodes

The cluster master node runs the Kubernetes control plane processes, including the Kubernetes API server, scheduler, and core resource controllers. All interactions with the cluster are performed via Kubernetes API calls, which the Kubernetes API server process on the master handles. You can make Kubernetes API calls directly over HTTP, or indirectly by running commands with the Kubernetes command-line client (kubectl) or by interacting with the Kubernetes dashboard UI.

The cluster master’s API server process is the hub for all cluster communication. All internal cluster processes - the cluster nodes, system components, and application controllers - act as clients of the API server. The API server is the single ‘source of truth’ for the entire cluster.
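To make "everything is an API call" concrete, the sketch below builds the REST path that both curl and kubectl ultimately hit when listing pods. The API server address and namespace are hypothetical, and the live calls are commented out because they need a real cluster and credentials.

```shell
# Hypothetical master address; in a real cluster this comes from your kubeconfig.
APISERVER=https://203.0.113.10:6443
NAMESPACE=default

# The core API groups resources under /api/v1, namespaced by name.
PODS_PATH="/api/v1/namespaces/$NAMESPACE/pods"
echo "$APISERVER$PODS_PATH"

# Direct HTTP call, authenticating with a bearer token:
# curl -sk -H "Authorization: Bearer $TOKEN" "$APISERVER$PODS_PATH"
#
# The same request, made for you by the command-line client:
# kubectl get pods -n "$NAMESPACE"
```

Whichever client you use - curl, kubectl, the dashboard, or an internal controller - the request lands on this same API server endpoint, which is why the API server acts as the cluster’s single source of truth.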

Cluster Worker Nodes

The cluster worker nodes provide resources to run your containerized workloads. The master nodes are responsible for deciding what runs on all of the cluster’s nodes. This includes scheduling workloads and managing the workloads’ lifecycle, scaling, and upgrades. The master also manages network and storage resources for those workloads.

The worker nodes communicate with the master nodes via the Kubernetes API.