Overview and Architecture

Kubernetes supports building clusters from physical or virtual nodes, in private data centers or in public cloud environments. Depending on your use cases and application performance needs, building a Kubernetes cluster on physical (bare metal) nodes may be desirable.

Platform9 Managed Kubernetes supports deploying highly available, multi-master clusters on physical nodes in your on-premises data centers.

The following diagram describes the overall architecture of a multi-master bare metal cluster:

Architecture diagram

Virtual IP Addressing with VRRP

Multi-master Cluster

Managed Kubernetes uses the Virtual Router Redundancy Protocol (VRRP), implemented by Keepalived, to provide a virtual IP (VIP) that fronts the active master node in a multi-master Kubernetes cluster. At any point in time, VRRP associates one of the master nodes with the virtual IP, and clients (kubelet, users) connect to that address.

During cluster creation, Managed Kubernetes binds the virtual IP to a physical interface on each master node. The interface label, eth0, for example, is specified by the administrator during cluster creation, and every master must use the same label for the interface bound to the virtual IP. The virtual IP must be reachable from the network that the specified interface connects to.
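For reference, a Keepalived VRRP instance for this kind of setup would look roughly like the sketch below. This is illustrative only, not the exact configuration Managed Kubernetes generates; the interface name, router ID, shared secret, and VIP (192.168.10.100) are placeholder values.

```
vrrp_instance K8S_APISERVER {
    state BACKUP            # masters start as BACKUP; VRRP elects the active node
    interface eth0          # must match the interface label chosen at cluster creation
    virtual_router_id 51    # placeholder; must be identical on every master
    priority 100            # higher priority wins the election
    advert_int 1            # seconds between VRRP advertisements
    authentication {
        auth_type PASS
        auth_pass changeme  # placeholder shared secret
    }
    virtual_ipaddress {
        192.168.10.100      # the cluster virtual IP (placeholder)
    }
}
```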

While the cluster is running, all client requests to the Kubernetes API server are sent to a single master: the one currently mapped to the virtual IP. If that master goes down, VRRP remaps the virtual IP to another master, making that node the target of all new client requests.
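In practice this means client tooling points at the VIP rather than at any individual master, so a failover is transparent apart from in-flight connections being reset. Assuming a VIP of 192.168.10.100 and an API server listening on port 443, a kubeconfig cluster entry might look like the following (the cluster name and certificate path are illustrative):

```
# ~/.kube/config (fragment)
clusters:
- cluster:
    certificate-authority: /etc/pki/k8s/ca.crt   # illustrative path
    server: https://192.168.10.100:443           # the VIP, not a master's own IP
  name: bare-metal-cluster
```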

Hence, for high availability, it is recommended that you design your clusters with three or five master nodes.

Etcd Cluster Configuration

In a multi-master cluster, Platform9 runs an instance of etcd on each master node. For the etcd cluster to be healthy, a quorum of etcd members, that is, a majority, must be up and running at all times (for example, 2 out of 3 masters). Losing quorum results in a non-functional etcd cluster, which in turn makes the Kubernetes cluster non-functional. It is therefore recommended that you create your production clusters with three or five master nodes.
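The quorum size for an etcd cluster of n members is floor(n/2) + 1, which determines how many master failures the cluster can tolerate:

Masters (n)   Quorum   Failures tolerated
1             1        0
3             2        1
5             3        2
7             4        3

Note that an even member count adds no fault tolerance: four masters (quorum 3) tolerate one failure, just like three. You can check member health from any master with etcdctl; the endpoint addresses below are placeholders, and TLS flags (--cacert/--cert/--key) may be required depending on your configuration.

```
ETCDCTL_API=3 etcdctl \
  --endpoints=https://10.0.0.1:2379,https://10.0.0.2:2379,https://10.0.0.3:2379 \
  endpoint health
```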