Managed Kubernetes Prerequisites
This article describes the prerequisites for preparing a pool of x86 (64-bit) nodes (machines) for Platform9 Managed Kubernetes.
Once the nodes are provisioned, you can create multiple Kubernetes clusters from them. A node can be attached to only one cluster at a time, and can be detached and re-attached to a different cluster.
Although a Kubernetes cluster can be as small as one node, Platform9 recommends a minimum of two or three nodes per cluster.
Platform9 supports the following node operating systems: CentOS 7 (64-bit), RHEL 7.2+ (64-bit), and Ubuntu 16.04 LTS (64-bit).
The prerequisites fall into five categories: operating system, disk space, RAM, networking, and miscellaneous.
Operating System related prerequisites
Following are the operating system related prerequisites.
- Install the latest version of the operating system using a base or minimal package set.
- Update all packages to receive the latest security and bug fixes.
- For Ubuntu, install required dependencies.
For CentOS or RHEL 7, run the following command to update all installed packages.
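On CentOS or RHEL 7, a standard full package update looks like the following (shown with sudo; run as root if preferred):

```shell
# Update all installed packages to pick up the latest security and bug fixes.
sudo yum update -y
```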
Read the related document Preparing a CentOS or RHEL 7 system for running Platform9 Managed Kubernetes for further instructions on preparing a CentOS or RHEL 7 node.
Install Required Dependencies and Update Ubuntu
For Ubuntu, run the following command to install required dependencies and update all installed packages.
```shell
apt-get update
apt-get install curl uuid-runtime
apt-get upgrade
```
Disk Space Prerequisites
- Each node should generally have at least 40 GB of free disk space. On CentOS, leave some of that space unallocated, that is, not assigned to any file system.
- The /var/lib directory should exist on a file system with at least 30 GB of free disk space.
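To verify the /var/lib requirement, you can query free space with df (a quick manual check, not a Platform9 tool):

```shell
# Show free space, in GB, on the file system that contains /var/lib.
df -BG --output=avail /var/lib
```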
RAM Prerequisites
Platform9 recommends a minimum of 16 GB of RAM per node, with 1 CPU core per 4 GB of RAM.
Networking Prerequisites
- Each node should have at least one physical (or VLAN-backed) NIC with an IP address. All nodes in the cluster should be able to communicate with each other over this NIC.
For each Kubernetes cluster that you plan to create, you must specify two IP subnets that are not in use by your internal network. The subnets are specified in CIDR form, and are referred to as the Containers CIDR and the Services CIDR.
In general, you should not configure your network equipment to route or otherwise be aware of these subnets.
Kubernetes uses the Containers CIDR to route packets between pods (containers) in a cluster. The host portion of the Containers CIDR is subdivided into two parts: the intra-node portion determines how many Kubernetes pods can run on a single node, and the inter-node portion determines the maximum number of nodes in a cluster. By default, the intra-node portion is 8 bits, i.e. up to 256 pods per node. So a Containers CIDR with 12 host bits (a /20 network mask) would leave 4 inter-node bits, allowing clusters of up to 16 nodes. For example, a new cluster named DevCluster is created with Containers CIDR=10.20.0.0/16 and Services CIDR=10.21.0.0/16
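The capacity arithmetic above can be sketched with shell arithmetic; the prefix length and the 8-bit intra-node default are the only inputs, and the values below are illustrative:

```shell
# Capacity implied by a Containers CIDR, given the default 8-bit intra-node portion.
PREFIX=16                              # e.g. Containers CIDR 10.20.0.0/16
INTRA_BITS=8                           # default: 8 bits -> 256 pods per node
INTER_BITS=$((32 - PREFIX - INTRA_BITS))
echo "pods per node: $((1 << INTRA_BITS))"
echo "max nodes:     $((1 << INTER_BITS))"
```

With PREFIX=20 the same arithmetic yields 4 inter-node bits, i.e. at most 16 nodes.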
The nodes should have direct Internet access. If access through a proxy is required, contact your Platform9 representative for additional instructions.
While Platform9 configures container-related software on the nodes, the following types of data sources are accessed.
- CentOS yum repository
- Docker yum repository
- Public Docker registries from Docker, Inc. and Google (Kubernetes project)
- Pods and containers are generally assumed to be stateless. If your workloads need to access important data that needs to be persisted, the best practice is to attach shared storage volumes to your pods. A separate iSCSI-capable array or NFS server can satisfy this need.
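As a sketch of that best practice, assuming a hypothetical NFS server at 192.168.0.10 exporting /exports/shared, an NFS-backed PersistentVolume could be registered like this (the name, address, path, and size are examples, not Platform9 defaults):

```shell
# Register a hypothetical NFS-backed PersistentVolume with the cluster.
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-data            # hypothetical volume name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany            # NFS allows many pods to mount read-write
  nfs:
    server: 192.168.0.10       # hypothetical NFS server
    path: /exports/shared      # hypothetical export path
EOF
```

Pods can then claim this storage through a PersistentVolumeClaim and mount it as a volume.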
If Masters and Workers run in a restrictive network environment, ensure the following.
Masters must be able to receive incoming connections on the following ports.
| Protocol | Port Range | Source | Purpose |
|---|---|---|---|
| TCP | 443 | Workers and Clients | Kubernetes API |
| TCP | 2379-2380, 4001 | Masters | etcd |
| UDP | 8285 | Masters and Workers | flannel |
Workers must be able to receive incoming connections on the following ports. Workers must also be able to receive incoming connections on reserved ports used by Kubernetes add-ons; for instance, an ingress controller that listens on TCP port 80.
| Protocol | Port Range | Source | Purpose |
|---|---|---|---|
| TCP | 10250 | Masters and Workers | Kubelet API for exec and logs |
| TCP | 10255 | Masters and Workers | Read-only Kubelet API |
| TCP | 10256 | Masters and Workers | kube-proxy |
| TCP | 4194 | Masters and Workers | cAdvisor |
| TCP | 30000-32767 | Application Clients | Default port range for NodePort Services |
| UDP | 8285 | Masters and Workers | flannel |
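On CentOS or RHEL 7 with firewalld enabled (an assumption; your environment may use different firewall tooling), the worker ports above could be opened as follows:

```shell
# Open the worker ports listed above (firewalld assumed; run as root).
firewall-cmd --permanent --add-port=10250/tcp        # Kubelet API
firewall-cmd --permanent --add-port=10255/tcp        # Read-only Kubelet API
firewall-cmd --permanent --add-port=10256/tcp        # kube-proxy
firewall-cmd --permanent --add-port=4194/tcp         # cAdvisor
firewall-cmd --permanent --add-port=30000-32767/tcp  # NodePort range
firewall-cmd --permanent --add-port=8285/udp         # flannel
firewall-cmd --reload
```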
Swap must be disabled on the host. To disable swap, refer to Disabling Swap on a Kubernetes Node.
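A common way to disable swap is shown below as a minimal sketch; the referenced document should take precedence for the supported procedure:

```shell
# Disable swap immediately, then comment out swap entries in /etc/fstab
# so the change survives a reboot.
sudo swapoff -a
sudo sed -i '/\sswap\s/s/^/#/' /etc/fstab
```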