This document describes the networking prerequisites for a BareOS PMK cluster. It is part of the overall Quick Setup Guide for PMK. If you haven’t already, we recommend starting with that guide first.
The Linux nodes that run PMK must allow several kinds of incoming network access. They also need outbound access to several external services for updates and resource downloads.
The networking prerequisites for the Linux nodes in your PMK cluster are:
- Each node should have at least one physical (or VLAN-backed) NIC with an IP address. All nodes in the cluster must be able to communicate with each other over this NIC.
- The cluster also requires two unused IP subnets. Make sure these subnets are not in use anywhere on your internal network. They are specified in CIDR form during cluster creation and are referred to as the Containers CIDR and the Services CIDR.
- In general, you should not configure your network equipment to route or otherwise be aware of these subnets. Kubernetes uses the Containers CIDR to route packets between pods (containers) in the cluster. The host portion of this range is subdivided into two parts: the intra-node portion determines how many Kubernetes pods can run on a single node, and the inter-node portion determines the maximum number of nodes in the cluster. By default, the intra-node portion is 8 bits, i.e. up to 256 pods per node. So a Containers CIDR with 12 host bits (a /20 network) leaves 4 inter-node bits, allowing clusters of up to 16 nodes, while a /16 network leaves 8 inter-node bits, or up to 256 nodes. For example, a new cluster named DevCluster might be created with Containers CIDR=10.20.0.0/16 and Services CIDR=10.21.0.0/16.
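The overlap and capacity rules above can be sketched with Python's standard `ipaddress` module. The CIDR values, the example internal network, and the 8-bit intra-node default below are illustrative, taken from this section:

```python
import ipaddress

# Example values from this section; substitute your own.
containers_cidr = ipaddress.ip_network("10.20.0.0/16")
services_cidr = ipaddress.ip_network("10.21.0.0/16")
internal_networks = [ipaddress.ip_network("192.168.0.0/16")]  # your existing ranges

# Neither cluster subnet may overlap your internal networks, or each other.
for net in internal_networks:
    assert not containers_cidr.overlaps(net)
    assert not services_cidr.overlaps(net)
assert not containers_cidr.overlaps(services_cidr)

# Capacity math: the host bits of the Containers CIDR are split into an
# intra-node portion (pods per node) and an inter-node portion (node count).
INTRA_NODE_BITS = 8  # default: up to 256 pods per node
host_bits = containers_cidr.max_prefixlen - containers_cidr.prefixlen
max_pods_per_node = 2 ** INTRA_NODE_BITS
max_nodes = 2 ** (host_bits - INTRA_NODE_BITS)
print(max_pods_per_node, max_nodes)  # a /16 gives 256 pods/node, 256 nodes
```

Running the same math on a /20 Containers CIDR gives 12 host bits and therefore up to 16 nodes, matching the example in the text.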
A node in a PMK cluster will access the following types of external data sources during cluster creation:
- CentOS yum repository
- Docker yum repository
- Public Docker registries from Docker, Inc. and Google (the Kubernetes project)
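As a quick preflight, you can verify outbound HTTPS reachability from a node with a short Python script. The hostnames below are illustrative examples of such mirrors and registries, not an authoritative list; check which endpoints your environment actually uses:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Illustrative endpoints only. Yum repositories and container registries
# are served over HTTPS (port 443).
for host in ["mirror.centos.org", "download.docker.com", "registry-1.docker.io"]:
    print(host, "reachable" if can_connect(host, 443) else "UNREACHABLE")
```

If any endpoint is unreachable, check the node's outbound firewall rules, DNS resolution, and any HTTP proxy configuration before creating the cluster.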
Network Port Configurations
All Kubernetes master nodes must be able to receive incoming connections on the following ports:
| Protocol | Port Range | Purpose |
|----------|------------|---------|
| TCP | 443 | Requests to the Kubernetes API from worker nodes and external clients (e.g. kubectl) |
| TCP | 2379-2380, 4001 | Etcd cluster-specific traffic between master nodes |
All Kubernetes master and worker nodes must be able to receive incoming connections on the following ports:
| Protocol | Port Range | Purpose |
|----------|------------|---------|
| TCP | 10250 | Requests from master and worker nodes to the kubelet API for exec and logs |
| TCP | 10255 | Requests from master and worker nodes to the read-only kubelet API |
| TCP | 10256 | Requests from master and worker nodes to kube-proxy |
| TCP | 4194 | Requests from master and worker nodes to cAdvisor |
| TCP | 30000-32767 | Requests from external clients (e.g. kubectl) to the default port range for [NodePort Services](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) |
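The port tables above can be encoded as data and used to probe a node. The sketch below is a rough connectivity check, not part of PMK itself; `node_ip` is a placeholder for one of your nodes, and note that a refused connection may simply mean the port is allowed by the firewall but nothing is listening on it yet (NodePort ports, for example, are only bound once a Service uses them):

```python
import socket

# Inbound TCP port ranges from the tables above, as (start, end) pairs.
MASTER_ONLY_PORTS = [(443, 443), (2379, 2380), (4001, 4001)]
ALL_NODE_PORTS = [(10250, 10250), (10255, 10255), (10256, 10256),
                  (4194, 4194), (30000, 32767)]

def expand(ranges):
    """Flatten (start, end) ranges into a sorted list of individual ports."""
    return sorted(p for start, end in ranges for p in range(start, end + 1))

def check_node(node_ip, ranges, timeout=1.0):
    """Return the ports in `ranges` that did not accept a TCP connection."""
    unreachable = []
    for port in expand(ranges):
        try:
            with socket.create_connection((node_ip, port), timeout=timeout):
                pass
        except OSError:
            unreachable.append(port)
    return unreachable

# Example usage (uncomment with a real node IP). Probing the full
# 30000-32767 NodePort range is slow; consider sampling a few ports.
# print(check_node("10.0.0.10", MASTER_ONLY_PORTS))
```

A more direct way to verify firewall rules is to inspect them on the node itself; this probe only confirms that a listener on a given port is reachable from where the script runs.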
Worker nodes must also be able to receive incoming connections on any ports used by Kubernetes add-ons or applications that you install on those nodes. For instance, installing a Prometheus instance for monitoring requires specific ports to be open. Refer to the documentation for the application being installed for details.
Network Plugin-specific Prerequisites
In addition to these generic networking prerequisites, you must also satisfy the prerequisites specific to the CNI plugin you plan to use with your Kubernetes clusters. See CNI Integrations for more information.