Creating a Kubernetes Cluster on Platform9 Managed OpenStack
Platform9 Managed Kubernetes supports creation of Kubernetes clusters on Platform9 Managed OpenStack through the Platform9 Clarity UI.
You can, optionally, create Kubernetes clusters atop Platform9 Managed OpenStack such that the cluster nodes boot from Cinder volumes.
Kubernetes clusters that boot from Cinder volumes can be used with underlying storage arrays or data storage solutions from data storage vendors such as Dell EMC or NetApp.
However, creating clusters whose nodes boot from Cinder volumes may not always be required.
Why Boot Cluster from Cinder Volume?
Kubernetes clusters that boot from a Cinder volume can be used in the following scenarios.
- Cloud apps require high storage performance and involve a high volume of read and write operations.
- Cloud apps require good backup performance.
- Hosts acting as OpenStack hypervisors do not have high local disk capacity, and hence must use a storage backend.
When you create a Kubernetes cluster that boots from a Cinder volume, a Cinder volume is automatically created from the image chosen under Cluster Configuration during the cluster creation process.
Before you can create a Kubernetes cluster atop an OpenStack cloud, ensure that the following OpenStack resources, which the Kubernetes cluster will use, are present on Platform9 Managed OpenStack.
- Image to be used for the Kubernetes nodes
- Flavors to be used for master nodes and worker nodes
- Provider networks (KVM only)
- Security groups
Additionally, before creating the cluster, the following must have been created in Platform9 Managed Kubernetes.
- Cloud provider details for the OpenStack cloud
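Before starting the wizard, you can optionally confirm that these resources exist by using the OpenStack command-line client. The commands below are a sketch; they assume the openstack client is installed and your OpenStack RC file (credentials) has been sourced.

```shell
# Images available for the Kubernetes nodes
openstack image list

# Flavors to be used for master and worker nodes
openstack flavor list

# Provider networks (KVM only)
openstack network list

# Security groups
openstack security group list
```

Each resource you intend to select in the cluster creation wizard should appear in the corresponding listing.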
Create Kubernetes Cluster
Follow the steps given below to create a Kubernetes cluster with Platform9 Clarity UI on an OpenStack cloud.
- Click Kubernetes > Infrastructure > Clusters > Add Cluster.
- Select the Cloud Provider.
- Enter the name for the cluster in Name.
- Select the Region for the cluster.
- Select KVM as Region Type.
- Click Next.
- Select the image name in Image.
- Select Master Node Instance Flavor.
- Select Worker Node Instance Flavor.
- Select Number of Master Nodes.
Note: Platform9 recommends that you deploy your production setup on a multi-master cluster. You can create multi-master clusters if load balancing as a service (LBaaS) is enabled on Platform9 Managed OpenStack. If you do not have LBaaS enabled, contact Platform9 support. Platform9 currently supports LBaaS only with Avi Networks.
- Enter Number of Worker Nodes.
- Select the Disable Workloads on Master Nodes check box if you wish to disable deployment of workloads on master nodes. This is recommended to maintain the stability of the cluster.
- Select the Boot from Volume check box, if you wish to boot the nodes in the cluster from a Cinder volume. This is an optional step.
- Enter the Master Volume Size and the Worker Volume Size in GB, if you have selected the Boot from Volume check box. 50 GB is the default value for both Master Volume Size and Worker Volume Size.
- Click Next.
- Enter the network information, based on the fields described below.
- Network: Select the network to deploy the cluster on.
- Subnet: Select the subnet to deploy the cluster on.
- Security Group: The security group or groups for the cluster. Enter a comma-separated list of security groups if you are using more than one group.
- Containers CIDR: The IP range that Kubernetes uses to configure the pods (Docker containers) deployed by Kubernetes.
- Services CIDR: The IP range that Kubernetes uses to configure services deployed by Kubernetes.
- HTTP Proxy: Select the check box if you want to use an HTTP proxy server for the cluster. If you select this check box, you must specify the IP address and port number of the HTTP proxy server in the text area that appears below the HTTP Proxy check box, in the <scheme>://<username>:<password>@<host>:<port> format. The <username>:<password>@ portion of the HTTP proxy string is optional.
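For illustration, a proxy string in this format might look as follows. The scheme, credentials, hostname, and port here are hypothetical placeholders, not values from your environment.

```
http://alice:s3cret@proxy.example.com:3128
```

If the proxy does not require authentication, drop the credentials portion, for example http://proxy.example.com:3128.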
- Click Next.
- Enter the advanced configuration details, based on the fields described below.
Warning: You must have in-depth knowledge of the Kubernetes API to correctly use the Advanced API Configuration option. If the advanced APIs are inappropriately configured, the cluster may work incorrectly or become inaccessible.
- SSH Key: Select an SSH key to be associated with the nodes. The SSH key can be used to access the nodes for debugging purposes.
- Privileged: Select the check box to enable the cluster to run privileged containers.
- Advanced API Configuration: Select the check box to configure the APIs to be used by the cluster. If you do not have adequate knowledge of Kubernetes APIs, it is recommended that you leave this check box cleared. When this check box is not selected, the GA and beta APIs (that is, the stable APIs) for the currently installed Kubernetes version are enabled.
- Default API groups and versions: This option is visible only if you select the Advanced API Configuration check box. Select this option to enable on the cluster the default APIs based on the Kubernetes installation in your environment.
- All API groups and versions: This option is visible only if you select the Advanced API Configuration check box. Select this option to enable on the cluster all alpha, beta, and GA versions of the Kubernetes APIs published to date.
- Custom API groups and versions: This option is visible only if you select the Advanced API Configuration check box. Select this option to specify one or more API versions that you wish to enable and/or disable, and enter the API versions in the text area that follows the option. For example, to enable the Kubernetes v1 APIs, enter the expression api/v1=true; similarly, to disable the v2 APIs, enter the expression api/v2=false. To enable and/or disable multiple versions, enter comma-separated expressions.
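These enable/disable expressions use the same key=value syntax as the Kubernetes API server's --runtime-config flag. A comma-separated example combining the two expressions above would look like this (the v1 and v2 group versions are illustrative):

```
api/v1=true,api/v2=false
```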
- Select the Enable Application Catalog check box, if you want to deploy applications using the Kubernetes package manager, Helm, on the cluster. This is an optional step.
- Click Next.
- Review the cluster configuration, and then click Create Cluster.
The cluster is created on the specified OpenStack cloud.
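Optionally, you can verify the result from the OpenStack side. The commands below are a sketch using the OpenStack CLI; they assume the openstack client is configured against the region you selected for the cluster.

```shell
# The cluster's master and worker nodes appear as Nova instances
openstack server list

# If Boot from Volume was selected, each node should have a Cinder root volume
openstack volume list
```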
You can deploy your applications on the newly created Kubernetes cluster.