# Create Multi Master Cluster

This document describes creation of a highly available multi-master on-premises BareOS Kubernetes cluster using PMK. We recommend reading [What is BareOS](https://github.com/platform9/pcd-docs-gitbook/blob/main/other-docs/pmk/5.13/bareos-what-is-bareos/docs/kubernetes/bareos-what-is-bareos/README.md) for an understanding of BareOS and [BareOS Cluster Architecture](https://github.com/platform9/pcd-docs-gitbook/blob/main/kubernetes/multimaster-architecture-platform9-managed-kubernetes/README.md) before proceeding with this document.

A highly available cluster is composed of at least 3 master nodes, each running a member of the [etcd](https://kubernetes.io/docs/concepts/overview/components/#etcd) distributed database along with other Kubernetes [control plane components](https://kubernetes.io/docs/concepts/overview/components/#control-plane-components) (i.e. `kube-apiserver`, `kube-controller-manager`, and `kube-scheduler`). We choose an [odd number of master nodes](https://etcd.io/docs/v3.3/faq/#why-an-odd-number-of-cluster-members) so that it is possible to establish quorum within the etcd nodes and maintain [fault tolerance](https://etcd.io/docs/v3.3/faq/#what-is-failure-tolerance).
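The quorum arithmetic behind this recommendation can be sketched directly: an etcd cluster of `n` members needs `floor(n/2) + 1` members available to establish quorum, so it tolerates `n - quorum` simultaneous failures.

```bash
# Quorum and failure tolerance for an etcd cluster of n members:
#   quorum    = floor(n/2) + 1
#   tolerance = n - quorum
for n in 1 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  tolerance=$(( n - quorum ))
  echo "members=${n} quorum=${quorum} tolerance=${tolerance}"
done
```

Note that 3 members tolerate 1 failure while 4 members still tolerate only 1, which is why an even member count adds cost without adding fault tolerance.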

{% hint style="success" %}
**Reserved IP Address Range Requirement**

To deploy a multi-master cluster, a reserved IP address is required for use as the highly available API server endpoint; this is the Cluster Virtual IP.

To deploy MetalLB, a reserved IP range is required for provisioning IPs to Kubernetes Services.

**Example:** 10.128.159.240-253 as a reserved IP range for all components: masters, workers, the Virtual IP, and MetalLB

* Master Virtual IP 10.128.159.240
* Master 01 10.128.159.241
* Master 02 10.128.159.242
* Master 03 10.128.159.243
* Workers
  * Worker 01 10.128.159.246
  * Worker 02 10.128.159.247
  * Worker 03 10.128.159.248
* MetalLB Range
  * Starting IP 10.128.159.250 – Ending IP 10.128.159.253

{% endhint %}
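Before committing to a reserved range, it can help to confirm that none of the addresses are already in use. The following is a minimal sketch using the example range above; it assumes `ping` is available and that hosts on your network respond to ICMP, so treat a non-response as a hint rather than proof. Substitute your own range.

```bash
# Probe each address in the example reserved range 10.128.159.240-253.
# A reply means the address is already claimed and must not be reserved.
for last_octet in $(seq 240 253); do
  ip="10.128.159.${last_octet}"
  if ping -c 1 -W 1 "$ip" > /dev/null 2>&1; then
    echo "WARNING: ${ip} is already in use"
  else
    echo "OK: ${ip} appears free"
  fi
done
```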

## Create BareOS Cluster Using UI

Follow the steps given below to create a BareOS Kubernetes cluster using the PMK UI.

* Log in to the UI with either your Local Credentials or Single Sign-On (Enterprise).
* Select the BareOS Virtual Machine or BareOS Physical Servers option, depending on where you are creating your BareOS cluster.
* Click the 'Multi-master Cluster' button in the wizard.
* Under the Initial Setup page of the wizard, fill in the required options using the table below, then proceed to the Next step.

|                                      | Option                            | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
| ------------------------------------ | --------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Cluster Settings**                 |                                   |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
|                                      | Kubernetes Version                | Select the `Kubernetes Version` from the [list of supported Kubernetes versions](https://github.com/platform9/pcd-docs-gitbook/blob/main/kubernetes/support-matrix/README.md).                                                                                                                                                                                                                                                                                                                                                                     |
| **Application & Container Settings** |                                   |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
|                                      | Privileged Containers             | Select the checkbox to enable the cluster to run privileged containers. Note that being able to run **privileged containers** within the cluster is a **prerequisite** if you wish to enable **service type load balancer using MetalLB**. By default a container is not allowed to access any devices on the host, but a 'privileged' container is given access to all devices on the host. For more information, see [Privileged Policy Reference](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#privileged)                   |
|                                      | Make Master nodes Master + Worker | Opt to schedule workloads onto the master nodes; otherwise, only the necessary control plane services are deployed and the masters are cordoned.                                                                                                                                                                                                                                                                                                                                                                                                    |
| **Cluster Add-Ons**                  |                                   |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
|                                      | Enable ETCD Backup                | Configures automated etcd backups                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
|                                      | Deploy MetalLB + Layer2 Mode      | <p>MetalLB is a software load balancer that is deployed and managed by PMK. MetalLB will be automatically attached to the cluster and allow services to be deployed using the LoadBalancer service type, simplifying the steps required to make applications accessible outside of the cluster.<br><br><strong>Requirements:</strong> MetalLB requires a reserved network address range. MetalLB will manage the IP range to expose Kubernetes services<br><br><strong>Example:</strong> Starting IP 10.128.159.250 – Ending IP 10.128.159.253</p> |
|                                      | Monitoring                        | Enable Prometheus monitoring for this cluster. Learn more here [auto$](https://github.com/platform9/pcd-docs-gitbook/blob/main/kubernetes/in-cluster-monitoring/README.md)                                                                                                                                                                                                                                                                                                                                                                         |
|                                      | KubeVirt                          | Enable the cluster to support running virtual machines. Learn more here [auto$](https://github.com/platform9/pcd-docs-gitbook/blob/main/kubernetes/platform9-managed-kubevirt-overview/README.md)                                                                                                                                                                                                                                                                                                                                                  |
|                                      | Network Plugin Operator           | Enable advanced networking options via the PMK Luigi network operator. Learn more here [auto$](https://github.com/platform9/pcd-docs-gitbook/blob/main/kubernetes/luigi-network-operator-quickstart/README.md)                                                                                                                                                                                                                                                                                                                                      |
| **ETCD Backup Configuration**        |                                   |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
|                                      | Storage Path                      | <p>The path on the master node where etcd data will be stored.<br><br><strong>Requirement:</strong> The path must be created and available on all master nodes</p>                                                                                                                                                                                                                                                                                                                                                                                 |
|                                      | Backup Interval (Minutes)         | Controls backup frequency.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
| **MetalLB Configuration**            |                                   |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
|                                      | Address Range                     | <p>IP address range for MetalLB. MetalLB will use this IP pool when allocating a new instance of service type LoadBalancer for your applications.<br><br><strong>Example:</strong> Starting IP 10.128.159.250 – Ending IP 10.128.159.253</p>                                                                                                                                                                                                                                                                                                      |

* The 'Master Nodes' section of the wizard will initially be empty. This is because you need to download and install the Platform9 CLI on your nodes, so they can connect to the PMK management plane and show up in your wizard.
* To achieve this, download and install the PMK CLI on at least one of your nodes by running the following command on the node.

{% tabs %}
{% tab title="Download CLI Setup Script" %}

```bash
bash <(curl -sL https://pmkft-assets.s3-us-west-1.amazonaws.com/pf9ctl_setup)
```

{% endtab %}
{% endtabs %}

{% tabs %}
{% tab title="CLI Setup" %}

```bash
____  _       _    __                      ___
|  _ \| | __ _| |_ / _| ___  _ __ _ __ ___ / _ \
| |_) | |/ _` | __| |_ / _ \| '__| '_ ` _ \ (_) |
|  __/| | (_| | |_|  _| (_) | |  | | | | | \__, |
|_|   |_|\__,_|\__|_|  \___/|_|  |_| |_| |_| /_/

Note: SUDO access required to run Platform9 CLI.
      You might be prompted for your SUDO password.

Downloading Platform9 CLI binary...

Platform9 CLI binary downloaded.

Installing Platform9 CLI...

Platform9 CLI installation completed successfully !

To start building a Kubernetes cluster execute:
        pf9ctl help
```

{% endtab %}
{% endtabs %}

The command downloads and installs the Platform9 CLI.

Now run the `pf9ctl config set` command to configure the CLI to connect to and use your PMK deployment.

{% tabs %}
{% tab title="CLI Configuration" %}

```bash
pf9ctl config set
```

{% endtab %}
{% endtabs %}

{% tabs %}
{% tab title="CLI Configuration Example" %}

```bash
Platform9 Account URL: https://__ACCOUNT__.platform9.net
Username: demo@platform9.net
Password:
Region [RegionOne]: k8s
Tenant [service]: demo
✓ Stored configuration details Succesfully
```

{% endtab %}
{% endtabs %}

{% hint style="info" %}
**Info**

For PMK Enterprise users, specify the correct Region and Tenant within which you are creating your cluster (set via the drop-down selectors at the top right of the PMK UI nav bar).
{% endhint %}

Now run the CLI `prep-node` command to prepare your nodes with the required prerequisites.

{% tabs %}
{% tab title="Prepare Node" %}

```bash
pf9ctl prep-node
```

{% endtab %}
{% endtabs %}

The [prep-node](https://github.com/platform9/pcd-docs-gitbook/blob/main/kubernetes/pmk-cli-prep-node/README.md) command will perform [prerequisites](https://github.com/platform9/pcd-docs-gitbook/blob/main/kubernetes/bareos-networking-prerequisites/README.md) checks on your node. If any checks fail, you will receive an output similar to the following.

{% tabs %}
{% tab title="Prepare Node - Pre-Requisite Check(s) Failed" %}

```bash
✓ Loaded Config Successfully
✓ Missing package(s) installed successfully
✓ Removal of existing CLI
✓ Existing Platform9 Packages Check
✓ Required OS Packages Check
✓ SudoCheck
✓ CPUCheck
x DiskCheck - At least 30 GB of total disk space and 15 GB of free space is needed on host. Disk Space found: 2 GB
x MemoryCheck - At least 12 GB of memory is needed on host. Total memory found: 4 GB
✓ PortCheck
✓ Existing Kubernetes Cluster Check

✓ Completed Pre-Requisite Checks successfully

Optional pre-requisite check(s) failed. Do you want to continue? (y/n)
```

{% endtab %}
{% endtabs %}

{% hint style="warning" %}
**Warning**

It is **highly recommended** that you meet all of the optional pre-requisites or else you may experience degraded performance among scheduled pods and/or other unforeseen issues.
{% endhint %}
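If you would rather verify the flagged resources before rerunning `prep-node`, a quick manual check on a Linux host looks like the sketch below. The 30 GB disk and 12 GB memory thresholds are taken from the CLI output above; the disk check only inspects the root filesystem.

```bash
# Compare this host's memory and root-disk size against the prep-node thresholds.
mem_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
disk_gb=$(df -BG --output=size / | tail -1 | tr -dc '0-9')
echo "memory: ${mem_gb} GB (need >= 12)"
echo "disk:   ${disk_gb} GB total (need >= 30)"
if [ "$mem_gb" -ge 12 ] && [ "$disk_gb" -ge 30 ]; then
  echo "resource checks pass"
else
  echo "resource checks FAIL"
fi
```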

{% hint style="danger" %}
**Failed to Prepare Node**

If you encounter the error: `Failure to prepare node`, please review the `pf9ctl` log file for additional context.

**Enterprise** – Please submit a [Support Request](https://support.platform9.com/hc/en-us/requests/new?ticket_form_id=360000924873) with the log attached and our team will review and work with you to onboard the node.
{% endhint %}

{% tabs %}
{% tab title="Bash" %}

```bash
Failed to prepare node. See /root/pf9/log/pf9ctl-20210330.log or use --verbose for logs
```

{% endtab %}
{% endtabs %}
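A quick way to pull up the newest log for review (the log directory is taken from the error message above; filenames include a date stamp, so the most recent file is selected):

```bash
# Print the tail of the newest pf9ctl log, if one exists.
log_dir=/root/pf9/log
latest=$(ls -t "${log_dir}"/pf9ctl-*.log 2>/dev/null | head -1)
if [ -n "$latest" ]; then
  sudo tail -n 50 "$latest"
else
  echo "no pf9ctl logs found in ${log_dir}"
fi
```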

Once the prep-node command succeeds, you will see output similar to below. Your current node is now prepared with the required software packages.

{% tabs %}
{% tab title="Bash" %}

```bash
✓ Loaded Config Successfully
✓ Missing package(s) installed successfully
✓ Removal of existing CLI
✓ Existing Platform9 Packages Check
✓ Required OS Packages Check
✓ SudoCheck
✓ CPUCheck
✓ DiskCheck
✓ MemoryCheck
✓ PortCheck
✓ Existing Kubernetes Cluster Check

✓ Completed Pre-Requisite Checks successfully

✓ Disabled swap and removed swap in fstab
✓ Hostagent installed successfully
✓ Initialised host successfully
✓ Host successfully attached to the Platform9 control-plane
```

{% endtab %}
{% endtabs %}

The CLI can also be used to prepare other remote nodes, as long as the node running the CLI can reach those nodes over SSH. Specify the SSH username and password (or private key) used to connect to the remote nodes, and pass the IP address of each node to be prepared with the `-i` option. The following example prepares the current node and two remote nodes; here, all remote nodes share the same SSH username and password.

{% tabs %}
{% tab title="Bash" %}

```bash
pf9ctl cluster prep-node -u testuser -p testpassword -s ~/.ssh/id_rsa -i localhost -i 150.20.7.65 -i 150.20.7.66
```

{% endtab %}
{% endtabs %}

* Once you prepare all nodes using the CLI, these nodes will show under the `Nodes` tab in the PMK UI.
* Switch to your cluster creation wizard and proceed to the Next step. You should now see the nodes you just prepared as available to select as master nodes.
* Select at least 3 nodes as master nodes for your cluster. Then proceed to the Next step.

{% hint style="warning" %}
**Insufficient Nodes**

A multi-master cluster requires at least **3** master nodes. If you have not yet authorized at least **3** nodes, you will be unable to proceed until additional nodes have been authorized.
{% endhint %}

* Under the Worker Nodes step of the wizard, select one or more worker nodes for your cluster, then proceed to the Next step.
* Under the Network step, configure networking for your cluster based on the table below, then proceed to the Next step.

|                                           | **Field**                                     | **Value**                                                                                                                                                                                                                                                                                                                                  |
| ----------------------------------------- | --------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **Cluster API FQDN**                      |                                               |                                                                                                                                                                                                                                                                                                                                            |
|                                           | API FQDN                                      | The FQDN (DNS Name) that is to be used to access the Kubernetes cluster API server from outside of the cluster.                                                                                                                                                                                                                            |
| **Cluster Virtual IP Configuration**      |                                               |                                                                                                                                                                                                                                                                                                                                            |
|                                           | Virtual IP Address for Cluster                | <p><code>Required</code> for a multi-master cluster.<br><br>The reserved IP address (or high availability floating IP address) with which the user will access the cluster. See <a href="https://github.com/platform9/pcd-docs-gitbook/blob/main/kubernetes/multimaster-architecture-platform9-managed-kubernetes/README.md">auto$</a></p> |
|                                           | Physical Interface for Virtual IP Association | <p><code>Required</code> for a multi-master cluster<br><br>The network interface to which the virtual IP gets associated. Ensure that the virtual IP specified above is accessible on this network interface, and that all master nodes use the same interface name for the interface to be associated with the virtual IP.</p>            |
| **Cluster Networking Range & HTTP Proxy** |                                               |                                                                                                                                                                                                                                                                                                                                            |
|                                           | Containers CIDR                               | <p><code>Required</code><br><br>The IP range that Kubernetes uses to configure the Pods (containers) deployed by Kubernetes.</p>                                                                                                                                                                                                           |
|                                           | Services CIDR                                 | <p><code>Required</code><br><br>The IP range that Kubernetes uses to configure services deployed by Kubernetes</p>                                                                                                                                                                                                                         |
|                                           | HTTP Proxy                                    | If your on-premises network uses an http proxy, please specify the details here.                                                                                                                                                                                                                                                           |
| **Cluster CNI**                           |                                               |                                                                                                                                                                                                                                                                                                                                            |
|                                           | Network Backend                               | The CNI networking backend to be used for this cluster.                                                                                                                                                                                                                                                                                    |
|                                           | IP in IP Encapsulation Mode (Calico)          | Encapsulation mode for the Calico CNI. See [auto$](https://github.com/platform9/pcd-docs-gitbook/blob/main/kubernetes/networking-integration-with-calico/README.md) for a better understanding of the advanced parameters.                                                                                                                 |
|                                           | Interface Detection Method (Calico)           | Advanced networking options for the Calico CNI.                                                                                                                                                                                                                                                                                            |
|                                           | NAT Outgoing (Calico)                         | NAT mode for Calico CNI                                                                                                                                                                                                                                                                                                                    |
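The Containers CIDR and Services CIDR must not overlap with each other or with your node network. As a sketch of that check, the bash helpers below (bash-specific syntax) convert each CIDR to an integer range and test for intersection; the example ranges are illustrative, not PMK defaults.

```bash
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local a b c d
  IFS=. read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Return success (0) if two CIDR blocks overlap.
cidr_overlap() {
  local net1=${1%/*} len1=${1#*/} net2=${2%/*} len2=${2#*/}
  local start1 end1 start2 end2
  start1=$(( $(ip_to_int "$net1") & (0xFFFFFFFF << (32 - len1)) ))
  end1=$(( start1 + (1 << (32 - len1)) - 1 ))
  start2=$(( $(ip_to_int "$net2") & (0xFFFFFFFF << (32 - len2)) ))
  end2=$(( start2 + (1 << (32 - len2)) - 1 ))
  [ "$start1" -le "$end2" ] && [ "$start2" -le "$end1" ]
}

# Example: substitute your chosen Containers CIDR and Services CIDR.
if cidr_overlap "10.20.0.0/16" "10.21.0.0/16"; then
  echo "CIDRs overlap -- choose non-overlapping ranges"
else
  echo "CIDRs do not overlap"
fi
```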

* Under the Advanced Step, enter the advanced configuration details for your cluster based on the table below.

{% hint style="warning" %}
**Warning**

You must have an in-depth knowledge of the Kubernetes API to use the Advanced API Configuration option correctly. Inappropriately configured advanced APIs could leave the cluster malfunctioning or inaccessible.
{% endhint %}

| Field                      | Option                          |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
| -------------------------- | ------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Advanced API Configuration |                                 |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
|                            | Default API groups and versions | <p>Select the Default API groups and versions option to enable the<br>default APIs on the cluster, based on the Kubernetes installation in<br>your environment.</p>                                                                                                                                                                                                                                                                                                                                                                                                                      |
|                            | All API groups and versions     | <p>Select the All API groups and versions option to enable on the<br>cluster all alpha, beta, and GA versions of the Kubernetes APIs that<br>have been published to date.</p>                                                                                                                                                                                                                                                                                                                                                                                                            |
|                            | Custom API groups and versions  | <p>Select the Custom API groups and versions option to specify one<br>or more API versions to enable and/or disable, entered in the text<br>area that follows the option.<br>For example, to enable the Kubernetes v1 APIs, enter <code>api/v1=true</code>;<br>to disable the v2 APIs, enter <code>api/v2=false</code>.<br>To enable and/or disable multiple versions, enter comma-separated expressions such as <code>api/v2=false,api/v1=true</code>.</p>                                                                                                                              |

* Optionally add any metadata tags to your cluster.
* Review the cluster details.
* Click `Complete` to finalize and deploy the cluster!

You can now deploy your applications on the highly available multi-master Kubernetes cluster.
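For example, if you enabled MetalLB, a standard `LoadBalancer` Service is all that is needed for MetalLB to assign an address from the reserved pool to your application. The manifest below is a generic illustration; the `demo-app` names and port numbers are placeholders, not part of PMK.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  type: LoadBalancer   # MetalLB assigns an IP from the reserved range
  selector:
    app: demo-app      # matches the Pods backing this Service
  ports:
    - port: 80         # external port on the assigned IP
      targetPort: 8080 # container port
```

After applying the manifest with `kubectl apply -f`, the assigned external IP appears in the output of `kubectl get svc demo-app`.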

## Create BareOS Cluster Using REST API

For advanced users, you can automate the process of creating a multi-master BareOS Kubernetes cluster by integrating with our [Qbert API](https://github.com/platform9/pcd-docs-gitbook/blob/main/other-docs/pmk/5.13/bareos-what-is-bareos/learn/qbert-cli/README.md).
