# Pre-requisites

This document outlines the prerequisites for deploying the <code class="expression">space.vars.self\_hosted\_product\_name</code>.

## Management Cluster

As part of the installation process, the <code class="expression">space.vars.self\_hosted\_product\_name</code> creates a **Kubernetes cluster** on the physical servers that you deploy it on. We refer to this cluster as the **management cluster**. The <code class="expression">space.vars.product\_name</code> management plane then runs as a set of Kubernetes pods and services on this management cluster.

Single-node deployments are currently not supported for <code class="expression">space.vars.self\_hosted\_product\_name</code>. The minimum supported configuration requires 3 servers to ensure high availability and proper operation of the Kubernetes management cluster. For development or testing purposes, contact Platform9 support for alternative deployment options.

The following is the recommended capacity for the management cluster, based on the projected scale of your <code class="expression">space.vars.product\_name</code> deployment. These configurations assume production deployments with high availability requirements.

| Hypervisors You Plan to Use           | Minimum Capacity                                                                                                          | Recommended Capacity                                                                                                     |
| ------------------------------------- | ------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------ |
| <p>Small<br><br>(<20 hosts)</p>       | <p>3 servers, each with:<br><br>14 vCPUs, 28GB RAM and 250GB SSD</p>                                                      | <p>4 servers, each with:<br><br>16 vCPUs, 32GB RAM and 250GB SSD</p>                                                     |
| <p>Growth<br><br>(<100 hosts)</p>     | <p>4 servers, each with:<br><br>16 vCPUs, 32GB RAM and 250GB SSD</p>                                                      | <p>5 servers, each with:<br><br>16 vCPUs, 32GB RAM and 250GB SSD</p>                                                     |
| <p>Enterprise<br><br>(>100 hosts)</p> | <p>5 servers, each with:<br><br>16 vCPUs, 32GB RAM and 250GB SSD<br><br>1 additional server for every 100 Hypervisors</p> | <p>6 servers, each with:<br><br>24 vCPUs, 32GB RAM and 250GB SSD<br><br>1 additional server for every 50 Hypervisors</p> |

The above recommendation is for a single Management Plane region. For every additional region deployed on the same management cluster, increase the capacity accordingly. We recommend a separate management cluster in each geographical location to avoid performance degradation and a single point of failure.

### Disk Partition Guidance

<code class="expression">space.vars.product\_name</code> normally runs on a single root filesystem. If your environment requires dedicated partitions for compliance or security, size the directories that <code class="expression">space.vars.product\_name</code> depends on: **/var**, **/opt**, and **/etc**. These directories hold logs, container data, PF9 components, and configuration files, so they must have enough space to support normal operations and upgrades.

**Recommended Sizes for /var, /opt, and /etc**

| Directory | Recommended | Minimum |
| --------- | ----------- | ------- |
| /var      | 140GB       | 80GB    |
| /opt      | 30GB        | 10GB    |
| /etc      | 2GB         | 1GB     |
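
If you do carve out dedicated partitions, you can sanity-check the available space against the minimums above. The following is a sketch (not part of the product tooling); the thresholds are taken from the table:

{% tabs %}
{% tab title="Bash" %}

```shell
# Sketch: check /var, /opt and /etc against the minimum sizes in the table above.
check_min_space() {
  local dir=$1 min_gb=$2 avail
  # Available space, in whole GB, on the filesystem backing $dir
  avail=$(df -BG --output=avail "$dir" | tail -n1 | tr -dc '0-9')
  [ "${avail:-0}" -ge "$min_gb" ]
}

for spec in /var:80 /opt:10 /etc:1; do
  dir=${spec%%:*}; min=${spec##*:}
  if check_min_space "$dir" "$min"; then
    echo "OK: $dir meets the ${min}GB minimum"
  else
    echo "WARNING: $dir is below the ${min}GB minimum"
  fi
done
```

{% endtab %}
{% endtabs %}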

### Server Configuration

Each physical server in the management cluster must meet the following requirements:

**Operating System**: Ubuntu 22.04, Ubuntu 24.04

### **Swap config**

Ensure that swap is disabled on each server. Run the following command to disable it:

{% tabs %}
{% tab title="Bash" %}

```bash
swapoff -a
```

{% endtab %}
{% endtabs %}
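
You can confirm the result by checking the amount of swap the kernel reports (alternatively, `swapon --show` should print nothing once swap is off):

{% tabs %}
{% tab title="Bash" %}

```shell
# Total swap known to the kernel; expect "SwapTotal: 0 kB" after `swapoff -a`
awk '/^SwapTotal/ {print "SwapTotal: " $2 " " $3}' /proc/meminfo
```

{% endtab %}
{% endtabs %}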

The above change will not survive a reboot, so also update the `/etc/fstab` file and comment out the line containing the `swap` entry. For example:

{% tabs %}
{% tab title="Bash" %}

```bash
UUID=aabbcc /               ext4    errors=remount-ro 0       1
UUID=xxyyzz /home           ext4    defaults          0       2
UUID=mswmsw /media/windows  ntfs    defaults          0       0

#/dev/sdb1 none swap sw 0 0   <--- comment out the line
```

{% endtab %}
{% endtabs %}
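
The edit can also be scripted. The sketch below applies the change to a sample file first; once you are happy with the result, the same `sed` expression can be run against `/etc/fstab` (back it up beforehand):

{% tabs %}
{% tab title="Bash" %}

```shell
# Sample fstab-style file containing a swap entry (devices are illustrative)
cat > /tmp/fstab.sample <<'EOF'
UUID=aabbcc /    ext4  errors=remount-ro  0  1
/dev/sdb1   none swap  sw                 0  0
EOF

# Prefix every non-comment line whose filesystem type is "swap" with '#'
sed -i -E 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /tmp/fstab.sample
cat /tmp/fstab.sample
```

{% endtab %}
{% endtabs %}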

### **IPv6 support**

Ensure that the following sysctl setting is `0`, so that IPv6 support is enabled on the server:

{% tabs %}
{% tab title="Bash" %}

```bash
sysctl net.ipv6.conf.all.disable_ipv6
# If it is currently set to 1, change it to 0:
echo "net.ipv6.conf.all.disable_ipv6 = 0" >> /etc/sysctl.conf
sysctl -p
```

{% endtab %}
{% endtabs %}

### **Passwordless Sudo**

Many operations require sudo access (for example, installing packages, Docker, and so on). Ensure that your server has passwordless sudo enabled.
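
One common way to set this up is a drop-in under `/etc/sudoers.d/` (the user name `pf9` below is a placeholder for whichever account runs the install). The sketch writes the rule to a temporary file; validate and install it as root:

{% tabs %}
{% tab title="Bash" %}

```shell
# Sudoers rule granting passwordless sudo ("pf9" is a placeholder user)
cat > /tmp/pf9-sudoers <<'EOF'
pf9 ALL=(ALL) NOPASSWD:ALL
EOF
cat /tmp/pf9-sudoers

# As root, validate the syntax and install with the required 0440 permissions:
#   visudo -cf /tmp/pf9-sudoers && install -m 0440 /tmp/pf9-sudoers /etc/sudoers.d/pf9
```

{% endtab %}
{% endtabs %}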

### **Kernel Panic Option**

Configure each server to reboot automatically 10 seconds after a kernel panic by setting `kernel.panic=10`:

{% tabs %}
{% tab title="Bash" %}

```bash
echo "kernel.panic=10" >> /etc/sysctl.conf && sysctl -p
```

{% endtab %}
{% endtabs %}

### **SSH Keys**

* We rely on SSH to log in to the management cluster hosts, install the various components, and manage them.
* Generate SSH keys and sync them across all hosts of the management cluster.
* We recommend generating the key pair on one host and then adding the public key to the `~/.ssh/authorized_keys` file on all other hosts. This enables every host in the management cluster to SSH into every other host.

{% tabs %}
{% tab title="Bash" %}

```bash
# Authorize the key on the local host
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# Copy the public key to each remaining host (test-3 is an example hostname)
ssh-copy-id -i ~/.ssh/id_rsa.pub root@test-3
```

{% endtab %}
{% endtabs %}

### **Package Updates**

* Install `cgroup-tools`:

{% tabs %}
{% tab title="Bash" %}

```bash
apt-get update -y && apt-get install cgroup-tools -y
```

{% endtab %}
{% endtabs %}

* Download and update OpenSSL to version 3.0.7 for Ubuntu 22.04:

{% tabs %}
{% tab title="Bash" %}

```bash
export AGENT_KEY=<YOUR_USER_AGENT_KEY>
# Download the OpenSSL package
curl --user-agent "${AGENT_KEY}" https://pf9-airctl.s3-accelerate.amazonaws.com/openssl-smcp-ubuntu/openssl_3.0.7-1_amd64.deb --output /tmp/openssl_3.0.7-1_amd64.deb
# Verify the MD5 checksum
md5sum /tmp/openssl_3.0.7-1_amd64.deb | grep 706caf || { echo "MD5 checksum does not match, exiting." && exit 1; }
# Install the OpenSSL package
sudo dpkg -i /tmp/openssl_3.0.7-1_amd64.deb || { echo "Failed to install OpenSSL, exiting." && exit 1; }
echo "/usr/local/ssl/lib64" | sudo tee /etc/ld.so.conf.d/openssl-3.0.7.conf
sudo ldconfig -v
# Create a symbolic link to the OpenSSL binary
sudo ln -sf /usr/local/ssl/bin/openssl /usr/bin/openssl
# Verify the OpenSSL version
openssl version | grep 3.0.7 || { echo "OpenSSL version does not match, exiting." && exit 1; }
```

{% endtab %}
{% endtabs %}

* User Agent Key for Installation

You will need a Platform9-specific user agent key to install your self-hosted management plane. Your Platform9 sales engineer will share the key with you before the install.

## Networking

You will need two virtual IPs (VIPs) on the same L2 domain as the hosts in the management cluster.

* VIP #1: The IP at which you access the <code class="expression">space.vars.product\_name</code> management plane UI.
* VIP #2: Serves the management Kubernetes cluster's API server.

## Storage

For a production setup of <code class="expression">space.vars.self\_hosted\_product\_name</code>, you will need Kubernetes Container Storage Interface (CSI) compatible storage to persist the state of the management cluster. To learn more, see [CSI and Kubernetes Storage](https://kubernetes.io/docs/concepts/storage/volumes/).

The Terrakube component of the PCD AppCatalog requires persistent storage with the ReadWriteMany (RWX) access mode, shared among the Terrakube pods. Ensure that a compatible RWX storage solution (such as NFS, CephFS, or any CSI-compliant RWX provisioner) is available and configured in the Kubernetes cluster where Terrakube runs.

### Storage Class Customisation

You can use a custom storage provisioner to modify the *storage class* and *disk size* for each component according to your deployment needs. This section is optional; skip it if the default configuration meets your requirements.

**Default Storage Classes**

| Component       | Default Storage Class            | Comment                  |
| --------------- | -------------------------------- | ------------------------ |
| RabbitMQ        | `pcd-sc`                         | NFS type storage         |
| MySQL           | Cluster default (`hostpath-csi`) |                          |
| OVN OVSDB NB    | `pcd-sc`                         | NFS type storage         |
| OVN OVSDB SB    | `pcd-sc`                         | NFS type storage         |
| Prometheus      | `pcd-sc`                         | NFS type storage         |
| Terrakube       | `pcd-sc`                         | Requires RWX access mode |
| Terrakube Redis | Cluster default (`hostpath-csi`) |                          |
| Grafana         | Cluster default (`hostpath-csi`) |                          |

To use a custom storage class provisioner, follow these steps:

1. Place your custom storage class YAML files in `/opt/pf9/airctl/conf/ddu/storage/custom/`. The files in this custom path must include a storage class named `pcd-sc`.
2. Pass `-p` or `--storage custom` to the `airctl configure` command. This automatically applies all the YAML files found in the custom path.

*Example storage class YAML files:*

{% tabs %}
{% tab title="NFS Storage Class Example" %}

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <STORAGE_CLASS_NAME>
mountOptions:
- nfsvers=4.1
- nolock
parameters:
  server: <NFS_SERVER_IP>
  share: <NFS_SHARE_PATH>
provisioner: nfs.csi.k8s.io
reclaimPolicy: Delete
volumeBindingMode: Immediate
```

{% endtab %}
{% endtabs %}

{% tabs %}
{% tab title="Hostpath Provisioner Storage Class Example" %}

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <STORAGE_CLASS_NAME>
parameters:
  storagePool: standard
provisioner: kubevirt.io.hostpath-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```

{% endtab %}
{% endtabs %}

**Override the Storage Class and Disk Size for a Specific Component**

Edit the `/opt/pf9/airctl/conf/options.json` file to set the storage parameters. Here you can customize the storage class and disk size for each component to meet your requirements.

Ensure that the storage class overrides specified in `options.json` are available in the custom storage path. If the storage class is not present, the deployment will fail.

The values shown below are examples, not actual values.

{% tabs %}
{% tab title="Options.json" %}

```json
{
  "chart_url": "<chart_url>",
  "terrakube_sc": "efs-sc",
  "terrakuberedis_sc": "block-sc",
  "rabbitmq_sc": "block-sc",
  "mysql_sc": "block-sc",
  "ovn_ovsdb_nb_sc": "efs-sc",
  "ovn_ovsdb_sb_sc": "efs-sc",
  "grafana_sc": "block-sc",
  "prometheusopenstack_sc": "efs-sc",

  "terrakube_disk_size": "11Gi",
  "terrakuberedis_disk_size": "2048Mi",
  "rabbitmq_disk_size": "6144Mi",
  "mysql_disk_size": "10Gi",
  "ovn_ovsdb_nb_disk_size": "11Gi",
  "ovn_ovsdb_sb_disk_size": "12Gi",
  "grafana_disk_size": "9Gi",
  "prometheusopenstack_disk_size": "9Gi"
}
```

{% endtab %}
{% endtabs %}
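
Because `airctl` reads this file as JSON, a malformed edit is likely to cause failures, so it is worth validating the file after editing. Below is a minimal check using Python's built-in JSON tool, demonstrated on a sample file (point it at `/opt/pf9/airctl/conf/options.json` on your server):

{% tabs %}
{% tab title="Bash" %}

```shell
# Sample file standing in for /opt/pf9/airctl/conf/options.json
cat > /tmp/options.json <<'EOF'
{ "mysql_sc": "block-sc", "mysql_disk_size": "10Gi" }
EOF

# json.tool exits non-zero and prints the parse error on malformed JSON
python3 -m json.tool /tmp/options.json > /dev/null && echo "valid JSON"
```

{% endtab %}
{% endtabs %}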

**Note:**

* Disk size units can be specified as Mi, Gi, or Ti.
* Decreasing disk size is **not supported**.
* Changing storage class name is supported only during installation, **not during upgrade.**
