Pre-requisites
This document outlines the prerequisites for deploying the Self-Hosted version of Private Cloud Director.
The Management Cluster section below lists the recommended number of servers to allocate for your Self-Hosted Private Cloud Director instance.
Management Cluster
As part of the installation process, the Self-Hosted version of Private Cloud Director creates a Kubernetes cluster on the physical servers you deploy it on. We refer to this cluster as the management cluster. The Private Cloud Director management plane then runs as a set of Kubernetes pods and services on this management cluster.
Single-node deployments are currently not supported for Self-Hosted Private Cloud Director. The minimum supported configuration requires 3 servers to ensure high availability and proper operation of the Kubernetes management cluster. For development or testing purposes, contact Platform9 support for alternative deployment options.
The following is the recommended capacity for the management cluster, based on the projected scale of your Private Cloud Director deployment. These configurations assume production deployments with high availability requirements.
| Hypervisors You Plan to Use | Minimum Management Cluster Capacity | Recommended Management Cluster Capacity |
| --- | --- | --- |
| Small (<20 hosts) | 3 servers, each with 14 vCPUs, 28GB RAM, and 250GB SSD | 4 servers, each with 16 vCPUs, 32GB RAM, and 250GB SSD |
| Growth (<100 hosts) | 4 servers, each with 16 vCPUs, 32GB RAM, and 250GB SSD | 5 servers, each with 16 vCPUs, 32GB RAM, and 250GB SSD |
| Enterprise (>100 hosts) | 5 servers, each with 16 vCPUs, 32GB RAM, and 250GB SSD; 1 additional server for every 100 hypervisors | 6 servers, each with 24 vCPUs, 32GB RAM, and 250GB SSD; 1 additional server for every 50 hypervisors |
The above recommendation is for a single Management Plane region. For every extra region deployed on the same management cluster, increase the capacity accordingly. We recommend a separate management cluster in each geographical location, to avoid performance degradation and a single point of failure.
Disk Partition Guidance
Private Cloud Director normally runs on a single root filesystem. If your environment requires dedicated partitions for compliance or security, size the directories that Private Cloud Director depends on: /var, /opt, and /etc. These directories hold logs, container data, PF9 components, and configuration files, so they must have enough space to support normal operations and upgrades.
Recommended Sizes for /var, /opt, and /etc
| Directory | Recommended | Minimum |
| --- | --- | --- |
| /var | 140GB | 80GB |
| /opt | 30GB | 10GB |
| /etc | 2GB | 1GB |
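If you do create dedicated partitions, a quick `df` check confirms they meet the sizes above. The sketch below (an illustration, not a Platform9 tool) warns when available space falls short of the recommended thresholds taken from the table.

```shell
#!/bin/sh
# Warn if /var, /opt, or /etc has less free space than the recommended size.
# Thresholds (in GB) come from the table above.
for entry in /var:140 /opt:30 /etc:2; do
  dir=${entry%%:*}
  want_gb=${entry##*:}
  # df -P prints POSIX-format output; column 4 is available space in 1K blocks
  avail_kb=$(df -P "$dir" | awk 'NR==2 {print $4}')
  avail_gb=$((avail_kb / 1024 / 1024))
  if [ "$avail_gb" -lt "$want_gb" ]; then
    echo "WARN: $dir has ${avail_gb}GB free; recommended ${want_gb}GB"
  else
    echo "OK: $dir has ${avail_gb}GB free"
  fi
done
```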
Server Configuration
Each physical server that runs as part of the management cluster must meet the following requirements:
Operating System: Ubuntu 22.04, Ubuntu 24.04
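A quick way to confirm a host runs a supported release is to read /etc/os-release and compare VERSION_ID against the supported versions; the following is a minimal sketch of such a check.

```shell
#!/bin/sh
# Check that this host runs a supported Ubuntu release (22.04 or 24.04).
if [ -r /etc/os-release ]; then
  . /etc/os-release
fi
case "${VERSION_ID:-unknown}" in
  22.04|24.04) echo "OK: Ubuntu $VERSION_ID is supported" ;;
  *) echo "WARN: ${NAME:-unknown} ${VERSION_ID:-unknown} is not a supported release" ;;
esac
```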
Swap config:
Make sure that each server has swap disabled. You can run the following command to do this.
```
swapoff -a
```
The above change will not survive a reboot; hence, it is recommended to update the /etc/fstab file and comment out the line that has the entry for the swap partition. For example:
```
UUID=aabbcc / ext4 errors=remount-ro 0 1
UUID=xxyyzz /home ext4 defaults 0 2
UUID=mswmsw /media/windows ntfs defaults 0 0
#/dev/sdb1 none swap sw 0 0   <--- comment out this line
```
IPv6 support:
Ensure the following sysctl setting is set to 0, so that IPv6 support is enabled on the server:
```
sysctl net.ipv6.conf.all.disable_ipv6
# If currently set to 1, change it to 0 as below:
echo net.ipv6.conf.all.disable_ipv6=0 >> /etc/sysctl.conf
sysctl -p
```
Passwordless Sudo:
Many operations require sudo access (for example, installing packages, Docker, and so on). Ensure that your server has passwordless sudo enabled.
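One common way to enable this is a drop-in file under /etc/sudoers.d/. The fragment below is an example only; "pcd-admin" is a placeholder for whatever account you will run the installer as.

```
# /etc/sudoers.d/pcd-admin  ("pcd-admin" is a placeholder; use your install user)
pcd-admin ALL=(ALL) NOPASSWD:ALL
```

Validate the file with `sudo visudo -cf /etc/sudoers.d/pcd-admin` before logging out, since a syntax error in a sudoers file can lock you out of sudo entirely.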
Kernel Panic Option
Set kernel.panic=10 so that the server reboots automatically 10 seconds after a kernel panic:
```
echo "kernel.panic=10" >> /etc/sysctl.conf && sysctl -p
```
SSH Keys:
We rely on SSH to log in to the management cluster hosts, install various components, and manage them. Generate SSH keys and sync them across all hosts of the management cluster. We recommend generating the key pair on one host and then adding the public key to the ~/.ssh/authorized_keys file on all other hosts. This enables every host in the management cluster to SSH into every other host.
```
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh-copy-id -i ~/.ssh/id_rsa.pub root@test-3
```
Package Updates:
Install cgroup-tools:
```
apt-get update -y && apt-get install cgroup-tools -y
```
Download and Update OpenSSL Version to 3.0.7 for Ubuntu 22.04:
```
export AGENT_KEY=<YOUR_USER_AGENT_KEY>
# Download the OpenSSL package
curl --user-agent "${AGENT_KEY}" https://pf9-airctl.s3-accelerate.amazonaws.com/openssl-smcp-ubuntu/openssl_3.0.7-1_amd64.deb --output /tmp/openssl_3.0.7-1_amd64.deb
# Verify the MD5 checksum
md5sum /tmp/openssl_3.0.7-1_amd64.deb | grep 706caf || { echo "MD5 checksum does not match, exiting." && exit 1; }
# Install the OpenSSL package
sudo dpkg -i /tmp/openssl_3.0.7-1_amd64.deb || { echo "Failed to install OpenSSL, exiting." && exit 1; }
echo "/usr/local/ssl/lib64" | sudo tee /etc/ld.so.conf.d/openssl-3.0.7.conf
sudo ldconfig -v
# Create a symbolic link to the OpenSSL binary
sudo ln -sf /usr/local/ssl/bin/openssl /usr/bin/openssl
# Verify the OpenSSL version
openssl version | grep 3.0.7 || { echo "OpenSSL version does not match, exiting." && exit 1; }
```
User Agent Key For Installation
You will need a specific Platform9 user agent key for the installation of your self-hosted management plane. Your Platform9 sales engineer will share the key with you prior to the install.
Networking
You will need two virtual IPs (VIPs) on the same L2 domain as the hosts in the management cluster.
VIP #1: This is the IP where you can access the Private Cloud Director management plane UI.
VIP #2: This is used to serve the management Kubernetes cluster's API server.
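Before the install, it is worth confirming that each candidate VIP is not already in use on the segment. The sketch below is one informal way to do that with ping; the addresses shown are placeholders, so substitute free IPs from your management network.

```shell
#!/bin/sh
# Sanity check: confirm each candidate VIP is not already answering on the network.
# The addresses below are placeholders; substitute candidate IPs from the
# management cluster's L2 segment.
for vip in 10.0.0.50 10.0.0.51; do
  if ping -c 2 -W 1 "$vip" > /dev/null 2>&1; then
    echo "WARN: $vip already responds to ping; pick a different address"
  else
    echo "OK: $vip appears unused"
  fi
done
```

A host can have ICMP disabled and still hold an address, so treat a clean result as a hint rather than proof; `arping` on the local segment is a stricter check where available.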
Storage
For a production setup of Self-Hosted Private Cloud Director, you will need Kubernetes Container Storage Interface (CSI) compatible storage for persisting the state of the management cluster. To learn more, see CSI and Kubernetes Storage.
The Terrakube component of PCD AppCatalog requires persistent storage with multiple access (RWX) for sharing among Terrakube pods. Ensure a compatible RWX storage solution (like NFS, CephFS, or any CSI-compliant RWX provider) is available and configured in the Kubernetes cluster where Terrakube runs.
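One way to confirm RWX support before the install is to bind a small test claim. The manifest below is a sketch; the storage class name rwx-sc is a placeholder for your RWX-capable class.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwx-smoke-test
spec:
  accessModes:
    - ReadWriteMany        # RWX, as required for sharing among Terrakube pods
  storageClassName: rwx-sc # placeholder: your RWX-capable storage class
  resources:
    requests:
      storage: 1Gi
```

Apply the claim with kubectl, check that it reaches the Bound state (for WaitForFirstConsumer classes it binds only once a pod mounts it), then delete it.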
Storage Class Customisation
You can use a custom storage provisioner to modify the storage class and disk size for each component according to your specific deployment needs. This section is optional; skip it if the default configuration meets your requirements.
Default Storage Classes
| Component | Default Storage Class | Notes |
| --- | --- | --- |
| RabbitMQ | pcd-sc | NFS type storage |
| MySQL | Cluster default (hostpath-csi) | |
| OVN OVSDB NB | pcd-sc | NFS type storage |
| OVN OVSDB SB | pcd-sc | NFS type storage |
| Prometheus | pcd-sc | NFS type storage |
| Terrakube | pcd-sc | Requires RWX access mode |
| Terrakube Redis | Cluster default (hostpath-csi) | |
| Grafana | Cluster default (hostpath-csi) | |
To use the custom storage class provisioner, follow these steps:
1. Place your custom storage class YAML files in the directory /opt/pf9/airctl/conf/ddu/storage/custom/. It is essential to include a storage class named pcd-sc along with any other YAML files in this custom path.
2. Use -p or --storage custom in the airctl configure command. This will automatically apply all the YAML files found in the custom path.
Example storage class YAML files:
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <STORAGE_CLASS_NAME>
mountOptions:
  - nfsvers=4.1
  - nolock
parameters:
  server: <NFS_SERVER_IP>
  share: <NFS_SHARE_PATH>
provisioner: nfs.csi.k8s.io
reclaimPolicy: Delete
volumeBindingMode: Immediate
```
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <STORAGE_CLASS_NAME>
parameters:
  storagePool: standard
provisioner: kubevirt.io.hostpath-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```
Override storage class and disk size for a specific component:
Edit the /opt/pf9/airctl/conf/options.json file to configure the storage-related parameters. This lets you customize the storage class and disk size for each component to meet your requirements.
Ensure that the storage class overrides specified in options.json are available in the custom storage path. If the storage class is not present, the deployment will fail.
```
# Values here are given for example and not actual values.
{
  "chart_url": "<chart_url>",
  "terrakube_sc": "efs-sc",
  "terrakuberedis_sc": "block-sc",
  "rabbitmq_sc": "block-sc",
  "mysql_sc": "block-sc",
  "ovn_ovsdb_nb_sc": "efs-sc",
  "ovn_ovsdb_sb_sc": "efs-sc",
  "grafana_sc": "block-sc",
  "prometheusopenstack_sc": "efs-sc",
  "terrakube_disk_size": "11Gi",
  "terrakuberedis_disk_size": "2048Mi",
  "rabbitmq_disk_size": "6144Mi",
  "mysql_disk_size": "10Gi",
  "ovn_ovsdb_nb_disk_size": "11Gi",
  "ovn_ovsdb_sb_disk_size": "12Gi",
  "grafana_disk_size": "9Gi",
  "prometheusopenstack_disk_size": "9Gi"
}
```
Note:
Disk size units can be specified as Mi, Gi, or Ti.
Decreasing disk size is not supported.
Changing storage class name is supported only during installation, not during upgrade.