Pre-requisites

This document outlines the prerequisites for deploying the Self-Hosted version of Private Cloud Director.

The table below lists the recommended number of servers to allocate for running your Self-Hosted Private Cloud Director instance.

Management Cluster

As part of the installation process, the Self-Hosted version of Private Cloud Director creates a Kubernetes cluster on the physical servers you deploy it on. We refer to this cluster as the management cluster. The Private Cloud Director management plane then runs as a set of Kubernetes pods and services on this cluster.

Single-node deployments are currently not supported for Self-Hosted Private Cloud Director. The minimum supported configuration requires 3 servers to ensure high availability and proper operation of the Kubernetes management cluster. For development or testing purposes, contact Platform9 support for alternative deployment options.

The following is the recommended capacity for the management cluster, based on the projected scale of your Private Cloud Director deployment. These configurations assume production deployments with high availability requirements.

| Hypervisors You Plan to Use | Minimum Management Cluster Capacity | Recommended Management Cluster Capacity |
| --- | --- | --- |
| Small (<20 hosts) | 3 servers, each with 14 vCPUs, 28 GB RAM, and 250 GB SSD | 4 servers, each with 16 vCPUs, 32 GB RAM, and 250 GB SSD |
| Growth (<100 hosts) | 4 servers, each with 16 vCPUs, 32 GB RAM, and 250 GB SSD | 5 servers, each with 16 vCPUs, 32 GB RAM, and 250 GB SSD |
| Enterprise (>100 hosts) | 5 servers, each with 16 vCPUs, 32 GB RAM, and 250 GB SSD, plus 1 additional server for every 100 hypervisors | 6 servers, each with 24 vCPUs, 32 GB RAM, and 250 GB SSD, plus 1 additional server for every 50 hypervisors |

The above recommendation is for a single Management Plane region. For every additional region deployed on the same management cluster, increase capacity accordingly. We recommend a separate management cluster in each geographical location to avoid performance degradation and a single point of failure.

Server Configuration

Each physical server that you use as part of the management cluster must meet the following requirements:

Operating System: Ubuntu 22.04 or Ubuntu 24.04

Swap config:

Ensure that swap is disabled on each server. You can run the following command to do this.
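A minimal example (swapoff disables all active swap devices immediately):

```shell
# Disable all active swap devices (takes effect immediately, does not persist across reboots)
sudo swapoff -a
```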

The above change does not survive a reboot. To make it permanent, edit the /etc/fstab file and comment out the line containing the swap entry, e.g.
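A commented-out swap entry in /etc/fstab might look like this (the device path is illustrative; yours may be a partition or a UUID instead):

```
# /swap.img    none    swap    sw    0    0
```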

IPv6 support:

Ensure the sysctl setting below is set to 0, so that IPv6 support is enabled on the server.
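You can check and, if needed, set the value as follows (note that disable_ipv6 = 0 means IPv6 is enabled):

```shell
# Verify the current value; it should print 0
sysctl net.ipv6.conf.all.disable_ipv6

# Set it to 0 (enable IPv6) if it is currently 1
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=0
```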

Passwordless Sudo:

Many operations require sudo access (for example, installing packages, Docker, etc.). Ensure that your server has passwordless sudo enabled.
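One common way to enable passwordless sudo is a drop-in file under /etc/sudoers.d (the username pcduser below is a placeholder; substitute the account you will install with, and edit with visudo to validate syntax):

```
# /etc/sudoers.d/pcduser -- create with: sudo visudo -f /etc/sudoers.d/pcduser
pcduser ALL=(ALL) NOPASSWD:ALL
```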

Kernel Panic Option

Set kernel.panic=10 on each server, so that the server automatically reboots 10 seconds after a kernel panic instead of hanging.
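For example, to apply the setting immediately and persist it across reboots (the sysctl.d file name is a convention, not a requirement):

```shell
# Apply immediately
sudo sysctl -w kernel.panic=10

# Persist across reboots
echo "kernel.panic = 10" | sudo tee /etc/sysctl.d/99-kernel-panic.conf
```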

SSH Keys:

  • We rely on SSH to log in to the management cluster hosts and to install and manage various components.

  • Please generate ssh keys and sync them across all hosts of the management cluster.

  • We recommend generating the key pair on one host and then adding the public key to all other hosts in their ~/.ssh/authorized_keys file. This will enable every host in the management cluster to ssh into every other host.
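The steps above can be sketched as follows (host1, host2, and host3 are placeholder hostnames; substitute your management cluster hosts):

```shell
# Generate a key pair on one host (no passphrase, for unattended automation)
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519

# Copy the public key into ~/.ssh/authorized_keys on every other host
for h in host1 host2 host3; do
  ssh-copy-id -i ~/.ssh/id_ed25519.pub "$h"
done
```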

Package Updates:

  • Install cgroup-tools:
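On Ubuntu, cgroup-tools is available from the standard repositories:

```shell
sudo apt-get update
sudo apt-get install -y cgroup-tools
```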

  • Download and update OpenSSL to version 3.0.7 for Ubuntu 22.04:
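Ubuntu 22.04 ships an older OpenSSL 3.0.x; one common approach is to build 3.0.7 from source. This is a sketch only: the download URL and install prefix below are typical assumptions and may differ for your environment.

```shell
# Download and unpack the OpenSSL 3.0.7 source
wget https://www.openssl.org/source/openssl-3.0.7.tar.gz
tar -xzf openssl-3.0.7.tar.gz
cd openssl-3.0.7

# Build and install (the prefix is an assumption; adjust as needed)
./Configure --prefix=/usr/local/ssl
make -j"$(nproc)"
sudo make install

# Confirm the installed version
/usr/local/ssl/bin/openssl version
```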

User Agent Key for Installation

You will need a specific Platform9 user agent key for the installation of your self-hosted management plane. Your Platform9 sales engineer will share the key with you prior to the install.

Networking

You will need two virtual IPs (VIPs) on the same L2 domain as the hosts in the management cluster.

  • VIP #1: This is the IP where you can access the Private Cloud Director management plane UI.

  • VIP #2: This is used to serve the management Kubernetes cluster's API server.

Storage

For a production setup of Self-Hosted Private Cloud Director, you will need Kubernetes Container Storage Interface (CSI) compatible storage for persisting the state of the management cluster. To learn more, see CSI and Kubernetes Storage.

The Terrakube component of PCD AppCatalog requires a persistent storage class that supports ReadWriteMany (RWX) access mode. This ensures that multiple Terrakube pods can share the same storage for consistent state management and job execution. An RWX-capable storage backend (such as NFS, CephFS, or any CSI-compliant RWX provider) must be available and configured in the target Kubernetes cluster where Terrakube runs.
