Install

This guide outlines the steps for a self-hosted deployment of Private Cloud Director. Before installing, review the Pre-requisites section and ensure all requirements are met.

Concepts

Management Cluster

As part of the installation process, the Self-Hosted version of Private Cloud Director creates a Kubernetes cluster on the physical servers you deploy it on. We refer to this cluster as the management cluster. The Private Cloud Director management plane then runs as a set of Kubernetes pods and services on this management cluster.

Nodelet

Nodelet is a software agent installed and run on each management cluster node as a component of Self-hosted Private Cloud Director. The Nodelet agent is responsible for functions such as the installation and configuration of multiple Kubernetes services, including etcd, containerd, Docker, networking, and webhooks.

Infra Region vs Workload Regions

Read Tenant to understand the concepts of regions and infra region in Private Cloud Director.


Info

Any commands below that reference YOUR-USER-AGENT-KEY refer to the user agent key that you must request from Platform9, as specified in the Pre-requisites.

Download Installer

airctl is the command-line installer for Self-hosted Private Cloud Director. Run the following commands on only one of the management cluster hosts to download airctl along with the required installer artifacts.

Step 1: Download the Installer Script

Run the following command to download the installer script and required artifacts:

curl --user-agent "<YOUR-USER-AGENT-KEY>" https://pf9-airctl.s3-accelerate.amazonaws.com/latest/index.txt | awk '{print "curl -sS --user-agent \"<YOUR-USER-AGENT-KEY>\" \"https://pf9-airctl.s3-accelerate.amazonaws.com/latest/" $NF "\" -o ${HOME}/" $NF}' | bash

Step 2: Make the Installer Executable

Set execute permissions on the installer script.
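For example (the installer script filename varies by release; substitute the name of the script downloaded in Step 1):

```
chmod +x ~/<installer-script-name>
```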

Step 3: Run the Installation Script

Execute the installer, specifying the version number found in version.txt.

Step 4: Add airctl to System Path

Add airctl to the system path to use it globally:
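Since airctl lives under /opt/pf9/airctl (see the Important Files & Directories section), one way to do this is:

```shell
# Make airctl resolvable from any directory in the current shell; append this
# line to ~/.bashrc (or your shell's profile) to make it persistent.
export PATH="/opt/pf9/airctl:${PATH}"
```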

Configure airctl

Run the following command to generate a configuration file, which will be used to deploy the Self-hosted Private Cloud Director management cluster.
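A sketch of the invocation, assuming the subcommand is named create-config (the name is an assumption; check airctl's help output for the exact command in your release):

```
airctl create-config
```

The command prompts for the input parameters described below.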

You can choose between a single-master or multi-master management cluster, depending on your installation type (POC or production).

Following are the input parameters for the command:

  1. Space-separated list of master node IPs - Specify the IP addresses of the master nodes to use for the management cluster. We recommend a minimum of three master nodes for a production environment.

  2. Space-separated list of worker node IPs (optional) - Specify the IP addresses of any worker nodes to add to the management cluster. This parameter is optional. If left empty, the master nodes also serve as workers when deploying the Private Cloud Director management plane.

  3. VIP for management cluster - Specify the Virtual IP to be used for the management cluster. This will be used to serve the management Kubernetes cluster's API server.

  4. Master VIP Interface - Specify the name of the network interface to be used for the Virtual IP on the master nodes. Note that the default network interface on each master node must have this name.

  5. Master VIP Vrouter ID - This is optional. If unspecified, one is generated randomly and recorded in the updated cluster spec saved in the directory that holds the management server state (see the 'Important Files & Directories' section for the location). We recommend specifying one if you plan to deploy multiple Kubernetes clusters in the same VLAN, to avoid VRID collisions.

  6. Management Plane FQDN - Specify the base FQDN you would like to use for Private Cloud Director, for example pcd.mycompany.com.

  7. Regions - Specify one or more region names as a comma-separated list for regions you would like to create in your Private Cloud Director setup. The final FQDN for your Private Cloud Director deployment will use a combination of your base FQDN and your region name. For example, if your base FQDN is pcd.mycompany.com and you specified a single region name here as region1, the final FQDN for your deployment will be pcd-region1.mycompany.com.

  8. Storage Provider -

    1. Specify the CSI storage provider that should be used to store any persistent state for the management cluster. If not specified, the custom storage provider option is selected by default.

    2. If you choose custom as the storage provider, copy the provider-specific CSI YAML files to /opt/pf9/airctl/conf/ddu/storage/custom/ locally. airctl reads the YAML files at this path to configure the dependencies, and uses the supplied storage class to provision volumes for Private Cloud Director components. The StorageClass resource supplied here is renamed to pcd-sc and set as the default storage class on your management cluster.

    3. Alternatively, you can create the storage provider resources and storage class out of band before running airctl start on the management cluster. In this case, ensure the storage class is named pcd-sc; airctl then skips reading the YAML files at the custom path. We also recommend first creating a test pod with a persistent volume to verify that connectivity and configuration are correct.

    4. For non-production environments, or where a custom storage provider is not available or required, select hostpath-provisioner.

  9. VIP for management plane - Specify the VIP to be used for the management plane here.

  10. NFS Details -

    1. If using hostpath-provisioner for storing Gnocchi metrics data, NFS details must be provided.

    2. You should have an NFS server pre-configured before selecting this option.

    3. If using a custom storage provider, ensure the storage class (named pcd-sc) is pre-created before deployment.

Alternatively, you can pass the required configuration parameters directly on the command line.


Info

If you plan to deploy a single-master management cluster (not recommended for production), enter the same master node IP for both VIP for management cluster and VIP for management plane fields above.

This command generates two configuration file templates:

  • /opt/pf9/airctl/conf/nodelet-bootstrap-config.yaml – Contains the configuration required to bootstrap the management cluster.

  • /opt/pf9/airctl/conf/airctl-config.yaml – Contains the configuration for the management plane.

To avoid passing the configuration file as a command-line option when running airctl commands, copy /opt/pf9/airctl/conf/airctl-config.yaml to your $HOME directory.

Proxy Configuration (Optional)

If your environment uses a network proxy, set the required proxy values in the /opt/pf9/airctl/conf/helm_values/kplane.template.yml file.
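A sketch of the proxy settings (the key names and values below are assumptions; align them with the fields already present in kplane.template.yml):

```
# Illustrative only -- match key names to those in kplane.template.yml.
http_proxy: "http://proxy.example.com:3128"
https_proxy: "http://proxy.example.com:3128"
no_proxy: "localhost,127.0.0.1,10.0.0.0/8,.mycompany.com"
```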

Also, to ensure that containerd honors the proxy values and allows creation of the Private Cloud Director management cluster, update the proxy configuration on all management plane nodes.
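One common way to do this on systemd-based hosts is a drop-in file for the containerd service (the proxy addresses below are illustrative):

```
# /etc/systemd/system/containerd.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,10.0.0.0/8"
```

After creating the file, run sudo systemctl daemon-reload followed by sudo systemctl restart containerd on each node so the new environment takes effect.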

Deploy Management Cluster

Now, we are ready to create the management cluster that will host the Self-hosted Private Cloud Director management plane.

Step 1: Run Pre-Checks

Before creating the management cluster, run the following command to perform pre-checks and resolve any issues identified:

Step 2: Deploy the Kubernetes Cluster

Run the following command to deploy the Kubernetes cluster:

Step 3: Validate Cluster Health

Once the cluster is created, verify that it is functioning properly and that all nodes are healthy:
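For example, using kubectl against the management cluster (the kubeconfig path is a placeholder; use the kubeconfig generated for your management cluster):

```
kubectl --kubeconfig <path-to-management-cluster-kubeconfig> get nodes -o wide
```

All nodes should report a Ready status.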


Info

Please refer to /var/log/pf9/nodelet.log for troubleshooting any issues with the management cluster creation.


Install Management Plane

Now that the management cluster is created, run the following commands to install and configure the Private Cloud Director self-hosted management plane.
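With /opt/pf9/airctl/conf/airctl-config.yaml copied to your $HOME directory as described earlier, the deployment is started with airctl start (the same command referenced later in this guide for troubleshooting):

```
airctl start
```

If the configuration file lives elsewhere, pass its path via airctl's configuration-file option.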


Info

It may take up to 45 minutes for all services to be deployed.

Monitor Deployment Progress

You can track the progress of the management plane deployment by checking the logs of the du-install pod as the root user:
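A sketch using kubectl (the namespace and exact pod name vary by deployment; locate the pod first):

```
kubectl get pods -A | grep du-install
kubectl logs -n <namespace> -f <du-install-pod-name>
```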

Alternatively, to check specific pods:


Info

Please refer to ~/airctl-logs/airctl.log for logs in case of any issues with the airctl start command.

Check Management Plane & Region Status

To verify the status of the management plane and its regions, run:
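A sketch, assuming the subcommand is named status (the name is an assumption; check airctl's help output for the exact command):

```
airctl status
```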

Obtain UI Credentials

Once the installation is complete, you can retrieve the credentials for the Private Cloud Director UI by running the following command:
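A sketch, assuming the subcommand is named get-creds (the name is an assumption; check airctl's help output for the exact command):

```
airctl get-creds
```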


DNS Considerations

If you do not have a working internal DNS that resolves the management plane FQDN to its IP address, you must manually update the /etc/hosts file on your local machine and any new host you want to add to the management cluster.

Use the following command to check if the necessary entry exists:
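For example, an entry of the following form should be present (the IP address and FQDN are illustrative; use your management plane VIP and the region FQDN from the Regions step). A quick check is to grep /etc/hosts for your domain:

```
# /etc/hosts -- map the management plane VIP to the deployment FQDN
10.0.0.50   pcd-region1.mycompany.com
```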

After updating the hosts file, open your web browser and log into the UI using the provided credentials. Then, follow the steps in the Getting Started guide to configure your hosts and set up your Private Cloud Director environment.

Important Files & Directories

The following directories contain various log and state files related to the Private Cloud Director self-hosted deployment.

Directories:

  • /opt/pf9/airctl – Contains all binaries, offline installers, Docker image tar files, miscellaneous scripts, and configuration files for airctl.

  • /opt/pf9/pf9-kube – Managed by Nodelet. Stores binaries and scripts used to manage the management cluster.

  • /etc/nodelet – Contains configuration files for Nodelet and certificates generated by it.

  • ~/airctl-logs/ – Stores all logs related to the deployment.

Files:

  • /opt/pf9/airctl/conf/airctl-config.yaml – Contains configuration information for the management plane.

  • /opt/pf9/airctl/conf/nodelet-bootstrap-config.yaml – Stores configuration required to bootstrap the management cluster.

  • /var/log/pf9/nodelet.log – Log file for troubleshooting issues with management cluster creation.

  • ~/airctl-logs/airctl.log – Log file for troubleshooting issues with management plane creation.
