Community Edition

Info

For a beginner-friendly guide to getting started with Private Cloud Director Community Edition, check out the Tutorials section: Beginner’s Guide to Deploying PCD Community Edition

What Is Community Edition?

Private Cloud Director Community Edition is a free, community-supported, full-featured version of Private Cloud Director. It is free forever and comes with Platform9 community support, currently available on Reddit at https://www.reddit.com/r/platform9/.

Community Edition delivers the same core functionality as the commercial version of Private Cloud Director; the only differences are in the deployment model:

  1. SaaS-Managed: In this model, the infrastructure region of Private Cloud Director is managed by Platform9, while the workload region is managed by the customer. This approach simplifies infrastructure management for customers, allowing them to focus on their workloads.

  2. Self-Hosted: Both the infrastructure and workload regions are managed by the customer. The infrastructure region requires multiple servers to ensure high availability, making it suitable for organizations with robust IT resources and expertise.

  3. Community Edition: Designed for simplicity, Community Edition installs both the infrastructure and workload regions on a single server. It can be deployed on bare metal or as a virtual machine, offering an accessible option for smaller-scale or experimental use cases.

Note: The 2025.4 release of Community Edition does not support Private Cloud Director - Kubernetes workloads. Additionally, Dynamic Resource Rebalancing (DRR) is not supported. Both of these features are planned for a future release.

Prerequisites

The Community Edition host and hypervisor hosts can be any combination of bare-metal or virtual machines. Community Edition 2025.4 has been tested with the Ubuntu 22.04 AMD64 cloud image. A full server distribution is not required, and a minimal distribution is not supported.

Hypervisor hosts deployed as virtual machines must have virtualization support available inside the VM. Virtual machines on ARM CPUs are currently untested.

If you want to verify that nested virtualization is working in a VM, you can check for virtualization support inside the VM by running:

grep -E "svm|vmx" /proc/cpuinfo

Community Edition management host requirements:

  • Minimum of Ubuntu 22.04 AMD64 cloud image

  • 16 CPUs suggested, minimum 12 CPUs (see note).

  • 32GB RAM required.

  • 100GB local storage required.

  • The Community Edition install script must be run as the root user.

Note: Running with the 12 vCPU minimum is attainable only by removing completed pods throughout the deployment process.
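Completed pods can be cleared with a standard kubectl field selector (a general Kubernetes idiom, not a CE-specific command):

```shell
# Delete pods that have run to completion (Succeeded phase) in all namespaces,
# freeing their CPU/memory requests when running at the 12 vCPU minimum.
kubectl delete pods --all-namespaces --field-selector=status.phase=Succeeded
```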

In order to create virtual machines, at least one hypervisor host must be available. Hypervisors will be authorized to run virtual machines managed by Community Edition.

In order to create virtual machines with persistent volumes, backend storage such as NFS must be available. The amount of storage necessary depends on the number and size of persistent volumes. Ephemeral VMs are stored locally on the hypervisor host.

Deploy Community Edition host

Private Cloud Director Community Edition (CE) is deployed using a script run as root.

Note: If you are not logged in as the root user, you can switch with sudo su - and then enter the password for the current user.

The install will show its progress, and when completed it will present the fully qualified domain name of the workload region and administrator credentials.

Tip

If you navigate away from the install output before copying the login credentials, you can view the credentials again by running the following command: airctl get-creds --config /opt/pf9/airctl/conf/airctl-config.yaml

Local DNS Entries

If DNS is not available, add DNS entries for the fully qualified domain names of the infrastructure & workload regions on a local machine, which will allow access to the Community Edition user interface using the domain names.

For Linux or Apple machines, add the DNS entries to the local hosts file (/etc/hosts).

Example:
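On Linux or macOS, adding entries of the following form to /etc/hosts accomplishes this. Here 192.0.2.10 is a placeholder for your CE host's IP address, and pcd.pf9.io is an assumed infrastructure-region FQDN; use the names printed at the end of the install.

```
192.0.2.10 pcd.pf9.io
192.0.2.10 pcd-community.pf9.io
```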

Log In To The User Interface

Navigate to pcd-community.pf9.io in a web browser from the machine where you added the manual DNS entries. Community Edition uses self-signed certificates, which must be accepted when logging in for the first time.

Keep default as the Domain, choose "Use local credentials" at the top right, and log in with the credentials provided when the Community Edition install completed. Note: SSO login isn't available until it has been configured post-installation. If you receive a 404 error while attempting an SSO login before it is configured, that is expected behavior.

At this point, you are ready to start using Private Cloud Director.

Private Cloud Director Community Edition local credentials login screen

Changing Administrator Email & Password

It is not recommended to change the administrator email & unique password provided at the end of the Community Edition install, as these are used internally between the infrastructure & workload regions. Instead, create a new user with administrator access.

Create Cluster Blueprint & Onboard Hypervisors

After Private Cloud Director Community Edition is installed, the next steps are to create a cluster blueprint, and then begin onboarding hypervisors.

  1. Follow the Virtualized Cluster Blueprint article for more information on what cluster blueprints are and how to create one.

  2. Once the cluster blueprint is created, onboard hypervisor hosts to the cluster and assign roles to them based on the blueprint you set up above.

  3. After a host is onboarded, create at least one Networks And Ports or Physical Network for virtual machine connectivity.

Create Virtual Machines

  1. Read Virtual Machines for steps to provision the first virtual machine on your cluster.

YouTube Playlist

Check out Platform9's YouTubearrow-up-right channel for a Private Cloud Director playlistarrow-up-right.

Log file locations & service names

Community Edition log files

Log files for hypervisor hosts can be found in the /var/log & /var/log/pf9 directories.

Logs for Community Edition can be found at /var/log/pf9/fluentbit/ddu.log. This file is structured as JSON and contains Kubernetes pod logs.
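Because each record is JSON, the file can be inspected with jq. The exact field names depend on the Fluent Bit configuration, so examine a raw line first:

```shell
# Pretty-print the most recent records from the CE log.
tail -n 5 /var/log/pf9/fluentbit/ddu.log | jq .
```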

Hypervisor host services

Platform9 hypervisor host service names begin with pf9.

Examples:

  • pf9-hostagent

  • pf9-imagelibrary

  • pf9-ostackhost

Troubleshooting

Issue: The CE install fails.

Answer: This is usually due to CPU or RAM constraints on the CE host or a curl failure; however, the following steps are generally helpful in determining why a failure occurred. The installation process first downloads all of the binaries, installs & creates the K3s cluster, and then orchestrates the installation of the pcd (infrastructure region) and pcd-community (workload region) namespaces. First, check the install logs, then the install pod logs in the pcd-kplane namespace.

  • Check the install logs at airctl-logs/airctl.log.

  • Run kubectl describe node. If resources are constrained, the Allocated resources block will show CPU and memory requests near or at 100%.

  • Run kubectl get pods -n pcd-kplane. If the node resources are maxed out, either the du-install-pcd-<unique ID> pod or the du-install-pcd-community-<unique ID> pod will likely be in a Running or error state. A Completed state means that the pod completed successfully.

  • Run kubectl logs du-install-pcd-<unique ID> -n pcd-kplane to view the logs of that pod.

    • If the curl command times out in this pod, the pod is likely encountering an issue with DNS resolution inside the pod and/or CE is being installed in the 192.168.0.0/16 range, per the known issue listed above.

  • Run kubectl logs du-install-pcd-community-<unique ID> -n pcd-kplane to view the logs of that pod.

Issue: The initial curl command does not work.

Answer: This could be related to network restrictions such as firewalls or a lack of internet access. Running curl with verbose mode enabled (-v) can help provide more information.

Recovering a failed installation

Situation: A Community Edition installation has failed, the issue has been rectified, and you'd like to restart the installation.

Resolution: You must first remove the CE installation, restart it, and then retrieve the administrator credentials, as they are uniquely generated with each installation. Note: This doesn't restart the Kubernetes cluster installation.

Remove:

Restart:

Retrieve administrator credentials:

airctl get-creds --config /opt/pf9/airctl/conf/airctl-config.yaml

After these steps have completed successfully, you may complete the rest of the deployment process starting with the "Local DNS Entries" section on this page. Note: This process only removes & restarts the CE installation, it does not re-install Kubernetes.

Validating an installation

You may validate an installation of Community Edition using the following command. Your deployment is healthy if both deployment details show a ready task state and the number of ready services matches the number of desired services.

An example of a healthy installation:
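As a rough cross-check (an assumption, not the documented airctl validation), you can also look for unhealthy pods in the two region namespaces:

```shell
# List pods in the infrastructure (pcd) and workload (pcd-community)
# namespaces that are neither Running nor Completed.
kubectl get pods -n pcd | grep -Ev 'Running|Completed'
kubectl get pods -n pcd-community | grep -Ev 'Running|Completed'
```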

Uninstalling Kubernetes

Private Cloud Director Community Edition installs K3s as part of the installation process. To uninstall K3s, run the bundled uninstall script as the root user (on a standard K3s server install it is located at /usr/local/bin/k3s-uninstall.sh):

/usr/local/bin/k3s-uninstall.sh

Expanding an Ubuntu logical volume

If you are using LVM (logical volume management) and find that the logical volume size is smaller than the physical volume size, you can expand the logical volume to the size of the physical volume.

First, find the filesystem path that the root of the filesystem (/) is mapped to with df -h. Then use the following example, substituting the correct path for your installation.

Example: If the logical volume is named ubuntu--vg-ubuntu--lv, first resize the logical volume:

sudo lvresize -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv

And then, resize the filesystem to match the logical volume:

sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
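After resizing, confirm that the root filesystem reflects the new capacity:

```shell
# Show the size, usage, and mount point of the root filesystem.
df -h /
```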

Known bugs

Problem: Persistent storage passwords aren't decrypted properly by the Cinder driver, causing issues on hosts with persistent storage configured. This will be fixed in the next release.

Workaround: On each hypervisor host, store the plain-text storage passwords in /opt/pf9/etc/pf9-cindervolume-base/conf.d/secret_mapping_override.conf and restart the pf9-cindervolume-base service with systemctl restart pf9-cindervolume-base. If the override file doesn't exist, copy secret_mapping.conf to secret_mapping_override.conf and replace any passwords for the persistent storage configuration with their plain-text equivalents.

Example secret_mapping_override.conf :

Set file permissions and ownership with the following:
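Since the exact ownership and mode are not specified, a safe approach is to copy them from the shipped secret_mapping.conf using the GNU coreutils --reference flags (paths as given in the workaround above):

```shell
# Make the override file's owner, group, and permissions match the original,
# so the pf9-cindervolume-base service can read it.
cd /opt/pf9/etc/pf9-cindervolume-base/conf.d
chown --reference=secret_mapping.conf secret_mapping_override.conf
chmod --reference=secret_mapping.conf secret_mapping_override.conf
```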
