April 2025 Release
This release of Platform9 Private Cloud Director comes with several feature updates, enhancements, and bug fixes.
New Features and Enhancements
Community Edition
This release comes with an updated version of Private Cloud Director: Community Edition. We've significantly updated and improved the community edition install experience — making it easy to install on a single Ubuntu machine. Give it a spin!
Multiple Cluster Support
You can now manage and operate multiple virtualized clusters from a single Private Cloud Director instance. This enables better isolation, scalability, and flexibility for large or multi-tenant environments. Each cluster can have a dedicated VM High Availability (VMHA) and Dynamic Resource Rebalancing (DRR) configuration, allowing you to scale and manage workloads more efficiently.
Blueprint Changes Propagated to Existing Hosts
Changes to the cluster blueprint for adding network interfaces or persistent storage backend configurations now propagate to existing hosts already part of a cluster.
Better VM Scheduling with Dynamic Resource Rebalancing (DRR)
DRR now uses enhanced logic and metadata filters to schedule VMs only on compatible hosts, improving placement reliability.
Upgraded Prometheus-Based Monitoring Stack
We've upgraded the Private Cloud Director monitoring component to use Prometheus, Alertmanager, and Grafana, providing better performance and real-time visibility into metrics across all key Private Cloud Director components and objects.
The PCD UI now provides access to the built-in Grafana UI, available on the PCD UI home screen. The default credentials to log in are admin/admin. You can change the password on your first login.
New pcdctl CLI
This release includes a brand-new command-line interface for Private Cloud Director. We have combined the Private Cloud Director cloud-ctl and OpenStack CLIs into a single CLI, pcdctl, so you can now run all Private Cloud Director commands from one tool.
Maintenance Mode for Hosts
You can now perform host maintenance by enabling maintenance mode via the UI. This migrates all running VMs off the host and marks it as unschedulable for future VM placements. You can then perform maintenance operations such as security patching and operating system upgrades.
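Since PCD is built on standard OpenStack compute APIs, a comparable flow is possible from the CLI. A sketch, assuming an admin-scoped OpenStack CLI pointed at your PCD region; the host name is hypothetical:

```shell
# Mark the host unschedulable for new VM placements (host name is hypothetical):
openstack compute service set --disable --disable-reason "planned maintenance" \
  host1.example.com nova-compute

# Live-migrate all running VMs off the host, letting the scheduler pick targets:
openstack server list --all-projects --host host1.example.com --status ACTIVE -f value -c ID \
  | xargs -I{} openstack server migrate --live-migration {}
```

Re-enabling the service with `openstack compute service set --enable` returns the host to the scheduling pool once maintenance is complete.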
Flavorless VM Creation
You can now create new VMs without requiring a flavor to specify the CPU and memory configuration. We've added a special dynamic flavor with 0 vCPUs and 0 RAM; if you choose it during VM creation, you can specify explicit CPU and memory values. This reduces the need to create a unique flavor for every new VM that users want to create.
Hot Add CPU and Memory
The support for dynamic flavor also allows you to add vCPU and RAM dynamically through the VM hot add option without requiring a power cycle of the VM. This feature simplifies scaling and customization for virtual machines.
Rescue Mode with Custom Images
Recover inaccessible VMs using your custom rescue image via the UI. This streamlines troubleshooting and speeds up recovery workflows.
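Under the hood this maps to the standard OpenStack rescue workflow, which remains available from the CLI. A sketch, assuming the OpenStack CLI and hypothetical image and server names:

```shell
# Boot the inaccessible VM from a custom rescue image (names are hypothetical):
openstack server rescue --image my-rescue-image web-vm-01

# After repairing the guest, boot it back from its original disk:
openstack server unrescue web-vm-01
```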
Load Balancer and DNS as-a-Service Management in UI
You can now view, configure, and manage Load Balancer and DNS-as-a-Service resources directly through the UI. This simplifies traffic routing and DNS zone management and improves overall service observability.
Service Health Page
A new Service Health page has been added to help administrators monitor the real-time status of core platform services, e.g., compute and image services. This view provides quick visibility into service availability across hosts, simplifying troubleshooting and improving operational awareness.
Improved Image Upload Experience
In this release, we have significantly updated our image upload experience. You can now upload images up to 10 GB in size directly from the UI, with clear progress indicators and estimated upload completion times.
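For scripted uploads, the standard OpenStack image CLI remains an alternative to the UI. A sketch, assuming a local qcow2 file; the file and image names are hypothetical:

```shell
# Upload a local qcow2 image to the Image Library (file and image names are hypothetical):
openstack image create \
  --disk-format qcow2 \
  --container-format bare \
  --file ./ubuntu-24.04.qcow2 \
  ubuntu-24.04
```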
High Availability for Image Library
Starting this release, you can enable the Image Library role on multiple hosts in a region. If the image storage is shared or uses a volume-type backend, the images on it can be accessed via any of the online Image Library hosts for VM or volume creation.
High Availability for Persistent Storage Service
Self-hosted Private Cloud Director now supports failover of the volume management service to other nodes in the cluster with storage roles configured. This feature requires a shared volume type, such as NFS, and that type must be applied to all storage nodes.
Automatic Backup for Self-Hosted Private Cloud Director
Self-hosted Private Cloud Director clusters now benefit from automated backups using S3-compatible storage, making disaster recovery faster and more reliable.
Combined User Experience for Virtualized and Kubernetes Clusters
We have now combined the user interface for Virtualized and Kubernetes clusters. You can manage your Virtualized and Kubernetes clusters using a single combined UI view.
Physical Node Cluster Support for Kubernetes
You can now create Kubernetes clusters using bare metal hosts in addition to VMs. This feature is currently only supported via CLI.
Upgrade Notes
With this release, which introduces Multiple Cluster Support, VM High Availability (VMHA) and Dynamic Resource Rebalancing (DRR) are now configured per cluster and can no longer be managed via the Cluster Blueprint.
After the upgrade:
Create one or more new clusters and explicitly re-enable VMHA and DRR at the individual cluster level as needed.
Existing hosts must be added to one of these clusters to continue leveraging VMHA and DRR features.
Improvements and Bug Fixes
Self-Hosted
Nodelet Service Starts on Scaled-Up Nodes: Nodes added via a scale-up operation now correctly start the pf9-nodeletd service, avoiding post-scale configuration issues.
Logrotate Restored on Scaled Management Clusters: logrotate jobs are now correctly applied on new nodes after scaling management clusters, preventing disk space issues.
Percona PXC Operator Upgraded with airctl: The airctl upgrade command now also upgrades the Percona PXC operator along with the database pods, ensuring improved stability.
MySQL 8 Support: MySQL 8 is now supported across SaaS and on-prem environments. It replaces MySQL 5.7, which has reached end-of-life; Percona remains the MySQL provider for on-prem deployments.
S3-Based Backup and Restore for On-Prem: On-premises clusters can now use S3-compatible storage for automatic backups and restores, making disaster recovery faster and more reliable.
Multi-Replica Deployment for Stateless Services: Stateless components, such as API services, can now be deployed with multiple replicas, improving availability during host failures or restarts.
Kubernetes 1.32 Support: Kubernetes 1.32 clusters are now available for both PCD virtualization-based clusters and Bring Your Own Host (BYOH) clusters.
UI
Custom MTU Configuration in UI: You can now set and manage MTU values per network directly in the UI, helping fine-tune performance for specific workloads.
Custom Image Support for Console and Rescue Mode: Users can launch the VM console or boot into rescue mode using their own image directly via the UI, making recovery and debugging easier.
Volume Rename and Transfer Support: You can now rename or transfer volumes directly to different tenants in the UI, improving storage flexibility and control.
Create and Manage User SSH Keys: Admins can now create and manage SSH key pairs for all Private Cloud Director users.
Flavor-Aware Host Filtering for Migrations: Migrations now automatically skip incompatible hosts based on flavor metadata, reducing failures during live VM moves.
Host De-authorization from UI: You can now deauthorize hosts directly from the UI’s Hosts tab, simplifying infrastructure cleanup and decommissioning.
Live Migration Feedback Improved: The UI now shows clear success or failure messages for VM live migrations, along with error details, if any.
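The volume rename and transfer operations above also map to standard OpenStack CLI calls. A sketch, assuming hypothetical volume names and that the recipient tenant runs the accept step:

```shell
# Rename a volume (names are hypothetical):
openstack volume set --name db-data-archive db-data

# Create a transfer request; note the transfer ID and auth key it returns:
openstack volume transfer request create db-data-archive

# In the recipient tenant, accept the transfer using those values:
openstack volume transfer request accept --auth-key <auth-key> <transfer-id>
```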
Identity, Storage, Networking Service
Use of JWT tokens for PCD Identity Service: PCD Identity Service now uses JWT tokens for a more efficient authorization process.
Custom Storage Provider Automation: On-prem installs can now automate storage backend setup, enabling easier integration with local storage solutions.
Simplified Port Security Disabling: Users no longer need to uncheck all selected security groups before disabling port security, improving the user experience.
Host Aggregate Allocation Ratio Support in UI: You can now define CPU and RAM allocation ratios per host aggregate in the UI, enabling better management of resource oversubscription.
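The allocation-ratio settings correspond to standard OpenStack host aggregate metadata. A sketch, assuming admin credentials and hypothetical aggregate and host names; the exact property names honored can depend on the scheduler filters configured in your deployment:

```shell
# Create an aggregate and set oversubscription ratios as metadata (names are hypothetical):
openstack aggregate create oversub-pool
openstack aggregate set \
  --property cpu_allocation_ratio=4.0 \
  --property ram_allocation_ratio=1.5 \
  oversub-pool

# Add a host to the aggregate so the ratios apply to it:
openstack aggregate add host oversub-pool host1.example.com
```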
Known Issues
PCD cluster names cannot currently contain the underscore character.
Starting with the February release, the availability zone name is set to the default cluster name. For existing hypervisors from a previous self-hosted install, disable VM HA in the blueprint first; then, after upgrading to this release, manually update the AZ name.
SSO users are unable to create Heat orchestration stacks at this time.
The cloud-ctl config set command is not supported for a user with MFA enabled.
VMs with vTPM may occasionally enter an error state after host reboots. You can work around this issue by performing a hard reboot of the VM.
If VM resize fails due to disk permission error, you can resolve this issue by performing a rescue and un-rescue operation on the VM.
Currently, rescue mode is only supported for VMs with ephemeral storage. The rescue operation does not work for instances backed by volumes. Users attempting to rescue a volume-backed instance will encounter failures.
VM HA and DRR do not support vTPM-enabled VMs. Live migration and evacuation are not possible, so these VMs will not be migrated automatically.
PCD metrics data is not automatically backed up in self-hosted PCD deployments. Administrators must manually copy metrics data from the pcd-sc persistent volume in a disaster recovery scenario.
The node running airctl for self-hosted mode does not support automated recovery, as some files on the node are needed for running the management plane.
Restoring the self-hosted management plane to a different management cluster whose management plane or cluster VIP differs from the original cluster fails when using the HostPath CSI driver.
A Kubernetes cluster with the same name as a recently deleted cluster may encounter TLS certificate validation errors when trying to access the API server.