April 2025 Release

This release of Platform9 Private Cloud Director comes with several feature updates, enhancements, and bug fixes.

New Features & Enhancements

Community Edition

This release comes with an updated version of Community Edition. We've significantly improved the Community Edition install experience, making it easy to install on a single Ubuntu machine. Give it a spin!

Multiple Cluster Support

You can now manage and operate multiple virtualized clusters from a single Private Cloud Director instance. This enables better isolation, scalability, and flexibility for large or multi-tenant environments. Each cluster can have a dedicated VM High Availability (VMHA) and Dynamic Resource Rebalancing (DRR) configuration, allowing you to scale and manage workloads more efficiently.

Blueprint Changes Propagated to Existing Hosts

Changes made to a cluster blueprint, such as adding a network interface or modifying the persistent storage backend configuration, are now propagated to hosts that are already part of the cluster.

Better VM Scheduling with Dynamic Resource Rebalancing (DRR)

DRR now uses enhanced logic and metadata filters to schedule VMs only on compatible hosts, improving placement reliability.

Upgraded Prometheus-Based Monitoring Stack

We've upgraded the Private Cloud Director monitoring component to use Prometheus, Alertmanager, and Grafana, providing better performance and real-time visibility into metrics across all key Private Cloud Director components and objects.

The PCD UI now provides access to the built-in Grafana UI, available from the PCD UI home screen. The default login credentials are admin/admin; you can change the password on first login.

New pcdctl CLI

This release comes with a brand-new command line interface for Private Cloud Director. We have combined the Private Cloud Director cloud-ctl and the OpenStack CLI into a single CLI, pcdctl, so you can now run all Private Cloud Director commands using a single tool.

Maintenance Mode for Hosts

You can now perform host maintenance by enabling maintenance mode via the UI. This capability migrates all currently running VMs from the host to other hosts and marks the host as unschedulable for future VM placements. You can then perform host maintenance operations such as security patching and operating system upgrades.
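Because Private Cloud Director is built on OpenStack, the underlying operations are roughly equivalent to the following OpenStack CLI commands, which the UI performs for you (the host and VM names are placeholders):

```shell
# Mark the compute host as unschedulable for new VM placements
openstack compute service set --disable --disable-reason "planned maintenance" \
    host-01 nova-compute

# Live-migrate a running VM off the host (the scheduler picks the target host)
openstack server migrate --live-migration my-vm

# After maintenance is complete, re-enable scheduling on the host
openstack compute service set --enable host-01 nova-compute
```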

Flavorless VM Creation

You can now create new VMs without requiring a flavor to specify the CPU and memory configuration. We've added a special dynamic flavor that is set to 0 vCPUs and 0 RAM; if you choose that flavor, you can specify explicit CPU and memory values at VM creation time. This feature reduces the need to create a unique flavor for every new VM.

Hot Add CPU and Memory

The support for the dynamic flavor also allows you to add vCPUs and RAM dynamically through the VM hot add option, without power-cycling the VM. This feature simplifies scaling and customization for virtual machines.

Rescue Mode with Custom Images

Recover inaccessible VMs using your custom rescue image via the UI. This streamlines troubleshooting and speeds up recovery workflows.
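If you prefer the command line, the equivalent OpenStack CLI operations look like the following (the image and server names are placeholders):

```shell
# Boot the VM into rescue mode from a custom rescue image
openstack server rescue --image my-rescue-image broken-vm

# After repairs are complete, return the VM to normal operation
openstack server unrescue broken-vm
```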

Load Balancer and DNS-as-a-Service Management in the UI

You can now view, configure, and manage Load Balancer and DNS-as-a-Service resources directly through the UI. This simplifies traffic routing and DNS zone management and improves overall service observability.

Service Health Page

A new Service Health page has been added to help administrators monitor the real-time status of core platform services, e.g., compute and image services. This view provides quick visibility into service availability across hosts, simplifying troubleshooting and improving operational awareness.

Improved Image Upload Experience

In this release, we have significantly updated the image upload experience. You can now upload images up to 10 GB in size directly from the UI, with clear progress indicators and estimated completion times.
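Images can also be uploaded with the standard OpenStack CLI, for example (the file and image names are placeholders):

```shell
# Upload a qcow2 disk image to the Image Library from the command line
openstack image create --disk-format qcow2 --container-format bare \
    --file ubuntu-24.04.qcow2 ubuntu-24.04
```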

High Availability for Image Library

Starting with this release, you can enable the Image Library role on multiple hosts in a region. If the image storage is shared or backed by a volume type, the images on it can be accessed via any online Image Library host for VM or volume creation.

High Availability for Persistent Storage Service

Self-hosted Private Cloud Director can now fail over the volume management service to other nodes in the cluster that have storage roles configured. This feature requires a shared volume type, such as NFS, applied to all storage nodes.

Automatic Backup for Self-Hosted Private Cloud Director

Self-hosted Private Cloud Director clusters now benefit from automated backups using S3-compatible storage, making disaster recovery faster and more reliable.

Combined User Experience for Virtualized & Kubernetes Clusters

We have combined the user interfaces for Virtualized and Kubernetes clusters, so you can manage both cluster types from a single UI view.

Physical Node Cluster Support for Kubernetes

You can now create Kubernetes clusters using bare metal hosts in addition to VMs. This feature is currently only supported via CLI.

Upgrade Notes

With the upgrade to this release that introduces Multiple Cluster Support, VM High Availability (VMHA) and Dynamic Resource Rebalancing (DRR) are now configured per cluster and can no longer be managed via the Cluster Blueprint.

Before upgrading:

  • If VMHA or DRR is currently enabled in the Cluster Blueprint, we recommend disabling it before upgrading.

After the upgrade:

  • Create new clusters and explicitly re-enable VMHA and DRR at the individual cluster level as needed.

  • Existing hosts must be added to one of these clusters to continue leveraging VMHA and DRR features.

Improvements & Bug Fixes

Self-Hosted

  • Nodelet Service Starts on Scaled-Up Nodes: Nodes added via scale-up operation now correctly start the pf9-nodeletd service, avoiding post-scale configuration issues.

  • Logrotate Restored on Scaled Management Clusters: logrotate jobs are now correctly applied on new nodes after scaling management clusters, preventing disk space issues.

  • Percona PXC Operator Upgraded with airctl: The airctl upgrade command now also upgrades the Percona PXC operator along with the database pods, ensuring improved stability.

  • MySQL 8 Support: MySQL 8 is now supported across SaaS and on-prem environments. It replaces MySQL 5.7, which has reached end-of-life; Percona continues to be the MySQL provider for on-prem deployments.

  • S3-Based Backup and Restore for On-Prem: On-premises clusters can now use S3-compatible storage for automatic backups and restores, making disaster recovery faster and more reliable.

  • Multi-Replica Deployment for Stateless Services: Stateless components like API services can now be deployed with multiple replicas, improving availability during host failures or restarts.

  • Kubernetes 1.32 Support: Kubernetes 1.32 clusters are now available for both PCD virtualization-based clusters and Bring Your Own Host (BYOH) clusters.

UI

  • Custom MTU Configuration in UI: You can now set and manage MTU values per network directly through the UI, which helps fine-tune performance for specific workloads.

  • Custom Image Support for Console and Rescue Mode: Users can launch the VM console or boot into rescue mode using their own image directly via the UI, making recovery and debugging easier.

  • Volume Rename and Transfer Support: You can now rename volumes or transfer them to different tenants directly in the UI, improving storage flexibility and control.

  • Create and Manage User SSH Keys: Admins can now create and manage SSH key pairs for all Private Cloud Director users.

  • Flavor-Aware Host Filtering for Migrations: Migrations now automatically skip incompatible hosts based on flavor metadata, reducing failures during live VM moves.

  • Host De-authorization from UI: You can now deauthorize hosts directly from the UI’s Hosts tab, simplifying infrastructure cleanup and decommissioning.

  • Live Migration Feedback Improved: The UI now shows clear success or failure messages for VM live migrations, along with error details, if any.
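For the custom MTU setting mentioned above, the equivalent OpenStack CLI commands look like the following (the network name and MTU value are examples):

```shell
# Create a network with a jumbo-frame MTU
openstack network create --mtu 9000 storage-net

# Or adjust the MTU of an existing network
openstack network set --mtu 9000 storage-net
```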

Identity, Storage, Networking Service

  • JWT Tokens for the PCD Identity Service: The PCD Identity Service now uses JWT tokens for a more efficient authorization process.

  • Custom Storage Provider Automation: On-prem installs can now automate storage backend setup, enabling easier integration with local storage solutions.

  • Simplified Port Security Disabling: Users no longer need to uncheck all selected security groups before disabling port security, improving the user experience.

  • Host Aggregate Allocation Ratio Support in UI: You can now define CPU and RAM allocation ratios per host aggregate in the UI, enabling better resource oversubscription management.
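The allocation ratios set in the UI correspond to standard OpenStack host-aggregate properties; with the OpenStack CLI, the equivalent would be (the aggregate name and ratio values are examples):

```shell
# Allow 4x CPU and 1.5x RAM oversubscription for hosts in this aggregate
openstack aggregate set --property cpu_allocation_ratio=4.0 \
    --property ram_allocation_ratio=1.5 my-aggregate
```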

Known Issues

  • PCD cluster names currently cannot contain the underscore character.

  • Starting with the February release, the availability zone name is changed to the default cluster name. For existing hypervisors from a previous self-hosted install, disable VM HA in the blueprint first; then, after upgrading to this release, perform an additional step to update the AZ name manually.

  • SSO users are unable to create Heat orchestration stacks at this time.

  • The cloud-ctl config set command is not supported for users with MFA enabled.

  • VMs with vTPM may occasionally enter an error state after host reboots. You can work around this issue by performing a hard reboot of the VM.

  • If a VM resize fails due to a disk permission error, you can resolve it by performing a rescue and un-rescue operation on the VM.

  • Currently, rescue mode is only supported for VMs with ephemeral storage. The rescue operation does not work for instances backed by volumes. Users attempting to rescue a volume-backed instance will encounter failures.

  • VM HA and DRR do not support vTPM-enabled VMs. Live migration and evacuation are not possible, so these VMs will not be migrated automatically.

  • PCD metrics data is not automatically backed up in self-hosted PCD deployments. Administrators must manually copy metrics data from the pcd-sc persistent volume in a disaster recovery scenario.

  • The node running airctl in self-hosted mode does not support automated recovery, as some files on the node are needed to run the management plane.

  • Restoring the self-hosted management plane to a different management cluster whose management plane or cluster VIP differs from the original cluster fails when using the HostPath CSI driver.

  • A Kubernetes cluster with the same name as a recently deleted cluster may hit TLS certification validation errors while trying to access the API server.
