August 2025 Release
The latest release of Platform9 Private Cloud Director includes new features, usability improvements, and resolved issues to enhance product stability and performance.
New Features
Networking Service Upgrade to 2024.1 (Caracal)
Upgraded the core components of the networking service to the 2024.1 (Caracal) release.
Metadata over IPv6 is now supported in the OVN driver.
Storage Service Upgrade to 2025.1 (Epoxy)
Upgraded the core components of the storage service to the 2025.1 (Epoxy) release.
New features and bug fixes for supported drivers, including NetApp, HPE 3PAR, HPE Alletra, and Pure Storage.
Improvements to multipath setup and management.
Tintri Storage Driver Support
Added the Tintri driver as a storage option, eliminating manual deployment workflows. The UI now provides Tintri backend configuration through the standard persistent-storage controls, streamlining enterprise storage deployment for production workloads.
Ubuntu 24.04 Support
Ubuntu 24.04 is now supported for PCD hypervisor hosts. Support for self-hosted management nodes is coming soon.
Enhancements
Streamlined GPU Configuration with Automated vGPU Host Setup
GPU configuration is now simpler: the vGPU host setup process that previously required manual script execution is automated. The system now handles vGPU profile configuration automatically through the UI and backend processes, eliminating the need for the separate vGPU host configure option in the GPU helper script. This enhancement reduces configuration complexity and the potential for user error while accelerating GPU deployment workflows.
Support for Multiple GPU Models
Platform9 supports multiple GPU models per region while maintaining a single GPU model per host configuration across virtualized and Kubernetes workloads.
Enhanced Server Group Management with Direct VM Assignment Controls
In addition to the existing VM management controls, PCD now supports Add Server Group and Remove Server Group actions on existing VMs, available from the VM list and details pages. These actions are supported only for VMs in the Active and Stopped states.
Supported Operations:
Add: You can add a single VM at a time to a server group, provided it satisfies the existing affinity or anti-affinity policy rules.
Remove: You can remove one or more VMs from their respective server groups.
Soft Affinity Policy Support for Server Groups
Server groups now support both soft affinity and soft anti-affinity policies, in addition to existing hard constraints. UI and API enable the explicit selection of soft or hard policies during server group creation.
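With the standard OpenStack client, for example, a soft policy can be selected at creation time (the group name below is a placeholder):

```shell
# Create a server group with a soft (best-effort) affinity policy
openstack server group create --policy soft-affinity <group-name>
```

Soft policies let the scheduler fall back to other hosts when the preferred placement cannot be satisfied, instead of failing the request as a hard policy would.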
Server Group Quota Management Controls
You can now configure server group and member quotas through Manage Quotas > Compute Quotas.
Support for Retype Operation on In-Use Volumes
Volume retype supports in-use and attached volumes with driver validation and user confirmation. PCD provides compatibility warnings and backend support guidance through enhanced UI controls.
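With the standard OpenStack client, for example, an in-use volume retype can be requested as follows (the volume ID and type are placeholders):

```shell
# Retype an attached volume, allowing migration between backends if needed
openstack volume set --type <new-volume-type> --retype-policy on-demand <volume-id>
```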
Allow Metrics Scraping from an External Source
You can now scrape host Prometheus metrics from an external source at host-IP:9388.
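An external Prometheus server could scrape these hosts with a job like the following sketch (the job name and target IPs are illustrative):

```yaml
scrape_configs:
  - job_name: pcd-hosts        # illustrative job name
    static_configs:
      - targets:
          - 10.0.0.11:9388     # replace with your host IPs
          - 10.0.0.12:9388
```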
Cold Migration and Resize for vTPM-Enabled VMs
You can now cold migrate or resize vTPM-enabled VMs. Live migration for vTPM-enabled VMs is coming soon.
Kubernetes 1.33 Platform Support
Platform9 supports Kubernetes 1.33 across all managed clusters starting with this release. Enhanced security controls and performance optimizations ship with the latest container orchestration capabilities.
OCI Registry Support for Airgapped Helm Deployments
You now have support for OCI registries as Helm repositories, enabling airgapped deployments. Please note that this applies only to Kubernetes with Private Cloud Director, not the virtualization features.
Upgrade Notes
The August 2025 release includes improvements to how cluster hosts are managed. Make sure the following configuration is applied to any hosts that may be missing it.
OS Upgrade from Ubuntu 22.04 to 24.04
For all hosts with a hypervisor role, after upgrading to Ubuntu 24.04, copy the AppArmor configuration from the old location to the new location as shown below.
cp /etc/apparmor.d/abstractions/libvirt-qemu.dpkg-dist /etc/apparmor.d/abstractions/libvirt-qemu
systemctl restart libvirtd.service qemu-kvm.service
Bug Fixes
Infrastructure Management
Fixed Resolved hypervisor role authorization errors caused by a greenlet module version discrepancy.
Fixed Grafana password reset functionality. Users can now reset their password using the email address registered with Platform9.
Fixed "Identity provider already exists" error when re-enabling the enterprise SSO feature.
Self-Hosted PCD
Fixed Added pre-deployment NTP checks for self-hosted PCD.
Compute and Image Services
Fixed Improved hotplug VM memory allocation transparency. Memory reporting now accounts for hotplug reservation overhead, eliminating discrepancies between configured and usable memory.
Fixed Lease configuration now correctly validates expiry dates, ensuring proper VM termination only for VMs with expiration dates in the future.
Fixed An issue where vTPM VMs would end up in an error state after a host reboot due to auto-start. vTPM VMs now remain in the shutoff state after a host reboot until powered on by the user.
Storage Service
Fixed VM cloning workflow on the UI now correctly uses the original volume size, eliminating unexpected storage allocation errors.
Networking Service
Added support for jumbo frames in PCD. Networks created via the API or CLI now have the following default MTUs if an MTU is not specified:
8950 for VXLAN
8942 for GENEVE
9000 for VLAN and flat networks
Networks created with the PCD UI still use the existing defaults if an MTU is not specified:
1450 for VXLAN
1440 for GENEVE
1500 for VLAN and flat networks
This change gives users the flexibility to set MTUs larger than 1500 through either the CLI or the UI if needed.
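The API/CLI defaulting above can be sketched as a simple lookup (an illustrative snippet, not PCD's actual code):

```shell
# Illustrative mapping of the new default MTUs for API/CLI-created networks
default_mtu() {
  case "$1" in
    vxlan) echo 8950 ;;
    geneve) echo 8942 ;;
    vlan|flat) echo 9000 ;;
    *) echo "unknown network type: $1" >&2; return 1 ;;
  esac
}
```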
Kubernetes on Private Cloud Director
Fixed Resolved SSO login failures when accessing Kubernetes clusters, eliminating the "Internal Error" messages some users encountered.
Fixed Subnet assignment in multi-subnet networks during cluster provisioning. Clusters now deploy successfully when multiple subnets are available, eliminating the need for arbitrary subnet selection that previously caused provisioning failures.
Fixed Tenant isolation issue where clusters appeared under the incorrect tenant. Cluster visibility now filters to the selected tenant, ensuring accurate segregation of multi-tenant resources.
Fixed UI crashes in Kubernetes > Access Control > Roles when creating roles with "core" API group selection. Role creation workflow now executes successfully, ensuring reliable RBAC configuration.
Fixed MIG partition configuration persistence during Kubernetes cluster upgrades. GPU nodes now maintain MIG partitioning settings when upgrading from 1.31 to 1.32, preventing automatic reversion to passthrough mode. This ensures consistent GPU resource allocation across cluster upgrade operations.
Fixed An issue where worker nodes in Infrastructure > Clusters incorrectly displayed a Running machine status even when the underlying virtual machines had stopped or failed. Machine status indicators now accurately reflect the true state of worker node infrastructure, reducing troubleshooting time when monitoring cluster health.
Fixed The Delete button for Kubernetes clusters remained stuck in the Deleting state. The Delete button now disables when a cluster is marked for deletion, preventing redundant operations.
Fixed Rebalancing frequency now shows minute units on the Edit Cluster workflow on the UI.
Fixed Upgrade button disables for active upgrades, preventing multiple operations through improved status tracking.
Fixed BYOH functionality now works correctly after resolving missing tenant labels in deployment configurations.
Fixed Cluster dropdown filter now functions correctly across Config Maps Details, Secrets Details, and Custom Resource Definitions pages. Prevents page breaks when switching between clusters in these pages.
Fixed Byohctl now defaults to 'service' when the parameter is omitted. The required designation has been removed, simplifying usage.
Fixed Byocluster listings now display Ready status column with standard cluster metadata.
Miscellaneous
Fixed Inconsistent custom theme application. Theme colors now apply consistently across all regions.
Known Limitations
VM Migration During Ubuntu Host Upgrades: VM migration is only supported from hosts running Ubuntu 22.04 to Ubuntu 24.04, but not vice versa. To ensure a successful upgrade, you must disable VMHA and DRR features and drain each host before proceeding with the Ubuntu version upgrade.
Rescue mode is only supported for VMs with ephemeral storage. The rescue operation does not work for instances backed by volumes. Users attempting to rescue a volume-backed instance will encounter failures.
Override configuration is not supported for the Image Library host(s). Any manual changes made to the configuration file will not persist across upgrades.
Mors Pod Stability with Leases: When leases are implemented, the mors pod may become unresponsive, preventing timely execution of operations such as VM delete/stop. As a workaround, the mors pod may need to be restarted manually to restore responsiveness. A permanent fix is planned for a future release.
Known Issues
GPU Passthrough is currently supported on host kernel versions up to 6.5.
For vGPU support, please refer to the GPU documentation for more information on the issues below:
If a GPU PCI device is already bound to a driver/module, it needs to be unbound to enable vGPU on the same PCI.
If a GPU host running vGPU VMs is rebooted, vGPU VMs aren't recovered automatically.
When you assign multiple storage backends to a host and remove and add them back again, you may have to manually re-enable these backends.
Find the backends to re-enable:
pcdctl volume service list
Re-enable the required backend(s):
pcdctl volume service set --enable <HOSTID>@<BACKENDNAME> cinder-volume
VM HA and DRR does not support vTPM-enabled VMs. Live migration and evacuation are not possible, so these VMs will not be migrated automatically.
VM HA and DRR does not work for hot-plug VMs when using zero-disk flavors. Live migration and evacuation are not possible, so these VMs will not be migrated automatically.
VM HA does not support evacuation of VMs belonging to server groups with hard affinity policy, so these VMs will not be evacuated automatically in case of host failure.
If a host with persistent storage role assigned goes down and VMs running on that host are also being served their block storage volumes from the same host, there is a known race condition which may result in evacuation of those VMs failing. To avoid this, we recommend assigning block storage role to hosts that are not assigned hypervisor role. This issue is being fixed in the December release of Private Cloud Director.
If you are using NFS as the backend for block storage, set the image_volume_cache_enabled flag to false. If the flag is set to true, creating a VM from a cached image volume may lead to incorrect root disk sizing.
The pcdctl config set command is not supported for a user with MFA enabled.
Image upload to encrypted volumes is currently unsupported. Volume encryption only works with empty volumes at this time.
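For NFS backends, the flag can be set in the backend section of the block storage configuration, as in this sketch (the section name [nfs-1] is illustrative):

```ini
[nfs-1]
# Disable the image-volume cache to avoid incorrect root disk sizing
image_volume_cache_enabled = false
```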
SSO users cannot log in to PCD Grafana.
If you have a network with a DNS domain assigned, and one of its subnets has DNS Publish Fixed IP enabled, then a port created on any subnet within that network will publish a DNS record, irrespective of the subnet's DNS Publish Fixed IP setting.