January 2026 Release
The latest release of Private Cloud Director SaaS includes new features, usability improvements, and resolved issues to enhance product stability and performance. The self-hosted release will follow shortly.
New Features
L2 Networking
This release introduces optimized flow handling for L2-Only Virtual Interface Ports (VIFs). This feature is designed for workloads requiring pure Layer 2 transparency, such as appliances using External DHCP or environments requiring Static IP assignment without cloud-orchestrated IP management.
By bypassing traditional IP-based filtering, these ports function like standard physical switch ports, ensuring seamless communication for non-standard networking configurations.
Multiple GPU Card Passthrough Support
Added support for configuring larger PCI MMIO sizes for VMs that use GPUs such as the NVIDIA H200. PCD now supports setting the Q35 PCI hole size via the hw_pci_q35_size image property.
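As a quick illustration, the property can be attached to an image with the standard OpenStack client. The helper below only builds the command string; the image name and size value are illustrative, and the expected value format for hw_pci_q35_size should be confirmed in the PCD documentation:

```python
import shlex

def q35_pci_hole_cmd(image: str, size: str) -> str:
    """Build the `openstack image set` command that tags an image with the
    hw_pci_q35_size property named in the release notes. The image name and
    size value passed in are illustrative, not product defaults."""
    args = ["openstack", "image", "set",
            "--property", f"hw_pci_q35_size={size}", image]
    return " ".join(shlex.quote(a) for a in args)

# Example: prepare an Ubuntu 24.04 GPU image for a large-MMIO passthrough VM.
print(q35_pci_hole_cmd("ubuntu-24.04-gpu", "64G"))
```

Running the printed command against your cloud then applies the property to the image that GPU passthrough VMs boot from.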
Support for Audit Logs via UI and API
Enhanced audit logging for core API events from authentication, compute, networking, images, and storage now persists to the filesystem and is accessible via UI and API for compliance and troubleshooting workflows.
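Because the records persist to the filesystem and are also exposed via the API, they can be consumed programmatically for compliance workflows. A minimal sketch of filtering events by service, assuming a JSON-lines layout with illustrative field names (the actual schema should be taken from the audit-log API response):

```python
import json

# Hypothetical audit-log records, one JSON object per line. The field names
# (service, action, user) are assumptions for illustration only.
SAMPLE = """\
{"service": "compute", "action": "server.create", "user": "alice"}
{"service": "networking", "action": "port.delete", "user": "bob"}
{"service": "compute", "action": "server.delete", "user": "alice"}
"""

def filter_events(raw: str, service: str) -> list[dict]:
    """Return only the audit events emitted by one service."""
    return [e for line in raw.splitlines()
            if (e := json.loads(line))["service"] == service]

# Pull out the compute events for a troubleshooting pass.
print([e["action"] for e in filter_events(SAMPLE, "compute")])
```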
Cluster Add-On Management and MetalLB Upgrade Support
Added a new feature that shows Kubernetes cluster add-ons, including version information, and allows upgrading the MetalLB add-on.
MetalLB Automatic Upgrade Support Available
Added the ability to upgrade the Kubernetes MetalLB add-on from version 0.14.2 to 0.14.9 in the PCD UI.
Storage Pool Selection for Kubernetes Node VMs
Added a storage pool selection step to the Kubernetes cluster creation workflow for virtualized nodes in the UI.
Q35 Machine Type Compatibility with Ubuntu 24.04 for UEFI and GPU Passthrough Virtual Machines
Added support for the Q35 machine type with native Ubuntu 24.04 and the latest OVMF release to enable UEFI-based GPU passthrough virtual machines.
Enhancements
VM High Availability Improvements
The new VM High Availability status dashboard now displays prerequisite check status: Protected, Protection Degraded, Not Protected, or VMHA Not Enabled.
A new VM HA page displays the current evacuation status for any active host-offline events and includes the ability to retry failed VM evacuations.
VM HA event telemetry is now available in Grafana for advanced analytics.
UI-Based Support Bundle Generation
Support bundle generation is available directly from the UI, allowing selection of specific hosts or entire clusters for log collection and download without manual command execution.
Simplified Blueprint Creation Workflow
The Blueprint creation workflow has been reimagined for simplicity and ease of use. Volume type creation has been removed from the Blueprint creation workflow, streamlining the configuration process.
Enhanced Multi-Tenant Network Security
Tenant administrators and self-service users can view and use shared networks for VM creation and routing, but cannot modify or delete shared networks. Read-only users are restricted to viewing shared networks only.
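The resulting access model reduces to a small permission table. The sketch below is illustrative only; the role and action names are assumptions, not the product's actual RBAC implementation:

```python
# Illustrative shared-network permission table per the behavior described
# above: tenant admins and self-service users can view and use shared
# networks, read-only users can only view, and no tenant role can modify
# or delete a shared network.
PERMISSIONS = {
    "tenant_admin": {"view", "use"},
    "self_service": {"view", "use"},
    "read_only":    {"view"},
}

def allowed(role: str, action: str) -> bool:
    """True if `role` may perform `action` on a shared network."""
    return action in PERMISSIONS.get(role, set())

print(allowed("tenant_admin", "use"))     # attach a VM: permitted
print(allowed("self_service", "delete"))  # delete: denied for all tenant roles
```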
Improved Database Load Balancing for Percona Clusters
Percona HAProxy now distributes requests across database nodes, improving read performance and enabling read/write workload separation for high-concurrency database deployments.
Automatic vGPU Restoration after Host Reboot
SR-IOV configuration now persists automatically across host reboots, restoring vGPU functionality without manual intervention.
Improved VM Lease Policy Enforcement
VM-specific lease policies now take priority over tenant-level policies, allowing VMs to maintain custom lease durations that can exceed the tenant's lease settings.
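The precedence rule reduces to a simple fallback. A minimal sketch, assuming lease durations expressed in days (the unit is an assumption for illustration):

```python
from typing import Optional

def effective_lease(vm_lease: Optional[int], tenant_lease: Optional[int]) -> Optional[int]:
    """Sketch of the precedence rule above: a VM-specific lease wins over
    the tenant-level lease, even when it is longer. None means no lease
    is configured at that level."""
    return vm_lease if vm_lease is not None else tenant_lease

print(effective_lease(90, 30))    # VM lease exceeds the tenant lease and still applies
print(effective_lease(None, 30))  # no VM-level policy: falls back to the tenant policy
```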
Automatic CPU and Memory Maximums for Hotplug VMs
Simplified VM hotplug configuration by removing the need to manually set maximum CPU and memory values. Private Cloud Director now sets a high default for hot‑add VMs derived from host capacity.
UI Performance Improvements at Scale
The performance and responsiveness of the Private Cloud Director UI at scale have been improved significantly, enabling efficient management of large-scale environments.
Permission Validation for SSO Users During Kubernetes Role Assignments
Added a warning note when users attempt to assign Kubernetes RoleBindings on a workload cluster without sufficient permissions, improving clarity for SSO users.
Automatic GPU Operator Enablement
The NVIDIA GPU Operator is now automatically enabled when creating Kubernetes clusters with GPU-enabled flavors.
Enhanced Load Balancer Visibility
The "Expose LB" option is now displayed on the Cluster details page to make the load balancer configuration easier to access.
MetalLB Version Management
PCD now tracks and maintains a list of supported MetalLB versions for each release, ensuring compatibility.
Extended Node Status Information
Kubernetes node details now display additional labels such as SchedulingDisabled, providing better visibility into node states.
BYOH Cluster Creation Marked as Beta
Bring Your Own Host (BYOH) Kubernetes cluster creation is now labeled as a beta feature.
Streamlined Kubernetes Cluster Management
Removed Sveltos management-related Custom Resources from Kubernetes clusters, simplifying cluster configuration.
Component Versions
OVN is on version 24.03.2 for Ubuntu 22.04 and 24.03.6 for Ubuntu 24.04
OVS is on version 3.3.1 for Ubuntu 22.04 and 3.3.6 for Ubuntu 24.04
Upgrade Notes
Private Cloud Director now supports up to 400 nodes in a region.
During host upgrades, VMHA will be non-operational for the duration of the upgrade process.
When the control plane is upgraded but hosts are not, VM traffic may be impacted. Mixed-version environments may not function as expected due to the OVN upgrade that accompanies the Caracal release. When the ovn-controller package is reinstalled and the OVN controller process restarts, a brief VM traffic disruption, including transient packet drops, may be observed. Traffic recovers automatically once the service is back up and flows are reprogrammed. It is recommended to complete all host upgrades before validating workloads or testing network connectivity.
Storage metrics have been temporarily hidden from the following locations in the PCD UI until the underlying data accuracy issues are resolved:
Dashboard (Show All Tenants Info)
Cluster Host → Resource Utilization
Bug Fixes
Infrastructure Management
Tenant creation no longer fails with an HTTP 500 error in self-hosted PCD deployments.
Proxy configuration for hypervisor services is now handled automatically, eliminating the need to edit configuration files in proxied environments.
Error messages when removing the image library role from hosts using local storage are now accurate. The system correctly enforces a single image library host in deployments without shared storage.
Volume creation from images is now significantly faster following a fix to the Image-Volume cache.
Image save operations now handle interface configuration automatically; CLI users no longer need to manually switch the OS_INTERFACE environment variable.
VM High Availability now supports evacuating VMs from server groups configured with a hard affinity policy.
PCD User Interface
SSO users can now log in to PCD Grafana, enabling consistent authentication across the platform for customers using enterprise SSO integrations.
VM ownership transfer via the PCD UI now works for SSO-authenticated users.
Kubernetes on Private Cloud Director
Resolved an issue where a Kubernetes node continued to be shown as unhealthy even after the node was started.
Resolved an issue where a GPU-enabled Kubernetes cluster could not be re-created under the same name.
Resolved an issue that occurred when creating Kubernetes clusters within PCD tenants containing underscores in their names.
Resolved an issue where the cluster was briefly shown as 'Unhealthy' when creating a new Kubernetes cluster.
Resolved an issue where a Kubernetes node remained in the 'Deleting' state when scaling down a node group.
Resolved an issue in the MetalLB addon that was causing repeated Helm installations.
Resolved a UI issue that occurred when filtering Kubernetes flavors by vCPUs or memory.
Resolved an issue where an important secret was re-created and overwritten.
Resolved an issue where MetalLB ClusterAddons were not cleaned up during cluster deletion, preventing the re-creation of a Kubernetes cluster under the same name.
Resolved an issue where the Kubernetes cluster deletion button did not trigger cluster deletion.
Resolved an issue where merging CertSANs caused duplicate entries to be created in the TCP network profile.
Resolved an issue where the delete button for Kubernetes node groups and nodes did not work.
Resolved an issue where the Kubernetes cluster and node group status were not updated properly when node groups were deleted.
Resolved an issue where SSO users could not manage Kubernetes clusters via the UI.
Known Issues and Limitations
Saving a Blueprint with one or more private volume types fails.
Logging in to a non-default domain via SSO logs the user into the default domain instead.
When using VXLAN or Geneve segmentation, creating a VLAN-backed provider network fails if the chosen VLAN ID falls within the blueprint’s configured VNID/Tunnel ID range.
SSO configuration via ADFS fails when the ADFS server presents a self-signed TLS certificate, preventing the download of the federation metadata required to complete the configuration.
On Simple Network VMs, a managed network port assigned at creation receives an IP address from Networking, but the IP is not propagated to the VM's network interface. As a workaround, assign the IP address directly inside the VM.
Network configuration provided via cloud-init user data (netplan) is not applied on L2 VMs in Simple Network mode. The network_json metadata delivered to the VM is empty, preventing cloud-init from configuring the network interface on boot.
When a Fibre Channel or iSCSI storage backend is configured as the Image service store with Image Library HA enabled, image volumes may become stuck in the in-use or attaching state, preventing VM creation from cached images. As a workaround, use a single Image Library host if caching support is not available.
Suspending a VM that uses vGPU (MIG/mdev) slices is not supported. Attempting this operation causes the VM to enter an error state in the UI while it continues to run on the hypervisor. Avoid invoking the Suspend action on vGPU VMs.
DRR does not support vTPM-enabled VMs. Live migration of such VMs is not possible, so these VMs will not be migrated automatically.
If you are using NFS as the backend for block storage, set the image_volume_cache_enabled flag to false. If the flag is set to true, creating a VM from a cached image volume may lead to incorrect root disk sizing.
The pcdctl config set command is not supported for users with MFA enabled.
After upgrading PCD, PCI aliases in the compute API are added back as a Map instead of a List for GPU configurations.
GPU passthrough UEFI VM creation fails on Ubuntu 24.04 GPU hosts with the latest OVMF versions (2024.x/2025.x). Downgrading OVMF to a 2022.x version resolves this issue.
Image upload to encrypted volumes is currently unsupported. Volume encryption is currently only supported for empty data volumes.
If you have a network with a DNS domain assigned, and one of its subnets has DNS Publish Fixed IP enabled, then a port created on any subnet within that network will publish a DNS record, irrespective of the subnet's DNS Publish Fixed IP setting.
Windows 11 VMs with memory hotplugging display the HOTPLUG_MEMORY_MAX value as the actual attached memory before rebooting. For example, with HOTPLUG_MEMORY=4000 and HOTPLUG_MEMORY_MAX=10000, the effective memory appears to be approximately 10000 MB rather than the expected >4 GB. Rebooting the VM resolves this issue and displays a usable memory value of >4 GB.
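For the VLAN-backed provider network limitation above, a pre-flight check can catch conflicting IDs before attempting network creation. This is a minimal sketch; the VNID/Tunnel ID range shown is illustrative, and the actual range comes from your blueprint configuration:

```python
def vlan_conflicts(vlan_id: int, vnid_range: range) -> bool:
    """Pre-flight check for the limitation above: creating a VLAN-backed
    provider network fails when its VLAN ID falls inside the blueprint's
    configured VNID/Tunnel ID range."""
    return vlan_id in vnid_range

tunnel_ids = range(100, 200)            # hypothetical blueprint VNID range
print(vlan_conflicts(150, tunnel_ids))  # inside the range: creation would fail
print(vlan_conflicts(250, tunnel_ids))  # outside the range: safe to use
```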