January 2026 Release
The latest release of Private Cloud Director SaaS includes new features, usability improvements, and resolved issues to enhance product stability and performance. The self-hosted release will follow shortly.
New Features
L2 Networking
This release introduces optimized flow handling for L2-Only Virtual Interface Ports (VIFs). This feature is designed for workloads requiring pure Layer 2 transparency, such as appliances using External DHCP or environments requiring Static IP assignment without cloud-orchestrated IP management.
By bypassing traditional IP-based filtering, these ports function like standard physical switch ports, ensuring seamless communication for non-standard networking configurations.
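The release notes do not show the exact creation workflow for these ports. As an illustration only, the closest standard OpenStack pattern is to create a Neutron port with port security disabled, which gives the port this kind of Layer 2 transparency; whether PCD exposes L2-only VIFs through this flag or a dedicated setting is not stated here, and the network, port, image, and server names below are hypothetical.

    # hypothetical example: a port that behaves like a plain switch port
    openstack port create --network appliance-net \
        --disable-port-security --no-security-group l2-only-port
    # attach it to a VM that relies on external DHCP or static addressing
    openstack server create --flavor m1.large --image appliance-image \
        --port l2-only-port dhcp-appliance-vm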
Multiple GPU Card Passthrough Support
Added support for configuring larger PCI MMIO sizes for VMs using GPUs such as the H200. PCD now supports setting the Q35 PCI hole size via the hw_pci_q35_size image property.
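Image properties such as this one are typically applied with the standard OpenStack client. The property name comes from this release; the image name and size value below are placeholders, so check the PCD documentation for the supported values and units.

    # hypothetical value and image name; hw_pci_q35_size is the property added in this release
    openstack image set --property hw_pci_q35_size=2048 h200-guest-image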
Support for Audit Logs via UI and API
Enhanced audit logging for core API events from authentication, compute, networking, images, and storage now persists to the filesystem and is accessible via UI and API for compliance and troubleshooting workflows.
Cluster Add-On Management and MetalLB Upgrade Support
Added a new feature that shows Kubernetes cluster add-ons, including version information, and allows upgrading the MetalLB add-on.
MetalLB Automatic Upgrade Support Available
Added the ability to upgrade the Kubernetes MetalLB add-on from version 0.14.2 to 0.14.9 from the PCD UI.
Storage Pool Selection for Kubernetes Node VMs
Added a UI step to select the storage pool when creating Kubernetes clusters with virtualised nodes.
Enhancements
VM High Availability Improvements
The new VM High Availability status dashboard now displays prerequisite check status: Protected, Protection Degraded, Not Protected, or VMHA Not Enabled.
A new VM HA UI is now available that displays the current evacuation status for any active host-offline events and allows retrying failed VM evacuations.
VM HA event telemetry is now available in Grafana for advanced analytics.
UI-Based Support Bundle Generation
Support bundle generation is available directly from the UI, allowing selection of specific hosts or entire clusters for log collection and download without manual command execution.
Simplified Blueprint Creation Workflow
The Blueprint creation workflow has been reimagined for simplicity and ease of use. Volume type creation has been removed from the Blueprint creation workflow, streamlining the configuration process.
Enhanced Multi-Tenant Network Security
Tenant administrators and self-service users can view and use shared networks for VM creation and routing, but cannot modify or delete shared networks. Read-only users are restricted to viewing shared networks only.
Improved Database Load Balancing for Percona Clusters
Percona HAProxy now distributes requests across database nodes, improving read performance and enabling read/write workload separation for high-concurrency database deployments.
Automatic vGPU Restoration after Host Reboot
SR-IOV configuration now persists automatically across host reboots, restoring vGPU functionality without manual intervention.
Improved VM Lease Policy Enforcement
VM-specific lease policies now take priority over tenant-level policies, allowing VMs to maintain custom lease durations that can exceed tenant lease settings.
Permission Validation for SSO Users During Kubernetes Role Assignments
Added a warning note when users attempt to assign Kubernetes RoleBindings on a workload cluster without sufficient permissions, improving clarity for SSO users.
Automatic GPU Operator Enablement
The Nvidia GPU Operator is now automatically enabled when creating Kubernetes clusters with GPU-enabled flavors.
Enhanced Load Balancer Visibility
The "Expose LB" option is now displayed on the Cluster details page for easier access to the load balancer configuration.
MetalLB Version Management
PCD now tracks and maintains a list of supported MetalLB versions for each release, ensuring compatibility.
Extended Node Status Information
Kubernetes node details now display additional labels such as SchedulingDisabled, providing better visibility into node states.
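SchedulingDisabled is the same status that kubectl reports for a cordoned node, so the new detail mirrors what you would see on the command line; the node name below is hypothetical.

    kubectl cordon worker-node-1      # node now reports Ready,SchedulingDisabled
    kubectl get nodes                 # the STATUS column shows the SchedulingDisabled state
    kubectl uncordon worker-node-1    # clears the state and resumes scheduling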
BYOH Cluster Creation Marked as Beta
Bring Your Own Host (BYOH) Kubernetes cluster creation is now labeled as a beta feature.
Streamlined Kubernetes Cluster Management
Removed Sveltos management-related Custom Resources from Kubernetes clusters, simplifying cluster configuration.
Upgrade Notes
VM High Availability (VMHA) is non-operational for the duration of host upgrades.
Bug Fixes
Infrastructure Management
Resolved an issue where tenant creation failed with an HTTP 500 error in self-hosted PCD deployments.
Fixed proxy configuration issues for hypervisor services, eliminating the need for manual configuration file edits in proxied environments.
Fixed misleading error messages when removing the image library role from hosts using local storage. The system now correctly enforces a single image library host in deployments without shared storage.
Fixed an issue with Image-Volume cache, dramatically improving the performance of creating volumes from images.
The image save operation now handles interface configuration automatically, eliminating manual OS_INTERFACE environment variable switching for CLI users (see the example after this list).
VM ownership transfer from the PCD UI now supports SSO users.
VM HA now supports evacuating VMs from server groups with a hard affinity policy.
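For context, the manual workaround that is no longer needed looked roughly like the following, assuming the standard OpenStack CLI; the image and file names are placeholders.

    export OS_INTERFACE=internal            # previously switched by hand before saving
    openstack image save --file ./my-image.qcow2 my-image
    unset OS_INTERFACE                      # then switched back for other commands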
PCD User Interface
Resolved an issue where SSO users were unable to log in to PCD Grafana, enabling consistent authentication across the platform for customers using enterprise SSO integration.
Kubernetes on Private Cloud Director
Resolved an issue where a Kubernetes node continued to be shown as unhealthy even after the node was started.
Resolved an issue where a GPU-enabled Kubernetes cluster could not be re-created under the same name.
Resolved an issue that occurred when creating Kubernetes clusters within PCD tenants containing underscores in their names.
Resolved an issue where the cluster was briefly shown as 'Unhealthy' when creating a new Kubernetes cluster.
Resolved an issue where a Kubernetes node remained in the 'Deleting' state when scaling down a node group.
Resolved an issue in the MetalLB addon that was causing repeated Helm installations.
Resolved a UI issue that occurred when filtering Kubernetes flavors by vCPUs or memory.
Resolved an issue where an important secret was re-created and overwritten.
Resolved an issue where MetalLB ClusterAddons were not cleaned up during cluster deletion, preventing the re-creation of a Kubernetes cluster under the same name.
Resolved an issue where the Kubernetes cluster deletion button did not trigger cluster deletion.
Resolved an issue where merging CertSANs caused duplicate entries to be created in the TCP network profile.
Resolved an issue where the delete button for Kubernetes node groups and nodes did not work.
Resolved an issue where the Kubernetes cluster and node group status were not updated properly when node groups were deleted.
Known Issues and Limitations
DRR does not support vTPM-enabled VMs. Live migration of such VMs is not possible, so these VMs will not be migrated automatically.
If you are using NFS as the backend for block storage, set the image_volume_cache_enabled flag to false (see the configuration sketch at the end of this section). If the flag is set to true, creating a VM from a cached image volume may lead to incorrect root disk sizing.
The pcdctl config set command is not supported for users with MFA enabled.
After upgrading PCD, PCI aliases in the compute API are added back as a Map instead of a List for GPU configurations.
Image upload to encrypted volumes is currently unsupported. Volume encryption is currently only supported for empty data volumes.
If you have a network with a DNS domain assigned, and one of its subnets has DNS Publish Fixed IP enabled, then a port created on any subnet within that network will publish a DNS record, irrespective of the subnet's DNS Publish Fixed IP setting.
Some on-premises users experience errors when creating multiple snapshots concurrently due to database read conflicts.
Windows 11 VMs with memory hotplugging display the HOTPLUG_MEMORY_MAX value as the actual attached memory before rebooting. For example, with HOTPLUG_MEMORY=4000 and HOTPLUG_MEMORY_MAX=10000, the effective memory appears to be approximately 10000 MB rather than the expected >4 GB. Rebooting the VM resolves this issue and displays a usable memory value >4 GB.
Application credentials creation is not functional for SSO users. SSO users are unable to create Kubernetes clusters, but they can access the Kubernetes dashboard in the UI, add or edit node groups based on their assigned permissions, and download a kubeconfig file to access the workload cluster with kubectl.
When the control plane is upgraded but hosts are not, VM traffic may be impacted. Mixed-version environments may not function as expected due to the upgrade from OVN to Caracal. When the ovn-controller package is reinstalled and the OVN controller process restarts, a brief VM traffic disruption, including transient packet drops, may be observed. Traffic recovers automatically once the service is back up and flows are reprogrammed. It is recommended to complete all host upgrades before validating workloads or testing network connectivity.
Storage metrics have been temporarily hidden from the following locations in the PCD UI until the underlying data accuracy issues are resolved:
Dashboard (Show All Tenants Info)
Cluster Host → Resource Utilisation
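As a minimal sketch of the cache setting mentioned in the NFS item above: the flag is a per-backend Cinder option, the backend section name below is a placeholder, and how the setting is surfaced in PCD may differ.

    [nfs-backend]
    # disable the image-volume cache to avoid incorrect root disk sizing on NFS
    image_volume_cache_enabled = false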