# July 2025 Release

The latest release of Platform9 <code class="expression">space.vars.product\_name</code> includes new features, usability improvements, and resolved issues to enhance product stability and performance.

***

## Enhancements

#### Enhanced DNS Configuration Across Networking Workflows

This release introduces improved DNS visibility and configuration options across the UI:

* View **DNS Zone** information on **Network and Security > Physical Networks** and **Network and Security > Virtual Networks**.
* Add a DNS Domain and DNS Name when creating a **Public IP** from **Network and Security > Public IPs > Create Public IP**.
* Enable DNS publishing when creating or editing a physical network from **Network and Security > Physical Networks**.

These enhancements streamline DNS setup and integration for floating IPs and subnet configurations.
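For reference, the console workflow for floating IPs maps to the standard OpenStack CLI. The sketch below is illustrative only: the network name `public-net`, zone `example.org.`, and record name `web01` are placeholders, and the `--dns-domain` / `--dns-name` options require the Neutron and Designate DNS integration to be enabled.

{% tabs %}
{% tab title="Bash" %}

```bash
# Create a floating IP and publish its DNS record to the example.org. zone.
# "public-net", "example.org." and "web01" are placeholder values.
openstack floating ip create --dns-domain example.org. --dns-name web01 public-net
```

{% endtab %}
{% endtabs %}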

#### DNS Domain Association for Networks

The <code class="expression">space.vars.product\_acronym</code> console now supports associating DNS domains with networks during creation and through updates. You can now specify DNS zones (domains) to attach to networks directly through the console interface, completing the designate service integration. This enhancement brings the previously CLI-only functionality to the console, enabling full DNS domain management capabilities for network resources through the graphical interface.
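The same association can be made with the OpenStack CLI, which the console now mirrors. This is a hedged sketch with placeholder network and zone names; the `--dns-domain` option requires the Neutron DNS extension.

{% tabs %}
{% tab title="Bash" %}

```bash
# Attach a DNS zone at network creation time (placeholder names)...
openstack network create --dns-domain example.org. app-net

# ...or update an existing network.
openstack network set --dns-domain example.org. existing-net
```

{% endtab %}
{% endtabs %}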

#### DNS Name Configuration for Ports

The <code class="expression">space.vars.product\_acronym</code> console now supports DNS name configuration for network ports. You can specify a DNS name when creating a new port or update existing ports with DNS names. The system automatically creates corresponding DNS records in Designate using the specified DNS name.

This enhancement simplifies DNS management by allowing direct DNS name assignment through the console interface during port configuration workflows.

To use this feature, ensure your network has DNS publishing enabled (`dns-publish-fixed-ip` flag) and an assigned DNS zone.
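The prerequisite and the port workflow map to the following OpenStack CLI calls. This is an illustrative sketch: `app-subnet`, `app-net`, `web01`, and `web01-port` are placeholder names, and the options require the Neutron DNS integration with Designate.

{% tabs %}
{% tab title="Bash" %}

```bash
# Publish DNS records for fixed IPs on the subnet, then create a port
# with a DNS name; Designate creates the matching record in the zone.
openstack subnet set --dns-publish-fixed-ip app-subnet
openstack port create --dns-name web01 --network app-net web01-port
```

{% endtab %}
{% endtabs %}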

#### App Catalog Now Supported for On-Prem Deployments

Platform9 has enabled the App Catalog feature for on-premises environments. This update improves application lifecycle management for on-prem users.

#### Support for Retyping In-Use Volumes

You can now retype volumes that are currently attached or in use. When initiating a retype operation, the <code class="expression">space.vars.product\_acronym</code> console displays a warning that migration is driver-dependent and may fail depending on the source or destination volume driver.

To proceed, you must:

* Select a new volume type from those enabled on active storage backends.
* Acknowledge the risks by confirming the checkbox before initiating the retype.

These changes provide more flexibility while preserving workload safety during active volume transitions.
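The equivalent OpenStack CLI operation looks like the following hedged sketch; `fast-ssd` and `data-vol-01` are placeholder names, and `on-demand` permits migration between backends when the drivers require it.

{% tabs %}
{% tab title="Bash" %}

```bash
# Retype an attached volume to a new volume type, allowing a backend
# migration on demand (placeholder type and volume names).
openstack volume set --type fast-ssd --retype-policy on-demand data-vol-01
```

{% endtab %}
{% endtabs %}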

#### Enhanced Service Health Checks for Critical Operations

The console now checks the health of key services such as Compute, Storage, and Networking before allowing critical operations. If any of these services is unresponsive, a banner stating **Critical services are not responding** appears, helping you avoid failures during operations such as VM creation. This enhancement provides better visibility into control plane health before you initiate key workflows.

#### Improved Grafana Login Experience

Grafana now supports login using <code class="expression">space.vars.product\_name</code> credentials. On fresh installs, you can log in to Grafana using the same credentials as your <code class="expression">space.vars.product\_acronym</code> console by clicking on the **Sign in with PCD** button on the Grafana login screen. Alternatively, you can log in using Grafana's default admin credentials.

Please refer to the Upgrade Notes section for expected behavior and workarounds for older deployments upgraded to the July release.

#### `BETA` Enhanced Storage Support for Windows Clusters

Added support for shared storage that enables Windows Server clustering in OpenStack environments. This feature provides VMware-like capabilities, allowing multiple virtual machines to share the same storage devices safely.

***

## Upgrade Notes

#### Volumes Placed on Incorrect NFS Backend

This release fixed an issue where, if a host had multiple NFS backends, volumes could be placed on the incorrect backend. If you are upgrading from a previous release and have multiple NFS backends, specify a unique directory for each backend in the blueprint field `nfs_mount_point_base`; this is where that backend's volumes are created on the host. If you specify a directory outside `/opt/pf9/pf9-cindervolume-base`, pre-create it and set its owner to `pf9:pf9group`. Lastly, update `nfs_shares_config` in the blueprint so that it is also unique for each backend.
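If your chosen mount directories live outside `/opt/pf9/pf9-cindervolume-base`, the pre-creation step might look like the following; the paths are placeholder examples, not required locations.

{% tabs %}
{% tab title="Bash" %}

```bash
# Pre-create a unique mount directory per NFS backend (example paths)
# and set the owner required by the volume service.
mkdir -p /var/opt/pf9/nfs-backend-1 /var/opt/pf9/nfs-backend-2
chown pf9:pf9group /var/opt/pf9/nfs-backend-1 /var/opt/pf9/nfs-backend-2
```

{% endtab %}
{% endtabs %}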

#### Grafana <code class="expression">space.vars.product\_acronym</code> Authentication

When you upgrade from the June release to the July release, you may encounter a user sync error when logging into Grafana with <code class="expression">space.vars.product\_acronym</code> credentials. This error occurs when a local Grafana user account exists with the same username as your <code class="expression">space.vars.product\_acronym</code> user account.

To resolve the user sync error, perform these steps:

1. Log in to Grafana using your existing local credentials (for example, the default admin user).
2. Create a new admin user:
   * Navigate to the user management section.
   * Create a new user account.
   * Assign both the Grafana Admin role and the Organization Admin role to this user.
3. Delete conflicting users:
   * Identify local users whose usernames match your <code class="expression">space.vars.product\_acronym</code> usernames.
   * Delete these conflicting local user accounts from Grafana.
4. Test <code class="expression">space.vars.product\_acronym</code> login using your <code class="expression">space.vars.product\_acronym</code> credentials with the previously conflicting username.

**Optional: Configure Dashboard Permissions**

After you complete the upgrade, to grant users access to dashboards, manually add user permissions in Grafana:

1. Navigate to **Dashboards > Edit > Settings > Permissions.**
2. Add viewer or admin permissions for your SSO user account.
3. Repeat this process for each dashboard that requires access.

{% hint style="info" %}
**NOTE**

This manual step applies to upgrades from both April and June releases.
{% endhint %}

**Upgrade Behavior Summary**

* Pre-June to July release upgrades: You can continue using Grafana's default credentials (`admin/admin`) or switch to <code class="expression">space.vars.product\_acronym</code> login with your <code class="expression">space.vars.product\_acronym</code> credentials.
* July release deployments: Both authentication methods work without additional configuration.
* June to July release upgrades: Follow the resolution steps above if you encounter user sync errors.

***

## Bug Fixes

### Identity, Storage, and Networking Services

`Fixed` Resolved an issue where live migrations and hard reboots caused hotplug VMs to lose their configuration and behave as static VMs, breaking hotplug functionality.

`Fixed` Resolved an issue where live migration and DRR operations caused packet loss in VMs. Upgraded the networking backend to OVN version 23.0, which reduced packet drops to zero in most cases.

`Fixed` Resolved an issue where VM creation failed with `No valid host found` due to stale resource allocations in the compute scheduler. The compute service now avoids allocating resources during automated migration planning, preventing phantom allocations.

`Fixed` Improved the CPU model selection strategy on hypervisors to ensure the latest supported model is used. This fix prevents VMs from getting stuck in the `Booting from the disk` state due to unsupported CPU configurations.

`Fixed` Fixed an issue that resulted in volumes getting created in incorrect NFS backends when a host had two or more NFS backends created.

### <code class="expression">space.vars.product\_acronym</code> User Interface

`Fixed` Fixed an issue where assigning new metadata to a flavor removed all existing metadata from that flavor.

`Fixed` Fixed an issue where any metadata update during a flavor edit was applied automatically, without selecting **Assign Metadata**.

`Fixed` Fixed an issue where **Virtual Machines > Images** displayed only 25 images, causing existing images to disappear from view and blocking VM deployments via the console. The list now displays all onboarded images, including those beyond the first 25.

`Fixed` Fixed an issue on **Network and Security > Security Groups > Create Security Group** where, during **Inbound Security Group Rules** creation, **Custom ICMP Rule** did not retain input. Although **ICMP Type** and **ICMP Code** were mandatory, the values were not saved, and the rule was incorrectly allowed to persist with empty fields.

`Fixed` Improved error feedback and end-user messaging across various screens.

`Fixed` Disabled migration for VMs in suspended state to prevent unsupported operations that previously failed without an error.

`Fixed` Cluster selection, which was previously missing, is now mandatory when creating a VM from an existing volume or VM snapshot.

`Fixed` Added validation for volume backend name when configuring a cluster blueprint to allow only letters, digits, hyphens (-), and underscores (\_), and disallow spaces or multiple words.

`Fixed` Fixed an issue where hotplug compatible flavors were incorrectly listed during VM creation even when the **Hot-plug Compatible** option was not enabled.

`Fixed` Enabled metadata editing for `pf9-managed` host aggregates on **Infrastructure > Host Aggregates**.

### Self-Hosted <code class="expression">space.vars.product\_acronym</code>

`Fixed` Fixed an issue where support bundle generation failed for hypervisor hosts in on-prem environments. Support bundles are now collected successfully using the specified hypervisor IPs, provided password-less SSH access is configured from the Airctl host to the hypervisors.

Here is an example:

{% tabs %}
{% tab title="Bash" %}

```bash
airctl gen-support-bundle --host-ips <HOST-IP>
```

{% endtab %}
{% endtabs %}

### Miscellaneous

`Fixed` Resolved an issue where the GPU status was displayed as disabled for one of the GPU nodes in a multi-node cluster.

`Fixed` Fixed an issue where GPU Passthrough mode validation reported IOMMU as not configured even though the GPU host was successfully configured with IOMMU and onboarded to <code class="expression">space.vars.product\_acronym</code>.

`Fixed` Optimized the performance of the Virtual Machines page to be efficient at scale by adding pagination and enhancing backend query handling.

`Fixed` Fixed an issue where deleted regions left residual metadata in the service catalog, causing them to appear in the OpenStack CLI. Region deletion now ensures complete cleanup of associated entries.

`Fixed` Fixed an issue on **Virtual Machines > Virtual Machines > Deploy New VM** where hotplug compatible flavors did not require CPU and memory values when the **Hot-plug compatible** option was enabled, causing VM creation to fail. Now, CPU and memory fields are validated to ensure required values are set for hotplug compatible configurations.

`Fixed` Fixed an issue where cold migration temporarily showed a failed state before completing successfully.

`Fixed` Fixed an issue that caused the new image upload dialog to close automatically when a previously triggered image upload completed. This interrupted ongoing uploads and prevented users from uploading additional images.

## Known Limitations

* **Cold Migration Unsupported for Hotplug VMs**: Cold migration is not supported for hotplug enabled VMs. Attempting cold migration causes the VM to lose its hotplug configuration and revert to a static configuration.
* **GPU Passthrough Limitation for VM Creation:** When using GPU passthrough mode, only one GPU host configuration is allowed per region.
* **GPU VM Creation Fails with `No Valid Host Was Found` Error:** You may see the error **No valid host was found. There are not enough hosts available** when creating a VM using GPU passthrough flavors. This can occur if **SR-IOV** is not enabled for the GPU device. Verify that the GPU supports **SR-IOV** and enable it before configuring GPU passthrough.
* **Kubernetes Cluster Names Must Be Unique Across Regions:** Two clusters cannot share the same name across regions within the same tenant.
* **Tenant Name Restriction:** Spaces are not supported in tenant names. Use only alphanumeric characters, dashes, or underscores.

## Known Issues

* Re-enabling SSO after it has been disabled may fail with the error `Identity provider already exists`. This occurs due to a check that prevents the reuse of the same identity provider across domains.
* When you assign multiple storage backends to a host, then remove and re-add them, you may have to manually re-enable these backends.
  1. Find the backends to enable: `pcdctl volume service list`
  2. Re-enable the required backend(s): `pcdctl volume service set --enable <HOSTID>@<BACKENDNAME> cinder-volume`
* VM HA does not honor the host liveness traffic network interface configured in the cluster blueprint in this release.
* VM HA and DRR do not support vTPM-enabled VMs. Live migration and evacuation are not possible, so these VMs will not be migrated automatically.
* If you are using NFS as the backend for block storage, set the `image_volume_cache_enabled` flag to `false`. If the flag is set to `true`, creating a VM from a cached image volume may lead to incorrect root disk sizing.
* SSO users are unable to create Heat orchestration stacks at this time.
* The `pcdctl config set` command is not supported for users with MFA enabled.
* Image upload to encrypted volumes is currently unsupported. Volume encryption only works with empty volumes at this time.
* Currently, rescue mode is only supported for VMs with ephemeral storage. The rescue operation does not work for instances backed by volumes. Users attempting to rescue a volume-backed instance will encounter failures.
