Upgrade Management Plane and Hosts

The upgrade of the self-hosted Private Cloud Director consists of two sequential phases:

  • Management Plane: Upgrading the Management Plane updates core services, management APIs, region-specific configurations, and orchestration components.

  • Hosts: Once the management plane is upgraded, hosts in each region must be upgraded to ensure compatibility, seamless communication, and completion of the overall upgrade process.

Before you begin the upgrade, ensure you meet all prerequisites.

Prerequisites for Upgrade

NOTE

Review the Upgrade Notes for the June 2025 Release.

Manual Backup

You must perform a Manual Backup Procedure of your management plane infrastructure before starting the upgrade. This backup is critical for recovery.

Upgrade Management Plane and Cluster

Upgrade your Private Cloud Director environment to access the latest features, security patches, and performance improvements through this two-step process.

Upgrade Management Plane

Updating the management plane running on the management cluster automatically upgrades multiple components as part of the airctl upgrade process. The following components are affected.

  • Core services and applications.

  • Management APIs and user interfaces.

  • Region-specific configurations and services.

  • Service coordination and orchestration components.

Impact of the Management Plane Upgrade

Temporary service interruptions may occur during the upgrade process.

  • The user interface will be unavailable for the duration of the upgrade.

  • All API services will be temporarily unavailable.

  • Create, Read, Update, and Delete operations will not be possible during this time.

  • Running workloads on VMs will not be impacted.

Step 1: Download and Run Installer Artifacts

airctl is the command-line installer for Private Cloud Director. Run the following commands only on a verified management cluster node to download airctl along with the required installation artifacts.

  1. Download the Installer Script and Artifacts using the following command.

You can choose to download a specific version of the installer script and artifact. For example, you can replace latest with a specific version, such as v-2025.6.0-3931504. Here is a sample modified command:

NOTE

Replace <YOUR_USER_AGENT_KEY> with your user key.
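As a hedged sketch only: the endpoint below is a placeholder, not the real artifact URL, so use the download URL provided for your deployment.

```shell
# Placeholder endpoint: substitute the artifact URL provided for your deployment.
curl -H "User-Agent: <YOUR_USER_AGENT_KEY>" -O "https://<artifact-host>/pcd/latest/install-pcd.sh"

# Pin a specific build by replacing "latest" with a version tag:
curl -H "User-Agent: <YOUR_USER_AGENT_KEY>" -O "https://<artifact-host>/pcd/v-2025.6.0-3931504/install-pcd.sh"
```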

  2. Make the Installer Executable using the following command.
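Assuming the script name from the download step (install-pcd.sh is illustrative):

```shell
chmod +x ./install-pcd.sh
```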

  3. Run the Installation Script using the following command.
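A sketch of this step, assuming the same illustrative script name:

```shell
sudo ./install-pcd.sh   # installs the airctl version pinned in version.txt
```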

This installs the specific version of airctl based on the version.txt file.

NOTE

It is recommended to run airctl status and confirm that the number of ready services equals the number of desired services before you begin the upgrade.
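For example (airctl status is the health check referenced above; the exact output format varies by release):

```shell
airctl status
```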

  4. Add a symbolic link for airctl to the system path so that it can be invoked without specifying the full path. Also, create a symbolic link to the airctl-config.yaml file in the user's home directory so that the configuration file does not need to be specified on every airctl invocation.
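A minimal sketch, assuming the installer placed airctl under /opt/pf9/airctl (adjust both paths to your installation):

```shell
# Paths are assumptions; adjust to where the installer placed airctl.
sudo ln -sf /opt/pf9/airctl/airctl /usr/local/bin/airctl
ln -sf /opt/pf9/airctl/conf/airctl-config.yaml "$HOME/airctl-config.yaml"
```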

Proxy Configuration (Optional)

If your environment uses a network proxy, reapply the proxy settings in /opt/pf9/airctl/conf/helm_values/kplane.template.yml after installing the new build, because these changes are lost when a new build is installed.

The no_proxy list of IP addresses should include the master-ips, worker-ips, external-ip4, and master-vip4 addresses, along with any other addresses whose traffic should not be routed through the proxy server.
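An illustrative fragment, assuming conventional proxy key names; keep the keys that already exist in your kplane.template.yml and extend no_proxy with your cluster addresses:

```yaml
# Key names and addresses are illustrative only.
http_proxy: "http://proxy.example.com:3128"
https_proxy: "http://proxy.example.com:3128"
no_proxy: "localhost,127.0.0.1,<master-ips>,<worker-ips>,<external-ip4>,<master-vip4>"
```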

Step 2: Upgrade the Management Cluster

To upgrade the management cluster to Kubernetes version 1.30, run the following command:
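A hedged sketch; the subcommand shown is an assumption based on the airctl upgrade process described earlier, so confirm the exact name with airctl --help for your release:

```shell
airctl upgrade --verbose   # subcommand and flags assumed; verify with: airctl --help
```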

This step ensures that the core management components of your Private Cloud Director deployment are updated to Kubernetes version 1.30.

Ensure this step is completed before proceeding with any region-specific tenant upgrades.

Step 3: Upgrade All Regions

To upgrade all regions set up on your Private Cloud Director, execute the following command.

Optionally, you can upgrade a specific region by replacing <region-name> with your target region.

Here is the modified sample command.

To monitor and diagnose the upgrade logs, add the --verbose flag.

Here is the sample command.
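The three variants above can be sketched as follows; all subcommand and flag names are assumptions to verify with airctl --help:

```shell
airctl upgrade-regions                                   # upgrade all regions
airctl upgrade-regions --region <region-name>            # upgrade a single region
airctl upgrade-regions --region <region-name> --verbose  # stream detailed logs
```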

Verify: Upgrade Success on Management Plane

After completing the management plane upgrade, verify that you can still access the user interface, and then confirm that the upgrade succeeded by checking the deployment status across all regions.

Run the following command:
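A hedged sketch (airctl status is the status check referenced earlier; per-region output varies by release):

```shell
airctl status   # confirm deployment status across all regions
```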

Here is the sample output.

After a successful upgrade and verification, Private Cloud Director does not support rollback to a previous version.

Upgrade the Hosts

After the management plane is successfully upgraded, the hosts in each of your regions must be updated. The agents running on each host manage communication between the host and the management plane.

Impact of the Host Upgrade

  • No service unavailability is expected during the host upgrade.

  • Workloads running on VMs will continue to operate without disruption.

Step 1: Record Current Version of Packages on Host

Before you begin the upgrade, execute the following command on each onboarded host to record the current package versions:
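A runnable sketch for Debian/Ubuntu and RHEL-family hosts; the pf9 package prefix and the output path are assumptions, so adjust them to your environment:

```shell
# Record current Platform9 (pf9) package versions; the file path is an example.
{ dpkg -l 2>/dev/null || rpm -qa 2>/dev/null; } | grep -i pf9 \
    > /tmp/pf9-packages-before.txt || true   # grep exits 1 if none are found
cat /tmp/pf9-packages-before.txt
```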

Step 2: Upgrade Hosts

To upgrade the hosts in a specific region, execute the following commands on the management cluster node:

This triggers the creation of a host-upgrade-xxxx pod in the corresponding region namespace. You can monitor the upgrade progress or verify its success by checking the pod status and logs:
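A hedged sketch; the airctl subcommand is an assumption (verify with airctl --help), while the kubectl commands are standard:

```shell
airctl upgrade-hosts --region <region-name>   # subcommand assumed

# Monitor the host-upgrade pod created in the region namespace:
kubectl get pods -n <region-namespace> | grep host-upgrade
kubectl logs -n <region-namespace> <host-upgrade-pod-name> --follow
```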

Here is a sample output:

If a host fails to upgrade, execute the following command to re-run the upgrade using its host ID (found in the pod logs):
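A hedged sketch; the flag name is an assumption, and the host ID comes from the host-upgrade pod logs:

```shell
airctl upgrade-hosts --region <region-name> --host-id <host-id>   # flag assumed
```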

Step 3: Verify Upgrade Status of the Hosts

After the upgrade, confirm that the host packages have been successfully updated by running the following command again on each host:
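A runnable sketch, assuming the pre-upgrade list was saved to /tmp/pf9-packages-before.txt (an example path):

```shell
# Re-record package versions after the upgrade (paths are examples).
{ dpkg -l 2>/dev/null || rpm -qa 2>/dev/null; } | grep -i pf9 \
    > /tmp/pf9-packages-after.txt || true
# diff exits non-zero when versions changed, which is expected after an upgrade.
diff /tmp/pf9-packages-before.txt /tmp/pf9-packages-after.txt || true
```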

Compare the output with the one recorded before the upgrade to ensure the packages were updated as expected.

Recovery: Upgrade Failure

Step 1: Stop the Failed Management Cluster

This command shuts down all components of the existing management cluster. The --verbose flag prints detailed output for all stopped services.

Step 2: Delete the Management Cluster Configuration

This command removes all existing configuration files and metadata associated with the management cluster. The --force flag overrides any locked or incomplete states.

Step 3: Delete the Existing Management Cluster

This command permanently deletes the management cluster resources. Ensure that you have backed up all critical data before running this command.

Step 4: Create a New Management Cluster

This command creates a new management cluster with the same configuration as before.

Step 5: Restore from the Backup

Replace <backup-directory> with the actual path of your stored backup. This command restores the environment from the backup.
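The five recovery steps above can be sketched as follows; every subcommand and flag name here is an assumption to confirm with airctl --help before use:

```shell
airctl stop --verbose                            # Step 1: stop the failed cluster
airctl delete-config --force                     # Step 2: remove configuration
airctl delete-cluster                            # Step 3: delete cluster resources
airctl create-cluster                            # Step 4: recreate the cluster
airctl restore --backup-dir <backup-directory>   # Step 5: restore from backup
```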
