Upgrade Management Plane and Hosts

The upgrade of the self-hosted Private Cloud Director is a two-phase, sequential process:

  • Management Plane: Upgrading the Management Plane updates core services, management APIs, region-specific configurations, and orchestration components.

  • Hosts: Once the management plane has been upgraded, the hosts in each region are upgraded to ensure compatibility, seamless communication, and completion of the overall upgrade process.

Before you begin the upgrade, ensure you meet all prerequisites.

Prerequisites for Upgrade

NOTE

Review Upgrade Notes for October 2025 Release.

Verify Management Plane Health

Before starting the upgrade, ensure the Management Plane is healthy. Run airctl status and confirm that Region Health shows Ready. If it does, you can proceed with the upgrade. If not, investigate the affected regions and resolve any issues, such as pods stuck in a CrashLoopBackOff state.

Manual Backup

You must perform a Manual Backup Procedure of your management plane infrastructure before starting the upgrade. This backup is critical for recovery.

Upgrade Management Plane and Cluster

Upgrade your Private Cloud Director environment to access the latest features, security patches, and performance improvements through this two-step process.

Upgrade Management Plane

When you upgrade the Management Plane running on the Management Cluster, the airctl upgrade process automatically upgrades the following components:

  • Core services and applications.

  • Management APIs and user interfaces.

  • Region-specific configurations and services.

  • Service coordination and orchestration components.

The Impact of Management Plane Upgrade

Temporary service interruptions may occur during the upgrade process.

  • The user interface will be unavailable during the upgrade.

  • All API services will be temporarily unavailable.

  • Create, Read, Update, and Delete operations will not be possible during this time.

  • Running workloads on VMs will not be impacted.

Step 1: Download and run Installer Artifacts

airctl is the command-line installer for Private Cloud Director. Run the following commands only on a verified management cluster node to download airctl with the required installation artifacts.

  1. Download the Installer Script and Artifacts using the following command.

curl --user-agent "<YOUR_USER_AGENT_KEY>" https://pf9-airctl.s3-accelerate.amazonaws.com/latest/index.txt | awk '{print "curl -sS --user-agent \"<YOUR_USER_AGENT_KEY>\" \"https://pf9-airctl.s3-accelerate.amazonaws.com/latest/" $NF "\" -o ${HOME}/" $NF}' | bash

You can choose to download a specific version of the installer script and artifacts.

For example, you can replace latest with a specific version, such as v-2025.10.1-4204351. Here is a sample modified command:

curl --user-agent "<YOUR_USER_AGENT_KEY>" https://pf9-airctl.s3-accelerate.amazonaws.com/v-2025.10.1-4204351/index.txt | \
awk '{print "curl -sS --user-agent \"<YOUR_USER_AGENT_KEY>\" \"https://pf9-airctl.s3-accelerate.amazonaws.com/v-2025.10.1-4204351/" $NF "\" -o ${HOME}/" $NF}' | bash

NOTE

Replace <YOUR_USER_AGENT_KEY> with your user key.
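The download command works by fetching index.txt (a list of artifact file names) and using awk to turn each name into a curl download command, which is then piped to bash. As a sketch, the snippet below runs only the awk transformation step on two sample file names (omitting the user-agent header), printing the generated commands instead of executing them:

```shell
# Show what the awk stage generates for two sample artifact names.
# ${HOME} is printed literally here; in the real pipeline, bash expands it
# when the generated commands are executed.
printf 'install-pcd.sh\nversion.txt\n' | \
awk '{print "curl -sS \"https://pf9-airctl.s3-accelerate.amazonaws.com/latest/" $NF "\" -o ${HOME}/" $NF}'
```

Each input line produces one curl command that downloads the named artifact into the home directory.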

  2. Make the Installer Executable using the following command.

chmod +x ./install-pcd.sh

  3. Run the Installation Script using the following command.

./install-pcd.sh `cat version.txt`

The command installs the specific version of airctl based on the version.txt file.

NOTE

It is recommended to run airctl status and confirm that the number of ready services equals the number of desired services before you begin the upgrade.

  4. Add a symbolic link for airctl to the system path so that it can be invoked without specifying the full path. It is also recommended to create a symbolic link to the airctl-config.yaml file in the user's home directory so that the configuration file need not be specified for every airctl invocation.

sudo rm -f /usr/bin/airctl # delete any existing file or symlink
sudo ln -s /opt/pf9/airctl/airctl /usr/bin/airctl
ln -s /opt/pf9/airctl/conf/airctl-config.yaml $HOME/airctl-config.yaml

Proxy Configuration (Optional)

If your environment uses a network proxy, update the proxy values in /opt/pf9/airctl/conf/helm_values/kplane.template.yml after installing the new build, because installing a new build overwrites these changes. Update the file as shown below:

cat /opt/pf9/airctl/conf/helm_values/kplane.template.yml | grep proxy
# Sample output:
https_proxy: "http://squid.platform9.horse:3128"
http_proxy: "http://squid.platform9.horse:3128"
no_proxy: "10.149.106.11,10.149.106.12,10.149.106.13,10.149.106.14,10.149.106.15,10.149.106.16,127.0.0.1,10.20.0.0/22,localhost,::1,.svc,.svc.cluster.local,10.21.0.0/16,10.20.0.0/16,.cluster.local,.platform9.localnet,.default.svc"

The list of IP addresses in the no_proxy list should include the master-ips, worker-ips, external-ip4, and master-vip4, along with any other addresses whose traffic should not be routed through the proxy server.
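As a sketch of how such a no_proxy value can be assembled (the addresses below are illustrative; substitute your deployment's master-ips, worker-ips, external-ip4, and master-vip4):

```shell
# Assemble a no_proxy value from the deployment's address lists.
master_ips="10.149.106.11,10.149.106.12,10.149.106.13"   # illustrative master-ips
worker_ips="10.149.106.14,10.149.106.15,10.149.106.16"   # illustrative worker-ips
vip_and_external="10.149.106.20,10.149.106.21"           # illustrative master-vip4, external-ip4
static_entries="127.0.0.1,localhost,::1,.svc,.svc.cluster.local,.cluster.local"
no_proxy="${master_ips},${worker_ips},${vip_and_external},${static_entries}"
echo "$no_proxy"
```

The resulting comma-separated string is what goes into the no_proxy field of kplane.template.yml.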

NOTE

The upgrade-cluster step is skipped in this release, as the Kubernetes version remains unchanged at v1.30.

Step 2: Upgrade all regions

To upgrade all regions set up on your Private Cloud Director, execute the following command.

airctl upgrade

Optionally, you can upgrade a specific region by replacing <REGION_NAME> with your target region.

airctl upgrade --region <REGION_NAME>

Here is a sample command with its output.

# airctl upgrade --region Region1
 INFO  rollback state directory /tmp/airctl_ddu_backup_Region1_240020739                           
 INFO  Saving the helm revisions to the state file                                        
 INFO  --- backing up region--- Region1                                                             
 INFO  Archive created successfully for Region1. Backup backup.tar.gz saved to /tmp/airctl_ddu_backup_Region1_240020739                                                        
 INFO  --- moving old state file to /tmp/airctl_ddu_backup_Region1_240020739/state.yaml ---        
 INFO  --- moving old kplane_values.yaml file to /tmp/airctl_ddu_backup_Region1_240020739/kplane_values.yaml ---                                     
 SUCCESS  Upgrading region Region1                                                      
upgrade done

To view detailed logs for monitoring and diagnosis, add the --verbose flag.

Here is the sample command.

airctl upgrade --verbose

Verify: Upgrade Success on Management Plane

After completing the upgrade for the management plane, verify that you can continue to access the user interface, and then verify the deployment status across all regions.

Run the following command:

airctl status

Here is the sample output.

------------- deployment details ---------------
fqdn:                airctl-1-4206457-802.platform9.localnet
region:              airctl-1-4206457-802
deployment status:   ready
region health:       ✅ Ready
version:              PCD 2025.10-180
-------- region service status ----------
desired services:     30
ready services:       30


------------- deployment details ---------------
fqdn:                airctl-1-4206457-802-mel.platform9.localnet
region:              airctl-1-4206457-802-mel
deployment status:   ready
region health:       ✅ Ready
version:              PCD 2025.10-180
-------- region service status ----------
desired services:     84
ready services:       84
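The desired/ready comparison can also be scripted. The sketch below parses a hard-coded excerpt matching the sample output above; in practice you would capture the excerpt with status_output=$(airctl status) and, with multiple regions, check each region's block separately:

```shell
# Parse a (sample) airctl status excerpt and confirm desired == ready.
status_output='desired services:     30
ready services:       30'
desired=$(echo "$status_output" | awk '/desired services/ {print $3}')
ready=$(echo "$status_output" | awk '/ready services/ {print $3}')
[ "$desired" = "$ready" ] && echo "all services ready"
```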

After a successful upgrade and verification, Private Cloud Director does not support rolling back to a previous version.

Upgrade the Hosts

After the management plane is successfully upgraded, the hosts running in each of your regions must be updated. The agents running on each host manage communication between the hosts and the management plane.

The Impact of Hosts Upgrade

  • No service unavailability is expected during the host upgrade.

  • Workloads running on VMs will continue to operate without disruption.

Step 1: Record Current Version of Packages on Host

Before you begin the upgrade, execute the following command to record the current package versions:

$ airctl host-status --config /opt/pf9/airctl/conf/airctl-config.yaml
Getting host statuses...                                                                                                                                                                                                                                                      
Getting host statuses for region: [REGION_NAME]                                                                                                                                                                                                                                         
┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐                                                   
| Host Name   | IP Addresses                              | Host ID      | Host Agent Version     | Status | Agent Status | Apps                                          |
| [HOSTNAME1] | 172.29.32.34, 192.168.122.1, 10.0.11.10   | [HOST1_UUID] | 2025.10.1-2642.6f598c0 | ok     | running      | pf9-cindervolume-config:2025.10.1-2595        |
|             |                                           |              |                        |        |              | pf9-cindervolume-base:2025.10.1-2595          |
|             |                                           |              |                        |        |              | pf9-neutron-ovn-controller:2025.10.1-3183     |
|             |                                           |              |                        |        |              | pf9-ostackhost:2025.10.1-6816                 |
|             |                                           |              |                        |        |              | pf9-glance-role:2025.10.1-3266                |
|             |                                           |              |                        |        |              | pf9-neutron-ovn-metadata-agent:2025.10.1-3183 |
|             |                                           |              |                        |        |              | pf9-neutron-base:2025.10.1-3183               |
| [HOSTNAME2] | 172.29.32.110, 192.168.122.1, 10.0.11.102 | [HOST2_UUID] | 2025.10.1-2642.6f598c0 | ok     | running      | pf9-ostackhost:2025.10.1-6816                 |
|             |                                           |              |                        |        |              | pf9-neutron-base:2025.10.1-3183               |
|             |                                           |              |                        |        |              | pf9-neutron-ovn-controller:2025.10.1-3183     |
|             |                                           |              |                        |        |              | pf9-neutron-ovn-metadata-agent:2025.10.1-3183 |
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
 SUCCESS  getting host statuses...

Step 2: Upgrade Hosts

To upgrade the hosts in a specific region, execute the following command on the management cluster node:

NOTE

The upgrade-hosts command times out after 1800 seconds (30 minutes) by default. You can override this default by adding the hostupgradetimeout parameter in the airctl-config.yaml file. To prevent timeouts when upgrading a large number of hosts, increase this value proportionally — for every 50 hosts, add 30 minutes (1800 seconds) to the hostupgradetimeout value.
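One reading of that sizing guidance, with a hypothetical host count, can be sketched as follows (HOST_COUNT and the rounding-up interpretation are assumptions for illustration):

```shell
# Round the host count up to whole blocks of 50 and allow 1800 s per block.
HOST_COUNT=120
BLOCKS=$(( (HOST_COUNT + 49) / 50 ))      # ceil(120 / 50) = 3
HOSTUPGRADETIMEOUT=$(( BLOCKS * 1800 ))
echo "$HOSTUPGRADETIMEOUT"                # prints 5400 (90 minutes)
```

The resulting value would then be set as hostupgradetimeout in airctl-config.yaml.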

airctl upgrade-hosts --region <REGION_NAME>

The command triggers the creation of a host-upgrade-xxxx pod in the corresponding region namespace. You can monitor the upgrade progress or verify its success by checking the pod status and logs:

# List the host upgrade pods
kubectl get pods -n <REGION_FQDN> | grep host-upgrade

# View logs of the host upgrade pod
kubectl logs -n <REGION_FQDN> <HOST_UPGRADE_POD_NAME>

Here is a sample output:

host-upgrade-1747919688-g5rk5   5/5   Completed   0   83m
kubectl logs -n example-onprem-region1 host-upgrade-1747919688-g5rk5

If a host fails to upgrade, rerun the upgrade for that host by specifying its IP address:

airctl upgrade-hosts --region <REGION_NAME> --host-ips "<HOST_IP>"

Step 3: Verify Upgrade Status of the Hosts

After the upgrade, confirm that the host packages have been successfully updated by running the following command.

$ airctl host-status --config /opt/pf9/airctl/conf/airctl-config.yaml
Getting host statuses...                                                                                                                                                                                                                                                      
Getting host statuses for region: [REGION_NAME]                                                                                                                                                                                                                                         
┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐                                                   
| Host Name   | IP Addresses                              | Host ID      | Host Agent Version     | Status | Agent Status | Apps                                          |
| [HOSTNAME1] | 172.29.32.34, 192.168.122.1, 10.0.11.10   | [HOST1_UUID] | 2025.10.1-2642.6f598c0 | ok     | running      | pf9-cindervolume-config:2025.10.1-2595        |
|             |                                           |              |                        |        |              | pf9-cindervolume-base:2025.10.1-2595          |
|             |                                           |              |                        |        |              | pf9-neutron-ovn-controller:2025.10.1-3183     |
|             |                                           |              |                        |        |              | pf9-ostackhost:2025.10.1-6816                 |
|             |                                           |              |                        |        |              | pf9-glance-role:2025.10.1-3266                |
|             |                                           |              |                        |        |              | pf9-neutron-ovn-metadata-agent:2025.10.1-3183 |
|             |                                           |              |                        |        |              | pf9-neutron-base:2025.10.1-3183               |
| [HOSTNAME2] | 172.29.32.110, 192.168.122.1, 10.0.11.102 | [HOST2_UUID] | 2025.10.1-2642.6f598c0 | ok     | running      | pf9-ostackhost:2025.10.1-6816                 |
|             |                                           |              |                        |        |              | pf9-neutron-base:2025.10.1-3183               |
|             |                                           |              |                        |        |              | pf9-neutron-ovn-controller:2025.10.1-3183     |
|             |                                           |              |                        |        |              | pf9-neutron-ovn-metadata-agent:2025.10.1-3183 |
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
 SUCCESS  getting host statuses...

Compare the output with the one recorded before the upgrade to ensure the packages were updated as expected.
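This comparison can be done mechanically by saving each airctl host-status run to a file and diffing them. In the sketch below, the heredoc contents, file names, and the older version strings are made-up stand-ins; in practice, redirect the real command output to the before/after files instead:

```shell
# Illustrative comparison of recorded before/after host status.
cat > /tmp/hosts-before.txt <<'EOF'
pf9-ostackhost:2025.04.1-6500
pf9-neutron-base:2025.04.1-3000
EOF
cat > /tmp/hosts-after.txt <<'EOF'
pf9-ostackhost:2025.10.1-6816
pf9-neutron-base:2025.10.1-3183
EOF
diff /tmp/hosts-before.txt /tmp/hosts-after.txt || true  # diff exits 1 when files differ
```

Any package still showing its pre-upgrade version in the after file needs investigation.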

Recovery: Upgrade Failure

Step 1. Stop the Failed Management Cluster

airctl stop --verbose

This command shuts down all components of the existing management cluster. The --verbose flag prints detailed output for all stopped services.

Step 2. Delete the Management Cluster Configuration

airctl unconfigure-du --force --verbose

This command removes all existing configuration files and metadata associated with the management cluster. Using --force enables overriding any locked or incomplete states.

Step 3. Delete the Existing Management Cluster

airctl delete-cluster --verbose

This command permanently deletes the management cluster resources. Ensure that you have backed up all critical data before running this command.

Step 4. Create a New Management Cluster

airctl --config /opt/pf9/airctl/conf/airctl-config.yaml create-cluster --verbose

The command creates a new management cluster with the same configuration as before.

Step 5. Restore from the Backup

airctl restore --backupdir <BACKUP_DIRECTORY> --verbose

Replace <BACKUP_DIRECTORY> with the actual path of your stored backup. The command restores the environment from the backup.
