Veeam Integration with PCD

Configure Veeam Backup and Replication to back up and restore virtual machines running in Private Cloud Director using the PCD Veeam Proxy appliance.


NOTE

This integration is currently in beta. Production use is not recommended at this time. To share feedback, contact Platform9 support.

Veeam Backup and Replication provides agentless backup, recovery, and disaster recovery for virtual machines running in Private Cloud Director (PCD). This guide walks you through deploying the PCD Veeam Proxy (PVP) appliance and configuring Veeam to discover and protect your VMs.


NOTE

During the beta, PCD appears as oVirt KVM Manager in the Veeam interface. This will change when the integration reaches general availability.

How it works

Architecture diagram showing the Veeam Backup Server connecting to the PCD Veeam Proxy appliance VM, which communicates with the Veeam Worker, other VMs, hypervisors, and the storage array via the PCD Control Plane.

The integration connects four components:

  • PCD environment: Your workloads run here. The environment includes a control plane (SaaS or self-hosted), hypervisors, and a storage array that provides VM volumes and snapshotting.

  • Veeam Backup & Replication (VBR): Coordinates backup jobs and manages communication between components. Typically deployed on a dedicated Windows Server.

  • PCD Veeam Proxy (PVP): An appliance VM that runs on PCD and exposes the Universal Hypervisor APIs (UHAPIs) that Veeam uses to connect with PCD. You deploy this as part of this guide.

  • Veeam Worker: VMs deployed and managed by Veeam to handle data transfer between PCD and backup repositories. Veeam powers them on and off as needed. See Managing Workers in the Veeam documentation.

Prerequisites

Before you begin, ensure your environment meets the following requirements.

Veeam

  • Veeam Backup and Replication version 13.0.1.180 or later, deployed with a configured storage repository.

  • The Veeam server must be reachable from the PCD environment.

PCD environment

  • PCD version 2025.10-180 or later.

  • A user with Administrator access in the target tenant or project.

  • Access to run pcdctl commands.

Network

  • The PVP appliance and worker VMs must be on the same network as the Veeam server, or routing must be configured between them.

  • The PCD network used for the PVP appliance and workers must have DHCP enabled.

  • A security group named pcd-veeam-proxy-sg must exist, with all inbound and outbound traffic to and from the Veeam server permitted.
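Because PCD's virtualization layer is OpenStack-compatible, the security group could be created along the following lines, assuming the standard openstack client with your tenant RC file sourced. This is a sketch only: the Veeam server address is illustrative, and the exact rule syntax may vary by client version.

```shell
# Create the required security group (in OpenStack-derived environments,
# a new security group typically allows all egress traffic by default).
openstack security group create pcd-veeam-proxy-sg \
  --description "Allow Veeam server traffic to the PVP appliance"

# Permit all inbound traffic from the Veeam server's network.
# "--protocol any" matches all protocols; 10.10.5.0/24 is illustrative.
openstack security group rule create pcd-veeam-proxy-sg \
  --ingress --protocol any --remote-ip 10.10.5.0/24
```

Verify the resulting rules with openstack security group show pcd-veeam-proxy-sg before deploying the appliance.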

Storage

  • A Cinder storage backend that supports snapshot creation.

  • Sufficient storage quota in the tenant to accommodate snapshots. Snapshots count against the tenant quota.

Step 1: Download the PVP appliance image

Step 2: Upload the image to PCD

You can upload the image using the PCD UI or the CLI.

  • Using the UI: Navigate to Virtual Machines > Images (or Images and VM Snapshots) and select Add Image. Mark the image as public so it can be reused across tenants.

  • Using the CLI:

Marking the image as public lets you reuse it when onboarding additional tenants.
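For reference, an OpenStack-compatible upload might look like the following sketch, assuming the standard openstack client with your tenant RC file sourced. The image file name and image name are illustrative; consult the PCD CLI documentation for the exact command.

```shell
# Upload the downloaded PVP appliance image and mark it public
# so it can be reused across tenants (names are illustrative).
openstack image create \
  --disk-format qcow2 \
  --container-format bare \
  --public \
  --file pcd-veeam-proxy.qcow2 \
  pcd-veeam-proxy
```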

Step 3: Create the appliance VM

Deploy a new VM using the uploaded image with the following configuration:

  • Flavor: m1.xlarge (minimum 8 vCPU, 16 GB RAM)

  • Boot source: New volume, 50 GB or larger

  • Network: A network with connectivity to the Veeam server

  • Security group: pcd-veeam-proxy-sg


NOTE

Sizing and concurrent operations: The m1.xlarge flavor supports up to 10 concurrent backup or restore operations. To increase this limit, add 1 vCPU and 1 GB RAM for every 2 additional concurrent operations before adjusting Veeam worker settings.
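As a worked example of the sizing rule above, this shell sketch computes the appliance size for a target concurrency, starting from the m1.xlarge baseline of 8 vCPU and 16 GB RAM for up to 10 operations. Rounding odd extra-operation counts up to a full increment is an assumption made for safety.

```shell
# Sizing sketch: m1.xlarge baseline (8 vCPU / 16 GB RAM, 10 operations),
# plus 1 vCPU and 1 GB RAM per 2 additional concurrent operations.
target_ops=16
extra_ops=$(( target_ops > 10 ? target_ops - 10 : 0 ))
increments=$(( (extra_ops + 1) / 2 ))   # one increment per 2 extra ops, rounded up
vcpu=$(( 8 + increments ))
ram_gb=$(( 16 + increments ))
echo "For $target_ops concurrent operations: ${vcpu} vCPU, ${ram_gb} GB RAM"
```

For 16 concurrent operations this yields 11 vCPU and 19 GB RAM.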

Step 4: Configure the appliance VM

  1. SSH into the PVP VM using the IP address shown in the PCD UI.

    If a password was not set via cloud-init, the default is password. You will be prompted to change it on first login.

  2. Monitor the initialization log and wait for the message "Setup completed successfully." before proceeding:

  3. In the PCD UI, navigate to Gear icon (top right) > API Access > pcdctl RC and copy the RC file contents. Save them to a local file and fill in your password. The user in the RC file must have Administrator access in the tenant.

  4. If you are connecting to a self-hosted PCD management plane, add the following line to the RC file:

  5. Copy the RC file to the appliance VM:

  6. SSH back into the VM and run the configuration command:

If you are using a self-hosted PCD management plane and the hostname does not resolve, edit /etc/hosts on the appliance VM to add the appropriate DNS entry before running this command.
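The log-monitoring and file-copy steps above might be performed as follows, assuming SSH access to the appliance. The appliance IP, RC file name, and remote user are illustrative; the log path comes from the Troubleshooting section of this guide.

```shell
# Illustrative values -- substitute your appliance IP, user, and RC file name.
PVP_IP=10.10.5.214
RC_FILE=pcd-admin.rc

# Watch the initialization log until "Setup completed successfully." appears.
ssh ubuntu@"$PVP_IP" 'tail -f /var/log/pf9-install.log'

# Copy the filled-in RC file to the appliance.
scp "$RC_FILE" ubuntu@"$PVP_IP":~/
```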

Step 5: Add PCD to Veeam

  1. In Veeam, navigate to Inventory > Virtual Infrastructure > oVirt KVM and select Add Manager.

  2. In the DNS name or IP address field, enter the IP address of the PVP appliance with no prefix or suffix. For example: 10.10.5.214.

Here is an example.

New oVirt KVM Manager wizard showing the Name, with the PVP appliance IP address entered in the DNS name or IP address field.

  3. Continue to Credentials and enter the same username and password used in the pcdctl RC file.

  4. Accept the certificate.

  5. Complete the wizard.

Here is an example.

New oVirt KVM Manager wizard showing the Apply step with two successful steps: oVirt KVM Virtualization Manager registered and entity list refreshed.

Step 6: Create a Veeam worker

After adding the oVirt KVM manager, Veeam prompts you to create a worker VM.

  1. Select Yes.

Here is an example.

Veeam Backup and Replication dialog prompting the user to deploy a worker VM on the oVirt KVM server at 10.10.5.214.

  2. Specify the cluster, name, and storage options.

  3. Set Max concurrent tasks. For an m1.xlarge PVP appliance, this value must not exceed 10.

  4. In Advanced settings, configure the worker's CPU and memory. The default is 6 vCPU and 6 GB RAM.

  5. On the Networks screen, select the same PCD subnet used for the PVP appliance.

NOTE

Worker sizing and concurrent task limits: The number of concurrent tasks is governed by the Veeam worker setting but is ultimately capped by the PVP appliance size. For the recommended m1.xlarge appliance, the hard limit is 10 concurrent tasks.

To increase the limit, scale the PVP appliance first (1 vCPU and 1 GB RAM per 2 additional operations), then scale the worker VM (1 vCPU and 1 GB RAM per additional concurrent task).

See Adding backup proxies for oVirt KVM in the Veeam documentation.

  6. Select Finish to start worker deployment.

This process takes 15–20 minutes.

If the deployment dialog does not appear, navigate to Veeam > History > System to find the running job. Deployed workers appear under Backup Infrastructure > Backup Proxies.

Here is an example.

Veeam System log showing successful worker deployment steps, including image upload, VM deployment, power on, IP assignment, and service connection.

For more information, see Managing Workers in the Veeam documentation.

Back up VMs

  1. Navigate to Veeam > Inventory > oVirt KVM and select the PVP appliance IP.

  2. Select one or more VMs and choose Add to Backup Job.

  3. Configure the job name, VMs, backup destination, and schedule.

  4. To run the job immediately, enable Run the job when I click Finish, then select Finish.

To monitor progress, navigate to Veeam > History > Jobs > Backups.

Here is an example.

Veeam backup job progress screen showing a completed backup of two VMs (jammy-test-01 and jammy-test-02) with a success status and throughput summary

For more information, see Performing backup in the Veeam documentation.

Restore VMs

  1. Navigate to Veeam > Home > Backups > Disk.

  2. Expand the job, right-click the VM you want to restore, and select Restore entire VM > oVirt KVM.

  3. Choose a restore destination:

    • Restore to the original location: Deletes the existing VM and creates a new restored VM in its place.

    • Restore to a new location: Creates a new VM with options to select cluster, storage, name, and network.

  4. Enable the Power on target VM after restoring toggle.

  5. Select Finish.


To monitor progress, navigate to Veeam > History > Restore > Full VM Restore.

Here is an example.

Veeam restore session log showing a completed full VM restore with steps including restore point found, VM creation, disk restore, and VM power.

For more information, see Performing restore in the Veeam documentation.

Add more PCD tenants

To protect VMs in additional PCD tenants, deploy a separate PVP appliance in each tenant following Steps 1–5, then add each tenant as a separate oVirt KVM Manager in Veeam. Cross-tenant restores are supported once a PVP appliance is onboarded.

Limitations

The current beta release has the following limitations:

  • Only VMs with Cinder-based volumes are supported. Image-backed VMs are not supported.

  • PVP appliances must be deployed manually per tenant. This will be automated in a future release.

  • HTTP_PROXY environment variables are not supported.

  • VM tag backup and restore are not supported.

  • Single-disk restores are not supported; in particular, the root volume of a VM cannot be restored individually. Use a full VM restore instead.

Troubleshooting

Log locations

Logs are stored on the PVP appliance VM at the following paths:

  • Appliance setup: /var/log/pf9-install.log

  • PVP runtime: /var/log/pf9/proxy.log

  • Image transfer: /var/log/pf9/image-transfer-<id>/

  • Image transfer events: /var/log/pf9/image-transfer-events/transfer-<id>.events.log

Generate a support bundle

When contacting Platform9 support, include a support bundle so the team has the logs needed to investigate.

To generate the bundle, SSH into the PVP appliance and run:

The command creates a compressed .tar.gz file in the current directory. Share this file with Platform9 support.

PVP appliance setup failure

If /var/log/pf9-install.log shows an error, retry setup with:

If the error persists, generate a support bundle and contact Platform9 support.
