# Veeam Integration with PCD

{% hint style="info" %}
**NOTE**

This integration is currently in beta. Production use is not recommended at this time. To share feedback, contact Platform9 support.
{% endhint %}

Veeam Backup & Replication provides agentless backup, recovery, and disaster recovery for virtual machines running in Private Cloud Director (PCD). This guide walks you through deploying the PCD Veeam Proxy (PVP) appliance and configuring Veeam to discover and protect your VMs.

{% hint style="info" %}
**NOTE**

During the beta, PCD appears as **oVirt KVM Manager** in the Veeam interface. This will change when the integration reaches general availability.
{% endhint %}

### How it works

<figure><img src="https://1649501270-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FSNWOoFOMzRblbHdwmlrR%2Fuploads%2FM3qrPAFVh3BBPfKZ92PW%2Fimage.png?alt=media&#x26;token=83005f00-c1a0-4198-a1d0-810c5576e8e0" alt=""><figcaption></figcaption></figure>

Architecture diagram showing the Veeam Backup Server connecting to the PCD Veeam Proxy appliance VM, which communicates with the Veeam Worker, other VMs, hypervisors, and the storage array via the PCD Control Plane.

The integration connects four components:

* **PCD environment:** Your workloads run here. The environment includes a control plane (SaaS or self-hosted), hypervisors, and a storage array that provides VM volumes and snapshotting.
* **Veeam Backup & Replication (VBR):** Coordinates backup jobs and manages communication between components. Typically deployed on a dedicated Windows Server.
* **PCD Veeam Proxy (PVP):** An appliance VM that runs on PCD and exposes the Universal Hypervisor APIs (UHAPIs) that Veeam uses to connect with PCD. You deploy this as part of this guide.
* **Veeam Worker:** VMs deployed and managed by Veeam to handle data transfer between PCD and backup repositories. Veeam powers them on and off as needed. See [Managing Workers](https://helpcenter.veeam.com/docs/vbr/userguide/ovirt_workers.html?ver=13) in the Veeam documentation.

### Prerequisites

Before you begin, ensure your environment meets the following requirements.

#### Veeam

* Veeam Backup & Replication version **13.0.1.180** or later, deployed with a configured backup repository.
* The Veeam server must be reachable from the PCD environment.

#### PCD environment

* PCD version **2025.10-180** or later.
* A user with Administrator access in the target tenant or project.
* Access to run `pcdctl` commands.

#### Network

* The PVP appliance and worker VMs must be on the same network as the Veeam server, or routing must be configured between them.
* The PCD network used for the PVP appliance and workers must have DHCP enabled.
* A security group named `pcd-veeam-proxy-sg` must exist, with all inbound and outbound traffic to and from the Veeam server permitted.

#### Storage

* A Cinder storage backend that supports snapshot creation.
* Sufficient storage quota in the tenant to accommodate snapshots. Snapshots count against the tenant quota.

### Step 1: Download the PVP appliance image

* [Download](https://pcd-ovirt-proxy.s3.us-west-2.amazonaws.com/latest/pcd-veeam-proxy.qcow2) the latest PVP appliance image.
* To verify the downloaded version, check the [version file](https://pcd-ovirt-proxy.s3.us-west-2.amazonaws.com/latest/version.tag).

### Step 2: Upload the image to PCD

You can upload the image using the PCD UI or the CLI.

* **Using the UI:** Navigate to **Virtual Machines > Images** (or **Images and VM Snapshots**) and select **Add Image**. Mark the image as **public** so it can be reused across tenants.
* **Using the CLI:**<br>

  ```bash
  pcdctl image create --insecure \
    --container-format bare --disk-format qcow2 --public \
    --file pcd-veeam-proxy.qcow2 pvp-appliance
  ```

Marking the image as public lets you reuse it when onboarding additional tenants.

### Step 3: Create the appliance VM

Deploy a new VM using the uploaded image with the following configuration:

* **Flavor:** `m1.xlarge` (minimum 8 vCPU, 16 GB RAM)
* **Boot source:** New volume, 50 GB or larger
* **Network:** A network with connectivity to the Veeam server
* **Security group:** `pcd-veeam-proxy-sg`

{% hint style="info" %}
**NOTE**

**Sizing and concurrent operations:** The `m1.xlarge` flavor supports up to 10 concurrent backup or restore operations. To increase this limit, add 1 vCPU and 1 GB RAM for every 2 additional concurrent operations before adjusting Veeam worker settings.
{% endhint %}
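The sizing rule above reduces to simple arithmetic. The sketch below applies it for a target of 16 concurrent operations; the base figures (8 vCPU, 16 GB RAM, 10 operations) come from this guide, and the target value is an arbitrary example:

```shell
# Appliance sizing per the rule above: the base m1.xlarge (8 vCPU, 16 GB RAM)
# supports 10 concurrent operations; add 1 vCPU and 1 GB RAM for every
# 2 additional operations. target_ops=16 is an arbitrary example value.
target_ops=16
extra=$(( target_ops > 10 ? target_ops - 10 : 0 ))
add=$(( (extra + 1) / 2 ))   # round up to whole vCPUs/GB
echo "PVP appliance: $(( 8 + add )) vCPU, $(( 16 + add )) GB RAM"
```

For 16 concurrent operations this yields an 11 vCPU, 19 GB RAM appliance, so a custom flavor would be needed in place of `m1.xlarge`.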

### Step 4: Configure the appliance VM

1. SSH into the PVP VM using the IP address shown in the PCD UI.<br>

   ```bash
   ssh ubuntu@<IP of PVP VM>
   ```

   If a password was not set via cloud-init, the default is `password`. You will be prompted to change it on first login.
2. Monitor the initialization log and wait for `Setup completed successfully.` before proceeding:<br>

   ```bash
   tail -f /var/log/pf9-install.log
   ```
3. In the PCD UI, navigate to **Gear icon (top right) > API Access > pcdctl RC** and copy the RC file contents. Save them to a local file and fill in your password. The user in the RC file must have Administrator access in the tenant.
4. If you are connecting to a self-hosted PCD management plane, add the following line to the RC file:<br>

   ```bash
   export OS_VERIFY=False
   ```
5. Copy the RC file to the appliance VM:<br>

   ```bash
   scp pcd.rc ubuntu@<IP of PVP VM>:~
   ```
6. SSH back into the VM and run the configuration command:<br>

   ```bash
   pvp-configure pcd.rc
   ```

If you are using a self-hosted PCD management plane and the hostname does not resolve, edit `/etc/hosts` on the appliance VM to add the appropriate DNS entry before running this command.
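For reference, a filled-in RC file from steps 3 and 4 above resembles the following sketch. Every value here is a placeholder — use the exact contents copied from **API Access > pcdctl RC**. Only the password is filled in by hand, and the `OS_VERIFY` line applies to self-hosted management planes only:

```shell
# Example pcd.rc -- all values below are placeholders; use your own RC file.
export OS_AUTH_URL=https://pcd.example.com/keystone/v3   # placeholder URL
export OS_USERNAME=admin@example.com                     # tenant administrator
export OS_PASSWORD='your-password-here'                  # fill in manually
export OS_PROJECT_NAME=engineering                       # placeholder tenant/project
export OS_REGION_NAME=region1                            # placeholder region
export OS_VERIFY=False   # self-hosted management planes only (step 4)
```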

### Step 5: Add PCD to Veeam

1. In Veeam, navigate to **Inventory > Virtual Infrastructure > oVirt KVM** and select **Add Manager**.
2. In the **DNS name or IP address** field, enter the IP address of the PVP appliance with no prefix or suffix. For example: `10.10.5.214`.

Here is an example.

**New oVirt KVM Manager** wizard showing the **Name**, with the PVP appliance IP address entered in the DNS name or IP address field.

<figure><img src="https://1649501270-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FSNWOoFOMzRblbHdwmlrR%2Fuploads%2FepocPVqv5DSC5uUz2HBd%2Fimage.png?alt=media&#x26;token=a47bcc06-704a-45e9-80fb-4ce3ff08a875" alt=""><figcaption></figcaption></figure>

3. Continue to **Credentials** and enter the same username and password used in the `pcdctl` RC file.
4. Accept the certificate.
5. Complete the wizard.

Here is an example.

**New oVirt KVM Manager** wizard showing the **Apply** step with two successful steps: oVirt KVM Virtualization Manager registered and entity list refreshed.

<figure><img src="https://1649501270-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FSNWOoFOMzRblbHdwmlrR%2Fuploads%2F6HCkOjSe4XcYPFvtHGyL%2Fimage.png?alt=media&#x26;token=1530f1a0-05b7-43de-8ad0-599e8d59f238" alt=""><figcaption></figcaption></figure>

### Step 6: Create a Veeam worker

After adding the oVirt KVM manager, Veeam prompts you to create a worker VM.

1. Select **Yes**.\
   Here is an example.\
   \
   Veeam Backup and Replication dialog prompting the user to deploy a worker VM on the oVirt KVM server at 10.10.5.214.

<figure><img src="https://1649501270-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FSNWOoFOMzRblbHdwmlrR%2Fuploads%2FREBC7qMbxDA2FqW4Wgcm%2Fimage.png?alt=media&#x26;token=7fd60baa-1fa0-4d05-9ff2-76984c03645e" alt=""><figcaption></figcaption></figure>

2. Specify the cluster, name, and storage options.
3. Set **Max concurrent tasks**. For an `m1.xlarge` PVP appliance, this value must not exceed 10.
4. In **Advanced settings**, configure the worker's CPU and memory. The default is 6 vCPU and 6 GB RAM.
5. On the **Networks** screen, select the same PCD subnet used for the PVP appliance.

{% hint style="info" %}
**NOTE**

**Worker sizing and concurrent task limits**: The number of concurrent tasks is governed by the Veeam worker setting, but is ultimately capped by the PVP appliance size. For the recommended `m1.xlarge` appliance, the hard limit is 10 concurrent tasks.

To increase the limit, scale the PVP appliance first (1 vCPU and 1 GB RAM per 2 additional operations), then scale the worker VM (1 vCPU and 1 GB RAM per additional concurrent task).

See [Adding backup proxies for oVirt KVM](https://helpcenter.veeam.com/docs/vbr/userguide/ovirt_workers_add_byb.html?ver=13) in the Veeam documentation.
{% endhint %}

6. Select **Finish** to start worker deployment.

This process takes 15–20 minutes.

If the deployment dialog does not appear, navigate to **Veeam > History > System** to find the running job. Deployed workers appear under **Backup Infrastructure > Backup Proxies**.

Here is an example.

Veeam System log showing successful worker deployment steps, including image upload, VM deployment, power on, IP assignment, and service connection.

<figure><img src="https://1649501270-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FSNWOoFOMzRblbHdwmlrR%2Fuploads%2FcZJ5uN6mie17xh6432Ey%2Fimage.png?alt=media&#x26;token=74beda8a-329a-4d6b-ae6c-88bf202deb2c" alt=""><figcaption></figcaption></figure>

For more information, see [Managing Workers](https://helpcenter.veeam.com/docs/vbr/userguide/ovirt_workers.html?ver=13) in the Veeam documentation.
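Taken together, the appliance and worker scaling rules from the sizing note above can be sketched as one calculation. This assumes the defaults stated in this guide (an 8 vCPU/16 GB appliance with a 10-task cap and a 6 vCPU/6 GB worker) and treats the default worker size as matching that 10-task limit; the target of 14 tasks is an arbitrary example:

```shell
# Scaling sketch for 14 concurrent tasks (example value).
# Appliance: +1 vCPU and +1 GB RAM per 2 tasks over the 10-task cap, rounded up.
# Worker:    +1 vCPU and +1 GB RAM per additional task over the default.
target_tasks=14
extra=$(( target_tasks - 10 ))
appliance_add=$(( (extra + 1) / 2 ))
echo "PVP appliance: $(( 8 + appliance_add )) vCPU, $(( 16 + appliance_add )) GB RAM"
echo "Veeam worker:  $(( 6 + extra )) vCPU, $(( 6 + extra )) GB RAM"
```

Remember to scale the appliance first, then raise **Max concurrent tasks** and the worker resources in Veeam.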

### Back up VMs

1. Navigate to **Veeam > Inventory > oVirt KVM** and select the PVP appliance IP.
2. Select one or more VMs and choose **Add to Backup Job**.
3. Configure the job name, VMs, backup destination, and schedule. To run the job immediately, enable **Run the job when I click Finish**.
4. Select **Finish**.

To monitor progress, navigate to **Veeam > History > Jobs > Backups**.

Here is an example.

Veeam backup job progress screen showing a completed backup of two VMs (jammy-test-01 and jammy-test-02) with a success status and throughput summary.

<figure><img src="https://1649501270-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FSNWOoFOMzRblbHdwmlrR%2Fuploads%2FfAe5ZiyVNoV8qf0r0niM%2Fimage.png?alt=media&#x26;token=fde738bb-f7cf-4837-8629-bb132a37b386" alt=""><figcaption></figcaption></figure>

For more information, see [Performing backup](https://helpcenter.veeam.com/docs/vbr/userguide/ovirt_data_protection.html?ver=13) in the Veeam documentation.

### Restore VMs

1. Navigate to **Veeam > Home > Backups > Disk**.
2. Expand the job, right-click the VM you want to restore, and select **Restore entire VM > oVirt KVM**.
3. Choose a restore destination:
   * **Restore to the original location:** Deletes the existing VM and creates a new restored VM in its place.
   * **Restore to a new location:** Creates a new VM with options to select cluster, storage, name, and network.
4. Enable the **Power on target VM after restoring** toggle.
5. Select **Finish**.

{% hint style="warning" %}
**WARNING**

You must enable **Power on target VM after restoring** before selecting **Finish**. If this toggle is off, the restored VM will not be created. This option will be enabled by default in a future release.
{% endhint %}

To monitor progress, navigate to **Veeam > History > Restore > Full VM Restore**.

Here is an example.

Veeam restore session log showing a completed full VM restore with steps including restore point found, VM creation, disk restore, and VM power.

<figure><img src="https://1649501270-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FSNWOoFOMzRblbHdwmlrR%2Fuploads%2FSiY02KS7kF81dSvMHeuS%2Fimage.png?alt=media&#x26;token=1a74b566-d3ed-452c-9427-fa8e6ec8a432" alt=""><figcaption></figcaption></figure>

For more information, see [Performing restore](https://helpcenter.veeam.com/docs/vbr/userguide/ovirt_data_recovery.html?ver=13) in the Veeam documentation.

### Add more PCD tenants

To protect VMs in additional PCD tenants, deploy a separate PVP appliance in each tenant by following Steps 1–5, then add each tenant as a separate oVirt KVM Manager in Veeam. Once a tenant's PVP appliance is onboarded, cross-tenant restores are supported.

### Limitations

The current beta release has the following limitations:

* Only VMs with Cinder-based volumes are supported. Image-backed VMs are not supported.
* The PVP appliance and worker VM cannot be deployed on networks where DHCP is disabled.
* The Host affinity option in the Veeam worker deployment Advanced settings is not supported.
* Layer 2 Networks (introduced in PCD 2026.1) are not supported.
* VMs using hotplug flavors are currently not supported.
* A VM's original security group is not preserved on restore. Restored VMs use `pcd-veeam-proxy-sg` when port security is enabled.
* PVP appliances must be deployed manually per tenant. This will be automated in a future release.
* `HTTP_PROXY` environment variables are not supported.
* VM tag backup and restore are not supported.
* The root volume of a VM cannot be restored using single-disk restore.

### Troubleshooting

#### Log locations

Logs are stored on the PVP appliance VM at the following paths:

<table><thead><tr><th width="219.5999755859375">Log</th><th>Path</th></tr></thead><tbody><tr><td>Appliance setup</td><td><code>/var/log/pf9-install.log</code></td></tr><tr><td>PVP runtime</td><td><code>/var/log/pf9/proxy.log</code></td></tr><tr><td>Image transfer</td><td><code>/var/log/pf9/image-transfer-&#x3C;id>/</code></td></tr><tr><td>Image transfer events</td><td><code>/var/log/pf9/image-transfer-events/transfer-&#x3C;id>.events.log</code></td></tr></tbody></table>

#### Generate a support bundle

When contacting Platform9 support, include a support bundle so the team has the logs needed to investigate.

To generate the bundle, SSH into the PVP appliance and run:

```shell
pvp-support-bundle
```

The command creates a compressed `.tar.gz` file in the current directory. Share this file with Platform9 support.
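If several bundles accumulate over time, a small helper can pick out the most recent one to share. This helper is not part of the PVP tooling — it only assumes the `.tar.gz` extension described above:

```shell
# Print the newest .tar.gz in a directory (defaults to the current directory).
# Convenience sketch only; not part of the PVP appliance.
newest_bundle() {
  ls -1t "${1:-.}"/*.tar.gz 2>/dev/null | head -n 1
}
```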

#### PVP appliance setup failure

If `/var/log/pf9-install.log` shows an error, retry setup with:

```shell
sudo -E /etc/pf9/install.sh
```

If the error persists, generate a support bundle and contact Platform9 support.

