# Backup and Restore Management Plane

This guide provides steps for backing up and restoring the self-hosted <code class="expression">space.vars.product\_name</code> management plane in disaster recovery scenarios. The procedures cover both manual and automated backup methods, as well as the manual restoration process.

{% hint style="info" %}
**Info**

When restoring the management plane, ensure it's done on a Kubernetes cluster that is separate from the cluster where the backup was generated.
{% endhint %}

## Prerequisites

#### System Requirements

* Access to the Kubernetes management cluster
* Installed and configured `airctl` binary
* Valid `airctl` configuration file at `/opt/pf9/airctl/conf/airctl-config.yaml`
* Root or sudo access to the management node

#### For S3 Backup Storage

* AWS credentials with S3 bucket access
* Existing S3 bucket for backup storage
* AWS CLI configured (for verification purposes)

## Important Considerations

1. The restoration process must be performed on a separate Kubernetes management cluster that is different from the management cluster where the backup was generated.
2. The metrics service (Gnocchi) data is not included in the backup. For complete disaster recovery, you must manually copy the Gnocchi metrics data from the persistent volume on the original storage class `pcd-sc`.

## Manual Backup Procedure

Create a backup directory:

{% tabs %}
{% tab title="Bash" %}

```bash
mkdir -p /tmp/backup-mgmt/
```

{% endtab %}
{% endtabs %}

Execute the airctl backup command:

{% hint style="info" %}
**Info**

Execute the following command as a non-root user.
{% endhint %}

{% tabs %}
{% tab title="Bash" %}

```bash
airctl backup --outdir /tmp/backup-mgmt/ --config /opt/pf9/airctl/conf/airctl-config.yaml --verbose
```

{% endtab %}
{% endtabs %}

{% hint style="info" %}
**Info**

Use `--region <region_name>` parameter if you intend to back up only a specific region. If not specified, all the regions will be included in the backup.
{% endhint %}

Verify backup contents:

{% tabs %}
{% tab title="Bash" %}

```bash
tar tvf /tmp/backup-mgmt/backup.tar.gz
```

{% endtab %}
{% endtabs %}

The backup archive should contain:

* `state_backup.yaml`: System state configuration
* `kplane_values_backup.yaml`: Kubernetes management cluster configuration
* `consul.snap`: Consul snapshot
* `mysql_dump_Infra.sql`: Infrastructure database backup
* `mysql_dump_Region1.sql`: Region-specific database backup
* `ovn-north-backup` & `ovn-south-backup`: OVN database backups

{% hint style="warning" %}
**Warning**

Metrics service (Gnocchi) persistent data is not backed up or restored as part of the above procedure. In a full disaster recovery scenario, you must manually copy the Gnocchi metrics data from the persistent volume on the original storage class `pcd-sc`.
{% endhint %}
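
To script this content check, a minimal sketch is shown below. This is not an official `airctl` helper; the expected file list here covers only the region-independent files, so extend it with your `mysql_dump_<Region>.sql` files as needed:

{% tabs %}
{% tab title="Bash" %}

```bash
# Verify that the expected files are present inside a backup archive.
# Usage: check_backup /tmp/backup-mgmt/backup.tar.gz
check_backup() {
    local archive="$1" missing=0
    # Region-independent files; add mysql_dump_<Region>.sql entries for your regions.
    local expected="state_backup.yaml kplane_values_backup.yaml consul.snap"
    local contents
    contents=$(tar tzf "$archive") || return 1
    for f in $expected; do
        if ! printf '%s\n' "$contents" | grep -q "$f"; then
            echo "missing: $f"
            missing=1
        fi
    done
    [ "$missing" -eq 0 ] && echo "backup archive looks complete"
}
```

{% endtab %}
{% endtabs %}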

## Automated Backup Configuration

The automated backup system is created during the initial [installation](https://docs.platform9.com/private-cloud-director/2025.4/getting-started/self-hosted-install#install-management-plane) of the <code class="expression">space.vars.product\_name</code> management plane. When you run the installation command, the system automatically creates a systemd service named `airctl-backup`, which is configured to run hourly to ensure regular system backups.

Backups are stored at `/var/pf9/backups/` on the node where `airctl` is installed.
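
To quickly confirm that hourly backups are accumulating, you can list the newest file in the backup directory. A small sketch (not an official `airctl` helper; the default path is the standard location above):

{% tabs %}
{% tab title="Bash" %}

```bash
# Print the most recently modified file in the backup directory.
# Usage: latest_backup [dir]   (defaults to /var/pf9/backups)
latest_backup() {
    local dir="${1:-/var/pf9/backups}"
    ls -1t "$dir" 2>/dev/null | head -n 1
}
```

{% endtab %}
{% endtabs %}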

You can verify the service status using:

{% tabs %}
{% tab title="Bash" %}

```bash
systemctl status airctl-backup
```

{% endtab %}
{% endtabs %}

The output should show that the `airctl-backup` service is inactive by default and becomes active only during a backup operation.

#### Configuring S3 Backup Storage

To enable storing backups in an S3 bucket, you need to create and configure a credentials file.

Create the file `/etc/default/airctl-backup` with the following AWS parameters:

{% tabs %}
{% tab title="Bash" %}

```bash
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_REGION=your_aws_region
AWS_S3_PATH=s3://your-bucket-name/path/
```

{% endtab %}
{% endtabs %}

The file should be owned by the user running the `airctl-backup` service and have appropriate permissions (typically 600).
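
One way to create the file with owner-only permissions from the start (a demo path is used here so the commands can run unprivileged; the real file is `/etc/default/airctl-backup` and requires root):

{% tabs %}
{% tab title="Bash" %}

```bash
# Demo path; in practice use /etc/default/airctl-backup (as root).
ENV_FILE=/tmp/airctl-backup.env

# Create the file with mode 600 before writing secrets into it.
install -m 600 /dev/null "$ENV_FILE"
cat >> "$ENV_FILE" <<'EOF'
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_REGION=your_aws_region
AWS_S3_PATH=s3://your-bucket-name/path/
EOF

# Confirm owner-only access.
stat -c '%a' "$ENV_FILE"
```

{% endtab %}
{% endtabs %}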

Once configured, backups will be stored both locally and in the specified S3 bucket location.

## Manual Restore Procedure

#### Standard Restore

Execute the restore command:

{% tabs %}
{% tab title="Bash" %}

```bash
airctl restore --backupdir /root/backup-mgmt --config /opt/pf9/airctl/conf/airctl-config.yaml --region <region_name> --verbose
```

{% endtab %}
{% endtabs %}

`--region` is optional; specify it only when restoring a specific region.

#### Restore from S3 Backup

Create and configure the `/etc/default/airctl-backup` file with required AWS parameters, making sure that `AWS_S3_PATH` points specifically to the backup file you want to restore, not just the S3 bucket:

{% tabs %}
{% tab title="Bash" %}

```bash
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_REGION=your_aws_region
AWS_S3_PATH=s3://your-bucket-name/path/specific-backup-file
```

{% endtab %}
{% endtabs %}

Execute the S3 restore command:

{% tabs %}
{% tab title="Bash" %}

```bash
airctl restore --s3backup --config /opt/pf9/airctl/conf/airctl-config.yaml --verbose
```

{% endtab %}
{% endtabs %}

{% hint style="info" %}
**Info**

For complete disaster recovery, manually restore the Gnocchi metrics data from the persistent volume on the original storage class `pcd-sc`.
{% endhint %}

## Verification Steps

Check backup file integrity using an MD5 checksum:

{% tabs %}
{% tab title="Bash" %}

```bash
# Generate MD5 checksum for the backup file
md5sum <backup-file>.tar.gz

# Optional: Compare with a pre-recorded checksum
# You can save the MD5 checksum when initially creating the backup
md5sum /root/backup-mgmt/backup.tar.gz > backup-checksum.txt

# Later, verify the backup file matches the original checksum
md5sum -c backup-checksum.txt
```

{% endtab %}
{% endtabs %}

Verify S3 uploads (if configured):

{% tabs %}
{% tab title="Bash" %}

```bash
aws s3 ls s3://<bucket-name>/<backup-path>
```

{% endtab %}
{% endtabs %}

Monitor restore progress:

{% tabs %}
{% tab title="Bash" %}

```bash
kubectl logs -f <restore-pod-name>
```

{% endtab %}
{% endtabs %}

## Common Issues

* If AWS credentials are not properly configured, automated backups are still created locally, but the S3 upload is skipped
* Restore operations may take significant time depending on data volume
* Services may take additional time to start after restore completion
