# Install

This guide outlines the steps for a self-hosted deployment of <code class="expression">space.vars.product\_name</code>. Before installing, review the [Pre-requisites](https://docs.platform9.com/private-cloud-director/2025.10/getting-started/pre-requisites) section and confirm that your environment meets all the requirements.

## Concepts

**Management Cluster**

As part of the installation process, the Self-Hosted version of <code class="expression">space.vars.product\_name</code> creates a Kubernetes cluster on the physical servers that you deploy it on. We refer to this cluster as the **management cluster**. The <code class="expression">space.vars.product\_name</code> **management plane** then runs as a set of Kubernetes pods and services on this management cluster.

**Nodelet**

Nodelet is a software agent that is installed and run on each management cluster node as a component of <code class="expression">space.vars.self\_hosted\_product\_name</code>. The Nodelet agent is responsible for functions such as installing and configuring multiple Kubernetes services, including etcd, containerd, Docker, networking, and webhooks.

**Infra Region vs Workload Regions**

Read [Tenant](https://docs.platform9.com/private-cloud-director/2025.10/identity-and-multi-tenancy/tenant) to understand the concepts of regions and `infra` region in <code class="expression">space.vars.product\_name</code>.

## Download Installer

`airctl` is the command-line installer for <code class="expression">space.vars.self\_hosted\_product\_name</code>. Run the following commands on only one of the management cluster hosts to download `airctl` along with the required installer artifacts.

All the following steps should be performed by a non-root user.

#### Step 1: Download the Installer Script

Run the following command to download the installer script and required artifacts into your home folder:

{% tabs %}
{% tab title="Bash" %}

```bash
curl --user-agent "<YOUR_USER_AGENT_KEY>" https://pf9-airctl.s3-accelerate.amazonaws.com/v-2025.10.1-4204351/index.txt | awk '{print "curl -sS --user-agent \"<YOUR_USER_AGENT_KEY>\" \"https://pf9-airctl.s3-accelerate.amazonaws.com/v-2025.10.1-4204351/" $NF "\" -o ${HOME}/" $NF}' | bash
```

{% endtab %}
{% endtabs %}

{% hint style="info" %}
**NOTE**

Replace `YOUR_USER_AGENT_KEY` in the command with the user agent key you requested from Platform9. For more details, see [Pre-requisites](https://docs.platform9.com/private-cloud-director/2025.10/getting-started/pre-requisites).
{% endhint %}
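Before piping the generated commands to `bash`, you can inspect what the `awk` stage produces. The snippet below runs the same transform on a single hypothetical artifact name (the real names come from `index.txt`):

```bash
# Run the awk transform from Step 1 on one sample line. Note that ${HOME} is
# printed literally by awk and only expands when bash executes the command.
line="airctl.tar.gz"   # hypothetical artifact name
cmd=$(echo "$line" | awk '{print "curl -sS \"https://pf9-airctl.s3-accelerate.amazonaws.com/v-2025.10.1-4204351/" $NF "\" -o ${HOME}/" $NF}')
echo "$cmd"
```

Dropping the final `| bash` from the full pipeline gives you the same preview for every artifact in `index.txt`.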

#### Step 2: Make the Installer Executable

Set the execute permissions on the installation script.

{% tabs %}
{% tab title="Bash" %}

```bash
chmod +x ./install-pcd.sh
```

{% endtab %}
{% endtabs %}

#### Step 3: Run the Installation Script

Execute the installer, passing the version number recorded in `version.txt`:

{% tabs %}
{% tab title="Bash" %}

```bash
./install-pcd.sh "$(cat version.txt)"
```

{% endtab %}
{% endtabs %}

#### Step 4: Add airctl to System Path

Add `airctl` to the system path so it can be used globally by creating a symlink in the `/usr/bin` directory.

{% tabs %}
{% tab title="Bash" %}

```bash
sudo ln -s /opt/pf9/airctl/airctl /usr/bin/airctl
```

{% endtab %}
{% endtabs %}

### Configure airctl

Run the following command to generate the configuration files used to deploy the <code class="expression">space.vars.self\_hosted\_product\_name</code> management cluster.

You can choose between a `single-master` or `multi-master` management cluster, depending on your installation type (POC or production).

{% tabs %}
{% tab title="Bash" %}

```bash
/opt/pf9/airctl/airctl configure --du-fqdn pcd.platform9.localnet --external-ip4 10.149.106.15 --ipv4-enabled --master-ips 10.149.106.11,10.149.106.12,10.149.106.13 --master-vip-interface ens3 --master-vip4 10.149.106.16 --storage-provider custom --regions Region1 --worker-ips 10.149.106.14
```

{% endtab %}
{% endtabs %}

Following are the input parameters for the command:

1. **du-fqdn -** Specify the base FQDN you would like to use for the deployment. For example, `pcd.mycompany.com`.
2. **external-ip4 -** Specify the VIP to be used for the management plane here.
3. **ipv4-enabled** - Enable the IPv4 networking for the cluster.
4. **master-ips** - Specify a comma-separated list of IP addresses for the master nodes you'd like to use for the management cluster. We recommend a minimum of three master nodes for a production environment. These nodes act as worker nodes as well.
5. **worker-ips (optional)** - If you'd like to add worker nodes to the management cluster, specify a comma-separated list of their IP addresses here.
6. **master-vip-interface** - Specify the name of the network interface to be used for the Virtual IP of the master nodes. Note that each master node's default network interface must have this name.
7. **master-vip4** - Specify the Virtual IP to be used for the management cluster. This will be used to serve the management Kubernetes cluster's API server.
8. **master-vip-vrouterid (optional)** - If unspecified, one is generated randomly and can be found in the updated cluster spec saved in the directory that contains the state of the management server (see the 'Important Files & Directories' section for the location). Specifying one is recommended if you plan to deploy multiple Kubernetes clusters in the same VLAN, to avoid collisions.
9. **regions** - Specify one or more region names, as a space-separated list, for the regions you would like to create in your <code class="expression">space.vars.product\_name</code> setup. When specifying more than one region, enclose the list in double quotes. The final FQDN for your <code class="expression">space.vars.product\_name</code> deployment combines your base FQDN and the region name. For example, if your base FQDN is `pcd.mycompany.com` and you specify a single region named region1, the final FQDN for your deployment will be `pcd-region1.mycompany.com`.
10. **storage-provider** - Specify the CSI storage provider to use for any persistent state of the management cluster. If not specified, the `hostpath-provisioner` storage provider is selected by default.
    1. For `custom` as the storage provider, copy provider-specific CSI YAML files to `/opt/pf9/airctl/conf/ddu/storage/custom/` locally. `airctl` configures the dependencies by reading the YAML files at this path and uses the provided storage class to provision volumes for <code class="expression">space.vars.product\_name</code> components. The `StorageClass` Kubernetes resource supplied here is renamed to `pcd-sc` and set as the default storage class on your management cluster.
    2. Alternatively, you can create the storage provider resources and storage class out of band before running `airctl start` on the management cluster. **Ensure that the storage class is named `pcd-sc` in this case.** If you do this, `airctl` skips reading the YAML files at the custom path. We also recommend first creating a test pod with a persistent volume to verify that connectivity and configuration are correct.
    3. For non-production environments, or where a custom storage provider is not available or required, select `hostpath-provisioner`.
11. **nfs-ip, nfs-share** - If using `hostpath-provisioner` as the storage provider, the NFS server IP and mount location must be provided.
    1. You should have an NFS server pre-configured before selecting this option.

{% hint style="info" %}
**Info**

If you plan to deploy a single-master management cluster (not recommended for production), enter the same master node IP for both the management cluster VIP (`master-vip4`) and the management plane VIP (`external-ip4`).
{% endhint %}
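Following the note above, a single-master POC invocation could look like the sketch below. The addresses are hypothetical, the lone master's IP is reused for both `--external-ip4` and `--master-vip4`, and the NFS values are placeholders you must supply:

```bash
# Single-master POC sketch (not for production); hypothetical addresses.
/opt/pf9/airctl/airctl configure \
  --du-fqdn pcd.platform9.localnet \
  --external-ip4 10.149.106.11 \
  --ipv4-enabled \
  --master-ips 10.149.106.11 \
  --master-vip-interface ens3 \
  --master-vip4 10.149.106.11 \
  --storage-provider hostpath-provisioner \
  --nfs-ip <NFS_SERVER_IP> \
  --nfs-share <NFS_EXPORT_PATH> \
  --regions Region1
```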

This command generates two configuration file templates:

* `/opt/pf9/airctl/conf/nodelet-bootstrap-config.yaml` – Contains the configuration required to bootstrap the management cluster.
* `/opt/pf9/airctl/conf/airctl-config.yaml` – Contains the configuration for the management plane.

To avoid passing the configuration file as a command-line option when running `airctl` commands, link `/opt/pf9/airctl/conf/airctl-config.yaml` into your `$HOME` directory:

{% tabs %}
{% tab title="Bash" %}

```bash
ln -s /opt/pf9/airctl/conf/airctl-config.yaml $HOME/airctl-config.yaml
```

{% endtab %}
{% endtabs %}

### Proxy Configuration (Optional)

If your environment uses a network proxy, set the required values in the `/opt/pf9/airctl/conf/helm_values/kplane.template.yml` file as shown below:

{% tabs %}
{% tab title="Bash" %}

```bash
cat /opt/pf9/airctl/conf/helm_values/kplane.template.yml | grep proxy
# Sample output:
https_proxy: "http://squid.platform9.horse:3128"
http_proxy: "http://squid.platform9.horse:3128"
no_proxy: "10.149.106.11,10.149.106.12,10.149.106.13,10.149.106.14,10.149.106.15,10.149.106.16,127.0.0.1,10.20.0.0/22,localhost,::1,.svc,.svc.cluster.local,10.21.0.0/16,10.20.0.0/16,.cluster.local,.platform9.localnet,.default.svc"
```

{% endtab %}
{% endtabs %}

The `no_proxy` list should include the master-ips, worker-ips, external-ip4, and master-vip4 addresses, along with any other addresses whose traffic should not be routed via the proxy server.
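The `no_proxy` value can be assembled from the addresses passed to `airctl configure`. A minimal sketch, using the example addresses from this guide:

```bash
# Build a no_proxy list from the example cluster addresses used in this guide.
master_ips="10.149.106.11,10.149.106.12,10.149.106.13"
worker_ips="10.149.106.14"
external_ip4="10.149.106.15"
master_vip4="10.149.106.16"
# Standard local and in-cluster names that should bypass the proxy.
local_names="127.0.0.1,localhost,::1,.svc,.svc.cluster.local,.cluster.local"
no_proxy="${master_ips},${worker_ips},${external_ip4},${master_vip4},${local_names}"
echo "$no_proxy"
```

Add your pod and service CIDRs and internal DNS suffixes to the list as well, as shown in the sample output above.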

Also, to ensure that containerd honors the proxy values and allows the creation of the <code class="expression">space.vars.product\_name</code> management cluster, update the proxy values on all management plane nodes as shown below:

{% tabs %}
{% tab title="Bash" %}

```bash
cat /etc/environment
# Sample output:
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/ssl/bin"
HTTP_PROXY="http://squid.platform9.horse:3128"
https_proxy="http://squid.platform9.horse:3128"
http_proxy="http://squid.platform9.horse:3128"
HTTPS_PROXY="http://squid.platform9.horse:3128"
NO_PROXY="10.149.106.11,10.149.106.12,10.149.106.13,10.149.106.14,10.149.106.15,10.149.106.16,127.0.0.1,10.20.0.0/22,localhost,::1,.svc,.svc.cluster.local,10.21.0.0/16,10.20.0.0/16,.cluster.local,.platform9.localnet,.default.svc"
no_proxy="10.149.106.11,10.149.106.12,10.149.106.13,10.149.106.14,10.149.106.15,10.149.106.16,127.0.0.1,10.20.0.0/22,localhost,::1,.svc,.svc.cluster.local,10.21.0.0/16,10.20.0.0/16,.cluster.local,.platform9.localnet,.default.svc"
```

{% endtab %}
{% endtabs %}

{% tabs %}
{% tab title="Bash" %}

```bash
cat /etc/systemd/system/containerd.service.d/http-proxy.conf
# Sample output:
[Service]
EnvironmentFile=/etc/environment
```

{% endtab %}
{% endtabs %}
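Creating the drop-in can be scripted as shown below. For illustration this writes to a temporary directory; on a real node you would write to `/etc/systemd/system/containerd.service.d/http-proxy.conf` as root:

```bash
# Write the containerd proxy drop-in (temp path here for illustration only).
dropin_dir=$(mktemp -d)
cat > "${dropin_dir}/http-proxy.conf" <<'EOF'
[Service]
EnvironmentFile=/etc/environment
EOF
cat "${dropin_dir}/http-proxy.conf"
```

On a real node, follow this with `sudo systemctl daemon-reload && sudo systemctl restart containerd` so containerd picks up the proxy values.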

## Deploy Management Cluster

Next, deploy the management cluster for your <code class="expression">space.vars.product\_name</code> environment.

#### Step 1: Run Pre-Checks

Before creating the management cluster, run the following command to perform pre-checks and resolve any issues identified:

{% tabs %}
{% tab title="Bash" %}

```bash
airctl check
```

{% endtab %}
{% endtabs %}

**Optional: Configure AWS credentials for `airctl` backup**

Before creating the cluster, make the AWS credentials required for S3 backups available by creating the following file:

{% tabs %}
{% tab title="Bash" %}

```bash
# File: /etc/default/airctl-backup
AWS_ACCESS_KEY_ID=<YOUR_ACCESS_KEY>
AWS_SECRET_ACCESS_KEY=<YOUR_SECRET_KEY>
AWS_REGION=<YOUR_AWS_REGION>
AWS_S3_PATH=s3://<YOUR_BUCKET_NAME>/<PATH>
```

{% endtab %}
{% endtabs %}

When you create this file before cluster deployment, the system automatically creates a Kubernetes secret named `aws-credentials` in the `pf9-utils` namespace. You need this secret to upload backups to your S3 bucket.

Without this file, you must manually create or patch the `aws-credentials` secret in the `pf9-utils` namespace after cluster creation.
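A quick way to sanity-check the file is to confirm that all four keys are present. The snippet below writes a sample file to a temporary path for illustration; point `f` at `/etc/default/airctl-backup` to check the real file:

```bash
# Verify that the backup credentials file defines every required key.
f=$(mktemp)   # stand-in for /etc/default/airctl-backup
cat > "$f" <<'EOF'
AWS_ACCESS_KEY_ID=AKIAEXAMPLE
AWS_SECRET_ACCESS_KEY=examplesecret
AWS_REGION=us-west-2
AWS_S3_PATH=s3://example-bucket/backups
EOF
missing=""
for key in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_REGION AWS_S3_PATH; do
  grep -q "^${key}=" "$f" || missing="${missing} ${key}"
done
echo "missing keys:${missing:- none}"
```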

#### Step 2: Deploy the Kubernetes Cluster

Run the following command to deploy the Kubernetes cluster:

{% tabs %}
{% tab title="Bash" %}

```bash
airctl create-cluster --verbose
```

{% endtab %}
{% endtabs %}

#### Step 3: Validate Cluster Health

Once the cluster is created, verify that it is functioning properly and that all nodes are healthy:

{% tabs %}
{% tab title="Bash" %}

```bash
export KUBECONFIG=/etc/nodelet/airctl-mgmt/certs/admin.kubeconfig
kubectl get nodes
# Sample output:
NAME            STATUS   ROLES    AGE     VERSION
10.149.106.11   Ready    master   4m29s   v1.29.2
10.149.106.12   Ready    master   5m41s   v1.29.2
10.149.106.13   Ready    master   5m42s   v1.29.2
```

{% endtab %}
{% endtabs %}
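To check readiness programmatically, you can filter the `kubectl get nodes` output. The sketch below parses the sample output shown above; on a live cluster, replace the `printf` data with `kubectl get nodes --no-headers`:

```bash
# List any nodes that are not in Ready state (sample data stands in for
# `kubectl get nodes --no-headers` on a live cluster).
nodes_output=$(printf '%s\n' \
  '10.149.106.11 Ready master 4m29s v1.29.2' \
  '10.149.106.12 Ready master 5m41s v1.29.2' \
  '10.149.106.13 Ready master 5m42s v1.29.2')
not_ready=$(echo "$nodes_output" | awk '$2 != "Ready" {print $1}')
echo "not ready: ${not_ready:-none}"
```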

{% hint style="info" %}
**Info**

Please refer to `/var/log/pf9/nodelet.log` for troubleshooting any issues with the management cluster creation.
{% endhint %}

***

## Install Management Plane

Now that the management cluster is created, run the following commands to install and configure the <code class="expression">space.vars.product\_name</code> self-hosted management plane.

{% tabs %}
{% tab title="Bash" %}

```bash
airctl start

# Sample output:
 INFO  pcd-virt management plane creation started
 SUCCESS  generating certs and config...
 SUCCESS  setting up base infrastructure...
▀  starting consul...Secret consul-gossip-encryption-key in namespace default not found, creating new...
 INFO  kplane setup done, creating management plane
 INFO  starting pcd-virt deployment...
 SUCCESS  pcd-virt deployment now complete
 INFO  pcd-virt management plane created - the services will take a while to start
```

{% endtab %}
{% endtabs %}

{% hint style="info" %}
**Info**

It may take up to 45 minutes for all services to be deployed.
{% endhint %}

### Monitor Deployment Progress

You can track the progress of the management plane deployment by checking the logs of the `du-install` pod as the `root` user:

{% tabs %}
{% tab title="Bash" %}

```bash
export KUBECONFIG=/etc/pf9/kube.d/kubeconfigs/admin.yaml
kubectl get pod -A | grep du-install
kubectl logs -n <ns> <pod name> -f
```

{% endtab %}
{% endtabs %}

Alternatively, to check specific pods:

{% tabs %}
{% tab title="Bash" %}

```bash
kubectl get pods -n foo-kplane | grep du-install
# Sample output:
du-install-foo-bmqqd                        0/1     Completed   0          108m
du-install-foo-region1-f7fdw                0/1     Completed   0          98m
```

{% endtab %}
{% endtabs %}

{% hint style="info" %}
**Info**

Please refer to `~/airctl-logs/airctl.log` in case of any issues with the `airctl start` command.
{% endhint %}

### Check Management Plane & Region Status

To verify the status of the management plane and its regions, run:

{% tabs %}
{% tab title="Bash" %}

```bash
airctl status
```

{% endtab %}
{% endtabs %}

## Obtain UI Credentials

Once the installation is complete, you can retrieve the credentials for the <code class="expression">space.vars.product\_name</code> UI by running the following command:

{% tabs %}
{% tab title="Bash" %}

```bash
airctl get-creds
```

{% endtab %}
{% endtabs %}

{% hint style="info" %}
**DNS Considerations**

If you do not have a working internal DNS that resolves the management plane FQDN to its IP address, you must manually update the `/etc/hosts` file on your local machine and any new host you want to add to the management cluster.
{% endhint %}

Use the following command to check if the necessary entry exists:

{% tabs %}
{% tab title="Bash" %}

```bash
cat /etc/hosts | grep foo
# Sample output:
<VIP for externalIP>         foo-region1.bar.io
<VIP for externalIP>         foo.bar.io
<VIP for externalIP>         foo-kplane.bar.io
```

{% endtab %}
{% endtabs %}
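If entries are missing, you can append them idempotently. The snippet below works on a temporary file for illustration; on your workstation you would target `/etc/hosts` with sudo. The VIP and FQDNs are the placeholder values from the sample output above:

```bash
# Append management plane FQDN entries to a hosts file if they are absent.
vip="10.149.106.15"    # VIP for externalIP (example value)
hosts_file=$(mktemp)   # stand-in for /etc/hosts
for fqdn in foo.bar.io foo-region1.bar.io foo-kplane.bar.io; do
  grep -q "[[:space:]]${fqdn}\$" "$hosts_file" || printf '%s\t%s\n' "$vip" "$fqdn" >> "$hosts_file"
done
cat "$hosts_file"
```

Running the loop again adds nothing, so it is safe to re-run after adding hosts to the management cluster.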

After updating the hosts file, open your web browser and log into the UI using the provided credentials. Then, follow the steps in the [Getting Started](https://docs.platform9.com/private-cloud-director/2025.10/getting-started/getting-started) guide to configure your hosts and set up your <code class="expression">space.vars.product\_name</code> environment.

## Important Files & Directories

The following directories contain various log and state files related to the <code class="expression">space.vars.product\_name</code> self-hosted deployment.

#### Directories

* `/opt/pf9/airctl` – Contains all binaries, offline installers, Docker image tar files, miscellaneous scripts, and configuration files for airctl.
* `/opt/pf9/pf9-kube` – Managed by Nodelet. Stores binaries and scripts used to manage the management cluster.
* `/etc/nodelet` – Contains configuration files for Nodelet and certificates generated by it.
* `~/airctl-logs/` – Stores all logs related to the deployment.

#### Files

* `/opt/pf9/airctl/conf/airctl-config.yaml` – Contains configuration information for the management plane.
* `/opt/pf9/airctl/conf/nodelet-bootstrap-config.yaml` – Stores configuration required to bootstrap the management cluster.
* `/var/log/pf9/nodelet.log` – Log file for troubleshooting issues with management cluster creation.
* `~/airctl-logs/airctl.log` – Log file for troubleshooting issues with management plane creation.
