# Beginner's Guide to Deploying PCD Community Edition

Hey everyone! 👋 So, you're curious about setting up your own little cloud haven? I've got you covered with this simple guide to getting started with <code class="expression">space.vars.product\_name</code> Community Edition (CE). Think of it as your friendly "let's build something cool together" walkthrough.

## What's Community Edition?

This version of <code class="expression">space.vars.product\_name</code> is awesome for testing stuff out or if you're just starting small. You can deploy it on a bare-metal setup or inside a virtual machine. The infrastructure and workload regions run on the same VM, but you'll still need a separate hypervisor host, which can itself run as a VM alongside CE. This guide uses a single bare-metal host to run both VMs. Check out the [official docs](https://docs.platform9.com/private-cloud-director/getting-started/getting-started-with-community-edition) to learn more about <code class="expression">space.vars.product\_name</code> Community Edition.

## My Home Lab Setup

Since I'm a bit of a hardware geek (I build water-cooled gaming PCs in my spare time!), we'll use my home lab for this example. Here's the beast:

* Intel i9 12900k (16 cores, 24 threads)
* 64 GB RAM
* 2 TB SSD
* Nvidia 3090 Ti

<figure><img src="https://content.gitbook.com/content/SNWOoFOMzRblbHdwmlrR/blobs/wKoW25dgCG6lNva4KPMJ/nihff7xa7lhl5r53szj7jfn45hfk6b5tw3c3i97hct24k6f05wzyftqjwe93nzcs.jpg" alt=""><figcaption></figcaption></figure>

But hey, feel free to use whatever you've got lying around. The minimum hardware requirements for a CE host are:

* 8 CPUs
* 32 GB RAM
* 100 GB local storage

To create virtual machines, at least one hypervisor host must be available. The minimum hardware requirements for a hypervisor host are:

* 8 CPUs
* 16 GB RAM (suggested)
* 100 GB local storage (suggested)
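Before going further, it's worth confirming your machine actually meets these numbers. A quick way to check CPU count, memory, and free disk space on any Linux box:

{% tabs %}
{% tab title="Bash" %}

```bash
# Logical CPU count
nproc

# Total and available memory
free -h

# Free space on the root filesystem
df -h /
```

{% endtab %}
{% endtabs %}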

## Let's Get Down to Business: Deployment Steps

Okay, here's the rundown of what we're going to do:

**1. Bare-Metal Hypervisor Install:** First, we'll install [Ubuntu Desktop](https://releases.ubuntu.com/jammy/ubuntu-22.04.5-desktop-amd64.iso) on our machine (highlighted in red in the diagram below). This gives us the KVM hypervisor, which we'll use to spin up the virtual machines that will run Community Edition and our hypervisor host.

<figure><img src="https://content.gitbook.com/content/SNWOoFOMzRblbHdwmlrR/blobs/3dmRc0n9EaGlnZiHVflJ/ugoeeg8cc7k7cvphw6u07459g5s0u69sd1cwatpqj03e6qo4ua37dam8ztaf81xx.png" alt=""><figcaption></figcaption></figure>

**2. Private Cloud Director Community Edition Install:** We will then install Community Edition into the VM highlighted in red.

<figure><img src="https://content.gitbook.com/content/SNWOoFOMzRblbHdwmlrR/blobs/V2qvpbhudo0HxhYfJK3L/840htyclhapznnw2oarxnoxdzpn8zi5zziu9azsxkgc7rh99s7qcxoi0emuk34si.png" alt=""><figcaption></figcaption></figure>

**3. Hypervisor Host Onboarding:** Finally, we will onboard a hypervisor host that will run the workload virtual machines. And yes, we will take full advantage of nested virtualization to make this happen.

<figure><img src="https://content.gitbook.com/content/SNWOoFOMzRblbHdwmlrR/blobs/UN61b0aDwdKunrNIrTa9/9kzhvx2efpd4smivq9u93098uc48u1likmvque2d2p598flpfo7ert13sfj5zo86.png" alt=""><figcaption></figcaption></figure>

This is what everything will look like once we’re done. We will be onboarding a single hypervisor host, but feel free to onboard more (shown in dotted line below) if you have the resources available.

<figure><img src="https://content.gitbook.com/content/SNWOoFOMzRblbHdwmlrR/blobs/MlJ6xmiAB6RcdJejVpC7/1f8zejoavpbb5km0uudrqhcl4i5c9ihykev0y1x1su51jrmvkjsr88tqnyrqlj48.png" alt=""><figcaption></figcaption></figure>

Let’s get started!

## Bare-Metal Hypervisor Install

Install [Ubuntu Desktop](https://releases.ubuntu.com/jammy/ubuntu-22.04.5-desktop-amd64.iso) on the bare-metal host. We will use the `virt-manager` GUI to easily manage our virtual machines. Launch it with the following command.

{% tabs %}
{% tab title="Bash" %}

```bash
virt-manager
```

{% endtab %}
{% endtabs %}
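If `virt-manager` isn't already present, on a stock Ubuntu install it (and the KVM stack) can typically be added with `sudo apt install qemu-kvm libvirt-daemon-system virt-manager` (package names assume Ubuntu/Debian). It's also worth confirming the CPU exposes hardware virtualization before going further:

{% tabs %}
{% tab title="Bash" %}

```bash
# Count CPU flags indicating hardware virtualization (VT-x/AMD-V).
# A result of 0 means KVM, and the nested virtualization we rely
# on later, will not work on this machine.
grep -cE '(vmx|svm)' /proc/cpuinfo || true
```

{% endtab %}
{% endtabs %}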

Next, we will create an [Ubuntu Server 22.04](https://releases.ubuntu.com/jammy/ubuntu-22.04.5-live-server-amd64.iso) virtual machine to host <code class="expression">space.vars.product\_name</code> CE. Navigate to **File > New Virtual Machine.**

Follow the prompts to create the virtual machine. Ensure that the following resources are assigned to the virtual machine:

* 8 vCPU
* 32 GB RAM
* 100 GB local storage

Launch the VM, and follow the prompts to install Ubuntu Server.

<figure><img src="https://content.gitbook.com/content/SNWOoFOMzRblbHdwmlrR/blobs/jbjMrqraI8idq13VTxoq/ukaabmur3hx7rexq6oqr8vtlya5cjm2cjq4g94c9drknyp84ronlaj2ep81amvuc.png" alt=""><figcaption></figcaption></figure>

Follow the prompts until you reach the **Guided storage configuration** screen. For simplicity, uncheck LVM under the storage configuration menu.

<figure><img src="https://content.gitbook.com/content/SNWOoFOMzRblbHdwmlrR/blobs/w1w1Ivm3Pf6KEtJWkqkq/74vt4i9bn7h10ag0dji1wbpjtsl7grdhy54ehvsaie2621ajkqxhzxd2don5vlhg.png" alt=""><figcaption></figcaption></figure>

If you choose to enable LVM, ensure that the logical volume is expanded to take up the entire physical partition after installation is complete. Installing CE on a volume with less than 50 GB of space will result in failure, even if the underlying partition is larger.
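Before resizing anything, it helps to see the current disk layout. `lsblk` shows partitions and logical volumes along with their sizes, so you can spot a logical volume that's smaller than its underlying partition:

{% tabs %}
{% tab title="Bash" %}

```bash
# Show block devices, partitions, and LVM logical volumes with sizes
lsblk
```

{% endtab %}
{% endtabs %}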

Below are helpful commands to resize the logical volume.

Command to expand the logical volume to take up the entire partition:

{% tabs %}
{% tab title="Bash" %}

```bash
sudo lvresize -l +100%FREE /dev/mapper/<logical volume name>
```

{% endtab %}
{% endtabs %}

Example:

{% tabs %}
{% tab title="Bash" %}

```bash
sudo lvresize -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
```

{% endtab %}
{% endtabs %}

Command to resize the filesystem to match the logical volume:

{% tabs %}
{% tab title="Bash" %}

```bash
sudo resize2fs /dev/mapper/<logical volume name>
```

{% endtab %}
{% endtabs %}

Example:

{% tabs %}
{% tab title="Bash" %}

```bash
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
```

{% endtab %}
{% endtabs %}
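After resizing, you can confirm the root filesystem actually grew:

{% tabs %}
{% tab title="Bash" %}

```bash
# The Size column should now reflect the full partition
df -h /
```

{% endtab %}
{% endtabs %}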

## Private Cloud Director Community Edition Install

Now we are ready to deploy <code class="expression">space.vars.product\_name</code> CE to the virtual machine. Launch the VM that we just created. Run the commands below to switch to root and begin the deployment process.

{% tabs %}
{% tab title="Bash" %}

```bash
sudo su -
curl -sfL https://go.pcd.run | bash
```

{% endtab %}
{% endtabs %}

The final deployment step is long-running and takes around 45 minutes to complete.

Once the deployment completes, you will be presented with the <code class="expression">space.vars.product\_name</code> FQDN and login credentials.

Next, we will add hosts file entries to our Ubuntu Desktop environment so we can access the <code class="expression">space.vars.product\_name</code> UI from here. Replace `172.16.122.183` with the IP address of the Private Cloud Director Community Edition VM we just deployed. These entries resolve requests to `pcd-community.pf9.io` and `pcd.pf9.io` to the correct IP address.

{% tabs %}
{% tab title="Bash" %}

```bash
echo "172.16.122.183 pcd-community.pf9.io" | sudo tee -a /etc/hosts
echo "172.16.122.183 pcd.pf9.io" | sudo tee -a /etc/hosts
```

{% endtab %}
{% endtabs %}

From the Ubuntu Desktop environment, navigate to `pcd-community.pf9.io` in a web browser. If everything has gone well, you will see the <code class="expression">space.vars.product\_name</code> login screen.

Leave the Domain as default, choose "Use local credentials" at the top right, and log in with the credentials provided when the Community Edition install completed.

<figure><img src="https://content.gitbook.com/content/SNWOoFOMzRblbHdwmlrR/blobs/J37QvqRDxH0pgQg8yjIm/vcp0n388nfqsibnwbn76ldqehqun173pyoou314zc8lw24bntv6xqc9bu6yibm4s.png" alt=""><figcaption></figcaption></figure>

## Hypervisor Host Onboarding

Now we will create a new VM to serve as the hypervisor host for our workload VMs. Similar to how we created the CE VM, create another Ubuntu Server VM with the following resources:

* 8 CPUs
* 16 GB RAM
* 100 GB local storage

Back in the UI, navigate to **Infrastructure** > **Cluster Blueprint**. Fill out the required fields as shown below and click **Save Blueprint**.

<figure><img src="https://content.gitbook.com/content/SNWOoFOMzRblbHdwmlrR/blobs/gabPMHrb8CCjkl05Wt3w/8x0y454v48z2b8am28v85gr4kvk5necrcbk06y4083xp8rvdp3uy7yibts7ixc46.jpg" alt=""><figcaption></figcaption></figure>

The **Network Interface** refers to the name of the Ethernet network interface on the hypervisor host. You can view the network interfaces on your hypervisor host by running the following command.

{% tabs %}
{% tab title="Bash" %}

```bash
ip link show
```

{% endtab %}
{% endtabs %}
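If several interfaces show up and you're not sure which one to use, the interface carrying the default route is usually the right one (a convenience check; your interface name will differ from mine):

{% tabs %}
{% tab title="Bash" %}

```bash
# The interface named after "dev" in this output is the primary NIC
ip route show default
```

{% endtab %}
{% endtabs %}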

<figure><img src="https://content.gitbook.com/content/SNWOoFOMzRblbHdwmlrR/blobs/c3qC0K2BZjm5cwq6773t/96jfcj7mb0atcyw4xfc8npamjstdor62mo5qadvsrt6mhtj0ibsof7lziofmp5ea.png" alt=""><figcaption></figcaption></figure>

Now we will onboard the new hypervisor host onto PCD. Navigate to **Infrastructure** > **Cluster Hosts** and click the **Add New Hosts** button in the top right.

<figure><img src="https://content.gitbook.com/content/SNWOoFOMzRblbHdwmlrR/blobs/fJQfbWiMhFawZy3QGdJH/hjcdj71wn2yakojkdbar501gleba5mkf97l3te0s8clb5ad4z4fo5qkbqawxl6vl.png" alt=""><figcaption></figcaption></figure>

Before running the steps displayed, connect to the hypervisor host VM that we just created and add the same hosts file entries we previously added to our Ubuntu Desktop environment.

<figure><img src="https://content.gitbook.com/content/SNWOoFOMzRblbHdwmlrR/blobs/mo7PxxI3pbMWIwmBOSvu/73xwep0cxqp9h2179t6y1vntwd8nxaw4gabf3zsh5zzvdlfwgn1pn70ka0rmgb61.png" alt=""><figcaption></figcaption></figure>

Run the commands shown in the UI on your hypervisor host VM to onboard the host to PCD.

{% tabs %}
{% tab title="Bash" %}

```bash
bash <(curl -s https://pcdctl.s3.us-west-2.amazonaws.com/pcdctl-setup)
```

{% endtab %}
{% endtabs %}

<figure><img src="https://content.gitbook.com/content/SNWOoFOMzRblbHdwmlrR/blobs/6jkpPW0s5kdMYrJv9SkC/sz1o83l9vhii8ee90tgbf0qlj9dx89gveoaetaoorjpqm2shfo76d92kqaeb5uyu.png" alt=""><figcaption></figcaption></figure>

For the second command, skip the prompts for Proxy URL and MFA Token by pressing Enter. Enter the Account URL, Username, Region, and Tenant as shown in the UI. Enter your password when prompted.

{% tabs %}
{% tab title="Bash" %}

```bash
pcdctl config set -u https://pcd-community.pf9.io -e admin@airctl.localnet -r Community -t service
```

{% endtab %}
{% endtabs %}

<figure><img src="https://content.gitbook.com/content/SNWOoFOMzRblbHdwmlrR/blobs/lK2f4XAbkcXtiJxaqfSQ/ukedz4ud8g8727twngjgp33hnbc6jyykxzf8h1gqes6g8xa56cl5k9df6odbd9ef.png" alt=""><figcaption></figcaption></figure>

Finally, run the third command.

{% tabs %}
{% tab title="Bash" %}

```bash
pcdctl prep-node
```

{% endtab %}
{% endtabs %}

<figure><img src="https://content.gitbook.com/content/SNWOoFOMzRblbHdwmlrR/blobs/pGlfq4bIgTW8ymt27vQv/6fyqg6lo0otsy3vhx34cufpg4bbqi66ym3g99qu1qodkyy1wn8evea6kfci75fak.png" alt=""><figcaption></figcaption></figure>

Once the host provisioning process completes, you will see the host in the UI under **Infrastructure** > **Cluster Hosts**.

<figure><img src="https://content.gitbook.com/content/SNWOoFOMzRblbHdwmlrR/blobs/8kH9VJOiJmWU0pKHHESy/v3wew33nuishh4phmelrvxpqz4hg5z7u482pmfz099rwyq1716x1myljcdzivgzd.png" alt=""><figcaption></figcaption></figure>

## Creating a Virtual Machine with Persistent Storage

We will now create persistent storage that can be used by VMs: a Network File System (NFS) share in the Ubuntu Desktop environment that will be made available to PCD VMs.

Install dependencies with the following command:

{% tabs %}
{% tab title="Bash" %}

```bash
sudo apt install nfs-kernel-server
```

{% endtab %}
{% endtabs %}

Create the directory to be shared and update permissions.

{% tabs %}
{% tab title="Bash" %}

```bash
sudo mkdir -p /srv/nfs/shared
sudo chmod 777 /srv/nfs/shared
```

{% endtab %}
{% endtabs %}

Update the config file with the NFS share configuration.

{% tabs %}
{% tab title="Bash" %}

```bash
sudo nano /etc/exports
```

{% endtab %}
{% endtabs %}

Add the following line to the file.

{% tabs %}
{% tab title="Bash" %}

```bash
/srv/nfs/shared *(rw,no_subtree_check)
```

{% endtab %}
{% endtabs %}

The asterisk allows connections from any IP address. This isn't appropriate for real-world deployments, but we do it here for the sake of simplicity. **rw** allows read-write access.
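In a real deployment you'd scope the export to your LAN instead of `*`. For example (a sketch; adjust the CIDR to match your own subnet):

{% tabs %}
{% tab title="Bash" %}

```bash
/srv/nfs/shared 192.168.1.0/24(rw,no_subtree_check)
```

{% endtab %}
{% endtabs %}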

Restart the NFS server and check status.

{% tabs %}
{% tab title="Bash" %}

```bash
sudo systemctl restart nfs-kernel-server
sudo systemctl status nfs-kernel-server
```

{% endtab %}
{% endtabs %}

<figure><img src="https://content.gitbook.com/content/SNWOoFOMzRblbHdwmlrR/blobs/QL1y9oie3gXFCAAuLxwx/kck8hgdhzx5788ttyrvs2uba9rrtnwoleyiil7ftyaisjppiatv6t7eyrrarn1re.png" alt=""><figcaption></figcaption></figure>

We are now ready to connect to this NFS path from <code class="expression">space.vars.product\_name</code>. Navigate to **Infrastructure** > **Cluster Blueprint.** Under **Storage Volume Types**, type in a name for the **Volume Type** and click **Add Configuration**.

Name your volume configuration, select NFS as the Storage Driver, and use the following as the **nfs\_mount\_point**. Replace **`192.168.1.206`** with the IP address of the Ubuntu Desktop where we created the NFS share.

{% tabs %}
{% tab title="Bash" %}

```bash
192.168.1.206:/srv/nfs/shared
```

{% endtab %}
{% endtabs %}

Finally, navigate to **Infrastructure** > **Cluster Hosts**, select the host we onboarded a few steps ago, and click **Edit Roles**. Assign the Hypervisor role to the host by checking the box in the Hypervisor column. Under **Persistent Storage**, select the NFS configuration we created. Click **Update Role Assignment**.

It’s time to spin up a VM on our hypervisor host. First, we upload an image that will be used to create the VM. I am using [CirrOS](https://download.cirros-cloud.net/0.6.2/cirros-0.6.2-x86_64-disk.img) for its small footprint; download it to your Ubuntu Desktop. Navigate to **Images** > **Images** and click the **Add Image** button in the top right. Select the CirrOS image, choose the **qcow2** image type, and click **Add Image**.

Next, create a virtual network that we will attach to our VM. Navigate to **Networks and Security** > **Virtual Networks** and click the **Create Network** button in the top right. Configure the virtual network as follows.

Network configuration:

* Give the network a name
* Leave the rest of the Network Configurations defaults as set

Subnet configuration:

* Give the subnet a name
* Set or leave IPv4 as the default
* Enter `10.0.0.0/16` for the Network Address CIDR

Leave everything else as-is and click **Create Network**.

Finally, navigate to **Virtual Machines** > **Virtual Machines** and click the **Deploy New VM** button in the top right corner. Use the following steps to deploy the VM.

* Give the VM a name
* Choose the cluster
* Boot the VM from a new 20 GB volume on the NFS volume type
* Select the CirrOS image and click **Next**
* The next screen may offer to attach available volumes to the VM; skip it by clicking **Next**
* Choose the `m1.tiny.vol` flavor
* Choose the virtual network that was previously created (if that step was skipped, create the virtual network before moving forward with VM deployment)
* On the final screen of the deployment wizard, leave all of the defaults as-is; since CirrOS is not a cloud-init enabled image, you will not need to set a password during deployment
* Click **Deploy VM**, then **Finish** if needed

You should now see the VM in your **Virtual Machines** tab.

Select the VM and click the **Console** button to open a console session to the VM.

<figure><img src="https://content.gitbook.com/content/SNWOoFOMzRblbHdwmlrR/blobs/hxKrLbJ5uDF20QRS1akp/ugl6pkjcb0nukffsw5g06hjjxk7sw7bf3pzusr1wyxprhcs5hnqho58je8u1fy6b.png" alt=""><figcaption></figcaption></figure>

From here, you can log in using the `cirros` user and the default password `gocubsgo`. You can validate that the image was deployed on a 20 GB volume with the following command:

{% tabs %}
{% tab title="Bash" %}

```bash
df -h /
```

{% endtab %}
{% endtabs %}

The filesystem should show a size of approximately 19.4 GB.

Congratulations on making it to the end! We started with a bare-metal Ubuntu installation and deployed a VM in an enterprise-grade workload management solution. Please give <code class="expression">space.vars.product\_name</code> Community Edition a spin and let us know how things go.

Head over to our [subreddit](https://www.reddit.com/r/platform9/) and join the community!
