Host
Learn how to add and configure Hosts in Private Cloud Director for your virtualized cluster. Discover Host roles, management, and troubleshooting tips to ensure optimal performance and uptime for your cluster.
A Host is a physical machine that you supply to Private Cloud Director as a hypervisor. Each Host contains the resources needed for your cluster, such as virtual machines, storage, and networking components. Once authorized and configured, you can deploy virtual machines on top of the Host.
You can add multiple Hosts to your Private Cloud Director virtualized cluster. After you configure your Cluster Blueprint, Private Cloud Director has the information it needs to configure Hosts that you add to the cluster.
Learn how you can add a Host to your virtualized cluster, assign roles, and configure it for production use. You will also learn how to manage the Host lifecycle and troubleshoot common issues.
Hypervisor
Private Cloud Director standardizes on the open source KVM hypervisor behind the scenes. KVM is a type 1 hypervisor that runs directly within the Linux kernel on the host hardware. KVM relies on hardware virtualization extensions from Intel (VT-x) and AMD (AMD-V) processors to provide full virtualization capabilities.
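A quick way to confirm that a CPU exposes these extensions is to look for the vmx (Intel VT-x) or svm (AMD-V) flags. The sketch below factors the check into a function and demonstrates it against sample flag strings, so it does not depend on the machine it runs on; the sample strings are illustrative, not from a real host.

```shell
#!/bin/sh
# Check a CPU flags string for hardware virtualization extensions.
has_virt_flags() {
  printf '%s\n' "$1" | grep -E -q 'svm|vmx'
}

# Demo against sample flag strings (made up for illustration):
has_virt_flags "fpu vme vmx ept"  && echo "Intel VT-x detected"
has_virt_flags "fpu vme svm npt"  && echo "AMD-V detected"
has_virt_flags "fpu vme sse2"     || echo "no virtualization extensions"
```

On a real host, the equivalent check reads the live CPU flags: `grep -E "svm|vmx" /proc/cpuinfo`.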
Understanding Host Agent and Roles
Host Agent
The Platform9 Host agent is the first component you install for each Host. The Host agent enables you to add Hosts and configure their roles in your virtualized cluster. Based on the role assigned to each Host, the agent downloads and configures the required software, integrating the Host with the Private Cloud Director management plane.
The Host agent also provides ongoing health monitoring of the Host, including the detection of failures and errors. It helps Platform9 orchestrate upgrades when you choose to upgrade your Private Cloud Director deployment to a newer version.
Host Roles
As part of the Host authorization process, you can configure Hosts to perform specific functions by assigning them one or more roles. The following roles are supported:
Hypervisor
The Hypervisor role enables the Host to function as a KVM-based hypervisor in your virtualized cluster. It is recommended you assign this role to all Hosts in your cluster, unless you experience performance bottlenecks and want to avoid running VM workloads on select Hosts that carry other roles, such as the Image Library or storage roles.
Image Library
Every cluster needs at least one Image Library, which hosts the cluster copy of virtual machine source images from which you can provision new VMs. See Configuring Image Library Role for more information.
Persistent (Block) Storage
You can configure one or more Hosts in your cluster as Block Storage Nodes. See more on Block Storage Service Configuration.
Advanced Remote Support
For specific troubleshooting situations, Platform9 support teams may request access to automatically gather detailed telemetry from a Host that experiences problems. This mode is turned off by default. To enable Advanced Remote Support, contact Platform9 Support.
DNS
Enables DNS as a Service (DNSaaS), which is an optional component.
Prerequisites
Before adding a Host, ensure that it meets the prerequisites and that you have configured your Cluster Blueprint.
Verify that your Host meets the Pre-requisites for Private Cloud Director.
Ensure you have administrative access to the Host.
Confirm that your Cluster Blueprint is configured in the Private Cloud Director console.
Confirm that you have created a Cluster in the Private Cloud Director console.
Add a Host
To add a host to Private Cloud Director, first install the Private Cloud Director agent software on your physical host, then add the host to Private Cloud Director.
Step 1: Add a Host
The process for adding a host varies depending on which Platform9 product you are using. Choose the appropriate method below based on your deployment type:
SaaS Deployment
Self-Hosted Deployment
Community Edition Deployment
For SaaS deployments, you add a Host from the Private Cloud Director console with minimal configuration requirements.
Navigate to Infrastructure > Cluster Hosts on Private Cloud Director console.
Select Add New Hosts
Follow the on-screen instructions. You are required to run the pcdctl commands shown using sudo privileges. The commands require values specific to your environment, which are provided on your Private Cloud Director console. It takes about 2-3 minutes to download and install the Platform9 Host agent and other necessary Platform9 software components.
You have now successfully added a Host to your SaaS deployment. Continue to Step 2: Authorize Host and Assign Roles
For self-hosted deployments, you may need to manually configure DNS entries before adding the Host.
Step 1: Add a Host to Self-Hosted Deployment
Verify nested virtualization
If you are running the Host as a VM, you can verify that nested virtualization works by checking for virtualization support inside the VM:

egrep "svm|vmx" /proc/cpuinfo

Add DNS entries to each Host
An FQDN is a fully-qualified domain name for your Private Cloud Director installation. You will need both infrastructure and workload region FQDNs for your self-hosted deployment.
As a root user, add DNS entries for both the infrastructure and workload region FQDNs to the /etc/hosts file on the hypervisor Host:

echo "<IP> <FQDN-infrastructure-region>" | tee -a /etc/hosts
echo "<IP> <FQDN-workload-region>" | tee -a /etc/hosts

Replace <IP> with your management plane IP address and the FQDNs with your specific domain names.
For example:
echo "10.9.11.246 pcd.pf9.io" | tee -a /etc/hosts
echo "10.9.11.246 pcd-community.pf9.io" | tee -a /etc/hosts

Navigate to Infrastructure > Cluster Hosts and then select Add New Hosts.
Follow the on-screen instructions. You are required to enter the administrative user password when prompted. For more details on the pcdctl CLI, see PCD CLI - pcdctl.
You have now successfully prepared your Host for self-hosted deployment. Continue to Step 2: Authorize Host and Assign Roles
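If your host-preparation steps may run more than once, the tee -a commands above will append duplicate lines to /etc/hosts. A hedged sketch of an idempotent variant follows; it writes to a scratch file so it is runnable anywhere, whereas on a real Host you would target /etc/hosts with sudo. The add_host_entry helper name is our own, not part of any Platform9 tooling.

```shell
#!/bin/sh
# Append "IP FQDN" to the hosts file only when the FQDN is not already mapped.
# HOSTS_FILE points at a scratch file for this demo, not the real /etc/hosts.
HOSTS_FILE=$(mktemp)

add_host_entry() {
  grep -qw "$2" "$HOSTS_FILE" || printf '%s %s\n' "$1" "$2" >> "$HOSTS_FILE"
}

add_host_entry 10.9.11.246 pcd.pf9.io            # infrastructure region FQDN
add_host_entry 10.9.11.246 pcd-community.pf9.io  # workload region FQDN
add_host_entry 10.9.11.246 pcd.pf9.io            # re-run: duplicate is skipped
cat "$HOSTS_FILE"
```

Re-running the script leaves exactly one entry per FQDN, so host preparation stays safe to repeat.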
For Community Edition deployments, the process is similar to self-hosted but uses specific FQDNs for the community infrastructure, unless configured otherwise.
Step 1: Add a Host to Community Edition Deployment
Verify nested virtualization (for VM Hosts)
If you want to verify that nested virtualization works in a VM, check for virtualization support inside the VM:

egrep "svm|vmx" /proc/cpuinfo

If the command returns results, your VM supports nested virtualization and can run other virtual machines.
Configure DNS entries
By default, Community Edition uses these specific fully-qualified domain names, unless you have customized them using the Deployment URL & region name section of Custom Installation.
Workload region FQDN: pcd-community.pf9.io
Infrastructure region FQDN: pcd.pf9.io
Add DNS entries to your Host
Log in to your Host as a root user and add DNS entries for both FQDN by running the following command:
echo "<IP> <FQDN-infrastructure-region>" | tee -a /etc/hosts
echo "<IP> <FQDN-workload-region>" | tee -a /etc/hosts

Replace <IP> with your management plane IP address and the FQDNs with your specific domain names.
For example:
echo "10.9.11.246 pcd.pf9.io" | tee -a /etc/hosts
echo "10.9.11.246 pcd-community.pf9.io" | tee -a /etc/hosts

Add the Host through the Private Cloud Director console.
Navigate to Infrastructure > Cluster Hosts and then select Add New Hosts.
Follow the on-screen instructions. You are required to enter the administrative user password when prompted. For more details on the pcdctl CLI, see PCD CLI - pcdctl.
You have now successfully prepared your Host for Community Edition deployment. Continue to Step 2: Authorize Host and Assign Roles
Step 2: Authorize Host and Assign Roles
A successfully configured Host appears on Infrastructure > Cluster Hosts with an Unauthorized status, indicating that it requires authorization and cluster role assignment.
Navigate to Infrastructure > Cluster Hosts and select a specific Host.
Select Edit Roles to configure the appropriate roles for the Host based on your cluster architecture.
Configure roles based on your requirements:
Hypervisor Role: Enables the Host to function as a KVM-based hypervisor. Read more on Hypervisor Role.
Networking Service Configuration: Select appropriate Host Network Config for the Host networking requirements. Read more on Networking Service Configuration.
Image Library Role: Configures the Host to store VM images for the cluster. Read more on Configuring Image Library Role here.
Block Storage Role: Enables the Host to provide persistent storage services. Read more on Configure a Host with Block (Persistent) Storage Node Role.
Advanced Remote Support: Enables Platform9 support to gather detailed telemetry for troubleshooting purposes. Read more on Enabling Advanced Remote Support.
DNS Checkbox: Enables DNS as a Service (DNSaaS), which is an optional component. Read more on Configuring DNS-as-a-Service.

Select Update Role Assignment
The Private Cloud Director management plane works with the Platform9 agent installed on your Host to configure the required software. This process typically takes 3-5 minutes to complete. During this time, your Host status changes to converging in the Private Cloud Director console.
You have successfully authorized your Host and assigned roles. Your Host is now being configured for use in your cluster.
Step 3: Monitor Host addition status
While your Host is in the converging state, you can monitor the configuration progress by examining the Host agent log files.
Locate the Host agent log file.
The Host agent log files are located on your Host. See the Log Files section for detailed information about log file locations. The primary Host agent log is located at /var/log/pf9/hostagent.log on your Host.
Monitor the configuration progress.
Tail the log file to monitor the status of the host addition:

tail -f /var/log/pf9/hostagent.log

Verify successful completion
Monitor the log output for completion indicators. Once the Host is authorized and the role assignments have taken effect, your Host status changes from converging to ok in the Private Cloud Director console.
Your Host is now ready to use and can run workloads according to its assigned roles.
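When scanning hostagent.log output, it can help to filter for role and error messages. The sketch below demonstrates such a filter against hypothetical sample lines; actual log wording varies by version, so treat the pattern as a starting point only.

```shell
#!/bin/sh
# Filter log lines for role-convergence and error indicators.
# The sample messages below are made up for illustration.
filtered=$(printf '%s\n' \
  "INFO  starting role apply" \
  "DEBUG heartbeat ok" \
  "ERROR package download failed" \
  "INFO  role apply complete" |
  grep -i -E 'role|error')
printf '%s\n' "$filtered"
```

On a real host the equivalent would be `grep -i -E 'role|error' /var/log/pf9/hostagent.log`.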
Manage Host lifecycle
Remove a Host
Entirely removing a Host from your Private Cloud Director setup is a two-step process. You must first remove all roles assigned to the Host, and then, if necessary, decommission the Host to clean up any Private Cloud Director related data associated with it. You must perform both steps if you plan to re-add the Host to your current or another Private Cloud Director setup.
Step 1 - Remove all roles and deauthorize a Host
Removing all roles from a Host is the first step toward entirely removing a Host from your Private Cloud Director setup. Removing all roles uninstalls any specific packages and software components assigned to the Host.
Prerequisites before removing a Host from a Private Cloud Director setup:

If the host is assigned the hypervisor role, make sure that no VMs are running on the host using sudo virsh list --all. The expected output is an empty list of VM UUIDs and their corresponding statuses.

If the host is assigned a persistent storage role, make sure that this host is serving no storage volumes.

If the host is assigned the image library role, ensure that the host is serving no images in the image library. On the image library host, check whether any image UUIDs exist in the glance default directory /var/lib/glance/images/, or, if a custom image directory was added, check that location.
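The image-library check can be scripted. A minimal sketch follows; IMAGE_DIR points at a temporary directory here so the sketch is runnable anywhere, while on a real host it would be the glance default directory /var/lib/glance/images/ (or your custom image directory).

```shell
#!/bin/sh
# Report whether the image library directory still contains images.
# Using a temp dir for the demo; substitute the real image directory in practice.
IMAGE_DIR=$(mktemp -d)
if [ -z "$(ls -A "$IMAGE_DIR" 2>/dev/null)" ]; then
  result="empty: safe to remove the image library role"
else
  result="images still present in $IMAGE_DIR"
fi
echo "$result"
```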
Navigate to the Cluster Hosts on the console.
Select the specific Host.
From the Actions bar dropdown, select Remove all roles.
Remove all roles using CLI:
Use the following pcdctl command to remove all roles from a Host:
pcdctl deauthorize-node

Step 2 - Decommission a Host
When you remove all roles from a Host using the command above, any Private Cloud Director specific packages and software components associated with those roles are uninstalled and removed from the Host. However, any directories where the packages were installed are not cleaned up or deleted. This ensures that you still have access to the log files for those components if required for debugging.
To remove these directories and clean up any Private Cloud Director related data from the Host, you need to run the decommission Host command.
Prerequisites:
You must remove all roles from the Host using the Private Cloud Director console or CLI before decommissioning.
Always back up important data, such as log files and configuration files, from the Host before decommissioning.
You must decommission a Host before you can add it again to your current or any other Private Cloud Director setup. Not doing so results in problems when re-authorizing the Host in the Private Cloud Director setup.
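A sketch of the backup step, archiving log files before decommissioning. A temporary stand-in directory is used here in place of the real /var/log/pf9 so the example is runnable anywhere; on a real host you would point SRC at /var/log/pf9 and run with sudo.

```shell
#!/bin/sh
# Archive a host's Platform9 logs before decommissioning.
# SRC stands in for /var/log/pf9 in this demo.
SRC=$(mktemp -d)
echo "sample log line" > "$SRC/hostagent.log"

BACKUP="$(mktemp -d)/pf9-backup.tar.gz"
tar -czf "$BACKUP" -C "$SRC" .
echo "backup written to $BACKUP"
```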
Decommission using CLI:
Currently, decommissioning a Host can only be performed using pcdctl CLI.
Use pcdctl to decommission a Host by running the following command:
$ pcdctl decommission-node
Do you wish to decommission the node? (y/n) y
Checking if any roles exist on the host
Cleaning up the node
Decommission Successful

Once the command executes successfully, the Host is removed from the list of active Hosts in the Private Cloud Director console. If you encounter errors during decommissioning, check the logs for details and ensure the Host is reachable.
Host Properties
Host ID
Each hypervisor host gets a system-assigned ID when it is created. By default, the ID is not shown in the host grid UI, but you can display it by clicking the 'Manage Columns' button on the cluster hosts grid view and selecting the ID field. You can also view a host's ID on the host details view by clicking the host name in the host grid view, or query it from the pcdctl CLI by running pcdctl hypervisor list or pcdctl hypervisor show <hypervisor-name>, where <hypervisor-name> is the name of your hypervisor host.
Host Connection Status
Host connection status, represented by the 'Connection Status' column in the Cluster Hosts grid in the UI, represents the status of connectivity between the host agent and the Private Cloud Director management plane. The following are the different status values:
online - The Host agent is connected to the Private Cloud Director management plane and responds to heartbeats.
offline - The Host agent is unable to connect to the Private Cloud Director management plane, either because the host is powered off or because the host is running but the Platform9 host agent is experiencing issues connecting with the management plane. For more information on debugging steps, refer to Host Issues.
Host Role Status
Host role status, represented by the 'Role Status' column in the Cluster Hosts grid in the UI, indicates whether the host has been added to any virtualized cluster and assigned roles within the cluster. The following are the different role status values:
unauthorized - The host has the Private Cloud Director host agent installed but has not yet been added to a cluster or assigned any specific roles.
applied - The host is assigned to a cluster, has specific roles assigned to it within that cluster, and those roles have been successfully applied.
converging - A new role is being applied to the host and/or the host is being authorized and added to a cluster.
failed / error - The host appears to be in a failed or error state. For more information, refer to Host Issues.
unknown - The role status is displayed as unknown when the host connection status is offline, because the management plane cannot determine whether the application of any roles was in progress or successful.
Host OverCommitment / Allocation Ratios
Host allocation ratios for CPU and Memory enable you to specify the amount of resource overcommitment you would like to configure for this host.
The CPU allocation ratio compares the total number of vCPUs across all virtual machines hosted on this machine to the total number of physical cores. For example, a value of 1 indicates no overallocation: each virtual CPU corresponds to a single physical core. A value of 5 indicates an oversubscription of 5 virtual CPUs per physical core. The default CPU allocation ratio is 1:16, so by default each physical core can be oversubscribed across 16 virtual CPUs.
The Memory allocation ratio compares the total RAM across all virtual machines placed on this host to the total amount of physical RAM. For example, a value of 1 indicates no memory overallocation: each GB of RAM allocated to a VM corresponds to a GB of physical RAM on the host. A value of 5 indicates an oversubscription of 1:5 between physical and virtual RAM. The default memory allocation ratio is 1:1.5, so by default each physical GB of RAM can be oversubscribed to 1.5 GB of virtual RAM.
The Disk allocation ratio compares the total ephemeral disk space allocated across the root disks of all virtual machines on this host that use an ephemeral root disk to the total physical ephemeral storage available on the host. A value of 1 indicates no oversubscription for ephemeral storage: each GB of ephemeral storage allocated to a VM corresponds to a GB of physical storage on the host. A value of 5 indicates a 1:5 oversubscription between physical and virtual storage. The default disk allocation ratio is 1:1, so by default there is no oversubscription for virtual ephemeral disks, and each 1 GB of virtual ephemeral disk space is backed by 1 GB of physical disk space.
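As a worked example, consider a host with 32 physical cores and 256 GB of RAM under the default ratios (16x CPU, 1.5x memory, 1x disk); the host sizes here are illustrative:

```shell
#!/bin/sh
# Effective schedulable capacity = physical capacity * allocation ratio.
cores=32
ram_gb=256
echo "schedulable vCPUs:  $((cores * 16))"      # 32 cores * 16  = 512
echo "schedulable RAM GB: $((ram_gb * 3 / 2))"  # 256 GB  * 1.5  = 384
```

The scheduler would treat this host as having 512 vCPUs and 384 GB of RAM to hand out, even though the physical hardware provides far less.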
Host Aggregates
A host aggregate is a group of hosts within your virtualized cluster that share common characteristics. Read Host Aggregate for more information on how to configure them.
Debugging Compute Service Problems
If your Private Cloud Director Service Health dashboard indicates that the Compute Service is unhealthy, it may be because a large percentage of your hosts with the Hypervisor role assigned are offline, or because the Compute service is unresponsive on those hosts. Refer to the log files to debug the issue further.
Important Directories
/var – All logs go under /var/log/pf9. The only exception is the pcdctl log files, which go under /var/log.
/opt – All the packages for services installed by Private Cloud Director go under /opt/pf9
/var/opt/pf9 - Subdirectories containing temporary and state files for the Platform9 host agent and networking service go under here.
Log Files
Essential log files for debugging:
Log files for all services - Each host stores all its log files for the various components running on it at /var/log/pf9. Here you will find logs for compute, image library, storage, networking, and other services, depending on the roles assigned to that host. See the documentation for each service for more information about its log files.

Compute service log - The log file for the compute service is located at /var/log/pf9/ostackhost.log on all hosts with the hypervisor role assigned. Useful for debugging issues with virtual machine creation or updates.

Host agent log - The log file for the Platform9 host agent installed on each host is located at /var/log/pf9/hostagent.log. Helpful for debugging issues with host agent installation or connectivity with the management plane.

Communication agent log - Located at /var/log/pf9/comms/comms.log. Log file for the Platform9 communications agent, which is responsible for ensuring the health and uptime of the host agent. Helpful for debugging connectivity issues.

Libvirt logs - Located at /var/log/libvirt/qemu/<vm-id>, where <vm-id> is the UUID of the VM, and at /var/log/libvirt/libvirtd.log. Libvirt logs help with debugging resource allocation or other issues with virtual machine instances.
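When filing a support case, gathering whichever of these log locations exist into a single archive is a common first step. The sketch below mirrors the paths listed above but builds them under a scratch root so it is runnable anywhere; on a real host you would archive the real /var/log paths with sudo.

```shell
#!/bin/sh
# Collect existing Platform9 log directories into one archive.
# ROOT is a scratch tree for this demo standing in for the real filesystem.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/var/log/pf9" "$ROOT/var/log/libvirt"
echo "demo entry" > "$ROOT/var/log/pf9/hostagent.log"

ARCHIVE="$ROOT/pf9-logs.tar.gz"
collected=""
for d in var/log/pf9 var/log/libvirt; do
  [ -d "$ROOT/$d" ] && collected="$collected $d"   # skip missing directories
done
tar -czf "$ARCHIVE" -C "$ROOT" $collected
echo "archived:$collected"
```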