Virtualized Cluster & Blueprint
A Virtualized Cluster in Private Cloud Director is a group of hypervisor hosts onto which virtual machines are provisioned.
You can create one Virtualized Cluster per region. However, you can further divide the cluster into multiple subgroups of hosts using Host Aggregates. This lets you group hosts with similar characteristics into a Host Aggregate and target VM provisioning at that aggregate.
Support for adding multiple clusters per region is coming in the future.
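The aggregate-based targeting described above can be sketched as a simple matching step: a provisioning request carries required properties, and only hosts in aggregates whose metadata satisfies those properties are candidates. The hostnames, aggregate names, and properties below are hypothetical examples, not values from the product.

```python
# Illustrative sketch of aggregate-based VM targeting.
# All names and properties here are made-up examples.

def hosts_for_request(aggregates, required_props):
    """Return hosts from aggregates whose metadata satisfies the request."""
    matched = set()
    for agg in aggregates:
        if all(agg["properties"].get(k) == v for k, v in required_props.items()):
            matched.update(agg["hosts"])
    return sorted(matched)

aggregates = [
    {"name": "ssd-hosts", "hosts": ["host1", "host2"], "properties": {"disk": "ssd"}},
    {"name": "gpu-hosts", "hosts": ["host3"], "properties": {"gpu": "true"}},
]

print(hosts_for_request(aggregates, {"disk": "ssd"}))  # ['host1', 'host2']
```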
Cluster Blueprint
A cluster blueprint allows you to describe and configure your virtualized cluster in a declarative, prescriptive manner. A blueprint is designed to help you express your desired cluster architecture up front, and to ensure that cluster capacity added over time conforms to this desired architecture.
Create a Cluster Blueprint
To deploy and use a virtualized cluster, your first step will be to describe your intended architecture for that cluster via a cluster blueprint.
Navigate to Infrastructure -> Cluster Blueprints in the Private Cloud Director UI to create your cluster blueprint.
Following are the parameters you need to specify while creating a cluster blueprint:
Cluster Resource Management
Cluster resource management provides two critical features that are widely used in the virtualization community:
Virtual Machine High Availability (VM HA)
Automatic Resource Rebalancing (ARR)
Virtual Machine High Availability (VM HA)
By default, virtualized clusters in Private Cloud Director are configured to provide Virtual Machine High Availability, i.e. fault tolerance via automatic recovery of VMs following a host failure.
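The recovery behavior can be illustrated with a minimal sketch (this is not Private Cloud Director's actual implementation): when a host is detected as down, its VMs are rescheduled onto surviving hosts, here greedily onto whichever host has the most free capacity. Host names, VM names, and the one-unit-per-VM capacity model are assumptions for illustration.

```python
# Minimal sketch of VM HA recovery: evacuate VMs from a failed host onto
# the surviving host with the most headroom. Capacity is modeled as
# 1 unit per VM purely for illustration.

def recover_vms(placement, capacity_free, failed_host):
    """Move every VM off failed_host to the surviving host with most headroom."""
    evacuated = placement.pop(failed_host, [])
    for vm in evacuated:
        target = max(placement, key=lambda h: capacity_free[h])
        placement[target].append(vm)
        capacity_free[target] -= 1
    return placement

placement = {"hostA": ["vm1"], "hostB": [], "hostC": ["vm2", "vm3"]}
free = {"hostA": 3, "hostB": 5, "hostC": 1}
print(recover_vms(placement, free, "hostC"))
# vm2 and vm3 land on hostB, the host with the most free capacity
```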
Automatic Resource Rebalancing (ARR)
Automatic resource rebalancing monitors capacity allocation as well as real-time capacity utilization within the cluster, and attempts to rebalance the cluster to maximize headroom on each host. Administrators can also configure alternative rebalancing strategies, though these are not commonly used.
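A headroom-maximizing rebalance pass can be sketched with a greedy heuristic in the spirit of ARR (the product's actual algorithm is internal and not documented here). Loads are expressed as integer utilization percentages; a pass migrates VMs from the busiest host toward the idlest one until the spread falls under a threshold. Host names, VM sizes, and the threshold are made up.

```python
# Hedged sketch of a greedy rebalance pass: repeatedly move a VM from the
# busiest host to the idlest host while the load spread exceeds a threshold.
# Loads are integer percentages to keep the arithmetic exact.

def rebalance(load, vms, threshold=15):
    """Single illustrative pass; returns the list of (vm, src, dst) moves."""
    moves = []
    while True:
        busiest = max(load, key=load.get)
        idlest = min(load, key=load.get)
        if load[busiest] - load[idlest] <= threshold or not vms[busiest]:
            break
        vm, size = vms[busiest].pop()
        vms[idlest].append((vm, size))
        load[busiest] -= size
        load[idlest] += size
        moves.append((vm, busiest, idlest))
    return moves

load = {"h1": 80, "h2": 30, "h3": 40}
vms = {"h1": [("vm-a", 20), ("vm-b", 25)], "h2": [], "h3": []}
moves = rebalance(load, vms)
print(moves)  # [('vm-b', 'h1', 'h2')]
```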
Network Configuration
To describe your physical network architecture, you'll need to:
Configure global network parameters
Describe each physical network
Cluster Network Parameters
DNS Domain Name is used to ensure that when VMs are provisioned, DNS entries within the cluster reflect FQDNs for the VMs that include this domain name.
Info
By default, clusters use an internal DNS resolution service, but they can be configured to use an external DNS management service via a custom configuration.
Enable Distributed Virtual Routing
Distributed virtual routing distributes network routing components uniformly across all hosts in the cluster. DVR is enabled by default and is the recommended configuration for most environments. Disabling DVR will require you to designate a subset of the hosts in the cluster as "Network Nodes", i.e. nodes that are designated as having routing responsibility.
Enable Virtual Networks
To enable Virtual Networks, you'll need to configure the underlay technology that is used to tunnel virtual network traffic using an existing physical network. You can choose to use VLANs, VXLANs or Geneve as the underlay technology for virtual networks.
Within a cluster, you can only enable one underlay technology, and all hosts in that cluster will be configured automatically to use that underlay technology using the associated physical network interface. Any virtual networks that you create will be available to workloads running on all hosts in the cluster.
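One practical consequence of the underlay choice is MTU: each encapsulation adds header overhead, so VMs on overlay networks can use a slightly smaller MTU than the physical network. The figures below are the standard per-encapsulation header sizes (IPv4 underlay, Geneve shown with no options); your fabric's MTU and your deployment's exact overhead may differ.

```python
# Back-of-the-envelope MTU math for the underlay choices above.

OVERHEAD = {
    "vlan": 4,                 # 802.1Q tag
    "vxlan": 14 + 20 + 8 + 8,  # outer Ethernet + IPv4 + UDP + VXLAN = 50
    "geneve": 14 + 20 + 8 + 8, # base Geneve header, no options = 50
}

def vm_mtu(underlay, physical_mtu=1500):
    """Largest MTU a VM can use without fragmenting on the underlay."""
    return physical_mtu - OVERHEAD[underlay]

print(vm_mtu("vxlan"))         # 1450
print(vm_mtu("geneve", 9000))  # 8950
```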
Host Network Configuration
As described in Overview & Architecture, you can now describe one or more physical networks that VMs provisioned on your cluster will use. You do this by creating one or more Host Network Configurations.
A Host Network Configuration allows you to specify networking characteristics of a group of hosts. You do this by defining one or more Network Interfaces, assigning them a Physical Network Label, and describing what type of traffic will flow through these interfaces.
Physical Network Label
A Physical Network Label names a physical interface on a host, allowing you to tell Private Cloud Director that interfaces from different groups of hosts, even when named differently, belong to the same physical network.
For example, say you have one set of hosts with interfaces named eth0, eth1, and eth2, and a second set of hosts with bonded interfaces named bond0, bond1, and bond2. Now say you wish to use eth0 from the first set of hosts and bond0 from the second set for management traffic, because they are both connected to the management network. You do this by creating two separate Host Network Configurations, one for each set of hosts. In both, you provide the same Physical Network Label for eth0 and bond0, indicating to Private Cloud Director that they belong to the same physical network even though they are named differently. You then check the checkbox for 'management traffic' for this interface in both configurations.
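The example above can be expressed as data: two host network configurations map differently named interfaces to the same physical network label. The hostnames and labels below are hypothetical.

```python
# Sketch of two Host Network Configurations sharing physical network labels.
# Hostnames, interface names, and labels are made-up examples.

configs = [
    {"hosts": ["rack1-host1", "rack1-host2"],
     "interfaces": {"eth0": "mgmt-net", "eth1": "storage-net"}},
    {"hosts": ["rack2-host1"],
     "interfaces": {"bond0": "mgmt-net", "bond1": "storage-net"}},
]

def interfaces_for_label(configs, label):
    """Which interface carries a given physical network on each host."""
    result = {}
    for cfg in configs:
        for iface, lbl in cfg["interfaces"].items():
            if lbl == label:
                for host in cfg["hosts"]:
                    result[host] = iface
    return result

print(interfaces_for_label(configs, "mgmt-net"))
# {'rack1-host1': 'eth0', 'rack1-host2': 'eth0', 'rack2-host1': 'bond0'}
```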

Network Traffic Types
Following are the different types of network traffic you can specify for the interfaces:
Management - This is the management traffic interface that you typically use for administrative tasks like configuration, monitoring, and remote access for your hosts. This interface will also be used for communication between your SaaS or self-hosted management plane and hosts.
VM console - This is the network interface the Private Cloud Director UI will use to load the virtual machine console via VNC, using the hypervisor host's IP. In most setups, this will be the same as your management traffic interface.
Image library - This interface will be used for Image Library traffic. The UI will use this interface to upload any images that the user uploads via the UI. It will also be used to serve image contents to the hypervisor when a VM is provisioned using that image. In most setups, this will also be the same as your management traffic interface.
Virtual network tunnels - This interface will be used behind the scenes to route traffic when new virtual networks get created.
Host liveliness check - This interface will be used by the virtual machine high availability service to check if the hypervisor host is up or down.
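One plausible assignment of the traffic types above to two interfaces is sketched below, matching the notes that console and image traffic usually share the management interface. The interface names are examples only, not defaults.

```python
# Example (hypothetical) traffic-type-to-interface assignment.

traffic_map = {
    "management": "eth0",
    "vm_console": "eth0",      # typically same as management
    "image_library": "eth0",   # typically same as management
    "vnet_tunnels": "eth1",    # dedicated underlay interface
    "host_liveliness": "eth0",
}

# Interfaces that must be cabled and reachable on every host:
print(sorted(set(traffic_map.values())))  # ['eth0', 'eth1']
```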
Cluster Block Storage Access
Clusters can be configured to use a variety of block storage environments. Private Cloud Director provides out-of-the-box configuration for a subset of block storage backends, and fully supports the complete list of certified Cinder-compatible block storage devices.
To configure block storage, enable the backend that matches your existing storage devices, and provide the required access information to enable connectivity. If your storage device isn't listed separately, select the "Custom Cinder Backend", and specify the required access information via key-value pairs.
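For orientation, the key-value pairs you enter for a Custom Cinder Backend correspond to options in a Cinder backend configuration section. The fragment below is illustrative only; the LVM driver and backend name shown are examples, not a recommendation, and your storage vendor's driver will have its own required options.

```ini
# Illustrative Cinder backend section; driver and names are examples.
[DEFAULT]
enabled_backends = custom_backend

[custom_backend]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = custom_backend
```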
Customize Cluster Defaults
Clusters have additional default properties, which most users will not need to change. But you can customize these based on your requirements.