Block Storage

When configuring block storage for your Private Cloud Director setup, your first step is to configure the block storage role on one or more of your hypervisors, and to create one or more storage types and storage backend configurations as part of Cluster Blueprint creation.

Once storage is configured, administrators or end users can create block storage volumes and attach them to virtual machines.

Block Storage Service Configuration

As part of configuring your cluster blueprint, you define one or more block storage types that the clusters in your region will utilize.

Clusters can be configured to use a variety of block storage options. Private Cloud Director provides out-of-the-box configuration for a subset of block storage backends, but fully supports the complete list of certified Cinder-compatible block storage devices.

To configure block storage, enable the backend that matches your existing storage devices, and provide the required access information to enable connectivity. If your storage device isn't listed separately, select the "Custom Cinder Backend" option and specify the required access information via key-value pairs.
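As an illustration, the key-value pairs for a named backend mirror the options its Cinder driver expects. The sketch below uses the NetApp ONTAP iSCSI driver's documented option names; the address and credentials are placeholders, and the exact keys for your device come from that driver's documentation:

```ini
# Illustrative key-value pairs for a NetApp ONTAP iSCSI backend.
# Values are placeholders, not real credentials.
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
netapp_server_hostname = 192.0.2.10
netapp_login = admin
netapp_password = <password>
volume_backend_name = netapp-iscsi
```

For the "Custom Cinder Backend" option, you would supply an equivalent set of pairs taken from your own driver's documentation.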

Assigning the Block Storage Role to Hypervisors

When configuring block storage for your Private Cloud Director setup, you will configure one or more hypervisor hosts in your cluster as a 'Block Storage Node'. These hosts are configured with the Block Storage service data-plane components, enabling interoperability of your cluster with all Cinder-compatible storage devices. Block Storage hosts integrate with the virtualized cluster control plane and communicate with the backend storage device for storage provisioning operations. For instance, a Block Storage host will forward a volume creation command to a NetApp block storage backend when a VM is being created. The actual data path for storage I/O is directly between the hypervisor where that VM is being created and the storage backend.

Using Linux LVM for Block Storage

You can use Linux LVM as the storage option for your block storage requirements, where you designate a subset of your hypervisor nodes as block storage nodes. Each block storage node then serves block volumes to virtual machines using the local storage available to it. Behind the scenes, this mechanism uses the Logical Volume Manager (LVM) component included with Linux to manage locally attached storage.

While this may work for some use cases, its major limitation is the lack of redundancy. The block storage nodes act as independent “storage arrays” that do not share data with each other. If a block storage node fails, all volumes exported by that node, as well as the data on them, become unavailable.
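Behind the scenes, a block storage node configured this way corresponds to Cinder's LVM driver. The fragment below is a sketch of that configuration using the driver's standard option names; the section and volume group names are assumptions for illustration (Private Cloud Director manages this configuration for you):

```ini
# Sketch of the Cinder LVM driver configuration a block storage node uses.
# Section name and volume group name are illustrative assumptions.
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm
volume_backend_name = lvm
```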

Using Enterprise Block Storage

This is the most popular and recommended option for production environments. Platform9 Private Cloud Director integrates with a wide variety of enterprise storage solutions and exposes their native capabilities such as dynamic volume resizing, QoS, replication, etc.

Here's a list of commonly used and supported storage drivers. Also see the supported vendor matrix below for the full list.

Supported Vendors

Private Cloud Director uses OpenStack Cinder behind the scenes to serve block storage, and supports all certified Cinder-compatible block storage devices.

Refer here for the list of supported block storage vendor drivers. Click the link for the driver matching your enterprise storage to see which storage-specific configuration is supported, including the supported key-value pair metadata for your device.

Supported Capabilities Matrix

Refer to this matrix for the exact list of capabilities supported by your storage vendor:

Supported Storage Drivers & Capabilities Matrix

Storage Configuration

When you configure a block storage backend as part of cluster blueprint creation, the Private Cloud Director UI shows a subset of supported drivers in the dropdown list. If you select one of the listed drivers, the UI auto-populates the required key-value metadata for that driver for you to fill in. If your storage device is not explicitly listed, you can still configure it using key-value pair options in the Cluster Blueprint, as long as it appears in the list of supported storage vendor drivers above.

The backend configuration is applied via the Persistent Storage Role on a selected host.

To define a Custom storage backend using the Cluster Blueprint, you need to:

  1. Navigate to the Cluster Blueprint configuration under Infrastructure in the UI.

  2. Locate the section called Persistent Storage Connectivity.

  3. Under Storage Volume Types, select Add Volume Type.

  4. Click Add Configuration and select the Custom storage driver option from the dropdown.

  5. Provide the necessary access information for your storage device via key-value pairs.

This includes information such as the storage device's IP address, authentication details, and any other parameters required by its Cinder driver.
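As a sketch, many Cinder drivers for SAN-style devices share a set of common access options, so a custom backend's key-value pairs often look like the following. The driver class path and all values are placeholders; the real keys come from your driver's documentation:

```ini
# Illustrative key-value pairs for a custom Cinder backend.
# <vendor>/<DriverClass> and all values are placeholders.
volume_driver = cinder.volume.drivers.<vendor>.<DriverClass>
san_ip = 192.0.2.20
san_login = storage-admin
san_password = <password>
volume_backend_name = my-custom-backend
```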

VM Storage Options

When provisioning a virtual machine, the following storage options are available:

  1. Boot from an Image

    1. The VM disk is created on the local storage on the hypervisor, using the image you choose.

    2. This is ephemeral storage: the disk is not persistent and does not survive deletion of the VM.

  2. Boot from a New Volume

    1. A persistent volume is created on the block storage configured for the cluster.

    2. This ensures data retention even if the VM is deleted.

  3. Boot from an Existing Volume

    1. A new VM is launched using a pre-existing storage volume.

    2. This is useful for cloning or restoring workloads.
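Because Private Cloud Director is built on OpenStack, the three options above map onto standard OpenStack CLI workflows. The commands below are an illustrative sketch; the image, flavor, network, and resource names are placeholders:

```shell
# 1. Boot from an image: ephemeral disk on the hypervisor's local storage
openstack server create --image ubuntu-22.04 --flavor m1.small \
  --network net1 vm-ephemeral

# 2. Boot from a new volume: create a bootable volume from the image,
#    then launch from it; the volume persists even if the VM is deleted
openstack volume create --image ubuntu-22.04 --size 20 --bootable boot-vol
openstack server create --volume boot-vol --flavor m1.small \
  --network net1 vm-persistent

# 3. Boot from an existing volume, e.g. to clone or restore a workload
openstack server create --volume boot-vol --flavor m1.small \
  --network net1 vm-restored
```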

Debugging Block Storage Service Problems

If your Private Cloud Director Service Health dashboard is reporting that the block storage service is unhealthy, this likely means:

  1. One or more hosts assigned the Block Storage role may be offline or unavailable.

  2. The hosts with the Block Storage role assigned may be online, but the service components may be offline or experiencing errors.

Refer to the block storage service Log Files to debug the issues further.

Log Files

The log file that corresponds to all block storage attach/detach operations is located at /var/log/pf9/cindervolume-base.log. Use this file when debugging any issues with block storage volume attachment or detachment to virtual machines. Also check the compute service log file, ostackhost.log, for further details when debugging volume attachment issues.
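On an affected block storage host, a quick way to surface recent problems from that log is a sketch like the following (the log path is from this document; the grep patterns are illustrative):

```shell
# Show the most recent errors and warnings from the block storage log
sudo grep -E "ERROR|WARNING" /var/log/pf9/cindervolume-base.log | tail -n 50

# Follow attach/detach activity live while reproducing the issue
sudo tail -f /var/log/pf9/cindervolume-base.log
```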
