Block Storage Pre-requisites

The Private Cloud Director block storage service can integrate with a wide variety of enterprise storage backends and protocols (iSCSI, NFS, FC, RBD, NVMe-oF, etc.). Each protocol has a different set of prerequisites in terms of packages, configuration, and infrastructure.

LVM over iSCSI

Packages required:

lvm2, and either targetcli or tgt

Use the commands below to install these packages.

$ sudo apt install lvm2
$ sudo apt install targetcli-fb
#OR 
$ sudo apt install tgt

System Pre-requisites:

A dedicated Volume Group (VG) named cinder-volumes is required by the LVM driver. This VG must be created before the Block storage service is deployed.

Steps to configure the cinder-volumes volume group on the block storage host:

  1. Identify the block device to be used (e.g., /dev/sdb).

  2. Create a Physical Volume (PV) and then the Volume Group (VG):

$ sudo pvcreate /dev/<dev_name> && sudo vgcreate cinder-volumes /dev/<dev_name>

Example:
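Assuming the data disk is /dev/sdb (adjust for your environment):

$ sudo pvcreate /dev/sdb
$ sudo vgcreate cinder-volumes /dev/sdb
$ sudo vgs cinder-volumes    # verify the volume group exists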

Ensure that the iSCSI target service is enabled and running, as it is required to expose LVM volumes as iSCSI targets. It should be enabled on every host with the block storage role applied that uses the LVM backend.

For targetcli (LIO):
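The systemd unit name for the LIO target varies by distribution, so verify it with systemctl list-unit-files; for example:

$ sudo systemctl enable --now target                 # common unit name on RHEL-family systems
#OR
$ sudo systemctl enable --now rtslib-fb-targetctl    # unit commonly shipped with the targetcli-fb/rtslib packages on Ubuntu/Debian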

For tgt:
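$ sudo systemctl enable --now tgt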

Verify service status:
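$ sudo systemctl status tgt    # or the LIO target unit enabled above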

Additionally, the iscsid daemon should be running on all hosts (hypervisors and hosts with the block storage service role assigned), as this service manages iSCSI sessions and maintains connections to the iSCSI targets.
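To enable it (assuming the open-iscsi package is installed):

$ sudo systemctl enable --now iscsid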

NFS

Packages required:

nfs-common

System Prerequisites:

The NFS backend enables the block storage service to use shared NFS volumes for provisioning persistent volumes. This is a common setup for environments that already use NFS for centralized storage.

On the NFS server (not a host with the block storage service role assigned), run the following commands:
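For example, on an Ubuntu-based NFS server, install the server package first:

$ sudo apt install nfs-kernel-server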

Create the export directory:
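The export path below is only an example; choose one that fits your environment:

$ sudo mkdir -p /srv/nfs/cinder
$ sudo chown nobody:nogroup /srv/nfs/cinder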

Add export to /etc/exports:
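Assuming the example path above and a storage network of 192.168.10.0/24 (substitute your own values):

/srv/nfs/cinder 192.168.10.0/24(rw,sync,no_subtree_check,no_root_squash)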

Export the share and start the service
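$ sudo exportfs -ra
$ sudo systemctl enable --now nfs-kernel-server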

On all other hosts (hosts with block storage service and hypervisor roles assigned):

The NFS client package must be installed on the hosts with the block storage service and hypervisor roles assigned so that they can mount the share and access these volumes.
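For example, on Ubuntu:

$ sudo apt install nfs-common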

This provides:

  • NFS client binaries (mount.nfs, etc.)

  • Support for NFSv3 and NFSv4

  • No server components (safe for client-only use)

Fibre Channel (FC)

The Fibre Channel (FC) backend enables the block storage service to provision block storage from storage arrays that support the Fibre Channel protocol (HPE, Dell, etc.). This setup requires both hardware (FC HBAs, switches, and SAN) and software (Cinder drivers, multipath, etc.) configured on the hypervisor and block storage nodes.

Hardware Prerequisites

  • Fibre Channel SAN with LUNs provisioned

  • FC Switches (Zoning configured)

  • Fibre Channel HBAs (Host Bus Adapters) installed and connected on:

    • Block Storage hosts

    • Hypervisor hosts

Use lspci to verify FC HBA:
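$ lspci | grep -i 'fibre channel'
$ ls /sys/class/fc_host    # one entry per detected FC HBA port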

Required Packages

Install these on both Hypervisor and Block storage nodes:
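The exact packages depend on the vendor driver; a typical baseline on Ubuntu is:

$ sudo apt install sysfsutils sg3-utils multipath-tools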

Enable & Configure Multipath I/O (MPIO)

Multipath ensures high availability and performance across multiple FC paths.

Enable multipath service:
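$ sudo systemctl enable --now multipathd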

Create the multipath configuration using your storage vendor's recommended settings.

File: /etc/multipath.conf (basic example for HPE 3PAR storage)
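The snippet below is an illustrative sketch only; always apply the settings recommended by your storage vendor.

defaults {
    user_friendly_names yes
    find_multipaths     yes
}

devices {
    device {
        vendor                "3PARdata"
        product               "VV"
        path_grouping_policy  group_by_prio
        prio                  alua
        hardware_handler      "1 alua"
        path_checker          tur
        failback              immediate
        no_path_retry         18
    }
}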

Then reload:
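$ sudo systemctl reload multipathd
#OR
$ sudo multipath -r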

Check multipath devices:
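$ sudo multipath -ll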

Ceph

The PCD Block Storage service integrates with Ceph RADOS Block Device (RBD) to provide distributed, scalable block storage.

You need a working Ceph cluster before integrating it with PCD Block Storage service.

Ceph cluster must have:

  • MONs (Monitors)

  • OSDs (Object Storage Daemons)

  • (Optional) MGRs, MDS (for advanced features)

Requirements:

  • Ceph version: Recommended Nautilus or newer (Octopus, Pacific, etc.)

  • Network reachability from Cinder nodes to Ceph MONs

  • A Ceph pool for Cinder volumes (e.g., volumes)

  • Ceph user with proper permissions (e.g., client.cinder)

Ceph Packages on Cinder Host:

Install Ceph client tools on Block Storage host:
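For example, on Ubuntu (the RBD driver may additionally require the Python RBD bindings):

$ sudo apt install ceph-common python3-rbd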

Ceph Keyring & Config on Block Storage host:

You must place the following files on the Cinder node:

  • /etc/ceph/ceph.conf: Ceph cluster config (includes MON addresses)

  • /etc/ceph/ceph.client.cinder.keyring: Keyring file for Block Storage user

Ceph Pool Configuration:

Create a dedicated RBD pool in Ceph for Cinder, and optionally configure replication and other pool settings as needed.
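A minimal sketch, assuming a pool named volumes and a placement-group count of 128 (tune for your cluster):

$ sudo ceph osd pool create volumes 128
$ sudo rbd pool init volumes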

Grant access to client.cinder:
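For example, assuming the volumes pool created above:

$ sudo ceph auth get-or-create client.cinder \
    mon 'profile rbd' \
    osd 'profile rbd pool=volumes' \
    -o /etc/ceph/ceph.client.cinder.keyring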

Nova Compute host Integration:

For Ceph volumes to be attached to virtual machines, the Nova compute host must also:

  • Have ceph-common installed

  • Access the Ceph cluster

  • Be configured with the same client.cinder user or secret UUID via libvirt

Ensure /etc/ceph/ceph.conf and keyring are present on compute nodes.

Create Libvirt Secret for Ceph (Nova Integration)

Required for volume attachment to instances:

Example secret.xml:
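A minimal sketch; the UUID is a placeholder (generate one with uuidgen) and must match the secret UUID referenced in the backend configuration:

<secret ephemeral='no' private='no'>
  <uuid>REPLACE-WITH-UUID</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>

Define the secret and set its value from the client.cinder key:

$ sudo virsh secret-define --file secret.xml
$ sudo virsh secret-set-value --secret REPLACE-WITH-UUID \
    --base64 "$(sudo ceph auth get-key client.cinder)"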

NVMe-oF

NVMe-oF (NVMe over Fabrics) is a high-performance storage protocol that allows remote access to NVMe devices over a fabric like RDMA (RoCE/iWARP), TCP, or Fibre Channel.

In the context of the PCD Block Storage service, NVMe-oF is still maturing and typically requires vendor-specific drivers and hardware, but here is what you need at a high level.

It also requires an NVMe target to be set up; see the target options at the end of this section.

Required Packages (on Block Storage and Hypervisor hosts)
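For example, on Ubuntu:

$ sudo apt install nvme-cli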

NVMe-oF Initiator Configuration (Client Side)

For NVMe-oF over TCP:

Load NVMe TCP kernel module
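$ sudo modprobe nvme-tcp
# Load the module automatically at boot
$ echo nvme-tcp | sudo tee /etc/modules-load.d/nvme-tcp.conf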

Discover and connect to NVMe subsystem:
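A sketch, assuming a TCP target at <target_ip> listening on the default port 4420:

$ sudo nvme discover -t tcp -a <target_ip> -s 4420
$ sudo nvme connect -t tcp -a <target_ip> -s 4420 -n <subsystem_nqn>
$ sudo nvme list    # verify the remote namespaces are visible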

Additionally, the NVMe target configuration depends on the vendor. Options include:

  • SPDK (Software-based NVMe-oF Target)

  • NVMe-oF native targets on enterprise storage (e.g., Dell, NetApp, HPE)

  • Linux NVMe target (nvmet)

SAN with iSCSI protocol

Using iSCSI SAN storage with PCD Block Storage service allows volumes to be provisioned and served over the network to VMs. iSCSI is widely supported by enterprise SAN vendors (e.g., NetApp, Dell EMC, HPE, etc.).

PCD Block Storage service integrates with SAN via vendor-specific iSCSI drivers, so the setup involves:

  • Preparing the host OS

  • Ensuring iSCSI connectivity

  • Proper Block storage backend configuration

  • (Optional) Multipath setup for redundancy

Hardware and Network Pre-requisites:

  • SAN system supporting the iSCSI protocol

  • Dedicated/shared storage network subnet to ensure connectivity between the block storage hosts and the SAN storage

  • Zoning/LUN masking configured so that the host initiators have access to the proper LUNs

Required Packages

Install these on both Block Storage and Hypervisor hosts:
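For example, on Ubuntu:

$ sudo apt install open-iscsi multipath-tools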

Enable iSCSI Services
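For example, on Ubuntu (unit names may vary on other distributions):

$ sudo systemctl enable --now iscsid open-iscsi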

Set Initiator Name

Define a unique initiator name in /etc/iscsi/initiatorname.iscsi.

Example:

InitiatorName=iqn.1994-05.com.ubuntu:node01

This name must be registered/configured on the SAN for LUN masking.

If the initiator names are not unique (for example, on hosts cloned from the same image), regenerate them as shown below before registering them on the SAN.
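One way to regenerate a unique name, using the iscsi-iname tool from open-iscsi:

$ sudo sh -c 'echo "InitiatorName=$(iscsi-iname)" > /etc/iscsi/initiatorname.iscsi'
$ sudo systemctl restart iscsid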
