Block Storage Pre-requisites

Private Cloud Director block storage service can integrate with a wide variety of enterprise storage backends and protocols (iSCSI, NFS, FC, RBD, NVMe-oF, etc.). Each protocol has a different set of prerequisites in terms of packages, configuration, and infrastructure.

LVM over iSCSI

Packages required:

lvm2, and either targetcli or tgt

Use the commands below to install these packages.

$ sudo apt install lvm2
$ sudo apt install targetcli-fb
# OR
$ sudo apt install tgt

System Pre-requisites:

A dedicated Volume Group (VG) named cinder-volumes is required by the LVM driver. This VG must be created before the Block storage service is deployed.

Steps to configure the cinder-volumes volume group on the block storage host:

  1. Identify the block device to be used (e.g., /dev/sdb).

  2. Create a Physical Volume (PV) and then the Volume Group (VG):

$ sudo pvcreate /dev/<dev_name> && sudo vgcreate cinder-volumes /dev/<dev_name>

Example:

$ sudo pvcreate /dev/sdb && sudo vgcreate cinder-volumes /dev/sdb
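Once created, the volume group can be verified with the standard LVM reporting commands (/dev/sdb here is just the example device from above):

```shell
# Confirm the physical volume is registered and the VG exists with free space
$ sudo pvs /dev/sdb
$ sudo vgs cinder-volumes
```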

Ensure that the iSCSI target service is enabled and running, as it is required to expose LVM volumes as iSCSI targets. Enable it on every host that has the block storage role applied for the LVM backend.

For targetcli (LIO):

$ sudo systemctl enable --now target

For tgt:

$ sudo systemctl enable --now tgt

Verify service status:

$ sudo systemctl status target  # for targetcli
$ sudo systemctl status tgt     # for tgt

Additionally, the iscsid daemon should be running on all hosts (hypervisors and hosts with the block storage service role assigned), as this service manages iSCSI sessions and maintains connections to the iSCSI targets.

$ sudo apt install -y open-iscsi
$ sudo systemctl enable --now iscsid
$ sudo systemctl status iscsid
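Once targets have been configured and volumes attached, the active iSCSI sessions on a host can be inspected with iscsiadm (shown here as a quick sanity check; it prints nothing until at least one target is logged in):

```shell
# List active iSCSI sessions (target IQNs and portal addresses)
$ sudo iscsiadm -m session
```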

NFS

Packages required:

nfs-common

$ sudo apt install -y nfs-common

System Prerequisites:

The NFS backend enables the block storage service to use shared NFS exports for provisioning persistent volumes. This is a common setup for environments that already use NFS for centralized storage.

On the NFS server (not a host with the block storage service role assigned), run the following command:

$ sudo apt install -y nfs-kernel-server

Create the export directory:

$ sudo mkdir -p /export/cinder
$ sudo chown -R nobody:nogroup /export/cinder
$ sudo chmod 777 /export/cinder

Add export to /etc/exports:

$ sudo cat /etc/exports
/export/cinder  *(rw,sync,no_root_squash,no_subtree_check)

Export the share and start the service

$ sudo exportfs -rav 
$ sudo systemctl enable --now nfs-server

On all other hosts (hosts with block storage service and hypervisor roles assigned):

The NFS client package must be installed on the hosts with the block storage service and hypervisor roles assigned so they can mount the share and access the volumes:

$ sudo apt install -y nfs-common

This provides:

  • NFS client binaries (mount.nfs, etc.)

  • Support for NFSv3 and NFSv4

  • No server components (safe for client-only use)
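From a client host, the export can be verified and test-mounted before the block storage service is configured (the server address is a placeholder; this is just an illustrative check):

```shell
# List exports published by the server
$ showmount -e <nfs_server_ip>

# Test-mount the share and confirm it is writable, then clean up
$ sudo mount -t nfs <nfs_server_ip>:/export/cinder /mnt
$ sudo touch /mnt/test && sudo rm /mnt/test
$ sudo umount /mnt
```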

Fibre Channel (FC)

The Fibre Channel (FC) backend enables the block storage service to provision block storage from storage arrays that support the Fibre Channel protocol (e.g., HPE, Dell). This setup requires both hardware (FC HBAs, switches, and SAN) and software (Cinder drivers, multipath, etc.) configured on the compute and storage nodes.

Hardware Prerequisites

  • Fibre Channel SAN with LUNs provisioned

  • FC switches (zoning configured)

  • Fibre Channel HBAs (Host Bus Adapters) installed and connected on:

    • Block Storage hosts

    • Hypervisor hosts

Use lspci to verify the FC HBA:

$ lspci | grep -i 'fibre channel'

Required Packages

Install these on both Hypervisor and Block storage nodes:

$ sudo apt install -y sg3-utils multipath-tools

Enable & Configure Multipath I/O (MPIO)

Multipath ensures high availability and performance across multiple FC paths.

Enable multipath service:

$ sudo systemctl enable --now multipathd

Create the multipath configuration, applying any storage-vendor-specific recommended settings.

File: /etc/multipath.conf — basic example for HPE 3PAR storage

$ sudo cat /etc/multipath.conf
defaults {
        user_friendly_names yes
        find_multipaths yes
        polling_interval 10
}

devices {
       device {
               vendor                  "3PARdata"
               product                 "VV"
               path_grouping_policy    "group_by_prio"
               path_selector           "round-robin 0"
               path_checker            "tur"
               features                "0"
               hardware_handler        "1 alua"
               prio                    "alua"
               rr_weight               uniform
               no_path_retry           18
               rr_min_io               100
       }
}

blacklist {
}

Then reload:

$ sudo systemctl restart multipathd

Check multipath devices:

$ sudo multipath -ll

Ceph (RBD)

The PCD Block Storage service integrates with Ceph RADOS Block Device (RBD) to provide distributed, scalable block storage.

You need a working Ceph cluster before integrating it with PCD Block Storage service.

Ceph cluster must have:

  • MONs (Monitors)

  • OSDs (Object Storage Daemons)

  • (Optional) MGRs, MDS (for advanced features)

Requirements:

  • Ceph version: Recommended Nautilus or newer (Octopus, Pacific, etc.)

  • Network reachability from Cinder nodes to Ceph MONs

  • A Ceph pool for Cinder volumes (e.g., volumes)

  • Ceph user with proper permissions (e.g., client.cinder)

Ceph Packages on Cinder Host:

Install Ceph client tools on Block Storage host:

$ sudo apt install -y ceph-common

Ceph Keyring & Config on Block Storage host:

You must place the following files on the Cinder node:

  • /etc/ceph/ceph.conf: Ceph cluster config (includes MON addresses)

  • /etc/ceph/ceph.client.cinder.keyring: Keyring file for Block Storage user
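Assuming the client.cinder user already exists on the cluster (it is created in the pool-configuration step below), its keyring can be exported on a Ceph admin/MON node and distributed to the block storage host (the destination hostname is a placeholder):

```shell
# On a Ceph admin/MON node: export the keyring for the Cinder user
$ sudo ceph auth get client.cinder -o ceph.client.cinder.keyring

# Distribute the cluster config and keyring to the block storage host
$ scp /etc/ceph/ceph.conf ceph.client.cinder.keyring <block_storage_host>:/etc/ceph/
```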

Ceph Pool Configuration:

Create a dedicated RBD pool in Ceph for Cinder, and optionally set up replication as needed.

$ sudo ceph osd pool create volumes 128
$ sudo rbd pool init volumes

Grant access to client.cinder:

$ sudo ceph auth add client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes'

Nova Compute host Integration:

For Ceph volumes to be attached to virtual machines, the Nova compute host must also:

  • Have ceph-common installed

  • Access the Ceph cluster

  • Be configured with the same client.cinder user or secret UUID via libvirt

Ensure /etc/ceph/ceph.conf and keyring are present on compute nodes.
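Connectivity from a compute (or block storage) node can be sanity-checked with the client tools, authenticating as the cinder user (a quick check, assuming the config and keyring above are in place):

```shell
# Confirm the node can reach the MONs and authenticate as client.cinder
$ sudo ceph -s --id cinder

# Confirm the volumes pool is visible to the user
$ sudo rbd ls volumes --id cinder
```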

Create Libvirt Secret for Ceph (Nova Integration)

Required for volume attachment to instances:

# Create the libvirt secret 
$ sudo virsh secret-define --file secret.xml 
# Set the actual key 
$ sudo virsh secret-set-value --secret <UUID> --base64 $(sudo ceph auth get-key client.cinder)

Example secret.xml:

<secret ephemeral='no' private='no'>
  <uuid>your-uuid-here</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
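The UUID in secret.xml is arbitrary, but it must match the rbd_secret_uuid used in the block storage backend configuration; one can be generated with uuidgen:

```shell
# Generate a UUID to paste into secret.xml (and reuse in the backend config)
$ uuidgen
```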


NVMe-oF

NVMe-oF (NVMe over Fabrics) is a high-performance storage protocol that allows remote access to NVMe devices over a fabric like RDMA (RoCE/iWARP), TCP, or Fibre Channel.

In the context of the PCD Block Storage service, NVMe-oF support is still maturing and typically requires vendor-specific drivers and hardware, but here is what you need at a high level.

An NVMe-oF target must also be set up; see the target options below.

Required Packages (on Block Storage and Hypervisor hosts)

$ sudo apt install -y nvme-cli

NVMe-oF Initiator Configuration (Client Side)

For NVMe-oF over TCP:

Load NVMe TCP kernel module

$ sudo modprobe nvme-tcp

Discover and connect to NVMe subsystem:

$ sudo nvme discover -t tcp -a <target_ip> -s <port>
$ sudo nvme connect -t tcp -n <subsysnqn> -a <target_ip> -s <port>
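After connecting, the remote namespace should appear as a local NVMe block device, which can be confirmed with nvme-cli:

```shell
# List connected NVMe subsystems and their transports
$ sudo nvme list-subsys

# List NVMe namespaces now visible as block devices (e.g., /dev/nvme1n1)
$ sudo nvme list
```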

Additionally, the NVMe target configuration depends on the vendor. Options include:

  • SPDK (Software-based NVMe-oF Target)

  • NVMe-oF native targets on enterprise storage (e.g., Dell, NetApp, HPE)

  • Linux NVMe target (nvmet)
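For lab or proof-of-concept use, the in-kernel Linux target (nvmet) can be configured through configfs. The sketch below exports a block device /dev/sdb over TCP and is illustrative only; the subsystem NQN, device, IP, and port are all placeholders:

```shell
# Load the NVMe target modules
$ sudo modprobe nvmet nvmet-tcp

# Create a subsystem and allow any host to connect (lab use only)
$ cd /sys/kernel/config/nvmet/subsystems
$ sudo mkdir nqn.2024-01.io.example:testsub
$ echo 1 | sudo tee nqn.2024-01.io.example:testsub/attr_allow_any_host

# Back the subsystem with a namespace on /dev/sdb
$ sudo mkdir nqn.2024-01.io.example:testsub/namespaces/1
$ echo -n /dev/sdb | sudo tee nqn.2024-01.io.example:testsub/namespaces/1/device_path
$ echo 1 | sudo tee nqn.2024-01.io.example:testsub/namespaces/1/enable

# Create a TCP port and link the subsystem to it
$ cd /sys/kernel/config/nvmet/ports
$ sudo mkdir 1
$ echo tcp | sudo tee 1/addr_trtype
$ echo ipv4 | sudo tee 1/addr_adrfam
$ echo <target_ip> | sudo tee 1/addr_traddr
$ echo 4420 | sudo tee 1/addr_trsvcid
$ sudo ln -s ../../subsystems/nqn.2024-01.io.example:testsub 1/subsystems/
```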

SAN with iSCSI protocol

Using iSCSI SAN storage with PCD Block Storage service allows volumes to be provisioned and served over the network to VMs. iSCSI is widely supported by enterprise SAN vendors (e.g., NetApp, Dell EMC, HPE, etc.).

PCD Block Storage service integrates with SAN via vendor-specific iSCSI drivers, so the setup involves:

  • Preparing the host OS

  • Ensuring iSCSI connectivity

  • Proper Block storage backend configuration

  • (Optional) Multipath setup for redundancy

Hardware and Network Pre-requisites:

SAN system supporting the iSCSI protocol

Dedicated or shared storage network subnet to ensure connectivity between the block storage hosts and the SAN storage

Zoning and LUN masking configured so that the host initiators have access to the proper LUNs

Required Packages

Install these on both Block Storage and Hypervisor hosts:

$ sudo apt install -y open-iscsi multipath-tools

Enable iSCSI Services

$ sudo systemctl enable --now iscsid
$ sudo systemctl enable --now multipathd

Set Initiator Name

Define a unique initiator name in /etc/iscsi/initiatorname.iscsi.

Example:

InitiatorName=iqn.1994-05.com.ubuntu:node01

This name must be registered/configured on the SAN for LUN masking.
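With the initiator registered on the SAN, targets can be discovered and logged into with iscsiadm (the portal IP is a placeholder):

```shell
# Discover targets presented by the SAN portal
$ sudo iscsiadm -m discovery -t sendtargets -p <san_portal_ip>

# Log in to all discovered targets
$ sudo iscsiadm -m node --login

# Confirm the sessions and the resulting block devices
$ sudo iscsiadm -m session
$ lsblk
```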

If initiator names are not unique across hosts (for example, on hosts cloned from the same image), regenerate them before registering them on the SAN.
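A fresh, unique initiator name can be generated with the iscsi-iname utility shipped with open-iscsi; restart iscsid afterwards so it picks up the change:

```shell
# Generate a new IQN and write it to the initiator configuration
$ echo "InitiatorName=$(iscsi-iname)" | sudo tee /etc/iscsi/initiatorname.iscsi
$ sudo systemctl restart iscsid
```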
