# Block Storage Prerequisites

The <code class="expression">space.vars.product\_name</code> block storage service can integrate with a wide variety of enterprise storage backends and protocols (iSCSI, NFS, FC, RBD, NVMe-oF, etc.). Each protocol has its own prerequisites in terms of packages, configuration, and infrastructure.

## LVM over iSCSI

### Packages required:

`lvm2`, and either `targetcli` or `tgt`

Use the commands below to install these packages.

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo apt install lvm2
$ sudo apt install targetcli-fb
# OR
$ sudo apt install tgt
```

{% endtab %}
{% endtabs %}

### System Prerequisites:

A dedicated volume group (VG) named `cinder-volumes` is required by the LVM driver. This volume group must be created before the block storage service is deployed.

Steps to configure the volume group on the block storage host:

1. Identify the block device to be used (e.g., `/dev/sdb`).
2. Create a Physical Volume (PV) and then the Volume Group (VG):

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo pvcreate /dev/<dev_name> && sudo vgcreate cinder-volumes /dev/<dev_name>
```

{% endtab %}
{% endtabs %}

*Example:*

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo pvcreate /dev/sdb && sudo vgcreate cinder-volumes /dev/sdb
```

{% endtab %}
{% endtabs %}
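
To confirm the volume group was created, list it with the standard LVM tooling:

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo vgs cinder-volumes
```

{% endtab %}
{% endtabs %}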

Ensure that the iSCSI target service is enabled and running, as it is required to expose LVM volumes as iSCSI targets. Enable it on every host that has the block storage role with the LVM backend.

For `targetcli` (LIO):

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo systemctl enable --now target
```

{% endtab %}
{% endtabs %}

For `tgt`:

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo systemctl enable --now tgtd
```

{% endtab %}
{% endtabs %}

Verify service status:

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo systemctl status target # for targetcli 
$ sudo systemctl status tgtd # for tgt
```

{% endtab %}
{% endtabs %}

Additionally, the `iscsid` daemon should be running on all hosts (hypervisors and hosts with the block storage service role assigned), as it manages iSCSI sessions and maintains connections to the iSCSI targets.

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo apt install -y open-iscsi
$ sudo systemctl enable --now iscsid
$ sudo systemctl status iscsid
```

{% endtab %}
{% endtabs %}

## NFS

### Packages required:

`nfs-common`

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo apt install -y nfs-common
```

{% endtab %}
{% endtabs %}

### System Prerequisites:

The NFS backend enables the block storage service to provision volumes on shared NFS exports. This is a common setup for environments that already use NFS for centralized storage.

On the NFS server (not a host with the block storage service role assigned), install the server package:

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo apt install -y nfs-kernel-server
```

{% endtab %}
{% endtabs %}

Create the export directory:

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo mkdir -p /export/cinder
$ sudo chown -R nobody:nogroup /export/cinder
$ sudo chmod 777 /export/cinder
```

{% endtab %}
{% endtabs %}

Add the export to `/etc/exports`; the file should contain:

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo cat /etc/exports
/export/cinder  *(rw,sync,no_root_squash,no_subtree_check)
```

{% endtab %}
{% endtabs %}

Export the share and start the service:

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo exportfs -rav 
$ sudo systemctl enable --now nfs-server
```

{% endtab %}
{% endtabs %}

On all other hosts (hosts with the block storage service and hypervisor roles assigned), install the NFS client package so they can mount the share and access the volumes:

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo apt install -y nfs-common
```

{% endtab %}
{% endtabs %}

This provides:

* NFS client binaries (`mount.nfs`, etc.)
* Support for NFSv3 and NFSv4
* No server components (safe for client-only use)
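
As a quick sanity check, you can mount the export manually from a client host (the server address below is a placeholder):

{% tabs %}
{% tab title="Bash" %}

```bash
# Temporarily mount the export to verify server reachability and permissions
$ sudo mount -t nfs <nfs_server_ip>:/export/cinder /mnt
$ df -h /mnt
$ sudo umount /mnt
```

{% endtab %}
{% endtabs %}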

## Fibre Channel (FC)

The Fibre Channel (FC) backend enables the block storage service to provision block storage from storage arrays that support the Fibre Channel protocol (e.g., HPE, Dell). This setup requires both hardware (FC HBAs, switches, and SAN) and software (drivers, multipath, etc.) configured on compute and storage nodes.

### Hardware Prerequisites

* **Fibre Channel SAN** with LUNs provisioned
* **FC Switches** (Zoning configured)
* **Fibre Channel HBA (Host Bus Adapters)** installed and connected on:
  * Block Storage hosts
  * Hypervisor hosts

Use `lspci` to verify that the FC HBAs are visible:
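
{% tabs %}
{% tab title="Bash" %}

```bash
$ lspci | grep -i 'fibre channel'
```

{% endtab %}
{% endtabs %}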

### Required Packages

Install these on **both Hypervisor and Block storage nodes**:

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo apt install -y sg3-utils multipath-tools
```

{% endtab %}
{% endtabs %}

### Enable & Configure Multipath I/O (MPIO)

Multipath ensures high availability and performance across multiple FC paths.

Enable multipath service:

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo systemctl enable --now multipathd
```

{% endtab %}
{% endtabs %}

Create the multipath configuration using your storage vendor's recommended settings.

File: `/etc/multipath.conf` (basic example for HPE 3PAR storage):

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo cat /etc/multipath.conf
defaults {
        user_friendly_names yes
        find_multipaths yes
        polling_interval 10
}

devices {
       device {
               vendor                  "3PARdata"
               product                 "VV"
               path_grouping_policy    "group_by_prio"
               path_selector           "round-robin 0"
               path_checker            "tur"
               features                "0"
               hardware_handler        "1 alua"
               prio                    "alua"
               rr_weight               uniform
               no_path_retry           18
               rr_min_io               100
       }
}

blacklist {
}
```

{% endtab %}
{% endtabs %}

Then restart the service to apply the configuration:

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo systemctl restart multipathd
```

{% endtab %}
{% endtabs %}

Check multipath devices:

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo multipath -ll
```

{% endtab %}
{% endtabs %}

## Ceph

The <code class="expression">space.vars.product\_acronym</code> Block Storage service integrates with **Ceph RADOS Block Device (RBD)** to provide distributed, scalable block storage.

You need a working **Ceph cluster** before integrating it with <code class="expression">space.vars.product\_acronym</code> Block Storage service. Note that <code class="expression">space.vars.product\_acronym</code> does not ship with or provide Ceph out of the box; you need to set up and manage Ceph storage separately.

The Ceph cluster must have:

* MONs (Monitors)
* OSDs (Object Storage Daemons)
* (Optional) MGRs, MDS (for advanced features)

### Requirements:

* Ceph version: Recommended **Nautilus** or newer (Octopus, Pacific, etc.)
* Network reachability from Cinder nodes to Ceph MONs
* A Ceph **pool** for Cinder volumes (e.g., `volumes`)
* Ceph user with proper permissions (e.g., `client.cinder`)

Ceph Packages on Cinder Host:

Install Ceph client tools on **Block Storage host**:

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo apt install -y ceph-common
```

{% endtab %}
{% endtabs %}

Ceph Keyring & Config on Block Storage host:

You must place the following files on the Cinder node, as sketched after this list:

* `/etc/ceph/ceph.conf`: Ceph cluster config (includes MON addresses)
* `/etc/ceph/ceph.client.cinder.keyring`: Keyring file for Block Storage user
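
Once the `client.cinder` user exists (created in the pool configuration step below), one way to stage these files, assuming you can run commands on a Ceph admin or monitor node (the host name is a placeholder):

{% tabs %}
{% tab title="Bash" %}

```bash
# On a Ceph admin/monitor node: export the cinder keyring
$ sudo ceph auth get client.cinder -o /etc/ceph/ceph.client.cinder.keyring
# Copy the cluster config and keyring to the block storage host
$ scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.cinder.keyring <block_storage_host>:/etc/ceph/
```

{% endtab %}
{% endtabs %}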

Ceph Pool Configuration:

Create a dedicated **RBD pool** in Ceph for Cinder, and optionally configure replication as needed.

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo ceph osd pool create volumes 128
$ sudo rbd pool init volumes
```

{% endtab %}
{% endtabs %}

Grant access to `client.cinder`:

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo ceph auth add client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes'
```

{% endtab %}
{% endtabs %}
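
To confirm the capabilities were applied, inspect the user:

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo ceph auth get client.cinder
```

{% endtab %}
{% endtabs %}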

### Compute host Integration:

For Ceph volumes to be attached to virtual machines, **all your hypervisor hosts** must also:

* Have `ceph-common` installed
* Have access to the Ceph cluster
* Be configured with the same `client.cinder` user or secret UUID via `libvirt`

Ensure `/etc/ceph/ceph.conf` and keyring are present on compute nodes.

Create Libvirt Secret for Ceph (Nova Integration):

Required for volume attachment to instances:

{% tabs %}
{% tab title="Bash" %}

```bash
# Create the libvirt secret 
$ sudo virsh secret-define --file secret.xml 
# Set the actual key 
$ sudo virsh secret-set-value --secret <UUID> --base64 $(sudo ceph auth get-key client.cinder)
```

{% endtab %}
{% endtabs %}

Example `secret.xml`:

{% tabs %}
{% tab title="Bash" %}

```bash
<secret ephemeral='no' private='no'>
  <uuid>your-uuid-here</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
```

{% endtab %}
{% endtabs %}
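
The UUID in `secret.xml` must match the `<UUID>` passed to `virsh secret-set-value` and the one configured for the RBD backend. It can be generated once (for example with `uuidgen`), and the same UUID is commonly reused on every hypervisor:

{% tabs %}
{% tab title="Bash" %}

```bash
# Generate a UUID to use for the libvirt secret
$ uuidgen
```

{% endtab %}
{% endtabs %}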

## NVMe-oF

**NVMe-oF** (NVMe over Fabrics) is a high-performance storage protocol that allows remote access to NVMe devices over a fabric like **RDMA (RoCE/iWARP)**, **TCP**, or **Fibre Channel**.

In the context of <code class="expression">space.vars.product\_acronym</code> Block Storage service, NVMe-oF is still maturing and typically requires **vendor-specific drivers and hardware**, but here's what you need at a high level.

It requires an NVMe-oF target to be set up on the storage side; target options are listed at the end of this section.

Required Packages (on Block Storage and Hypervisor hosts):

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo apt install -y nvme-cli
```

{% endtab %}
{% endtabs %}

### NVMe-oF Initiator Configuration (Client Side)

#### For **NVMe-oF over TCP**:

Load the NVMe TCP kernel module:

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo modprobe nvme-tcp
```

{% endtab %}
{% endtabs %}
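
To make the module load automatically at boot, you can add a `modules-load.d` entry:

{% tabs %}
{% tab title="Bash" %}

```bash
# Ensure nvme-tcp is loaded on every boot
$ echo nvme-tcp | sudo tee /etc/modules-load.d/nvme-tcp.conf
```

{% endtab %}
{% endtabs %}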

Discover and connect to NVMe subsystem:

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo nvme discover -t tcp -a <target_ip> -s <port>
$ sudo nvme connect -t tcp -n <subsysnqn> -a <target_ip> -s <port>
```

{% endtab %}
{% endtabs %}
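
After connecting, the remote namespace should appear as a local NVMe device; you can verify with:

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo nvme list
```

{% endtab %}
{% endtabs %}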

Additionally, the NVMe target configuration depends on the vendor. Options include:

* SPDK (software-based NVMe-oF target)
* Native NVMe-oF targets on enterprise storage arrays (e.g., Dell, NetApp, HPE)
* The Linux kernel NVMe target (`nvmet`); a minimal sketch follows this list
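
For completeness, here is a minimal sketch of a native Linux `nvmet` TCP target configured through configfs. The NQN, IP address, and backing device below are placeholders, and production deployments should follow the vendor's or distribution's documentation:

{% tabs %}
{% tab title="Bash" %}

```bash
# Load the NVMe target modules
$ sudo modprobe nvmet
$ sudo modprobe nvmet-tcp

# Create a subsystem (placeholder NQN) and allow any host to connect (lab use only)
$ cd /sys/kernel/config/nvmet/subsystems
$ sudo mkdir nqn.2024-01.io.example:vol1
$ echo 1 | sudo tee nqn.2024-01.io.example:vol1/attr_allow_any_host

# Back the subsystem with a block device (placeholder device)
$ sudo mkdir nqn.2024-01.io.example:vol1/namespaces/1
$ echo /dev/<backing_dev> | sudo tee nqn.2024-01.io.example:vol1/namespaces/1/device_path
$ echo 1 | sudo tee nqn.2024-01.io.example:vol1/namespaces/1/enable

# Expose the subsystem on a TCP port (placeholder IP)
$ cd /sys/kernel/config/nvmet/ports
$ sudo mkdir 1
$ echo tcp | sudo tee 1/addr_trtype
$ echo ipv4 | sudo tee 1/addr_adrfam
$ echo <target_ip> | sudo tee 1/addr_traddr
$ echo 4420 | sudo tee 1/addr_trsvcid
$ sudo ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-01.io.example:vol1 1/subsystems/
```

{% endtab %}
{% endtabs %}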

## SAN with iSCSI protocol

Using iSCSI SAN storage with <code class="expression">space.vars.product\_acronym</code> Block Storage service allows volumes to be provisioned and served over the network to VMs. iSCSI is widely supported by enterprise SAN vendors (e.g., NetApp, Dell EMC, HPE, etc.).

<code class="expression">space.vars.product\_acronym</code> Block Storage service integrates with SAN via vendor-specific iSCSI drivers, so the setup involves:

* Preparing the host OS
* Ensuring iSCSI connectivity
* Proper Block storage backend configuration
* (Optional) Multipath setup for redundancy

### Hardware and Network Prerequisites:

* A SAN system supporting the iSCSI protocol
* A dedicated/shared storage network subnet ensuring connectivity between the Block Storage hosts and the SAN storage
* Zoning configured so that host initiators have access to the proper LUNs

### Required Packages

Install these on both **Block Storage and Hypervisor hosts**:

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo apt install -y open-iscsi multipath-tools
```

{% endtab %}
{% endtabs %}

### Enable iSCSI Services

{% tabs %}
{% tab title="Bash" %}

```bash
$ sudo systemctl enable --now iscsid
$ sudo systemctl enable --now multipathd
```

{% endtab %}
{% endtabs %}

### Set Initiator Name

Define a unique initiator name in `/etc/iscsi/initiatorname.iscsi`.

Example:

`InitiatorName=iqn.1994-05.com.ubuntu:node01`

This name must be registered/configured on the SAN for LUN masking.
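Once the initiator is registered, you can verify connectivity by discovering and logging in to the SAN target (the portal address and IQN below are placeholders):

{% tabs %}
{% tab title="Bash" %}

```bash
# Discover targets exposed by the SAN portal
$ sudo iscsiadm -m discovery -t sendtargets -p <san_portal_ip>:3260
# Log in to a discovered target
$ sudo iscsiadm -m node -T <target_iqn> -p <san_portal_ip>:3260 --login
```

{% endtab %}
{% endtabs %}
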

If the initiator names are not unique across hosts (for example, on cloned machines), regenerate them before registering with the SAN, as shown below.
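
One way to regenerate the name, assuming the `open-iscsi` tooling installed earlier:

{% tabs %}
{% tab title="Bash" %}

```bash
# Generate a fresh, unique IQN and persist it
$ echo "InitiatorName=$(sudo /sbin/iscsi-iname)" | sudo tee /etc/iscsi/initiatorname.iscsi
# Restart iscsid so the new name takes effect
$ sudo systemctl restart iscsid
```

{% endtab %}
{% endtabs %}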
