# Infinidat InfiniBox Storage Configurations

Infinidat InfiniBox storage systems provide enterprise-grade block storage with high availability and performance. Platform9 <code class="expression">space.vars.product\_name</code> integrates with InfiniBox arrays through the Infinidat driver, supporting iSCSI connectivity for volume operations, including provisioning, snapshots, cloning, and live migration.

{% hint style="info" %}
**Boot from SAN is not supported**

Do not use Infinidat storage for boot-from-SAN (booting VMs directly from SAN volumes). The InfiniBox WWN (World Wide Name) presentation behavior causes volume attachment failures during VM boot. Use local storage or a supported boot-from-volume configuration for boot volumes. Contact Platform9 support before attempting any SAN boot configuration.
{% endhint %}

### Prerequisites

Before you configure the Infinidat iSCSI backend, complete these requirements:

* Administrative access to the InfiniBox storage array management interface.
* Create a storage pool on the InfiniBox array. Platform9 provisions all volumes from the specified pool.
* Create a dedicated user account with pool administrator privileges on the InfiniBox array. This account requires permissions to create, delete, and manage volumes within the designated pool.
* Configure one or more iSCSI network spaces on the InfiniBox array. Network spaces define the iSCSI portal IPs that hosts use to access storage. Refer to InfiniBox documentation for network space configuration procedures.
* Ensure each node has a unique IQN (iSCSI Qualified Name). No two nodes should share the same IQN.
* Configure all iSCSI initiators to automatically log in to targets during node boot.
* For multipath configurations, use two Ethernet ports with IP addresses from the same subnet to connect to the storage target.
* Install required packages on all hypervisor hosts and block-storage hosts:

```bash
sudo apt-get install open-iscsi multipath-tools
```

{% hint style="info" %}
The `infinisdk` Python package is automatically installed and managed by Platform9's block storage service. Manual installation is not required.
{% endhint %}

* Verify network connectivity between Platform9 hosts and the InfiniBox management interface and iSCSI network spaces.
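On Debian and Ubuntu hosts, the automatic-login requirement from the list above is typically controlled by open-iscsi's `/etc/iscsi/iscsid.conf`. A minimal fragment, assuming the stock `open-iscsi` package, looks like:

```ini
# /etc/iscsi/iscsid.conf — log in to discovered targets automatically at boot
node.startup = automatic
```

Each node's IQN is stored in `/etc/iscsi/initiatorname.iscsi`; verify that no two nodes share the same `InitiatorName` value.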

### Configure an Infinidat iSCSI backend

To configure Infinidat InfiniBox as your block storage backend, add the volume backend configuration to your cluster blueprint.

#### Configuration parameters

* **san\_ip** (Required): Management IP address of the InfiniBox array. Used for all API communication with the storage system.
* **san\_login** (Required): Username for InfiniBox authentication. Must have pool administrator privileges.
* **san\_password** (Required): Password for InfiniBox authentication.
* **infinidat\_pool\_name** (Required): Name of the storage pool on the InfiniBox array for volume provisioning.
* **infinidat\_iscsi\_netspaces** (Required): Comma-separated list of iSCSI network space names that provide the iSCSI portal IPs hosts connect to.
* **infinidat\_storage\_protocol** (Required): Storage protocol type, set to `iscsi`.
* **driver\_use\_ssl** (Optional): Enables HTTPS for API communication, set to `true` for HTTPS or `false` for HTTP. Default: `false`.
* **suppress\_requests\_ssl\_warnings** (Optional): Suppresses SSL certificate warnings for self-signed certificates. Default: `true`.
* **san\_thin\_provision** (Optional): Enables thin provisioning for new volumes. Default: `true`.
* **infinidat\_use\_compression** (Optional): Controls volume compression for new volumes. Default: `false`.
* **use\_multipath\_for\_image\_xfer** (Optional): Enables multipath for image transfers. Default: `true`.
* **enforce\_multipath\_for\_xfer** (Optional): Requires multipath connectivity for volume operations. Default: `true`.
* **image\_volume\_cache\_enabled** (Optional): Enables image-volume caching to reduce transfer time. Default: `true`.

{% hint style="warning" %}
**Image caching limitation**

Image volume caching may not function as expected: VMs may download images independently instead of using cached volumes. Monitor cache usage to verify caching behavior in your environment.
{% endhint %}

* **image\_volume\_cache\_max\_count** (Optional): Maximum number of cached image volumes. Default: `50`.
* **image\_volume\_cache\_max\_size\_gb** (Optional): Maximum total size of cached image volumes in GB. Default: `200`.

#### Example configuration

Use the following JSON structure when adding a volume backend configuration in your cluster blueprint:

```json
{
  "<volume_type_name>": {
    "<volume_backend_name>": {
      "config": {
        "san_ip": "<infinibox_management_ip>",
        "san_login": "<san_login>",
        "san_password": "<san_password>",
        "infinidat_pool_name": "<pool_name>",
        "infinidat_iscsi_netspaces": "<netspace_name>",
        "infinidat_storage_protocol": "iscsi",
        "driver_use_ssl": false,
        "suppress_requests_ssl_warnings": true,
        "san_thin_provision": true,
        "infinidat_use_compression": false,
        "use_multipath_for_image_xfer": true,
        "enforce_multipath_for_xfer": true,
        "image_volume_cache_enabled": true,
        "image_volume_cache_max_count": 50,
        "image_volume_cache_max_size_gb": 200
      },
      "driver": "InfinidatISCSI"
    }
  }
}
```

Replace the placeholder values with your specific configuration:

* **\<volume\_type\_name>**: A descriptive name for the volume type (for example, `infinidat-iscsi`).
* **\<volume\_backend\_name>**: A unique identifier for this backend configuration.
* **\<infinibox\_management\_ip>**: The IP address of your InfiniBox management interface.
* **\<san\_login>** and **\<san\_password>**: Credentials for the InfiniBox user account.
* **\<pool\_name>**: The name of the storage pool you created on InfiniBox.
* **\<netspace\_name>**: The name of the iSCSI network space. Specify multiple network spaces as a comma-separated list.
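For reference, a filled-in example might look like the following. All values are illustrative: the array address, credentials, pool name, and netspace names below are hypothetical, and two netspaces are listed to show the comma-separated format.

```json
{
  "infinidat-iscsi": {
    "infinibox-backend-1": {
      "config": {
        "san_ip": "192.0.2.10",
        "san_login": "pf9-cinder",
        "san_password": "changeme",
        "infinidat_pool_name": "pf9-pool",
        "infinidat_iscsi_netspaces": "iscsi-ns-a,iscsi-ns-b",
        "infinidat_storage_protocol": "iscsi",
        "driver_use_ssl": false,
        "suppress_requests_ssl_warnings": true,
        "san_thin_provision": true,
        "infinidat_use_compression": false,
        "use_multipath_for_image_xfer": true,
        "enforce_multipath_for_xfer": true,
        "image_volume_cache_enabled": true,
        "image_volume_cache_max_count": 50,
        "image_volume_cache_max_size_gb": 200
      },
      "driver": "InfinidatISCSI"
    }
  }
}
```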

#### Add the backend to your cluster blueprint

To add the driver to the cluster blueprint through the UI, see [#create-a-volume-type](https://docs.platform9.com/private-cloud-director/2025.10/storage/block-storage/volume-backend-configuration-examples/..#create-a-volume-type "mention").

### Multipath configuration

#### Configure multipath daemon

Configure the multipath daemon on all hypervisor hosts and block-storage hosts in `/etc/multipath.conf`:

```ini
defaults {
    user_friendly_names no
    find_multipaths yes
}

blacklist {
    wwid <wwid-of-local-disk>
}
```

Replace `<wwid-of-local-disk>` with your local disk's WWID. Find the WWID using:

```bash
/lib/udev/scsi_id -g -u -d /dev/sda
```
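The WWID lookup and the blacklist stanza can be combined in a small shell sketch. This is a convenience helper, not part of the product; the fallback WWID below is a made-up sample used only when `scsi_id` is unavailable.

```shell
#!/bin/sh
# Look up the local boot disk's WWID; fall back to a sample value if scsi_id
# is unavailable (e.g. when trying this snippet outside a hypervisor host).
wwid="$(/lib/udev/scsi_id -g -u -d /dev/sda 2>/dev/null || echo 3600508b1001c4444444444444444dead)"

# Render the multipath blacklist stanza to paste into /etc/multipath.conf
printf 'blacklist {\n    wwid %s\n}\n' "$wwid"
```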

Restart the multipath daemon:

```bash
sudo systemctl restart multipath-tools
```

#### Compute multipath configuration

For iSCSI multipath support, add the following to `/opt/pf9/etc/nova/conf.d/nova_override.conf` on all hypervisor hosts:

```ini
[libvirt]
iscsi_use_multipath = true
```

Restart the compute service:

```bash
sudo systemctl restart pf9-ostackhost
```

#### Image multipath configuration

For iSCSI multipath support, add the following to `/opt/pf9/etc/glance/conf.d/glance-api.conf` on all image service hosts:

```ini
[DEFAULT]
cinder_use_multipath = True
```

Restart the image service:

```bash
sudo systemctl restart pf9-glance-api
```

### Configuration notes

* Compression settings for new volumes inherit from the parent storage pool configuration by default. Use the `infinidat_use_compression` parameter to override pool defaults for all new volumes.
* Multipath is strongly recommended for production environments to provide redundancy and higher availability during storage failures.
* Specify multiple iSCSI network spaces in `infinidat_iscsi_netspaces` for multipath redundancy.
* Verify all iSCSI network space IPs are accessible from hypervisor hosts before configuring the backend.
