Infinidat InfiniBox Storage Configurations
Infinidat InfiniBox storage systems provide enterprise-grade block storage with high availability and performance. Platform9 Private Cloud Director integrates with InfiniBox arrays through the Infinidat driver, supporting iSCSI connectivity for volume operations, including provisioning, snapshots, cloning, and live migration.
Boot from SAN is not supported
Do not use Infinidat storage for boot-from-SAN (booting VMs directly from SAN volumes). The InfiniBox WWN (World Wide Name) presentation behavior causes volume attachment failures during VM boot. Boot volumes must use local storage or supported boot-from-volume configurations. Contact Platform9 support before attempting any SAN boot configurations.
Prerequisites
Before you configure the Infinidat iSCSI backend, complete these requirements:
Administrative access to the InfiniBox storage array management interface.
Create a storage pool on the InfiniBox array. Platform9 provisions all volumes from the specified pool.
Create a dedicated user account with pool administrator privileges on the InfiniBox array. This account requires permissions to create, delete, and manage volumes within the designated pool.
Configure one or more iSCSI network spaces on the InfiniBox array. Network spaces define the iSCSI portal IPs that hosts use to access storage. Refer to InfiniBox documentation for network space configuration procedures.
Ensure each node has a unique IQN (iSCSI Qualified Name). No two nodes should share the same IQN.
Configure all iSCSI initiators to automatically log in to targets during node boot.
For multipath configurations, use two Ethernet ports with IP addresses from the same subnet to connect to the storage target.
Install required packages on all hypervisor hosts and block-storage hosts:
```shell
apt-get install open-iscsi multipath-tools
```

The infinisdk Python package is automatically installed and managed by Platform9's block storage service. Manual installation is not required.
Verify network connectivity between Platform9 hosts and the InfiniBox management interface and iSCSI network spaces.
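The initiator-related prerequisites above can be checked from a shell on each host. This is a sketch for Ubuntu hosts using open-iscsi; the file paths are the open-iscsi defaults and `/dev/sda` stands in for your local disk:

```shell
# Show this node's IQN; it must be unique across all nodes.
cat /etc/iscsi/initiatorname.iscsi

# Configure new iSCSI sessions to log in automatically at boot.
sudo sed -i 's/^node.startup = manual/node.startup = automatic/' /etc/iscsi/iscsid.conf
```

Compare the `InitiatorName` value across hosts; if two nodes report the same IQN, regenerate one with `iscsi-iname` before configuring the backend.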
Configure an Infinidat iSCSI backend
To configure Infinidat InfiniBox as your block storage backend, add the volume backend configuration to your cluster blueprint.
Configuration parameters
san_ip (Required): Management IP address of the InfiniBox array. Used for all API communication with the storage system.
san_login (Required): Username for InfiniBox authentication. Must have pool administrator privileges.
san_password (Required): Password for InfiniBox authentication.
infinidat_pool_name (Required): Name of the storage pool on the InfiniBox array for volume provisioning.
infinidat_iscsi_netspaces (Required): Comma-separated list of iSCSI network space names that provide the iSCSI portal IPs for host connectivity.
infinidat_storage_protocol (Required): Storage protocol type, set to iscsi.
driver_use_ssl (Optional): Enables HTTPS for API communication, set to true for HTTPS or false for HTTP. Default: false.
suppress_requests_ssl_warnings (Optional): Suppresses SSL certificate warnings for self-signed certificates. Default: true.
san_thin_provision (Optional): Enables thin provisioning for volumes, set to true. Default: true.
infinidat_use_compression (Optional): Controls volume compression for new volumes. Default: false.
use_multipath_for_image_xfer (Optional): Enables multipath for image transfers. Default: true.
enforce_multipath_for_xfer (Optional): Requires multipath connectivity for volume operations. Default: true.
image_volume_cache_enabled (Optional): Enables image-volume caching to reduce transfer time. Default: true.
Image caching limitation
Image volume caching may not function as expected. VMs may download images independently instead of using cached volumes. Monitor cache usage to verify caching behavior in your environment.
image_volume_cache_max_count (Optional): Maximum number of cached image volumes. Default: 50.
image_volume_cache_max_size_gb (Optional): Maximum total size of cached image volumes in GB. Default: 200.
Example configuration
Use the following JSON structure when adding a volume backend configuration in your cluster blueprint:
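A sketch of such a backend entry, assuming a Cinder-style backend definition keyed by volume type and the standard InfiniBox driver path (the exact wrapper keys may differ in your blueprint schema):

```json
{
  "<volume_type_name>": {
    "volume_backend_name": "<volume_backend_name>",
    "volume_driver": "cinder.volume.drivers.infinidat.InfiniboxVolumeDriver",
    "san_ip": "<infinibox_management_ip>",
    "san_login": "<san_login>",
    "san_password": "<san_password>",
    "infinidat_pool_name": "<pool_name>",
    "infinidat_iscsi_netspaces": "<netspace_name>",
    "infinidat_storage_protocol": "iscsi",
    "san_thin_provision": "true",
    "use_multipath_for_image_xfer": "true"
  }
}
```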
Replace the placeholder values with your specific configuration:
<volume_type_name>: A descriptive name for the volume type (for example, infinidat-iscsi).
<volume_backend_name>: A unique identifier for this backend configuration.
<infinibox_management_ip>: The IP address of your InfiniBox management interface.
<san_login> and <san_password>: Credentials for the InfiniBox user account.
<pool_name>: The name of the storage pool you created on InfiniBox.
<netspace_name>: The name of the iSCSI network space. Specify multiple network spaces as a comma-separated list.
Add the backend to your cluster blueprint
To add the driver to the cluster blueprint in the UI, see Create a volume type for more details.
Multipath configuration
Configure multipath daemon
Configure the multipath daemon on all hypervisor hosts and block-storage hosts in /etc/multipath.conf:
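A minimal sketch of such a configuration, assuming you blacklist the local boot disk by WWID so multipathd manages only the InfiniBox LUNs (tune the defaults section to your environment):

```
defaults {
    user_friendly_names yes
    find_multipaths yes
}

blacklist {
    wwid <wwid-of-local-disk>
}
```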
Replace <wwid-of-local-disk> with your local disk's WWID. Find the WWID using:
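One way to print a disk's WWID on Ubuntu, assuming the udev scsi_id helper is at its usual path (replace /dev/sda with your local disk):

```shell
# Print the WWID of the local disk so it can be blacklisted in multipath.conf.
sudo /lib/udev/scsi_id --whitelisted --device=/dev/sda
```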
Restart the multipath daemon:
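On systemd-based hosts this is typically:

```shell
sudo systemctl restart multipathd
```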
Compute multipath configuration
For iSCSI multipath support, configure the hypervisor role on all hypervisor hosts:
Add the following to /opt/pf9/etc/nova/conf.d/nova_override.conf:
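A minimal sketch of the override, assuming the standard Nova libvirt option for iSCSI multipath:

```
[libvirt]
volume_use_multipath = True
```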
Restart the compute service:
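For example, on systemd-based hosts (the service name is an assumption; pf9-ostackhost is the usual Platform9 compute service, but verify the name on your hosts):

```shell
sudo systemctl restart pf9-ostackhost
```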
Image multipath configuration
For iSCSI multipath support, configure the image service on all image service hosts:
Add the following to /opt/pf9/etc/glance/conf.d/glance-api.conf:
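A minimal sketch, assuming the image service uses the glance_store Cinder backend options:

```
[glance_store]
cinder_use_multipath = True
```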
Restart the image service:
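For example, on systemd-based hosts (the service name is an assumption; verify the Glance API service name on your hosts):

```shell
sudo systemctl restart pf9-glance-api
```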
Configuration notes
Compression settings for new volumes inherit from the parent storage pool configuration by default. Use the infinidat_use_compression parameter to override pool defaults for all new volumes.
Multipath is strongly recommended for production environments to provide redundancy and higher availability during storage failures.
Specify multiple iSCSI network spaces in infinidat_iscsi_netspaces for multipath redundancy.
Verify all iSCSI network space IPs are accessible from hypervisor hosts before configuring the backend.