# Dell EMC Storage Configurations

### Overview

Dell EMC arrays provide flexible, high-performance storage solutions for hybrid and enterprise data centers.  <code class="expression">space.vars.product_name</code> supports the following Dell EMC storage arrays through their respective drivers:

* **Dell Unity** - Unified storage over iSCSI and Fibre Channel
* **Dell Compellent (SC Series)** - Block storage with automated tiering over iSCSI
* **Dell Powerstore** - Block storage over Fibre Channel
* **Dell Powermax** - Enterprise block storage over Fibre Channel

These configuration examples cover block storage setups for each array type, outlining prerequisites, backend definitions, and optional parameters for enabling thin provisioning and data-reduction features.

Select a storage array and protocol below to view the complete configuration steps for your environment.

{% tabs %}
{% tab title="Dell Unity - iSCSI" %}
Dell Unity arrays provide unified storage supporting both block and file protocols. iSCSI offers block storage connectivity over standard Ethernet infrastructure.

#### Prerequisites

Before you configure the Unity iSCSI backend, complete these requirements:

* Install `open-iscsi` on all hosts.
* Configure consistent MTU settings across the network (jumbo frames recommended).
* Configure iSCSI ports on the Unity array.
* Configure iSCSI connectivity between hosts and the Unity array.
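Assuming a Debian or Ubuntu host and a placeholder portal address, the host-side steps above can be sketched as follows (`<UNITY_ISCSI_PORTAL_IP>` is a placeholder for your array's iSCSI portal):

```shell
# Install the iSCSI initiator (package name varies by distribution)
sudo apt-get install -y open-iscsi

# Discover iSCSI targets exposed by the Unity array
sudo iscsiadm -m discovery -t sendtargets -p <UNITY_ISCSI_PORTAL_IP>:3260

# After the backend attaches a volume, verify active sessions
sudo iscsiadm -m session
```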

#### Volume backend configuration

```ini
[unity_iscsi_backend]
volume_driver = cinder.volume.drivers.dell_emc.unity.Driver
volume_backend_name = unity_iscsi
storage_protocol = iSCSI
san_ip = <UNITY_MGMT_IP>
san_login = <UNITY_ADMIN_USER>
san_password = <UNITY_ADMIN_PASSWORD>
unity_storage_pool_names = <POOL_NAME>
volumes_dir = /opt/pf9/etc/pf9-cindervolume-base/volumes/
use_multipath_for_image_xfer = true
```
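A backend stanza takes effect only when its section name is listed in `enabled_backends` in the `[DEFAULT]` section of the Cinder configuration. Assuming the section name used above, a minimal fragment looks like:

```ini
[DEFAULT]
# Comma-separated list of backend section names to activate
enabled_backends = unity_iscsi_backend
```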

#### Special considerations

* Multiple iSCSI paths are discovered automatically.
* Supports IPv4 and IPv6.
* Maintain uniform MTU and initiator settings across all hosts.

{% endtab %}

{% tab title="Dell Unity - Fibre Channel" %}
Dell Unity arrays provide unified storage supporting both block and file protocols. Fibre Channel offers consistent performance for high-IO workloads.

#### Prerequisites

Before you configure the Unity Fibre Channel backend, complete these requirements:

* Install Fibre Channel HBAs on all hosts.
* Enable FC ports on the Unity array.
* Configure FC zoning between hosts and the Unity array.
* Register host initiators in Unity.
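Registering host initiators requires each HBA's WWPN. On Linux hosts these can typically be read from sysfs:

```shell
# List the WWPN of each Fibre Channel HBA port on this host
for host in /sys/class/fc_host/host*; do
    echo "$host: $(cat "$host/port_name")"
done
```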

#### Volume backend configuration

```ini
[unity_fc_backend]
volume_driver = cinder.volume.drivers.dell_emc.unity.Driver
volume_backend_name = unity_fc
storage_protocol = FC
san_ip = <UNITY_MGMT_IP>
san_login = <UNITY_ADMIN_USER>
san_password = <UNITY_ADMIN_PASSWORD>
unity_storage_pool_names = <POOL_NAME>
# Specify FC ports to use, or omit to use all available ports
unity_io_ports = <FC_PORT_ID_1>,<FC_PORT_ID_2>
volumes_dir = /opt/pf9/etc/pf9-cindervolume-base/volumes/
use_multipath_for_image_xfer = true
# Thin provisioning (recommended) is the default; set to 'true' for thick provisioning
unity_thick_provisioning = false
# Data reduction: requires All-Flash array
unity_data_reduction = true
```

#### Special considerations

* Supported models: Unity 450F and 650F.
* Thin provisioning is recommended for space efficiency.
* Data reduction requires an All-Flash array.
* Requires Unity OE version 4.1 or later.

{% endtab %}

{% tab title="Dell Compellent (SC Series) - iSCSI" %}
Dell Compellent arrays offer automated tiering and Dell Storage Manager (DSM) based management.

#### Prerequisites

Before you configure the Compellent iSCSI backend, complete these requirements:

* Install `open-iscsi` on all hosts.
* Configure Dell Storage Manager (DSM).
* Create volume and server folders in DSM.
* Configure iSCSI connectivity between hosts and the Compellent array.
* Grant the DSM user access to the volume folder.

#### Volume backend configuration

```ini
[compellent_iscsi_backend]
volume_driver = cinder.volume.drivers.dell_emc.sc.storagecenter_iscsi.SCISCSIDriver
volume_backend_name = compellent_iscsi
san_ip = <COMPELLENT_MGMT_IP>
san_login = <DSM_USER>
san_password = <DSM_PASSWORD>
# Storage Center serial number (System ID)
dell_sc_ssn = <STORAGE_CENTER_SERIAL>
dell_sc_api_port = 3033
dell_sc_volume_folder = <FOLDER_PATH>
dell_sc_server_folder = <SERVER_FOLDER>
```

{% hint style="info" %}
The DSM user specified in `san_login` must have access to `dell_sc_volume_folder`. Otherwise, backend initialization fails.
{% endhint %}

#### Special considerations

* `dell_sc_ssn` is the Storage Center serial number (System ID).
* Volume and server folders must exist in DSM before configuring the backend.
* Automated tiering is enabled by default.

{% endtab %}

{% tab title="Dell Powerstore - Fibre Channel" %}
Dell Powerstore provides modern, high-performance block storage with Fibre Channel connectivity.

#### Prerequisites

Before you configure the Powerstore Fibre Channel backend, complete these requirements:

* Install Fibre Channel HBAs on all hosts.
* Enable FC ports on the Powerstore array.
* Obtain the WWPNs for the FC ports you want to use.
* Configure FC zoning between hosts and the Powerstore array.

#### Volume backend configuration

```ini
[powerstore_fc_backend]
volume_driver = cinder.volume.drivers.dell_emc.powerstore.driver.PowerStoreDriver
volume_backend_name = powerstore_fc
storage_protocol = FC
san_ip = <POWERSTORE_MGMT_IP>
san_login = <POWERSTORE_ADMIN_USER>
san_password = <POWERSTORE_ADMIN_PASSWORD>
# Comma-separated list of FC port WWPNs to use
powerstore_ports = <FC_PORT_WWPN_1>,<FC_PORT_WWPN_2>,<FC_PORT_WWPN_3>,<FC_PORT_WWPN_4>
```

#### Special considerations

* Specify FC port WWPNs in the `powerstore_ports` parameter.
* Supports volume creation, attachment, detachment, extension, and bootable volumes.
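Once the backend is active, the supported operations map to standard OpenStack CLI calls. A volume type tied to this backend through `volume_backend_name` could be exercised as follows (the type and volume names are examples):

```shell
# Create a volume type mapped to the Powerstore backend
openstack volume type create powerstore_fc_type
openstack volume type set --property volume_backend_name=powerstore_fc powerstore_fc_type

# Create, extend, and attach a volume using that type
openstack volume create --type powerstore_fc_type --size 10 demo-vol
openstack volume set --size 20 demo-vol
openstack server add volume <SERVER_ID> demo-vol
```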

{% endtab %}

{% tab title="Dell Powermax - Fibre Channel" %}
Dell Powermax provides enterprise-grade block storage with Fibre Channel connectivity and tiered storage capabilities.

#### Prerequisites

Before you configure the Powermax Fibre Channel backend, complete these requirements:

* Install Fibre Channel HBAs on all hosts.
* Configure Unisphere for array management with PRO SOFTWARE package enabled.
* Create port groups on the Powermax array.
* Obtain the array serial number and SRP name.
* Configure FC zoning between hosts and the Powermax array.

#### Volume backend configuration

```ini
[powermax_fc_backend]
volume_driver = cinder.volume.drivers.dell_emc.powermax.fc.PowerMaxFCDriver
volume_backend_name = powermax_fc
# Unisphere management IP (not the array IP)
san_ip = <UNISPHERE_MGMT_IP>
san_login = <UNISPHERE_USER>
san_password = <UNISPHERE_PASSWORD>
# Powermax array serial number
powermax_array = <ARRAY_SERIAL_NUMBER>
# Port group name created on the array
powermax_port_groups = [<PORT_GROUP_NAME>]
# Storage Resource Pool name
powermax_srp = <SRP_NAME>
use_multipath_for_image_xfer = true
```

#### Configure volume types with service levels

Powermax supports tiered storage through Service Level Objectives (SLO). Configure volume types to use specific service levels by adding the `pool_name` property.

To create a volume type with a specific service level, set the `pool_name` extra spec in the following format:

```
pool_name=<SLO>+<SRP>+<ARRAY_SERIAL>
```

For example, to create a Platinum tier volume type:

```
pool_name=Platinum+SRP_1+000197601478
```

Available service levels typically include Diamond, Platinum, Gold, Silver, and Bronze, depending on your array configuration.
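Putting the pieces together, a Platinum-tier volume type for the example array could be created with the OpenStack CLI (the type name is an example):

```shell
openstack volume type create powermax_platinum
openstack volume type set \
  --property volume_backend_name=powermax_fc \
  --property pool_name='Platinum+SRP_1+000197601478' \
  powermax_platinum
```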

#### Special considerations

* The `san_ip` parameter points to the Unisphere host, not the array directly.
* PRO SOFTWARE package is required for API-based management.
* The array must be pre-zoned before configuration.
* Port groups must exist on the array before configuring the backend.
* The Powermax driver uses its own host naming convention for storage groups and masking views.
* Use `multipath-tools` on Ubuntu instead of PowerPath for multipathing.
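On Ubuntu, the multipath setup mentioned above might look like the following minimal sketch; production `/etc/multipath.conf` settings vary by environment:

```shell
sudo apt-get install -y multipath-tools
# Enable and start the multipath daemon
sudo systemctl enable --now multipathd
# Inspect discovered multipath devices
sudo multipath -ll
```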

{% endtab %}
{% endtabs %}
