# NetApp Storage Configurations

### Overview <a href="#overview" id="overview"></a>

NetApp ONTAP arrays are enterprise-grade storage systems that integrate with <code class="expression">space.vars.product\_name</code> through the NetApp storage driver. They support iSCSI, Fibre Channel (FC), and NFS connectivity protocols to accommodate varied performance and deployment needs.

NetApp ONTAP arrays feature thin provisioning, deduplication, and compression. Deduplication and compression are enabled by default and require no special configuration in the storage backend or volume type.

Select a protocol below to view the complete configuration steps for your environment.

{% tabs %}
{% tab title="NetApp ONTAP - iSCSI" %}
NetApp ONTAP iSCSI provides block storage connectivity for enterprise workloads using standard Ethernet infrastructure.

#### Prerequisites

Before you configure the iSCSI backend, complete these requirements:

* Verify that the hypervisor operating system is not SAN-booted.
* Configure iSCSI connectivity between all hosts and the NetApp array.
* Configure a Storage Virtual Machine (SVM) with the iSCSI protocol enabled.
* Create a dedicated NetApp volume (vvol) on the SVM aggregate for the OpenStack storage pool. Size the volume at 1 TiB or larger, depending on your use case (a sketch of the SVM-side setup follows this list).
* Obtain vsadmin or equivalent credentials with edit access on the SVM.
* Grant LUN provisioning permissions to the specified user.
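
The SVM-side items above can be prepared from the ONTAP CLI. The following is a minimal sketch run over SSH against the cluster management LIF; the volume, aggregate, and user names are placeholders, and your aggregate layout and role model may differ.

```bash
# Create the dedicated volume for the OpenStack storage pool (names are examples).
ssh admin@<CLUSTER_MGMT_IP> "volume create -vserver <SVM_NAME> -volume cinder_vol1 -aggregate <AGGREGATE_NAME> -size 1TB"

# Enable the iSCSI service on the SVM if it is not already running.
ssh admin@<CLUSTER_MGMT_IP> "vserver iscsi create -vserver <SVM_NAME>"

# Create a driver user with vsadmin access over the management (ZAPI/HTTP) interfaces.
ssh admin@<CLUSTER_MGMT_IP> "security login create -vserver <SVM_NAME> -user-or-group-name <SVM_ADMIN_USER> -application ontapi -authentication-method password -role vsadmin"
ssh admin@<CLUSTER_MGMT_IP> "security login create -vserver <SVM_NAME> -user-or-group-name <SVM_ADMIN_USER> -application http -authentication-method password -role vsadmin"
```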

#### Retrieve the SVM SSL certificate

iSCSI requires HTTPS connectivity to the NetApp array. Run the following commands on each storage node to export the SVM's public SSL certificate and set its ownership and permissions:

```bash
openssl s_client -showcerts -connect <SVM_HOSTNAME>:443 </dev/null 2>/dev/null | openssl x509 -outform PEM > /opt/pf9/etc/pf9-cindervolume-base/conf.d/netapp_cert.pem
chown pf9:pf9group /opt/pf9/etc/pf9-cindervolume-base/conf.d/netapp_cert.pem
chmod 644 /opt/pf9/etc/pf9-cindervolume-base/conf.d/netapp_cert.pem
```

Replace `<SVM_HOSTNAME>` with your SVM management LIF hostname or IP address.
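
To confirm the export succeeded, you can inspect the saved certificate, for example:

```bash
# Print the subject and expiry of the exported SVM certificate.
openssl x509 -in /opt/pf9/etc/pf9-cindervolume-base/conf.d/netapp_cert.pem -noout -subject -enddate
```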

#### Configure multipathing

Install the required multipathing packages on all hypervisors and storage nodes:

```bash
apt install lsscsi sg3-utils multipath-tools scsitools
```

Create or update `/etc/multipath.conf` with the following configuration. Exclude your local disk from multipathing to prevent conflicts:

```conf
defaults {
    user_friendly_names no
    find_multipaths yes
}

blacklist {
    wwid <WWID_OF_LOCAL_DISK>
}
```

To find the WWID of your local disk, run:

```bash
/lib/udev/scsi_id -gud /dev/sda
```

Restart the multipath service:

```bash
systemctl restart multipath-tools
```
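
To confirm the blacklist is in effect, and later that attached LUNs are seen over multiple paths, you can use the standard multipath tooling:

```bash
# Show the merged multipath configuration, including the blacklist section.
multipathd show config | grep -A 3 blacklist

# List active multipath maps; the blacklisted local disk should not appear,
# and NetApp LUNs will show up here once volumes are attached.
multipath -ll
```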

#### Configure LVM filtering

LVM filtering prevents the host agent from hanging while running LVM commands. By default, LVM scans all devices, including multipath devices. Configure LVM to scan only local disks.

Add the following lines to `/etc/lvm/lvm.conf`, replacing `/dev/sda` with your local disk device:

```conf
filter = [ "a|^/dev/sda[1-9]|", "r|.*|" ]
global_filter = [ "a|^/dev/sda[1-9]|", "r|.*|" ]
```

Update the `initramfs` and reboot the host:

```bash
update-initramfs -u -k all
reboot
```
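
After the reboot, you can verify that the filter is active and that LVM only reports local devices:

```bash
# Show the filter settings as LVM currently sees them.
lvmconfig devices/filter devices/global_filter

# Physical volumes listed here should only be on the local disk (for example /dev/sda partitions).
pvs
```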

#### Configure compute for multipathing

Add the following to `/opt/pf9/etc/nova/conf.d/nova_override.conf` and restart `pf9-ostackhost`:

```ini
[libvirt]
volume_use_multipath = true
```
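
After saving the override, restart the compute service so the setting takes effect. A minimal sketch, assuming `pf9-ostackhost` runs as a systemd unit of the same name:

```bash
# Restart the compute host service to pick up the multipath setting.
systemctl restart pf9-ostackhost
```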

#### Configure pool name filtering

By default, the NetApp storage driver selects all volumes under the SVM as storage pools and creates LUNs on any volume in that SVM. If your SVM contains volumes used by other workloads, such as Kubernetes persistent volumes, this can cause data corruption.

Always specify `netapp_pool_name_search_pattern` to restrict storage to dedicated NetApp volumes. This parameter accepts a regular expression to match one or more volume names.

```ini
# Single volume
netapp_pool_name_search_pattern = (cinder_vol1)

# Multiple volumes across aggregates
netapp_pool_name_search_pattern = (cinder_n1_vol1|cinder_n2_vol1)
```

To verify which NetApp volumes belong to your storage pool, run:

```bash
cinder --insecure get-pools --detail
```

#### Volume backend configuration

```ini
[netapp_iscsi_backend]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name = netapp_iscsi
netapp_storage_protocol = iscsi
netapp_storage_family = ontap_cluster
netapp_transport_type = https
netapp_server_hostname = <SVM_HOSTNAME>
netapp_server_port = 443
netapp_login = <SVM_ADMIN_USER>
netapp_password = <SVM_ADMIN_PASSWORD>
netapp_vserver = <SVM_NAME>
netapp_ssl_cert_path = /opt/pf9/etc/pf9-cindervolume-base/conf.d/netapp_cert.pem
netapp_use_legacy_client = true
netapp_certificate_host_validation = false
netapp_pool_name_search_pattern = <VOLUME_PATTERN>
use_multipath_for_image_xfer = true
# Thin provisioning: set to 'disabled' for space efficiency, 'enabled' to reserve full LUN space
netapp_lun_space_reservation = disabled
netapp_driver_reports_provisioned_capacity = true
image_upload_use_cinder_backend = true
reserved_percentage = 20
max_over_subscription_ratio = 1.0
```
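
To make the backend consumable, a volume type is typically mapped to it through the `volume_backend_name` extra spec. A minimal sketch using the Cinder CLI; the type name `netapp-iscsi` is only an example:

```bash
# Create a volume type and tie it to this backend via volume_backend_name.
cinder --insecure type-create netapp-iscsi
cinder --insecure type-key netapp-iscsi set volume_backend_name=netapp_iscsi

# Create a small test volume against the new type.
cinder --insecure create --volume-type netapp-iscsi --name netapp-iscsi-test 10
```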

#### Optional parameters

Enable image-volume caching when you use a storage backend for image storage. Caching stores converted images locally on each storage host, which reduces image transfer time for subsequent volume creation requests. This is recommended for environments with a single images node and multiple storage nodes.

```ini
image_volume_cache_enabled = true
image_volume_cache_max_count = 50
image_volume_cache_max_size_gb = 200
```

#### Special considerations

* SnapMirror replication can be used for disaster recovery.
{% endtab %}

{% tab title="NetApp ONTAP - Fibre Channel" %}

NetApp ONTAP Fibre Channel provides high-performance block storage connectivity for mission-critical workloads.

#### Prerequisites

Before you configure the Fibre Channel backend, complete these requirements:

* Verify that the hypervisor operating system is not SAN-booted.
* Install Fibre Channel HBAs on all hypervisors and block storage hosts.
* Configure FC zoning between hosts and NetApp storage on your FC switches (a sketch for collecting the host WWPNs follows this list).
* Configure a Storage Virtual Machine (SVM) with the FC protocol enabled.
* Create a dedicated NetApp volume (vvol) on the SVM aggregate for the OpenStack storage pool. Size the volume at 1 TiB or larger, depending on your use case.
* Obtain vsadmin or equivalent credentials with edit access on the SVM.
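
The host WWPNs needed for switch zoning can be read from sysfs on each hypervisor and block storage host:

```bash
# Print the WWPN of each FC host port; use these values when zoning the fabric.
cat /sys/class/fc_host/host*/port_name

# Port state should report "Online" once cabling and zoning are complete.
cat /sys/class/fc_host/host*/port_state
```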

#### Retrieve the SVM SSL certificate

Fibre Channel requires HTTPS connectivity to the NetApp array. Run the following commands on each storage node to export the SVM's public SSL certificate and set its ownership and permissions:

```bash
openssl s_client -showcerts -connect <SVM_HOSTNAME>:443 </dev/null 2>/dev/null | openssl x509 -outform PEM > /opt/pf9/etc/pf9-cindervolume-base/conf.d/netapp_cert.pem
chown pf9:pf9group /opt/pf9/etc/pf9-cindervolume-base/conf.d/netapp_cert.pem
chmod 644 /opt/pf9/etc/pf9-cindervolume-base/conf.d/netapp_cert.pem
```

Replace `<SVM_HOSTNAME>` with your SVM management LIF hostname or IP address.

#### Configure multipathing

Install the required multipathing packages on all hypervisors and storage nodes:

```bash
apt install lsscsi sg3-utils multipath-tools scsitools
```

Create or update `/etc/multipath.conf` with the following configuration. Exclude your local disk from multipathing to prevent conflicts:

```conf
defaults {
    user_friendly_names no
    find_multipaths yes
}

blacklist {
    wwid <WWID_OF_LOCAL_DISK>
}
```

To find the WWID of your local disk, run:

```bash
/lib/udev/scsi_id -gud /dev/sda
```

Restart the multipath service:

```bash
systemctl restart multipath-tools
```

#### Configure LVM filtering

LVM filtering prevents the host agent from hanging while running LVM commands. By default, LVM scans all devices, including multipath devices. Configure LVM to scan only local disks.

Add the following lines to `/etc/lvm/lvm.conf`, replacing `/dev/sda` with your local disk device:

```conf
filter = [ "a|^/dev/sda[1-9]|", "r|.*|" ]
global_filter = [ "a|^/dev/sda[1-9]|", "r|.*|" ]
```

Update the initramfs and reboot the host:

```bash
update-initramfs -u -k all
reboot
```

#### Configure compute for multipathing

Add the following to `/opt/pf9/etc/nova/conf.d/nova_override.conf` and restart `pf9-ostackhost`:

```ini
[libvirt]
volume_use_multipath = true
```

#### Configure pool name filtering

By default, the NetApp storage driver selects all volumes under the SVM as storage pools and creates LUNs on any volume in that SVM. If your SVM contains volumes used by other workloads, such as Kubernetes persistent volumes, this can cause data corruption.

Always specify `netapp_pool_name_search_pattern` to restrict storage to dedicated NetApp volumes. This parameter accepts a regular expression to match one or more volume names.

```ini
# Single volume
netapp_pool_name_search_pattern = (cinder_vol1)

# Multiple volumes across aggregates
netapp_pool_name_search_pattern = (cinder_n1_vol1|cinder_n2_vol1)
```

To verify which NetApp volumes belong to your storage pool, run:

```bash
cinder --insecure get-pools --detail
```

#### Volume backend configuration

```ini
[netapp_fc_backend]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name = netapp_fc
netapp_storage_protocol = fc
netapp_storage_family = ontap_cluster
netapp_transport_type = https
netapp_server_hostname = <SVM_HOSTNAME>
netapp_server_port = 443
netapp_login = <SVM_ADMIN_USER>
netapp_password = <SVM_ADMIN_PASSWORD>
netapp_vserver = <SVM_NAME>
netapp_ssl_cert_path = /opt/pf9/etc/pf9-cindervolume-base/conf.d/netapp_cert.pem
netapp_use_legacy_client = true
netapp_certificate_host_validation = false
netapp_pool_name_search_pattern = <VOLUME_PATTERN>
use_multipath_for_image_xfer = true
# Thin provisioning: set to 'disabled' for space efficiency, 'enabled' to reserve full LUN space
netapp_lun_space_reservation = disabled
netapp_driver_reports_provisioned_capacity = true
image_upload_use_cinder_backend = true
volumes_dir = /opt/pf9/etc/pf9-cindervolume-base/volumes/
```
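
Once the cinder-volume service has picked up the backend configuration, you can confirm that it is reporting as up, for example:

```bash
# The new backend should be listed as a cinder-volume service in state "up".
cinder --insecure service-list
```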

#### Optional parameters

Enable image-volume caching when you use a storage backend for image storage. Caching stores converted images locally on each storage host, which reduces image transfer time for subsequent volume creation requests. This is recommended for environments with a single images node and multiple storage nodes.

```ini
image_volume_cache_enabled = true
image_volume_cache_max_count = 50
image_volume_cache_max_size_gb = 200
```

#### Special considerations

* The minimum supported ONTAP version is 9.1.
* NVMe/FC requires additional configuration not covered in this guide.
{% endtab %}

{% tab title="NetApp ONTAP - NFS" %}
NetApp ONTAP NFS provides file-based storage connectivity using standard network infrastructure.

#### Prerequisites

Before you configure the NFS backend, complete these requirements:

* Configure a Storage Virtual Machine (SVM) with the NFS protocol enabled.
* Obtain vsadmin or equivalent credentials with edit access on the SVM.
* Configure NFS exports on the NetApp SVM.
* Install `nfs-common` on all hosts.
* Verify connectivity to NFS data LIFs (see the sketch after this list).
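
Connectivity to the exports can be checked from a host before the backend is configured; a quick sketch, where the LIF address and export path are placeholders:

```bash
# List the exports published by the NFS data LIF.
showmount -e <NFS_DATA_LIF_IP>

# Optionally test-mount and unmount an export.
mount -t nfs -o vers=3 <NFS_DATA_LIF_IP>:/vol/<VOLUME_NAME> /mnt
umount /mnt
```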

#### Retrieve the SVM SSL certificate

NFS requires HTTPS connectivity to the NetApp array. Run the following commands on each storage node to export the SVM's public SSL certificate and set its ownership and permissions:

```bash
openssl s_client -showcerts -connect <SVM_HOSTNAME>:443 </dev/null 2>/dev/null | openssl x509 -outform PEM > /opt/pf9/etc/pf9-cindervolume-base/conf.d/netapp_cert.pem
chown pf9:pf9group /opt/pf9/etc/pf9-cindervolume-base/conf.d/netapp_cert.pem
chmod 644 /opt/pf9/etc/pf9-cindervolume-base/conf.d/netapp_cert.pem
```

Replace `<SVM_HOSTNAME>` with your SVM management LIF hostname or IP address.

#### Configure pool name filtering

By default, the NetApp storage driver selects all volumes under the SVM as storage pools and can place new Cinder volumes on any volume in that SVM. If your SVM contains volumes used by other workloads, such as Kubernetes persistent volumes, this can cause data corruption.

Always specify `netapp_pool_name_search_pattern` to restrict storage to dedicated NetApp volumes. This parameter accepts a regular expression to match one or more volume names.

```ini
# Single volume
netapp_pool_name_search_pattern = (cinder_vol1)

# Multiple volumes across aggregates
netapp_pool_name_search_pattern = (P9_CinderPool_n[1,2]_Vol1)
```

To verify which NetApp volumes belong to your storage pool, run:

```bash
cinder --insecure get-pools --detail
```

#### Volume backend configuration

```ini
[netapp_nfs_backend]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name = netapp_nfs
netapp_storage_protocol = nfs
netapp_storage_family = ontap_cluster
netapp_transport_type = https
netapp_server_hostname = <SVM_HOSTNAME>
netapp_server_port = 443
netapp_login = <SVM_ADMIN_USER>
netapp_password = <SVM_ADMIN_PASSWORD>
netapp_vserver = <SVM_NAME>
netapp_ssl_cert_path = /opt/pf9/etc/pf9-cindervolume-base/conf.d/netapp_cert.pem
netapp_certificate_host_validation = false
nfs_shares_config = /opt/pf9/etc/pf9-cindervolume-base/nfs_shares.conf
nfs_mount_options = vers=3,rsize=262144,wsize=262144,nconnect=16,noatime,write=eager,lookupcache=pos
nas_secure_file_operations = false
nas_secure_file_permissions = false
reserved_percentage = 10
max_over_subscription_ratio = 20
volumes_dir = /opt/pf9/etc/pf9-cindervolume-base/volumes/
```

#### NFS shares configuration

Create `/opt/pf9/etc/pf9-cindervolume-base/nfs_shares.conf` with your NFS export paths:

```
<NFS_SERVER_IP>:/vol/<VOLUME_NAME>
```
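
Each line is one export. A hypothetical example that writes the file with two exports (the addresses and volume names are illustrative only):

```bash
# Write the shares file; one export per line.
cat > /opt/pf9/etc/pf9-cindervolume-base/nfs_shares.conf <<'EOF'
192.0.2.10:/vol/cinder_nfs_vol1
192.0.2.11:/vol/cinder_nfs_vol2
EOF
```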

#### Optional parameters

Enable image-volume caching when you use a storage backend for image storage. Caching stores converted images locally on each storage host, which reduces image transfer time for subsequent volume creation requests. This is recommended for environments with a single images node and multiple storage nodes.

```ini
image_volume_cache_enabled = true
image_volume_cache_max_count = 50
image_volume_cache_max_size_gb = 200
```

#### Special considerations

* The NFS driver supports NFS v3 and v4.
* FlexClone is used for space-efficient cloning.
* Verify that export permissions are correctly configured on the NetApp array.
* Unlike iSCSI and FC, NFS does not require multipathing configuration.
{% endtab %}
{% endtabs %}
