NetApp Storage Configurations
Overview
NetApp ONTAP arrays are enterprise-grade storage systems that integrate with Private Cloud Director through the NetApp storage driver. They support iSCSI, Fibre Channel (FC), and NFS connectivity protocols to accommodate varied performance and deployment needs.
NetApp ONTAP arrays feature thin provisioning, deduplication, and compression. Deduplication and compression are enabled by default and require no special configuration in the storage backend or volume type.
Select a protocol below to view the complete configuration steps for your environment.
NetApp ONTAP iSCSI provides block storage connectivity for enterprise workloads using standard Ethernet infrastructure.
Prerequisites
Before you configure the iSCSI backend, complete these requirements:
Verify that the hypervisor operating system is not SAN-booted.
Configure iSCSI connectivity between all hosts and the NetApp array.
Configure a Storage Virtual Machine (SVM) with the iSCSI protocol enabled.
Create a dedicated NetApp volume (vvol) on the SVM aggregate for the OpenStack storage pool. Size the volume at 1 TiB or larger, depending on your use case.
Obtain vsadmin or equivalent credentials with edit access on the SVM.
Grant LUN provisioning permissions to the specified user.
Retrieve the SVM SSL certificate
iSCSI requires HTTPS connectivity to the NetApp array. Run the following command on each storage node to export the SVM's public SSL certificate:
openssl s_client -showcerts -connect <SVM_HOSTNAME>:443 </dev/null 2>/dev/null | openssl x509 -outform PEM > /opt/pf9/etc/pf9-cindervolume-base/conf.d/netapp_cert.pem
chown pf9:pf9group /opt/pf9/etc/pf9-cindervolume-base/conf.d/netapp_cert.pem
chmod 644 /opt/pf9/etc/pf9-cindervolume-base/conf.d/netapp_cert.pem
Replace <SVM_HOSTNAME> with your SVM management LIF hostname or IP address.
Configure multipathing
Install the required multipathing packages on all hypervisors and storage nodes:
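Package names vary by distribution; for example:

```shell
# Debian/Ubuntu
sudo apt-get install -y multipath-tools sg3-utils
# RHEL/Rocky
sudo yum install -y device-mapper-multipath sg3_utils
sudo mpathconf --enable   # RHEL family only: generates a default /etc/multipath.conf
```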
Create or update /etc/multipath.conf with the following configuration. Exclude your local disk from multipathing to prevent conflicts:
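An illustrative configuration; <LOCAL_DISK_WWID> is a placeholder for your local disk's WWID, and you should confirm recommended settings against NetApp's Host Utilities documentation:

```
defaults {
    user_friendly_names no
    find_multipaths no
}
blacklist {
    wwid "<LOCAL_DISK_WWID>"
}
```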
To find the WWID of your local disk, run:
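Assuming the local disk is /dev/sda:

```shell
sudo /lib/udev/scsi_id -g -u -d /dev/sda
```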
Restart the multipath service:
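For example:

```shell
sudo systemctl enable --now multipathd
sudo systemctl restart multipathd
```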
Configure LVM filtering
LVM filtering prevents the host agent from hanging while running LVM commands. By default, LVM scans all devices, including multipath devices. Configure LVM to scan only local disks.
Add the following lines to /etc/lvm/lvm.conf, replacing /dev/sda with your local disk device:
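For example, with /dev/sda as the local disk (these settings belong in the existing devices section of lvm.conf):

```
devices {
    filter = [ "a|^/dev/sda|", "r|.*|" ]
    global_filter = [ "a|^/dev/sda|", "r|.*|" ]
}
```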
Update the initramfs and reboot the host:
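The command depends on the distribution; for example:

```shell
# Debian/Ubuntu
sudo update-initramfs -u
# RHEL family
sudo dracut --force
sudo reboot
```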
Configure compute for multipathing
Add the following line to /opt/pf9/etc/nova/conf.d/nova_override.conf and restart pf9-ostackhost:
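The relevant Nova option is volume_use_multipath in the [libvirt] section:

```
[libvirt]
volume_use_multipath = True
```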
Configure pool name filtering
By default, the NetApp storage driver selects all volumes under the SVM as storage pools and creates LUNs on any volume in that SVM. If your SVM contains volumes used by other workloads, such as Kubernetes persistent volumes, this can cause data corruption.
Always specify netapp_pool_name_search_pattern to restrict storage to dedicated NetApp volumes. This parameter accepts a regular expression to match one or more volume names.
To verify which NetApp volumes belong to your storage pool, run:
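From the ONTAP CLI, for example (<SVM_NAME> is a placeholder for your SVM):

```
volume show -vserver <SVM_NAME> -fields volume,aggregate,size,state
```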
Volume backend configuration
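A sketch of a Cinder backend section using the standard NetApp unified driver options; the section name, credentials, and pool pattern below are placeholders for your environment:

```
[netapp-iscsi]
volume_backend_name = netapp-iscsi
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
netapp_server_hostname = <SVM_HOSTNAME>
netapp_server_port = 443
netapp_transport_type = https
netapp_login = vsadmin
netapp_password = <PASSWORD>
netapp_vserver = <SVM_NAME>
netapp_pool_name_search_pattern = <POOL_VOLUME_REGEX>
```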
Optional parameters
Enable image-volume caching when you use a storage backend for image storage. Caching stores converted images locally on each storage host, which reduces image transfer time for subsequent volume creation requests. This is recommended for environments with a single images node and multiple storage nodes.
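The per-backend Cinder options are, for example (cache size limits shown are illustrative):

```
image_volume_cache_enabled = True
image_volume_cache_max_size_gb = 200
image_volume_cache_max_count = 50
```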
Special considerations
SnapMirror replication can be used for disaster recovery.
NetApp ONTAP Fibre Channel provides high-performance block storage connectivity for mission-critical workloads.
Prerequisites
Before you configure the Fibre Channel backend, complete these requirements:
Verify that the hypervisor operating system is not SAN-booted.
Install Fibre Channel HBAs on all hypervisors and block storage hosts.
Configure FC zoning between hosts and NetApp storage on your FC switches.
Configure a Storage Virtual Machine (SVM) with the FC protocol enabled.
Create a dedicated NetApp volume (vvol) on the SVM aggregate for the OpenStack storage pool. Size the volume at 1 TiB or larger, depending on your use case.
Obtain vsadmin or equivalent credentials with edit access on the SVM.
Retrieve the SVM SSL certificate
Fibre Channel requires HTTPS connectivity to the NetApp array. Run the following command on each storage node to export the SVM's public SSL certificate:
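The export command is the same as in the iSCSI section:

```shell
openssl s_client -showcerts -connect <SVM_HOSTNAME>:443 </dev/null 2>/dev/null | openssl x509 -outform PEM > /opt/pf9/etc/pf9-cindervolume-base/conf.d/netapp_cert.pem
chown pf9:pf9group /opt/pf9/etc/pf9-cindervolume-base/conf.d/netapp_cert.pem
chmod 644 /opt/pf9/etc/pf9-cindervolume-base/conf.d/netapp_cert.pem
```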
Replace <SVM_HOSTNAME> with your SVM management LIF hostname or IP address.
Configure multipathing
Install the required multipathing packages on all hypervisors and storage nodes:
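As in the iSCSI section, package names vary by distribution; for example:

```shell
# Debian/Ubuntu
sudo apt-get install -y multipath-tools sg3-utils
# RHEL/Rocky
sudo yum install -y device-mapper-multipath sg3_utils
sudo mpathconf --enable   # RHEL family only: generates a default /etc/multipath.conf
```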
Create or update /etc/multipath.conf with the following configuration. Exclude your local disk from multipathing to prevent conflicts:
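An illustrative configuration; <LOCAL_DISK_WWID> is a placeholder for your local disk's WWID, and you should confirm recommended settings against NetApp's Host Utilities documentation:

```
defaults {
    user_friendly_names no
    find_multipaths no
}
blacklist {
    wwid "<LOCAL_DISK_WWID>"
}
```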
To find the WWID of your local disk, run:
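Assuming the local disk is /dev/sda:

```shell
sudo /lib/udev/scsi_id -g -u -d /dev/sda
```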
Restart the multipath service:
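For example:

```shell
sudo systemctl enable --now multipathd
sudo systemctl restart multipathd
```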
Configure LVM filtering
LVM filtering prevents the host agent from hanging while running LVM commands. By default, LVM scans all devices, including multipath devices. Configure LVM to scan only local disks.
Add the following lines to /etc/lvm/lvm.conf, replacing /dev/sda with your local disk device:
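For example, with /dev/sda as the local disk (these settings belong in the existing devices section of lvm.conf):

```
devices {
    filter = [ "a|^/dev/sda|", "r|.*|" ]
    global_filter = [ "a|^/dev/sda|", "r|.*|" ]
}
```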
Update the initramfs and reboot the host:
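The command depends on the distribution; for example:

```shell
# Debian/Ubuntu
sudo update-initramfs -u
# RHEL family
sudo dracut --force
sudo reboot
```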
Configure compute for multipathing
Add the following line to /opt/pf9/etc/nova/conf.d/nova_override.conf and restart pf9-ostackhost:
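The relevant Nova option is volume_use_multipath in the [libvirt] section:

```
[libvirt]
volume_use_multipath = True
```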
Configure pool name filtering
By default, the NetApp storage driver selects all volumes under the SVM as storage pools and creates LUNs on any volume in that SVM. If your SVM contains volumes used by other workloads, such as Kubernetes persistent volumes, this can cause data corruption.
Always specify netapp_pool_name_search_pattern to restrict storage to dedicated NetApp volumes. This parameter accepts a regular expression to match one or more volume names.
To verify which NetApp volumes belong to your storage pool, run:
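From the ONTAP CLI, for example (<SVM_NAME> is a placeholder for your SVM):

```
volume show -vserver <SVM_NAME> -fields volume,aggregate,size,state
```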
Volume backend configuration
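A sketch of a Cinder backend section using the standard NetApp unified driver options with the FC protocol; the section name, credentials, and pool pattern below are placeholders for your environment:

```
[netapp-fc]
volume_backend_name = netapp-fc
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = fc
netapp_server_hostname = <SVM_HOSTNAME>
netapp_server_port = 443
netapp_transport_type = https
netapp_login = vsadmin
netapp_password = <PASSWORD>
netapp_vserver = <SVM_NAME>
netapp_pool_name_search_pattern = <POOL_VOLUME_REGEX>
```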
Optional parameters
Enable image-volume caching when you use a storage backend for image storage. Caching stores converted images locally on each storage host, which reduces image transfer time for subsequent volume creation requests. This is recommended for environments with a single images node and multiple storage nodes.
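The per-backend Cinder options are, for example (cache size limits shown are illustrative):

```
image_volume_cache_enabled = True
image_volume_cache_max_size_gb = 200
image_volume_cache_max_count = 50
```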
Special considerations
The minimum supported ONTAP version is 9.1.
NVMe/FC requires additional configuration not covered in this guide.
NetApp ONTAP NFS provides file-based storage connectivity using standard network infrastructure.
Prerequisites
Before you configure the NFS backend, complete these requirements:
Configure a Storage Virtual Machine (SVM) with the NFS protocol enabled.
Obtain vsadmin or equivalent credentials with edit access on the SVM.
Configure NFS exports on the NetApp SVM.
Install nfs-common on all hosts.
Verify connectivity to NFS data LIFs.
Retrieve the SVM SSL certificate
NFS requires HTTPS connectivity to the NetApp array. Run the following command on each storage node to export the SVM's public SSL certificate:
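The export command is the same as in the iSCSI section:

```shell
openssl s_client -showcerts -connect <SVM_HOSTNAME>:443 </dev/null 2>/dev/null | openssl x509 -outform PEM > /opt/pf9/etc/pf9-cindervolume-base/conf.d/netapp_cert.pem
chown pf9:pf9group /opt/pf9/etc/pf9-cindervolume-base/conf.d/netapp_cert.pem
chmod 644 /opt/pf9/etc/pf9-cindervolume-base/conf.d/netapp_cert.pem
```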
Replace <SVM_HOSTNAME> with your SVM management LIF hostname or IP address.
Configure pool name filtering
By default, the NetApp storage driver selects all volumes under the SVM as storage pools and creates LUNs on any volume in that SVM. If your SVM contains volumes used by other workloads, such as Kubernetes persistent volumes, this can cause data corruption.
Always specify netapp_pool_name_search_pattern to restrict storage to dedicated NetApp volumes. This parameter accepts a regular expression to match one or more volume names.
To verify which NetApp volumes belong to your storage pool, run:
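From the ONTAP CLI, for example (<SVM_NAME> is a placeholder for your SVM):

```
volume show -vserver <SVM_NAME> -fields volume,aggregate,size,state
```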
Volume backend configuration
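A sketch of a Cinder backend section using the standard NetApp unified driver options with the NFS protocol; the section name and credentials below are placeholders for your environment:

```
[netapp-nfs]
volume_backend_name = netapp-nfs
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_server_hostname = <SVM_HOSTNAME>
netapp_server_port = 443
netapp_transport_type = https
netapp_login = vsadmin
netapp_password = <PASSWORD>
netapp_vserver = <SVM_NAME>
netapp_pool_name_search_pattern = <POOL_VOLUME_REGEX>
nfs_shares_config = /opt/pf9/etc/pf9-cindervolume-base/nfs_shares.conf
```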
NFS shares configuration
Create /opt/pf9/etc/pf9-cindervolume-base/nfs_shares.conf with your NFS export paths:
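One export per line, addressed by the NFS data LIF; the addresses and volume names below are placeholders:

```
<NFS_DATA_LIF_IP>:/openstack_vol1
<NFS_DATA_LIF_IP>:/openstack_vol2
```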
Optional parameters
Enable image-volume caching when you use a storage backend for image storage. Caching stores converted images locally on each storage host, which reduces image transfer time for subsequent volume creation requests. This is recommended for environments with a single images node and multiple storage nodes.
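The per-backend Cinder options are, for example (cache size limits shown are illustrative):

```
image_volume_cache_enabled = True
image_volume_cache_max_size_gb = 200
image_volume_cache_max_count = 50
```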
Special considerations
The NFS driver supports NFS v3 and v4.
FlexClone is used for space-efficient cloning.
Verify that export permissions are correctly configured on the NetApp array.
Unlike iSCSI and FC, NFS does not require multipathing configuration.