This tutorial dives deeper into OpenStack Cinder block storage. Please read the Platform9 tutorial on Cinder Block Storage Integration for Cinder setup with Platform9 Managed OpenStack. Also refer to OpenStack Storage Options and Use Cases for an overall understanding of OpenStack Storage options.
Cinder is the persistent block storage component of OpenStack. It's provided as a standalone OpenStack service, alongside other core OpenStack services such as Nova, Glance, Keystone, and Neutron. Cinder is designed with a pluggable architecture, allowing for easy integration with a number of third-party storage backends.
OpenStack Cinder Deployment Options
Using Linux LVM
In many OpenStack deployments, all Cinder services, except the Cinder volume service, reside on the Controller nodes. In these deployments, commodity servers are used as dedicated Cinder volume nodes, with Cinder volume services running on them, as depicted below. In this configuration, a Cinder volume node functions as a low-cost, no-frills storage array that serves simple block volumes to cloud instances, using the Logical Volume Manager (LVM) included in all Linux distributions to manage locally attached storage.
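On such a node, the LVM driver is configured in cinder.conf. A minimal sketch follows; the volume group name `cinder-volumes` and the backend name are conventional choices, not requirements:

```ini
# /etc/cinder/cinder.conf on the Cinder volume node
[DEFAULT]
enabled_backends = lvm

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes     # LVM volume group built on the node's local disks
target_helper = lioadm            # iSCSI target administration helper
target_protocol = iscsi
volume_backend_name = LVM_iSCSI
```

Volumes created against this backend are carved out of the local volume group as logical volumes and exported to instances over iSCSI.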
While this will work for some use cases, it has limitations that make it less than desirable in many enterprise use cases:
- Scaling issues - While deploying a commodity Cinder solution is likely the cheaper option when capacity requirements are low, it becomes less cost effective in large capacity use cases. For example, a typical Cinder volume node in an OpenStack deployment might be a Dell R720 server with 8 internal drives. Using 600 GB 15K SAS drives would yield ~2.4 TB with RAID 10 and ~4.2 TB with RAID 5, and less, obviously, with SSD. I’ve talked to users who required 12 TB of Cinder storage and needed it to be on 600 GB SAS drives in a RAID 10 configuration; using just commodity servers would mean implementing a solution with 5 Cinder volume nodes.
- Lack of redundancy - Another limitation of a commodity Cinder solution is the lack of redundancy. While all of the Cinder volume nodes can be managed by the same cloud controllers, the volume nodes are in fact independent “storage arrays” that do not share data with each other. Effectively, this means that if a Cinder volume node fails, all volumes exported by that node, as well as the data on it, become unavailable.
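The capacity arithmetic behind the scaling example can be checked with a short sketch; the drive count and drive size are the example's assumptions, not fixed limits:

```shell
# Usable capacity for the example node: 8 x 600 GB 15K SAS drives
drives=8; drive_gb=600
raid10=$(( drives * drive_gb / 2 ))        # RAID 10 mirrors every drive, halving raw capacity
raid5=$(( (drives - 1) * drive_gb ))       # RAID 5 gives up one drive's worth to parity
echo "RAID 10 usable: ${raid10} GB"        # 2400 GB (~2.4 TB)
echo "RAID 5 usable:  ${raid5} GB"         # 4200 GB (~4.2 TB)

# 12 TB of RAID 10 capacity requires ceiling(12000 / 2400) nodes
echo $(( (12000 + raid10 - 1) / raid10 ))  # 5 Cinder volume nodes
```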
Using an Enterprise Storage Solution
Enterprise storage solutions provide several advantages over a simple LVM based Cinder storage strategy.
This type of storage generally has greater capacity than a commodity Cinder volume node solution without creating silos of independent nodes.
It also provides more redundancy than is available with a commodity Cinder server. For example, many modern storage arrays have redundant controllers with mirrored write cache.
Cinder with LVM also lacks much of the advanced functionality that is found in most enterprise storage solutions, such as compression, de-duplication, thin-provisioning, and QoS. There are enterprise applications, such as databases, where these capabilities are critical.
For these reasons, most users are looking to enterprise storage vendors, such as SolidFire, to provide Cinder storage solutions that are suitable for their growing production cloud workloads.
Platform9 Implementation Of Cinder
Platform9 offers a unique approach to deploying OpenStack Cinder by hosting most of the Cinder services off-premises with the other OpenStack control services, where they are fully managed by Platform9. The Cinder volume service, however, runs on a server in a customer’s on-premises environment.
A customer chooses an on-premises server to assume the Cinder Volume Node role; this node communicates with the Cinder services running on the off-premises controllers. The Cinder Volume Node can, as in other OpenStack solutions, export volumes created on local disks or act as a proxy for communicating with an enterprise storage array, such as SolidFire or NetApp.
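When the volume node acts as a proxy for an array, its backend stanza points a vendor driver at the array's management endpoint rather than at local disks. A sketch for SolidFire follows; the address and credentials are placeholders:

```ini
# /etc/cinder/cinder.conf on the Cinder volume node acting as a proxy
[DEFAULT]
enabled_backends = solidfire

[solidfire]
volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
san_ip = 192.0.2.10          # placeholder: management VIP of the SolidFire cluster
san_login = admin            # placeholder credentials
san_password = secret
volume_backend_name = SolidFire
```

In this mode the node carries control traffic only; volume data flows between the array and the hypervisors.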
Typically, configuration of storage arrays for use as Cinder block storage is a manual process requiring the use of the command line. To simplify this process for our customers, we’ve enabled the process of setting up arrays, such as SolidFire, to be performed using the Platform9 dashboard as well as the command line.
Once a Platform9 array is configured, Cinder volumes can be created by cloud administrators and made available to end-users for attaching to their cloud instances. These tasks can be performed using either the command line or the Platform9 dashboard.
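On the command-line side, these are the standard OpenStack client operations. A sketch, assuming a reachable cloud and credentials already sourced; the volume and instance names are invented for illustration:

```shell
# Create a 10 GB volume against the configured backend
openstack volume create --size 10 db-data

# Attach it to a running instance, where it appears as a new block device
openstack server add volume my-instance db-data

# Verify the attachment
openstack volume show db-data -c status -c attachments
```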
Platform9 also enables Cinder users to create volume type tags that map to specific capabilities exposed by various storage arrays. For example, a Platform9 user with an array that has different drive types can create SSD-backed and SAS-backed Cinder volumes that can be matched to the correct workload type. SolidFire customers can create Cinder volumes with different QoS settings and use Platform9 to assign those volumes to the appropriate cloud instances based on specific volume type tags.
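Under the hood, volume type tags are Cinder volume types with extra specs that the scheduler matches against backend capabilities. A sketch, assuming a reachable cloud; the type name, backend name, and IOPS figures are illustrative (minIOPS/maxIOPS are SolidFire-style QoS keys):

```shell
# A volume type whose extra spec selects the SSD-backed backend
openstack volume type create ssd
openstack volume type set --property volume_backend_name=SSD_backend ssd

# SolidFire-style QoS: create a spec and associate it with the type
openstack volume qos create --property minIOPS=1000 --property maxIOPS=5000 gold-qos
openstack volume qos associate gold-qos ssd

# Volumes created with this type land on the matching backend with the QoS applied
openstack volume create --size 20 --type ssd fast-vol
```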
VMware vCenter Storage Policies with Cinder
Platform9 customers using vSphere as their hypervisor solution can take advantage of Platform9’s integration with VMware’s Storage Policy Based Management (SPBM). This feature enables Platform9 users to create Cinder volumes that map to datastores which adhere to specific storage types, as defined in vCenter. To learn how to integrate Platform9 with SPBM, read the tutorial on SPBM Support for Cinder in Platform9 VMware.
To learn more about Cinder with Platform9 and vSphere, please read the tutorial on Cinder Support in Platform9 VMware.
Please read the blog post on Integrating SolidFire Block Storage With OpenStack Cinder: Deep Dive for more details on how Platform9 integrates with Cinder. To learn how to integrate Platform9 with LVM to create a low-cost, no-frills Cinder option, read the tutorial on Cinder Integration with LVM. To learn how to integrate Platform9 with SolidFire, read the tutorial on Cinder Integration with SolidFire. To learn how to integrate Platform9 with NetApp, read the tutorial on Cinder Integration with NetApp.