System Requirements for Linux-KVM

Before you start using Platform9 Managed OpenStack (PMO), you must configure your physical resources so they can work with the PMO setup.

This document describes the system prerequisites for creating a minimal PMO deployment or for configuring PMO for production.

IMPORTANT: We recommend reading through and understanding the entire document, irrespective of whether you are creating a minimal PMO deployment for a trial/POC, or a production deployment.

Minimal Configuration (POC/Trial)

Following is a minimal configuration to enable a PMO POC/Trial deployment.

  • 1 to 3 physical servers running Linux RHEL/CentOS/Ubuntu
  • Each server configured with sufficient local or shared storage to host virtual machines that will run on that server
  • 1 server configured with sufficient additional storage to host 1 or more images used for virtual machine provisioning
  • Each server having at least 1 network interface (configured with either flat or VLAN networking) and outbound HTTPS access

Servers

  • 1-3 physical servers. We recommend starting with 3 servers, but you can start with 1.
  • Each server configured with:
    • CPU: Minimum 8 physical cores; 2 physical sockets with 8 cores each is recommended
    • RAM: Minimum 16GB; 32GB recommended
    • Storage: Minimum 100GB; 1TB recommended if virtual machines will run locally on host storage
    • One of the supported Linux operating system distributions: RHEL, CentOS, or Ubuntu
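
Before adding a host, you can quickly confirm it meets these minimums with standard Linux tools. The following is a minimal sketch; the storage path and the Platform9 account URL are placeholders for your environment.

```bash
# Quick pre-flight check against the minimums above (paths and URL are placeholders).
nproc                                           # CPU count (compare against the 8-physical-core minimum)
free -g | awk '/^Mem:/ {print $2 " GB RAM"}'    # minimum 16GB
df -h /var/lib                                  # storage available for VMs and images (minimum 100GB)
cat /etc/os-release                             # confirm a supported distribution (RHEL/CentOS/Ubuntu)
# Confirm outbound HTTPS access to the Platform9 control plane (replace with your account URL):
curl -sS -o /dev/null -w '%{http_code}\n' https://example.platform9.net
```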

Storage

Configure your servers with at least some local storage to get started with Platform9. PMO works with the following storage options for storing virtual machines and images:

  • Local storage: Each hypervisor is configured with local storage
  • NFS Shared storage: Each hypervisor is configured with NFS shared storage. Refer to How to Configure NFS Shared Storage to ensure appropriate NFS configuration.
  • Block storage via integration with your storage provider using OpenStack Cinder
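
For example, with NFS shared storage each hypervisor mounts the same export at the path used for instance storage. The following is a minimal sketch; the server name, export path, and mount point (the default Nova instance directory) are assumptions to adapt to your environment.

```bash
# Hypothetical NFS server and export; adjust to match your environment.
sudo mkdir -p /var/lib/nova/instances
sudo mount -t nfs nfs01.example.com:/exports/pmo-instances /var/lib/nova/instances
# Make the mount persistent across reboots by adding a matching line to /etc/fstab:
echo 'nfs01.example.com:/exports/pmo-instances /var/lib/nova/instances nfs defaults,_netdev 0 0' | sudo tee -a /etc/fstab
```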

Networking

OpenStack Networking (Neutron) enables the use of Software Defined Networking (SDN) within OpenStack, allowing an administrator to define complex virtual network topologies using VLAN / overlay networking (GRE/VXLAN) and isolated L3 domains within the OpenStack environment.

A typical Neutron-enabled environment requires either one or more nodes configured as network nodes, or Distributed Virtual Routing configured across all your hypervisors. In a configuration that uses dedicated network nodes, each network node serves as the egress point for north-south traffic transiting the cloud, and also provides Layer 3 routing between tenant networks created in OpenStack (more details on this in Configuring Neutron Settings).

Regardless of your configuration, all your network nodes and/or hypervisors must have the following networking configuration:

  • At least one physical network interface (two interfaces in a bond is recommended).
  • Outbound HTTPS (port 443) access (to communicate with Platform9 configuration plane).
  • The following VLAN sub-interfaces on the physical interface/bond:

    1. VLAN-based virtual machine traffic: This interface will be used to route traffic for the VLAN-based private/tenant networks as well as provider networks created via Neutron. Therefore, it must be trunked for all VLAN IDs that you plan to supply to Neutron for tenant/private and provider networking.
    2. Management VLAN: Ensure this VLAN allows outbound HTTPS access for the Platform9 host agent to communicate with the controller.
    3. Tunneled GRE / VXLAN VLAN (optional): This interface will be used to route traffic for the VXLAN- or GRE-based private/tenant networks created via Neutron. Therefore, it must have IP-level connectivity with other hosts through the interface IP.
    4. External VLAN (optional for hypervisors): This interface will be used to route all outbound traffic for all instances that are assigned a floating IP address.
    5. Storage VLAN (optional): This interface will be used for storage traffic, such as iSCSI or NFS, for instance storage or block storage.
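
As an illustration, VLAN sub-interfaces like these can be created on top of a bond with the iproute2 tools. This is a minimal sketch: the bond name, VLAN IDs, and addresses are placeholders, and in production you would typically persist the configuration through your distribution's network configuration files rather than ad-hoc commands.

```bash
# Hypothetical example: bond0 is the bonded physical interface; VLAN IDs and IPs are illustrative.
# Management VLAN (ID 10) -- must allow outbound HTTPS to the Platform9 control plane.
sudo ip link add link bond0 name bond0.10 type vlan id 10
sudo ip addr add 192.168.10.21/24 dev bond0.10
sudo ip link set bond0.10 up

# Tunnel VLAN (ID 20) for VXLAN/GRE tenant traffic -- needs IP reachability to the other hosts.
sudo ip link add link bond0 name bond0.20 type vlan id 20
sudo ip addr add 192.168.20.21/24 dev bond0.20
sudo ip link set bond0.20 up
ping -c 3 192.168.20.22   # verify connectivity to another host's tunnel IP
```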

Production Configuration

Once you move past the proof-of-concept (POC) phase and are ready to deploy PMO for your production environment, we recommend that you follow the steps below.

IMPORTANT: We highly recommend reading through and understanding both this section and the section above, irrespective of whether you are creating a minimal PMO setup or working on a production PMO deployment.

Whitelisting access to PMO

By default, your PMO account URL endpoint is accessible over the public internet. For production environments, we recommend restricting access so that the endpoint is reachable only from a limited range of IP addresses that belong to your organization. If you’d like to enable this additional filtering, contact Platform9 Support and provide the range of IP addresses from which access should be allowed.

Compute and Memory

Compute hosts should meet at least the following requirements to operate Platform9.

  • CPU: AMD / Intel Server Class Processors
  • Memory: 16GB RAM
  • Network: (2) 1Gbps bonded NICs (LACP)
  • Boot disk: 20GB
  • File system: Ext3, Ext4

Disks used for virtual machine storage are supported with the following file systems:

  • CephFS
  • Ext3
  • Ext4
  • GFS
  • GFS2
  • GlusterFS
  • NFS
  • XFS

The amount of space required for virtual machine storage will vary widely between organizations depending on the size of VMs created, over-provisioning ratios, and actual disk utilization per VM. Thus, Platform9 makes no recommendation on this value.
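
To confirm that the disk backing your virtual machine storage uses one of the supported file systems, you can inspect the mount. A minimal sketch follows; the path is an assumption (the default Nova instance directory), so substitute your own.

```bash
# Show the backing device, file system type, and capacity for the VM storage path (path is an assumption).
findmnt -no SOURCE,FSTYPE,SIZE,AVAIL /var/lib/nova/instances
```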

Resource Over-provisioning

OpenStack places virtual machines on available hosts so that each VM gets a fair share of the available resources. Resource overcommitment is a technique by which virtual machines share the same physical CPU and memory, and OpenStack placement uses it to reap cost savings. It is important to account for this behavior when planning the compute infrastructure that backs your OpenStack deployment.

Default Overcommit Ratios

  • CPU: 16 virtual CPUs per CPU core (16:1)
  • Memory: 150% of available memory (1.5:1)

Depending on the number of VMs you plan to run on PMO and the level of resource overcommitment you are comfortable with, choose an appropriate number of hosts with sufficient CPU and memory. For example, if you plan to deploy 50 VMs - each with 1 vCPU and 2GB of memory - and want 5x CPU overcommitment but no memory overcommitment, the hosts' resources should total 10 CPU cores and 100GB of memory. If you plan to use multi-vCPU VMs, each host should have at least as many physical cores as the largest vCPU count used by any virtual machine. For example, in a setup with 6-core hosts, running 24-vCPU instances is not supported.
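
The sizing arithmetic from the example above can be sketched as a simple calculation; the values match the example and should be replaced with your own VM counts and overcommit ratios.

```bash
# Sizing sketch for the example above: 50 VMs, 1 vCPU and 2 GB RAM each,
# 5:1 CPU overcommit, no memory overcommit (1:1).
VMS=50; VCPUS_PER_VM=1; RAM_GB_PER_VM=2
CPU_RATIO=5; RAM_RATIO=1
echo "Total physical cores required: $(( VMS * VCPUS_PER_VM / CPU_RATIO ))"   # 10
echo "Total memory required (GB):    $(( VMS * RAM_GB_PER_VM / RAM_RATIO ))"  # 100
```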

Memory Oversubscription

When oversubscribing memory, it is important to ensure the Linux host has enough swap space allocated so that it never runs out. For example, consider a server with 48GB of RAM. With the default overcommitment policy, OpenStack can provision virtual machines totaling up to 1.5 times the physical memory: 72GB. In addition, let's assume 4GB of memory is needed for the Linux OS to run properly. In this case, the amount of swap space needed is (72 - 48) + 4 = 28GB. For an in-depth discussion of these concepts, refer to Resource Overcommitment Best Practices.
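
The swap sizing from the example above, and one common way of adding swap space on a Linux host, can be sketched as follows. The figures match the example, and the swap file path is an assumption.

```bash
# Swap sizing from the example: 48 GB RAM, 1.5x memory overcommit, 4 GB reserved for the OS.
RAM_GB=48; OS_RESERVE_GB=4
SWAP_GB=$(( RAM_GB / 2 + OS_RESERVE_GB ))   # overcommitted 72 GB - physical 48 GB + 4 GB OS = 28 GB

# One common way to add a swap file (assumes free space on the root file system):
sudo fallocate -l ${SWAP_GB}G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
```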

Networking

The minimum hardware requirements for the network node are the same as recommended for the compute nodes. Additionally, the network node should have 1 CPU core per Gbps of L3 routed tenant traffic.

Sample Network Configuration

The following image is a sample configuration for your reference, with both network & compute nodes utilizing bonded NICs connecting to two redundant, virtually clustered upstream switches.

Redundant Neutron Deployment

Additional resources

The enhanced functionality in Neutron networking introduces greater complexity. Please refer to the following articles to learn more about Neutron networking, and how to configure it within your environment.

Block Storage (Cinder)

Block storage is made available in OpenStack via the Cinder service. Many backend storage arrays are supported via Cinder. To ensure your array is supported, check the Cinder Compatibility Matrix.

Sample Cinder Configuration

The following image is a sample configuration for your reference, with both Cinder Block Storage Roles & Cinder Backends utilizing bonded NICs connecting to two redundant, virtually clustered upstream switches.

Cinder High Availability

By having multiple Cinder Block Storage Roles for each Cinder Backend, and multiple Cinder Backends, you eliminate single points of failure and achieve full redundancy in the block storage path.