System Requirements for Linux-KVM

Before you start using Platform9 Managed OpenStack (PMO), you must configure your physical resources so they can work with the PMO setup.

This document describes the system prerequisites for creating a minimal PMO deployment or for configuring PMO for production.

IMPORTANT: We recommend reading through and understanding the entire document, irrespective of whether you are creating a minimal PMO deployment for a trial/POC, or a production deployment.

Minimal Configuration (POC/Trial)

Following is a minimal configuration to enable a PMO POC/Trial deployment.

  • 1 to 3 physical servers running Linux RHEL/CentOS/Ubuntu
  • Each server configured with sufficient local or shared storage to host virtual machines that will run on that server
  • 1 server configured with sufficient additional storage to host 1 or more images used for virtual machine provisioning
  • Each server having at least 1 network interface (configured with either flat or VLAN networking) and outbound HTTPS access

Hardware Requirements

  • 1-3 physical servers. We recommend starting with 3 servers, but you can start with 1.
  • Each server configured with:
    • CPU: Minimum 8 physical cores; recommended: 2 physical sockets with 8 cores each
    • RAM: Minimum 16GB; recommended: 32GB
    • Storage: Minimum 100GB; recommended: 1TB if virtual machines will run locally on host storage
    • One of the following Linux distributions: RHEL, CentOS, or Ubuntu

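As a quick sanity check, the hardware minimums above can be verified with a short shell snippet. This is only a sketch: the thresholds mirror the minimums listed here, and note that `nproc` reports logical CPUs, which may exceed the physical core count.

```shell
# Preflight sketch: compare this host against the PMO minimums above.
# Note: nproc reports logical CPUs, which may exceed physical cores.
cores=$(nproc)
mem_gb=$(awk '/MemTotal/ {printf "%.0f", $2/1024/1024}' /proc/meminfo)
root_gb=$(df -BG --output=size / | tail -1 | tr -dc '0-9')

[ "$cores" -ge 8 ]     && echo "CPU: OK ($cores cores)"   || echo "CPU: below minimum ($cores cores, need 8)"
[ "$mem_gb" -ge 16 ]   && echo "RAM: OK (${mem_gb}GB)"    || echo "RAM: below minimum (${mem_gb}GB, need 16)"
[ "$root_gb" -ge 100 ] && echo "Disk: OK (${root_gb}GB)"  || echo "Disk: below minimum (${root_gb}GB, need 100)"
```

Run it on each candidate host before onboarding; it prints one OK/below-minimum line per resource.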
Storage

Configure your servers with at least some local storage to get started with Platform9. PMO works with the following storage options for storing virtual machines and images:

  • Local storage: Each hypervisor is configured with local storage
  • NFS Shared storage: Each hypervisor is configured with NFS shared storage. Refer to How to Configure NFS Shared Storage to ensure appropriate NFS configuration.
  • Block storage via integration with your storage provider using OpenStack Cinder
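For the NFS option, the share is typically mounted at the same path on every hypervisor. A hypothetical /etc/fstab entry is shown below; the server name and export path are placeholders, and /var/lib/nova/instances is the default Nova instance directory.

```
# Hypothetical NFS share for VM storage; server name and export path are examples.
nfs.example.com:/exports/pmo-vms  /var/lib/nova/instances  nfs  defaults,_netdev  0  0
```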

Networking

PMO Networking enables the use of Software Defined Networking (SDN) within PMO, allowing an administrator to define complex virtual network topologies using VLAN or overlay (GRE/VXLAN) networking and isolated L3 domains within the OpenStack environment.

A typical Neutron-enabled environment requires either one or more hosts configured as network nodes, or Distributed Virtual Routing (DVR) configured across all your hypervisors. In a configuration that uses dedicated network nodes, the node serves as the egress point for north-south traffic for the cloud and provides layer 3 routing between tenant networks created in PMO. For more details, refer to Configuring PMO Networking Settings.

Regardless of your configuration, all your network nodes and/or hypervisors must have the following networking configuration:

  • At least one physical network interface (two interfaces in a bond is recommended).
  • Outbound HTTPS (port 443) access (to communicate with Platform9 configuration plane).
  • The following VLAN sub-interfaces on the physical interface/bond:

    1. VLAN-based virtual machine traffic: This interface routes traffic for the VLAN-based private/tenant networks as well as provider networks created via Neutron. It must therefore be trunked for all VLAN IDs that you plan to supply to Neutron for tenant/private and provider networking.
    2. Management VLAN: Ensure this VLAN allows outbound HTTPS access so the Platform9 host agent can communicate with the controller.
    3. Tunneled GRE/VXLAN VLAN (optional): This interface routes traffic for the VXLAN- or GRE-based private/tenant networks created via Neutron. It must therefore have IP-level connectivity with other hosts through the interface IP.
    4. External VLAN (optional for hypervisors): This interface routes all outbound traffic for instances that are assigned a floating IP address.
    5. Storage VLAN (optional): This interface carries iSCSI, NFS, and other storage traffic for instances or block storage.
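On Ubuntu hosts, the sub-interface layout above might be expressed with netplan roughly as follows. This is only a sketch: the interface names, VLAN IDs, and addresses are placeholders that must be adapted to your environment.

```yaml
# Hypothetical netplan sketch: VLAN sub-interfaces on a two-NIC LACP bond.
network:
  version: 2
  ethernets:
    eno1: {}
    eno2: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: 802.3ad
  vlans:
    bond0.10:               # management VLAN (host agent HTTPS)
      id: 10
      link: bond0
      addresses: [10.0.10.5/24]
    bond0.20:               # tunneled VXLAN/GRE VLAN (needs IP reachability to peers)
      id: 20
      link: bond0
      addresses: [10.0.20.5/24]
```

The interface carrying VLAN-based tenant traffic is intentionally left without an address; Neutron attaches it to its bridges directly.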

Production Configuration

Once you move past the proof-of-concept (POC) phase and are ready to deploy PMO for your production environment, we recommend that you follow the steps below.

IMPORTANT: We highly recommend reading through and understanding both this section and the section above, irrespective of whether you are creating a minimal PMO setup or working on a production PMO deployment.

Whitelisting access to PMO

By default, your PMO account URL endpoint is accessible from the public internet. For production environments, we recommend restricting access so that the endpoint is reachable only from a limited range of IP addresses belonging to your organization. If you’d like to enable this additional filtering, please contact Platform9 Support and provide the range of IP addresses from which access should be allowed.

Compute and Memory

Compute hosts should meet at least the following requirements to operate Platform9.

  • CPU: AMD / Intel Server Class Processors
  • Memory: 16GB RAM
  • Network: Two 1Gbps NICs in an LACP bond
  • Boot disk: 20GB
  • File system: Ext3, Ext4

Disks used for virtual machine storage are supported with the following file systems:

  • CephFS
  • Ext3
  • Ext4
  • GFS
  • GFS2
  • GlusterFS
  • NFS
  • XFS

The amount of space required for virtual machine storage varies widely between organizations, depending on the size of VMs created, over-provisioning ratios, and actual per-VM disk utilization. Platform9 therefore makes no recommendation on this value.

Resource Over-provisioning

PMO places virtual machines on available hosts so that each VM gets a fair share of available resources. Resource overcommitment is a technique by which virtual machines share the same physical CPU and memory, and PMO placement uses it to reduce cost. It is important to consider this feature when planning the compute infrastructure that will be part of your PMO cloud deployment.

Refer to PMO Resource Overcommitment Best Practices for an understanding of this feature.
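For reference, Nova exposes overcommitment as allocation ratios. The fragment below is a sketch of where these knobs live, not tuning advice; the values shown are the long-standing OpenStack defaults.

```
# nova.conf (on each compute host) — hypothetical overcommit settings
[DEFAULT]
cpu_allocation_ratio = 16.0   # 16 vCPUs scheduled per physical core
ram_allocation_ratio = 1.5    # schedule up to 1.5x physical RAM
disk_allocation_ratio = 1.0   # no disk overcommit
```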

Network Node Requirements

The minimum hardware requirements for the network node are the same as recommended for the compute nodes. Additionally, the network node should have 1 CPU core per Gbps of L3 routed tenant traffic.
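Applied to an example, the guideline above works out as follows. The helper is hypothetical; the 8-core figure is the compute-node minimum stated earlier in this document.

```python
# Hypothetical sizing helper for the "1 core per Gbps routed" guideline.
def network_node_cores(routed_gbps, compute_min_cores=8):
    """Cores needed: compute-node minimum plus 1 core per Gbps of L3 traffic."""
    return compute_min_cores + routed_gbps

# e.g. a network node expected to route 4 Gbps of tenant traffic:
print(network_node_cores(4))  # → 12 cores
```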

Sample Network Configuration

The following image is a sample configuration for your reference, with both network & compute nodes utilizing bonded NICs connecting to two redundant, virtually clustered upstream switches.

Redundant Neutron Deployment

Additional resources

The enhanced functionality in Neutron networking introduces greater complexity. Please refer to the following articles to learn more about Neutron networking, and how to configure it within your environment.

Block Storage (Cinder)

Block storage is made available in OpenStack via the Cinder service. Many backend storage arrays are supported via Cinder. To ensure your array is supported, check the Cinder Compatibility Matrix.
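As an illustration of how a backend is wired in, below is a minimal cinder.conf sketch using the reference LVM driver. The backend name and volume group are placeholders, and the driver section for your own array will differ.

```
# cinder.conf — hypothetical single-backend LVM example
[DEFAULT]
enabled_backends = lvm-1

[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
volume_backend_name = LVM_iSCSI
target_helper = lioadm
```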

Sample Cinder Configuration

The following image is a sample configuration for your reference, with both Cinder Block Storage roles and Cinder backends utilizing bonded NICs connecting to two redundant, virtually clustered upstream switches.

Cinder High Availability

By deploying multiple Cinder Block Storage roles for each Cinder backend, and multiple Cinder backends, you achieve full redundancy with no single point of failure.