Platform9 Managed OpenStack Prerequisites for Linux KVM
OpenStack facilitates running multiple applications on virtual machines that operate on top of underlying physical resources such as the hardware, the physical network, and the block storage.
Before you start using Platform9 Managed OpenStack, you must configure the physical resources so that they can work with Platform9 Managed OpenStack.
The following general prerequisites must be met before you can start using Platform9 Managed OpenStack.
Minimal Configuration (POC/Trial)
The following is a minimal configuration to enable a Platform9 Managed OpenStack POC/Trial deployment. We recommend reading the rest of this page even when creating a minimal setup.
- 1 to 3 physical servers running Linux RHEL/CentOS/Ubuntu
- Each server configured with sufficient local or shared storage to host virtual machines that will run on that server
- 1 server configured with sufficient additional storage to host 1 or more images used for virtual machine provisioning
- Each server having at least 1 network interface (configured with either flat or VLAN networking) and outbound HTTPS access
- 1-3 physical servers. We recommend starting with 3 servers, but you can start with 1.
- Each server configured with:
- CPU: Minimum 8 physical cores; 2 physical sockets with 8 cores each recommended
- RAM: Minimum 16GB; 32GB recommended
- Storage: Minimum 100GB; 1TB recommended if virtual machines will run locally on host storage
- One of the following Linux operating system distributions: RHEL, CentOS, or Ubuntu
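The hardware minimums above can be checked on each candidate server before onboarding it. The following is a minimal sketch using standard Linux utilities; the thresholds mirror the minimums listed in this document, and the root filesystem is assumed to be where virtual machine storage will live.

```shell
#!/usr/bin/env bash
# Sketch: warn if this host falls below the documented minimums.
MIN_CORES=8
MIN_RAM_GB=16
MIN_DISK_GB=100

cores=$(nproc)
ram_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')

[ "$cores" -ge "$MIN_CORES" ] || echo "WARN: only $cores cores (minimum $MIN_CORES)"
[ "$ram_gb" -ge "$MIN_RAM_GB" ] || echo "WARN: only ${ram_gb}GB RAM (minimum ${MIN_RAM_GB}GB)"
[ "$disk_gb" -ge "$MIN_DISK_GB" ] || echo "WARN: only ${disk_gb}GB free on / (minimum ${MIN_DISK_GB}GB)"
```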
Configure your servers with at least some local storage to get started with Platform9. Platform9 Managed OpenStack can work with the following storage options for storing virtual machines and images:
- Local storage: Each hypervisor is configured with local storage
- NFS Shared storage: Each hypervisor is configured with NFS shared storage. Refer to How to Configure NFS Shared Storage with Platform9 to ensure appropriate NFS configuration.
- Block storage via integration with your storage provider using OpenStack Cinder
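If you plan to use NFS shared storage, it is worth confirming that each hypervisor can reach and write to the export before configuring it. The following is a minimal sketch; the server name and export path are placeholders for your environment, and the NFS client packages (nfs-utils on RHEL/CentOS, nfs-common on Ubuntu) are assumed to be installed.

```shell
#!/usr/bin/env bash
# Sketch: sanity-check an NFS export from a hypervisor.
NFS_SERVER=${NFS_SERVER:-nfs01.example.com}   # placeholder
EXPORT=${EXPORT:-/exports/pf9-instances}      # placeholder
MOUNTPOINT=/mnt/pf9-nfs-test

# List the exports the server advertises.
showmount -e "$NFS_SERVER" || echo "cannot reach $NFS_SERVER (or showmount missing)"

# Test-mount the export, confirm it is writable, then clean up.
mkdir -p "$MOUNTPOINT"
if mount -t nfs "$NFS_SERVER:$EXPORT" "$MOUNTPOINT" 2>/dev/null; then
  touch "$MOUNTPOINT/.write-test" && echo "export is writable"
  umount "$MOUNTPOINT"
else
  echo "mount failed: check exports, firewall, and NFS client packages"
fi
```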
Depending on your Neutron networking configuration, you might either designate one or more servers as dedicated network nodes, or distribute networking capabilities across all your servers (more details on this in Configuring Neutron Settings).
Regardless of your configuration, all your network nodes and/or hypervisors must have the following networking configuration.
- At least one physical network interface (two interfaces in a bond is recommended).
- Outbound HTTPS (port 443) access (to communicate with Platform9 configuration plane).
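A quick way to verify the outbound HTTPS requirement from each node is to attempt a connection to your management plane. The following is a minimal sketch; the URL is a placeholder, so substitute your own Platform9 management-plane URL.

```shell
#!/usr/bin/env bash
# Sketch: confirm this host has outbound HTTPS (port 443) access.
PF9_URL=${PF9_URL:-https://example.platform9.net}   # placeholder URL

if curl --silent --output /dev/null --max-time 10 "$PF9_URL"; then
  echo "outbound HTTPS OK"
else
  echo "outbound HTTPS blocked or host unreachable"
fi
```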
Configure the following VLAN sub-interfaces on the physical interface/bond:
- VLAN-based virtual machine traffic: This interface will be used to route traffic for the VLAN-based private/tenant networks as well as provider networks created via Neutron. Therefore, it must be trunked for all VLAN IDs that you plan to supply to Neutron for tenant/private and provider networking.
- Management VLAN: Ensure this VLAN allows outbound HTTPS access for the Platform9 host agent to communicate with the controller.
- Tunneled GRE/VXLAN VLAN (optional): This interface will be used to route traffic for the VXLAN or GRE-based private/tenant networks created via Neutron. Therefore, it must have IP-level connectivity with other hosts through the interface IP.
- External VLAN (optional for hypervisors): This interface will be used to route all outbound traffic for all instances that are assigned a floating IP address.
- Storage VLAN (optional): This interface will be used for iSCSI, NFS, or other storage traffic for instances or block storage.
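The VLAN sub-interfaces above can be created with iproute2. The following is a minimal sketch only: the parent interface name, VLAN IDs, and addresses are examples, and changes made with `ip` do not persist across reboots, so use your distribution's network configuration tooling for production.

```shell
#!/usr/bin/env bash
# Sketch: create VLAN sub-interfaces on a bonded interface (example values).
PHYS=bond0   # example parent interface

add_vlan() {  # add_vlan <parent> <vlan-id> <cidr>
  ip link add link "$1" name "$1.$2" type vlan id "$2" || return
  ip addr add "$3" dev "$1.$2"
  ip link set "$1.$2" up
}

# Management VLAN (ID 10): must allow outbound HTTPS to Platform9.
add_vlan "$PHYS" 10 10.0.10.5/24

# VXLAN/GRE tunnel VLAN (ID 20): needs IP reachability to the other hosts.
add_vlan "$PHYS" 20 10.0.20.5/24

# The VM-traffic interface for VLAN tenant networks stays untagged on the
# host; the upstream switch port must trunk all tenant/provider VLAN IDs.
ip -br link show || true  # review the resulting interfaces
```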
Refer to one of the following articles, based on the Linux operating system distribution you choose, to configure the required prerequisites.