Resource Overcommitment

This article describes the CPU, memory and disk overcommitment feature in PMO.

PMO places virtual machines on available hosts so that each VM gets a fair share of the available resources. Resource overcommitment is a technique by which multiple virtual machines share the same physical CPU, memory and disk of the underlying hypervisor. The PMO placement engine uses this technique to make the best use of your compute, memory and storage resources. It is important to consider this feature when planning the compute infrastructure that will be part of your PMO cloud.

Benefits of overcommitment

On a KVM hypervisor, the Linux scheduler controls the resources allocated to virtual machines. Even though a virtual machine is created with a certain resource capacity, most workloads do not need all of it all the time. To derive higher efficiency, KVM transparently shares physical resource capacity between virtual machines. This type of resource sharing leads to better utilization; however, in times of contention, a VM's performance can be impacted.

Resource overcommitment in PMO

The following are the default overcommitment ratios in PMO:

  • CPU: 16 virtual CPUs (vCPUs) per physical CPU on the hypervisor. Overcommitment ratio of 16:1
  • Memory: 1.5 MB of virtual memory per 1 MB of physical memory on the hypervisor. Overcommitment ratio of 1.5:1
  • Disk: 9999 GB of virtual disk per 1 GB of physical disk available to the hypervisor. Overcommitment ratio of 9999:1

The PMO Infrastructure view provides a snapshot of the resource allocations in your private cloud. Each of these allocation ratios can be updated per hypervisor node from the Infrastructure view. A ratio greater than 1.0 results in over-subscription of the corresponding physical resource. Each ratio must be set to a valid positive integer or floating-point value.
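
The effective capacity the placement engine can schedule on a host is simply the physical capacity multiplied by the allocation ratio. The following minimal sketch illustrates that arithmetic with the default PMO ratios; the host figures (16 cores, 128 GB of RAM, 2000 GB of disk) are hypothetical and not taken from any particular deployment.

    # Illustrative arithmetic only: how an allocation ratio turns physical
    # capacity into schedulable (virtual) capacity. Host figures are hypothetical.

    def schedulable_capacity(physical, ratio):
        """Capacity the placement engine may hand out on a single host."""
        return physical * ratio

    # Hypothetical host: 16 physical cores, 128 GB RAM, 2000 GB local disk.
    vcpus   = schedulable_capacity(16, 16.0)       # 256 schedulable vCPUs
    ram_gb  = schedulable_capacity(128, 1.5)       # 192 GB of schedulable memory
    disk_gb = schedulable_capacity(2000, 9999.0)   # effectively unlimited virtual disk

    print(vcpus, ram_gb, disk_gb)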

CPU is a much more fluid resource than memory (which represents state), so much higher overcommitment is possible for CPU without significant overhead. Because the disk filter was not initially enabled in the placement scheduler, the default disk allocation ratio is set to a high value for backwards compatibility. A disk ratio greater than 1.0 results in over-subscription of the available physical disk, which can be useful for packing instances more efficiently when they are created from images that do not use the entire virtual disk, such as sparse or compressed images. The ratio can also be set to a value between 0.0 and 1.0 to preserve a percentage of the disk for uses other than instances.
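
As a minimal sketch of the 0.0 to 1.0 case, a disk allocation ratio of 0.9 on a hypothetical 1000 GB host disk keeps 100 GB out of reach of instances; the figures below are illustrative only.

    # Hypothetical example: a disk allocation ratio below 1.0 reserves part of
    # the physical disk for uses other than instances.
    physical_disk_gb = 1000         # hypothetical host disk
    disk_allocation_ratio = 0.9     # instances may use at most 90% of the disk

    usable_by_instances = physical_disk_gb * disk_allocation_ratio   # 900 GB
    reserved_for_host   = physical_disk_gb - usable_by_instances     # 100 GB
    print(usable_by_instances, reserved_for_host)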

Resource overcommitment best practices

The default overcommitment values used by PMO fit the average use case and should provide good consolidation in most deployments. However, if you are concerned about the performance of your workloads, a few best practices can help.

  1. Swap configuration on the hypervisor: Swap is important when running instances on KVM. Because of resource overcommitment, the memory demand from workloads and the underlying Linux system can exceed the physical memory available on the host. Configuring swap ensures correct operation of the system in this situation. For example, consider a server with 48 GB of RAM. With the default overcommitment policy, OpenStack can provision virtual machines up to 1.5 times that memory size: 72 GB in total. In addition, let's assume 4 GB of memory is needed for the Linux OS to run properly. In this case the swap space needed is (72 - 48) + 4 = 28 GB. This calculation is sketched in the example after this list.

  2. Multi-CPU virtual machines: The Linux scheduler is very good at scheduling processes on the available physical cores. With virtual machines, however, the relationship between the configured vCPUs of a single virtual machine and the available physical CPUs affects performance. For example, if a virtual machine with 4 vCPUs runs on a host with just 2 physical cores, performance is severely impacted. When deploying OpenStack, consider the base CPU model used across all hypervisor machines and restrict the flavors in the OpenStack environment accordingly. For example, if the base CPU model in the datacenter has 4 cores, consider disallowing OpenStack flavors with more than 4 virtual CPUs. Another good measure is the use of host aggregates: tagging flavors and host aggregates with matching tags ensures that OpenStack virtual machines run on the right hypervisors.
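
The swap sizing rule from the first practice can be written as a small sketch; the 48 GB host and 4 GB operating-system overhead are the hypothetical figures from the example above, and the function name is illustrative, not part of any PMO or OpenStack API.

    # Sketch of the swap sizing rule:
    # swap = (physical RAM * memory overcommit ratio - physical RAM) + OS overhead
    def recommended_swap_gb(physical_ram_gb, ram_ratio=1.5, os_overhead_gb=4):
        overcommitted_ram = physical_ram_gb * ram_ratio
        return (overcommitted_ram - physical_ram_gb) + os_overhead_gb

    # Hypothetical 48 GB host from the example: (72 - 48) + 4 = 28 GB of swap.
    print(recommended_swap_gb(48))   # 28.0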

The PMO Infrastructure view also provides information on the actual physical resource utilization across all of your hypervisors. Potential overuse can be identified by monitoring these resource utilization statistics.