This article describes the CPU and memory overcommitment feature in PMO.
PMO places virtual machines on available hosts so that each VM gets a fair share of available resources. Resource overcommitment is a technique by which multiple virtual machines share the same physical CPU and memory of the underlying hypervisor. The PMO placement engine uses this technique to make the best use of your CPU and memory resources. It is important to consider this feature when planning the compute infrastructure that will be part of your PMO cloud.
Benefits of overcommitment
On a KVM hypervisor, the Linux scheduler controls the resources allocated to virtual machines. Even though a virtual machine is created with a certain resource capacity, most workloads do not need all of it all the time. To derive higher efficiency, KVM transparently shares physical resource capacity between virtual machines. This type of resource sharing leads to better utilization; however, in times of contention a VM's performance can be impacted.
Resource overcommitment in PMO
The following are the default overcommitment ratios in PMO:
- CPU: 16 virtual CPUs (vCPUs) per physical CPU on the hypervisor, an overcommitment ratio of 16:1.
- Memory: 1.5 MB of virtual memory per 1 MB of physical memory on the hypervisor, an overcommitment ratio of 1.5:1.
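These defaults translate directly into the capacity the placement engine can hand out per host. The sketch below is illustrative (not part of PMO) and shows the arithmetic for a hypothetical host:

```python
# Default PMO overcommitment ratios (from the list above).
CPU_RATIO = 16.0   # 16:1 vCPUs per physical CPU
RAM_RATIO = 1.5    # 1.5:1 virtual memory per physical memory

def schedulable_capacity(physical_cores: int, physical_ram_gb: float):
    """Return the (total vCPUs, total virtual RAM in GB) that can be
    provisioned on one hypervisor under the default ratios."""
    return physical_cores * CPU_RATIO, physical_ram_gb * RAM_RATIO

# Example: a host with 8 physical cores and 48 GB of RAM.
vcpus, ram_gb = schedulable_capacity(8, 48)
print(vcpus, ram_gb)  # 128.0 72.0
```

The host and its core/RAM figures are assumptions chosen only to show the multiplication; your actual capacity is visible in the PMO infrastructure view.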
The PMO infrastructure view provides a snapshot of the resource allocations in your private cloud.
CPU is a much more fluid resource than memory (which represents state), so a much higher overcommitment is possible with CPU without significant overhead.
Resource overcommitment best practices
The default overcommitment values used by PMO fit the average use case and should provide good consolidation in most deployments. However, if you are concerned about the performance of your workloads, a few best practices can help.
Swap configuration on the hypervisor: Swap is important when running instances on KVM. Due to resource overcommitment, there can be situations where the memory demand from workloads and the underlying Linux system exceeds the physical memory available on the host. Configuring swap ensures correct operation of the system. For example, consider a server with 48 GB of RAM. With the default overcommitment policy, OpenStack can provision virtual machines up to 1.5 times the memory size: 72 GB total. In addition, let's assume 4 GB of memory is needed for the Linux OS to run properly. In this case the swap space needed is (72 - 48) + 4 = 28 GB.
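The worked example above can be written as a small helper. This is a sketch; the function name and the 4 GB OS-reserve default are illustrative, not a PMO API:

```python
def swap_needed_gb(physical_ram_gb: float,
                   overcommit_ratio: float = 1.5,
                   os_reserve_gb: float = 4.0) -> float:
    """Swap sized so the host can back all provisionable virtual memory
    plus the memory reserved for the Linux OS itself:

        swap = (physical_ram * ratio - physical_ram) + os_reserve
    """
    provisionable = physical_ram_gb * overcommit_ratio
    return (provisionable - physical_ram_gb) + os_reserve_gb

print(swap_needed_gb(48))  # 28.0 -- matches the 48 GB example above
```

Adjust the OS reserve to what your hypervisor actually needs; 4 GB is simply the figure used in the example.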
Multi-CPU virtual machines: The Linux scheduler is very good at scheduling processes on the available physical cores. With virtual machines, however, the relationship between the configured vCPUs of a single virtual machine and the available physical CPUs impacts performance. For example, if a virtual machine with 4 vCPUs runs on a host with just 2 physical cores, performance is severely impacted. When deploying OpenStack, consider the base CPU model used across all hypervisor machines and restrict the flavors in the OpenStack environment accordingly. For example, if the base CPU model in the datacenter has 4 cores, consider disallowing OpenStack flavors with more than 4 virtual CPUs. Another good measure is the use of host aggregates: tagging flavors and host aggregates with matching tags ensures that OpenStack virtual machines run on the right hypervisors.
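The flavor-restriction rule above can be sketched as a simple filter. The flavor names and core counts here are hypothetical; in a real deployment these values would come from your OpenStack flavor list and hardware inventory:

```python
# Hypothetical flavors (name -> vCPUs) and per-hypervisor core counts.
flavors = {"m1.small": 1, "m1.medium": 2, "m1.large": 4, "m1.xlarge": 8}
host_core_counts = [4, 4, 8]

# A flavor is safe everywhere only if its vCPU count does not exceed
# the smallest physical core count in the fleet.
min_cores = min(host_core_counts)
allowed = {name: vcpus for name, vcpus in flavors.items() if vcpus <= min_cores}
print(sorted(allowed))  # ['m1.large', 'm1.medium', 'm1.small']
```

Here m1.xlarge (8 vCPUs) would be disallowed because the base CPU model has only 4 cores; alternatively, host aggregates can confine such flavors to the 8-core hosts.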
The PMO Infrastructure view also provides information on actual physical resource utilization across all of your hypervisors. Possible overuse can be identified by monitoring the resource utilization statistics here.