This article describes how to configure passthrough of GPUs and accelerator cards, and support for virtual GPUs, in Platform9 Managed OpenStack (PMO).
GPUs and accelerator cards installed on your PMO hosts can be passed through to virtual machines running in your PMO cloud, either entirely or as virtual devices. Any card that can function as a virtual GPU, as a passthrough device, or both via the applicable drivers on upstream OpenStack ‘Rocky’ or a later version will also function in PMO on supported hypervisor types.
GPUs and accelerator cards generally expose multiple functions: a GPU card typically has ‘vga’ and ‘audio’ functions, whereas an accelerator card has ‘user’ and ‘management’ functions. On the hypervisor host these functions sit under the same PCI bus and device. During passthrough, however, unlike on the host, the card’s functions appear in the VM as separate devices rather than as separate functions of the same device. Because upstream OpenStack does not currently support passing them through as functions of one device, the card’s installation utilities and programs must support normal operation with its functions located on different PCI devices in the VM.
It is therefore recommended that the card vendor test and certify the card for VMs running on top of PMO before passthrough is used on KVM- or VMware-based hypervisors. The card should be tested on both Intel and AMD platforms, although the configuration process within PMO is the same for both.
Using the following general guidelines, card OEMs can certify their cards on an upstream OpenStack version for PMO-supported hypervisor types. To find the upstream OpenStack version corresponding to PMO, please refer here. A list of all upstream OpenStack versions is available here.
PCI Passthrough on PMO
The following are general guidelines for setting up PCI passthrough on both Intel and AMD CPU based hosts.
These guidelines apply to an Ubuntu 16.04 host with Intel CPUs. Make the appropriate changes to the flags and commands before running them on AMD CPU hosts.
Set up the host so that it does not use the GPU, and turn on the IOMMU. Edit /etc/default/grub and add the following to GRUB_CMDLINE_LINUX_DEFAULT: intel_iommu=on iommu=pt rd.driver.pre=vfio-pci. Here intel_iommu=on enables VT-d on the host, and iommu=pt uses the IOMMU only for devices that can be passed through.
Add any additional settings needed to disable certain features for the GPU. For example, video=vesafb:off,efifb:off disables the EFI/VESA framebuffer.
Update the grub config:
sudo update-grub
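Taken together, the boot-configuration steps above can be sketched as follows. The exact flag set is illustrative for an Intel host; an AMD host would use amd_iommu=on instead:

```shell
# In /etc/default/grub (the flag values shown are illustrative):
#   GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on iommu=pt rd.driver.pre=vfio-pci video=vesafb:off,efifb:off"
#
# Regenerate the grub configuration so the flags take effect at the next boot:
sudo update-grub
```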
Update /etc/modules to load the VFIO drivers at boot, since the device functions use the vfio driver on the host:
vfio-pci
vfio_iommu_type1
Set up the device(s) in the IOMMU groups to bind to vfio-pci rather than to their default drivers. Add the following to /etc/modprobe.d/vfio.conf: options vfio-pci ids=VENDOR_ID:PRODUCT_ID_VGA,VENDOR_ID:PRODUCT_ID_AUDIO
Example: options vfio-pci ids=10de:0fc6,10de:0e1b
Some additional options may be desired for the GPUs. Example: options vfio-pci disable_vga=1
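The vendor:product ID pairs needed for vfio.conf can be read from `lspci -nn` output on the host. A minimal sketch, using sample output (the device names and IDs below are illustrative; on a real host run `lspci -nn` and filter for your card's vendor):

```shell
# Sample `lspci -nn` output for a card with VGA and audio functions
# (illustrative text; substitute real output from the host):
lspci_sample='01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK107 [10de:0fc6] (rev a1)
01:00.1 Audio device [0403]: NVIDIA Corporation GK107 HDMI Audio Controller [10de:0e1b] (rev a1)'

# Extract the vendor:product pairs and build the vfio.conf options line:
ids=$(printf '%s\n' "$lspci_sample" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tr -d '[]' | paste -sd, -)
echo "options vfio-pci ids=$ids"
```

This prints the line to place in /etc/modprobe.d/vfio.conf, matching the example above.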
Apply the changes and reboot:
sudo update-initramfs -u -k all
sudo reboot
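The VFIO pieces of the steps above come down to two small files and a post-reboot check. The IDs shown are the same illustrative values used in the earlier example:

```shell
# /etc/modules -- load the VFIO drivers at boot:
#   vfio-pci
#   vfio_iommu_type1
#
# /etc/modprobe.d/vfio.conf -- bind the card's functions to vfio-pci:
#   options vfio-pci ids=10de:0fc6,10de:0e1b
#   options vfio-pci disable_vga=1

# Rebuild the initramfs and reboot for the bindings to take effect:
sudo update-initramfs -u -k all
sudo reboot

# After the reboot, confirm each function now uses the vfio-pci driver:
lspci -nnk -d 10de:0fc6
# The output should include a line such as:
#   Kernel driver in use: vfio-pci
```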
Now pass the device through as described in the upstream documentation here. Platform9 can assist with this step.
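For reference, the upstream (‘Rocky’-era) Nova settings behind this step look roughly like the following. The vendor/product IDs and the alias name ‘gpu’ are examples, and in PMO these values are applied with Platform9’s assistance rather than by editing nova.conf directly:

```ini
# nova.conf on the compute node: whitelist the device for passthrough
[pci]
passthrough_whitelist = { "vendor_id": "10de", "product_id": "0fc6" }
# Alias that flavors use to request the device (also needed where the
# scheduler/API services run):
alias = { "vendor_id": "10de", "product_id": "0fc6", "device_type": "type-PCI", "name": "gpu" }
```

A flavor can then request one such device with, for example, `openstack flavor set gpu.large --property "pci_passthrough:alias"="gpu:1"` (flavor name is illustrative).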
Install the card’s drivers and utilities in the VM after it is booted.
Test the functionality of the card. Note that only the passthrough step requires Platform9’s assistance; all remaining settings should be carried out with help from the card OEM’s user guides. Additional settings on the hypervisor host may be required for specific types of cards.
The above configuration steps apply to accelerator cards as well.
Virtual GPUs on PMO
Virtual GPU is an untested feature on PMO as of today. The feature depends on the hypervisor, its version, and whether the vendor’s vGPU driver software is available for the hypervisor type you are going to use. Please refer to the vendor’s GPU support matrix and user guides to check support for the hypervisor type that you are going to connect, or have already connected, to your PMO cloud.
Once the virtual GPU drivers are installed, libvirt sees virtual GPUs as mediated devices on the host. The properties OpenStack requires can be viewed by running the ‘ls’ command on /sys/class/mdev_bus/*/mdev_supported_types. From there, refer to the upstream OpenStack guide on allocating vGPUs to VMs.
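As a sketch of what that directory tree looks like, the following builds a mock layout under a temporary directory. The PCI address, the nvidia-35 type name, and the GRID M10-0B profile are all illustrative; on a real host inspect /sys/class/mdev_bus directly:

```shell
# Build a mock of the sysfs tree that the vGPU driver exposes
# (paths and names are illustrative examples, not real driver output):
mdev_root=$(mktemp -d)
mkdir -p "$mdev_root/0000:01:00.0/mdev_supported_types/nvidia-35"
echo "GRID M10-0B" > "$mdev_root/0000:01:00.0/mdev_supported_types/nvidia-35/name"
echo 16 > "$mdev_root/0000:01:00.0/mdev_supported_types/nvidia-35/available_instances"

# Equivalent of: ls /sys/class/mdev_bus/*/mdev_supported_types
ls "$mdev_root"/*/mdev_supported_types
# Each type directory carries 'name' and 'available_instances' files:
cat "$mdev_root"/*/mdev_supported_types/*/name
```

Per the upstream guide, VMs then request vGPU resources through a flavor, for example `openstack flavor set vgpu.small --property "resources:VGPU=1"` (flavor name is illustrative).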
Please note that customers will have to rely on PCI passthrough if the card OEM does not provide virtual GPU drivers for Platform9-supported hypervisor types.