Load Balancer as a Service (LBaaS)

Private Cloud Director implements Load Balancer as a Service (LBaaS) using Octavia with OVN (Open Virtual Network) as the provider driver. This implementation offers a lightweight and efficient load-balancing solution without the overhead of traditional virtual machine-based approaches.

Private Cloud Director currently uses the open source OVN provider driver for LBaaS instead of the default Amphora driver. The OVN driver implements load balancing directly within the OVN distributed router, using OpenFlow rules programmed into Open vSwitch (OVS), which eliminates the need for dedicated load balancer virtual machines.

Prerequisites

Before implementing LBaaS, ensure that the LBaaS Prerequisites are met.

Why OVN Provider Driver

Choosing OVN as the provider driver for LBaaS in Private Cloud Director offers several advantages:

  1. Resource Efficiency

    1. No dedicated virtual machines required for load balancing

  2. Faster Deployment

    1. Near-instant load balancer creation

    2. No VM provisioning or boot time

  3. Simplified Management

    1. No separate management network required

    2. Integrated with existing OVN infrastructure

Supported LBaaS Configuration

Private Cloud Director currently supports the following configuration options for LBaaS:

  1. Protocol Support

    1. Supports TCP, UDP, and SCTP protocols

    2. No Layer-7 (HTTP) load balancing support

    3. 1:1 protocol mapping between listeners and pools required

  2. Load Balancing Algorithm

    1. Only SOURCE_IP_PORT algorithm supported

    2. ROUND_ROBIN and LEAST_CONNECTIONS algorithms not supported currently (This is a limitation of the OVN provider driver)

  3. Health Monitoring

    1. Supports TCP and UDP-CONNECT protocols

    2. SCTP health monitoring is not currently supported

  4. IP Version Support

    1. Mixed IPv4 and IPv6 members not supported

    2. IPv6 support is not currently fully tested

Create a New Instance of Load Balancer

Private Cloud Director currently does not support creating or managing load balancer instances via the UI; this capability is coming in a future release. In the meantime, you must create load balancer instances using the PCD API or CLI. The steps below configure LBaaS using the PCD OpenStack CLI.


Create a Load Balancer

First, create a load balancer resource with a virtual IP (VIP) in the specified subnet.

  • The VIP is the single entry point for your load balancer

  • It must be created on a subnet where your load balancer will be accessible

  • This subnet should be the same tenant network where your backend servers (Virtual Machines) are deployed.
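A minimal sketch of this step with the OpenStack CLI; the load balancer and subnet names (`web-lb`, `web-subnet`) are placeholders for your own resources:

```shell
# Create the load balancer with its VIP on the tenant subnet
# where the backend VMs are deployed.
openstack loadbalancer create \
  --name web-lb \
  --vip-subnet-id web-subnet

# Wait until provisioning_status shows ACTIVE before continuing.
openstack loadbalancer show web-lb -c provisioning_status
```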

Create a Listener

Once the load balancer resource is set up, create the listener. A listener is the component that defines how your load balancer processes incoming requests:

  • It specifies the protocol (TCP, UDP, or SCTP) and port number

  • Acts as a front-end service that receives incoming traffic

  • Routes the traffic to the appropriate pool of backend servers (Virtual Machine)

  • Example: A TCP listener on port 80 for web traffic
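For example, a TCP listener on port 80 could be created like this, assuming the load balancer from the previous step is named `web-lb`:

```shell
# Create a TCP listener on port 80, attached to the load balancer.
openstack loadbalancer listener create \
  --name web-listener \
  --protocol TCP \
  --protocol-port 80 \
  web-lb
```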

Create a Pool

Now you can create a pool of virtual machines that will handle the client requests from the load balancer.

  • These VMs must be deployed and running before adding them to the pool

  • Every virtual machine in a given pool should provide the same service (e.g., web servers that are part of your application)

  • Pool members are identified by their IP address and port

You can also specify the load-balancing algorithm (e.g., SOURCE_IP_PORT) here and associate the pool with the listener.

Once the pool is created, add the member virtual machines that will receive client requests by providing each virtual machine's IP address, listening port, and subnet.
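The pool and member steps above might look like the following, reusing the listener and subnet names from the earlier examples; the two backend VM addresses (10.0.0.11 and 10.0.0.12) are illustrative:

```shell
# Create a pool behind the listener. Only SOURCE_IP_PORT is
# supported by the OVN provider driver.
openstack loadbalancer pool create \
  --name web-pool \
  --lb-algorithm SOURCE_IP_PORT \
  --listener web-listener \
  --protocol TCP

# Add each backend VM by IP address, port, and subnet.
openstack loadbalancer member create \
  --subnet-id web-subnet \
  --address 10.0.0.11 \
  --protocol-port 80 \
  web-pool

openstack loadbalancer member create \
  --subnet-id web-subnet \
  --address 10.0.0.12 \
  --protocol-port 80 \
  web-pool
```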

Configure Health Monitoring

Set up health monitoring so that the load balancer periodically checks the health of the virtual machines in the pool. Unhealthy VMs are skipped to avoid service disruption.
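A TCP health monitor for the example pool might be created as follows; the interval, timeout, and retry values are illustrative:

```shell
# Check each member every 5 seconds; mark it down after
# 3 consecutive failed TCP connection attempts.
openstack loadbalancer healthmonitor create \
  --name web-hm \
  --delay 5 \
  --timeout 3 \
  --max-retries 3 \
  --type TCP \
  web-pool
```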

(Optional) Configure Public (Floating) IP

Use the following steps to expose the load balancer for external access:

Create a public (floating) IP from the external network for the load balancer.

Retrieve the port ID of the virtual IP associated with the load balancer. This information is needed to link the public (floating) IP.

Then, associate the floating IP with the load balancer port, enabling public access.
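Under the example names used above, and assuming an external network called `external-net`, these three steps might look like:

```shell
# 1. Allocate a floating IP from the external network.
openstack floating ip create external-net

# 2. Retrieve the port ID of the load balancer's VIP.
openstack loadbalancer show web-lb -c vip_port_id -f value

# 3. Associate the floating IP with the VIP port
#    (substitute the values returned by the commands above).
openstack floating ip set --port <vip_port_id> <floating_ip>
```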

Verification and Testing

Check load balancer status:

Check the health status of the virtual machines in the pool to ensure they can handle traffic.

Confirm that the load balancer is operational by sending a test request to the floating IP.
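With the example names above, verification might look like:

```shell
# provisioning_status should be ACTIVE and operating_status ONLINE.
openstack loadbalancer show web-lb

# Each member's operating_status should be ONLINE once the
# health monitor has confirmed it is healthy.
openstack loadbalancer member list web-pool

# Send a test request to the floating IP (TCP listener on port 80).
curl http://<floating_ip>/
```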
