Neutron prerequisites for Ubuntu

This OpenStack tutorial describes prerequisites to prepare your servers to leverage OpenStack Neutron with Ubuntu.

For a general description of Neutron networking concepts, refer to the tutorial Networking with OpenStack Neutron Basic Concepts.

Refer to Platform9 Managed OpenStack prerequisites for Linux/KVM for hardware requirements for Platform9 Managed OpenStack.

Prepare Your Linux/KVM Physical Servers for Neutron

The following image represents three hypervisors connected in a Managed OpenStack Neutron network.

Figure 1. Neutron Network Configuration Example

To run a Managed OpenStack Neutron network, each of your physical hypervisors and Neutron network nodes must be prepared with the following steps.

Step 1: Install, enable, and start the NTP Daemon.

The NTP daemon is required for all components to have their time synchronized.

Run the following commands to install and start the NTP daemon (the package enables the service automatically on installation).

apt-get install -y ntp
service ntp start
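To confirm that time synchronization is working, you can query the daemon's peer list; an asterisk marks the currently selected time source:

ntpq -p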

Step 2: Install required packages and load the modules needed for Neutron

Run the following commands to install the required packages and load the kernel modules needed for Neutron.

apt-get update
apt-get install -y dnsmasq arping conntrack ifenslave vlan software-properties-common

# Load the modules now. On kernels older than 3.18, br_netfilter is part
# of the bridge module, so that modprobe may fail harmlessly.
modprobe bridge
modprobe br_netfilter
modprobe 8021q
modprobe bonding

# Persist the modules across reboots
echo bridge > /etc/modules-load.d/pf9.conf
echo 8021q >> /etc/modules-load.d/pf9.conf
echo bonding >> /etc/modules-load.d/pf9.conf
echo br_netfilter >> /etc/modules-load.d/pf9.conf
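To verify that all four modules are loaded, you can run:

lsmod | grep -E 'bridge|br_netfilter|8021q|bonding'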

Step 3: Add sysctl options

Run the following commands to set the required kernel parameters and apply them. Note that the bridge-nf-call-iptables setting requires the br_netfilter module loaded in Step 2.

echo net.ipv4.conf.all.rp_filter=0 >> /etc/sysctl.conf
echo net.ipv4.conf.default.rp_filter=0 >> /etc/sysctl.conf
echo net.bridge.bridge-nf-call-iptables=1 >> /etc/sysctl.conf
echo net.ipv4.ip_forward=1 >> /etc/sysctl.conf
echo net.ipv4.tcp_mtu_probing=2 >> /etc/sysctl.conf
sysctl -p
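You can read the values back to confirm they took effect, for example:

sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables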

Step 4: Add the Platform9 APT Repo

Run the following commands to add the Platform9 APT repository and refresh the package index (older releases of add-apt-repository do not refresh it automatically).

add-apt-repository 'deb http://platform9-neutron.s3-website-us-west-1.amazonaws.com ubuntu/'
apt-get update
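To confirm the repository was added, you can check which source the Open vSwitch package would be installed from:

apt-cache policy openvswitch-switch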

Step 5: Install Open vSwitch

Run the following command to install Open vSwitch.

# Ubuntu 14.04
apt-get -y --force-yes install openvswitch-switch

# Ubuntu 16.04 and higher
apt-get -y install openvswitch-switch

Step 6: Enable and start Open vSwitch

Run the following commands to enable and start Open vSwitch.

# Ubuntu 14.04
update-rc.d openvswitch-switch defaults
service openvswitch-switch start

# Ubuntu 16.04 and higher
systemctl enable openvswitch-switch.service
systemctl start openvswitch-switch.service
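To verify that Open vSwitch is running, query its database; a healthy daemon prints the current configuration (initially just a UUID and an ovs_version line):

ovs-vsctl show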

Step 7: Configure physical interfaces

Figure 1 above shows a sample Neutron network configuration. Steps 7 through 11 are based on that configuration: they describe how to combine the physical interfaces into a Linux bond and how to add VLAN interfaces for management, VXLAN/GRE network traffic, and storage.

You may not need every one of these steps; which steps apply depends on your Neutron network configuration. For instance, if you do not plan to use VXLAN/GRE, you can skip the step that sets up the VXLAN/GRE tunneling interface.

The article assumes the interface names eth0 and eth1; substitute the interface names that match your configuration when you run the commands in these steps. Similarly, the article assumes an MTU of 9000 throughout the data center (VXLAN requires an MTU of at least 1600; see Step 10). Ensure all physical switches are configured to handle the MTU configured on your servers.
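One way to verify jumbo frame support end to end is a maximum-size ping with fragmentation disallowed; 8972 bytes of ICMP payload plus 28 bytes of IP and ICMP headers exactly fills a 9000-byte MTU. The target address below is a placeholder for another host on the same network:

ping -M do -s 8972 192.0.2.11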

Add the following to /etc/network/interfaces to enslave the physical interfaces to the bond (the bond itself is defined in Step 8):

auto eth0
iface eth0 inet manual
  bond-master bond0

auto eth1
iface eth1 inet manual
  bond-master bond0

Step 8: Set up the Bond interface

Add the following to /etc/network/interfaces to create the bond:

auto bond0
iface bond0 inet manual
  bond-mode 802.3ad
  bond-lacp-rate 1
  bond-slaves eth0 eth1
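Once networking is restarted (Step 12), you can inspect the bond state, including the LACP negotiation status of each slave:

cat /proc/net/bonding/bond0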

Step 9: Set up the Management interface

Add the following to /etc/network/interfaces to create the management VLAN interface (VLAN 101 in this example):

auto bond0.101
iface bond0.101 inet static
  address 192.0.2.10
  netmask 255.255.255.0
  gateway 192.0.2.1
  dns-nameservers 192.0.2.100 192.0.2.200
  dns-search pf9.example

Step 10: Set up the VXLAN/GRE tunneling interface (Optional)

GRE and VXLAN add 24 bytes and 50 bytes of encapsulation overhead, respectively. Platform9 recommends a minimum MTU of 1600 to accommodate this overhead, with a 9000-byte MTU preferred.

Add the following to /etc/network/interfaces to create the VXLAN/GRE tunneling interface (VLAN 102 in this example):

auto bond0.102
iface bond0.102 inet static
  address 198.51.100.10
  netmask 255.255.255.0
  post-up ifconfig bond0 mtu 9000
  post-up ifconfig bond0.102 mtu 9000

Step 11: Set up the Storage interface (Optional)

Add the following to /etc/network/interfaces to create the storage VLAN interface (VLAN 103 in this example):

auto bond0.103
iface bond0.103 inet static
  address 203.0.113.10
  netmask 255.255.255.0
  post-up ifconfig bond0 mtu 9000
  post-up ifconfig bond0.103 mtu 9000

Step 12: Restart Networking

Run the following commands to bring all interfaces down and back up. Note that this briefly interrupts network connectivity.

ifdown -a
ifup -a
ifup -a --allow=ovs
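After the interfaces come back up, you can verify addresses, MTUs, and routes, for example:

ip addr show bond0.101
ip link show bond0
ip route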

Step 13: Create OVS Bridges

The number of OVS bridges you need will depend on how many physical networks your hosts connect to, and what types of networks you will be creating.

Before creating the bridges, it helps to understand how bridges map to physical networks.

When configuring Platform9 OpenStack's Networking Config, each physical network is given a Label as a name, and that label is mapped to a particular OVS bridge on the host during host authorization.
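For reference, this label-to-bridge association corresponds to the bridge_mappings option of the Neutron Open vSwitch agent. You normally set it through the Platform9 UI during host authorization rather than by editing files; the snippet below is only illustrative, and the labels (external, vlan) are placeholders:

# Illustrative only; the labels are placeholders and Platform9 manages this mapping
[ovs]
bridge_mappings = external:br-ext,vlan:br-vlan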

Let us look at two different examples of common host networking setups.

Example 1: Non-DVR setup with one external flat network, and trunk port for VLAN traffic

The following figure represents a non-DVR network setup with an external flat network, and a trunk port for VLAN traffic.

Figure 2. Non-DVR Network Setup

In Figure 2 above, a trunk port consisting of eth0 and eth1 in a bond carries the VLAN-based networks (tenant, provider), and a dedicated port (eth2) connects to a separate external network. This is a legacy, non-DVR setup in which external connectivity and L3 capability exist only on the network nodes. Nodes that are hypervisors only carry tenant network traffic and need just one OVS bridge.

Run the following commands to add the OVS bridge on the hypervisors. The commands assume eth0 and eth1 have already been configured in a Linux bond called bond0; refer to Steps 7 through 11 to set up the physical interfaces.

ovs-vsctl add-br br-vlan
ovs-vsctl add-port br-vlan bond0

On our network node, we have a separate NIC that connects to a different physical network. For this, we need a separate OVS bridge.

Run the following commands to add an OVS bridge.

ovs-vsctl add-br br-ext
ovs-vsctl add-port br-ext eth2

Example 2: DVR setup with a pair of NICs in a bond

The following figure represents a DVR network setup with a pair of NICs in a bond.

Figure 3. DVR Network Setup

In the DVR setup shown in Figure 3 above, every host has external L3 connectivity. Each host has only a pair of NICs in a bond, so this single OVS bridge can support one flat (untagged) network and as many VLAN-tagged networks as your networking infrastructure allows. There are multiple VLAN-based external networks in addition to the tenant networks.

Run the following commands to add an OVS bridge on all hosts.

ovs-vsctl add-br br-vlan
ovs-vsctl add-port br-vlan bond0
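To confirm that the bridge exists and bond0 is attached as its uplink, you can run:

ovs-vsctl list-ports br-vlan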
