Neutron prerequisites for CentOS

This tutorial describes the prerequisites for preparing your servers to use OpenStack Neutron with CentOS Linux/KVM.

For a general description of Neutron networking concepts, refer to the tutorial Networking with OpenStack Neutron Basic Concepts.

Note: Refer to this tutorial for the hardware requirements for Platform9 Managed OpenStack.

Prepare Your Linux/KVM Physical Servers for Neutron

The following image represents three hypervisors connected in a Managed OpenStack Neutron network.


Figure 1. Neutron Network Configuration Example

To run a Managed OpenStack Neutron network, each of your physical hypervisors and Neutron network nodes must be prepared with the following steps.

Note: There are no separate network nodes in a Distributed Virtual Routing (DVR) network.

Step 1: Install, Enable, & Start the NTP Daemon

The NTP daemon is required for all components to have their time synchronized.

Run the following commands to install, enable and start the NTP daemon.

yum install -y ntp
systemctl enable ntpd
systemctl start ntpd
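
Optionally, confirm that the daemon is running and has time sources to synchronize against. This is just a sanity check; the peer list will vary with your NTP configuration.

systemctl status ntpd
ntpq -p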

Step 2: Set SELinux to permissive

This is required for Open vSwitch (OVS) to be able to manage networking.

Run the following commands to set SELinux to permissive.

sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/selinux/config
setenforce 0
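
Optionally, verify that the change took effect; the command below should report Permissive.

getenforce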

Step 3: Disable Firewalld and NetworkManager

This is required for KVM and OVS to be able to create iptables rules directly without firewalld getting in the way.

Run the following commands to disable firewalld and NetworkManager.

systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl stop NetworkManager
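
Optionally, verify that both services are stopped and will not start at boot; the commands below should report inactive and disabled for each unit.

systemctl is-active firewalld NetworkManager
systemctl is-enabled firewalld NetworkManager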

Step 4: Enable Network

Run the following command to enable the network service, which manages the interfaces now that NetworkManager is disabled.

systemctl enable network

Step 5: Load the modules needed for Neutron

Run the following commands to load the modules needed for Neutron.

modprobe bridge
modprobe 8021q
modprobe bonding
modprobe tun
echo bridge > /etc/modules-load.d/pf9.conf
echo 8021q >> /etc/modules-load.d/pf9.conf
echo bonding >> /etc/modules-load.d/pf9.conf
echo tun >> /etc/modules-load.d/pf9.conf
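
Optionally, confirm that the modules are loaded; each of the four modules should appear in the output.

lsmod | egrep 'bridge|8021q|bonding|tun'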

Step 6: Add sysctl options

Run the following commands to add sysctl options.

echo net.ipv4.conf.all.rp_filter=0 >> /etc/sysctl.conf
echo net.ipv4.conf.default.rp_filter=0 >> /etc/sysctl.conf
echo net.bridge.bridge-nf-call-iptables=1 >> /etc/sysctl.conf
echo net.ipv4.ip_forward=1 >> /etc/sysctl.conf
echo net.ipv4.tcp_mtu_probing=1 >> /etc/sysctl.conf
sysctl -p
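
Optionally, verify that the values took effect. The bridge-related key is only available once the bridge module from Step 5 is loaded.

sysctl net.ipv4.ip_forward net.ipv4.conf.all.rp_filter
sysctl net.bridge.bridge-nf-call-iptables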

Step 7: Add the Platform9 YUM Repo

Run the following command to install the Platform9 YUM repository.

yum -y install https://s3-us-west-1.amazonaws.com/platform9-neutron/noarch/platform9-neutron-repo-1-0.noarch.rpm
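
Optionally, confirm that the repository definition was installed. The repository ID shown by the command below should match the one used in Step 8.

yum repolist all | grep -i platform9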

Step 8: Install Open vSwitch

Run the following command to install Open vSwitch.

yum -y install --disablerepo="*" --enablerepo="platform9-neutron-el7-repo" openvswitch

Step 9: Enable and start Open vSwitch

Run the following commands to enable and start Open vSwitch.

systemctl enable openvswitch
systemctl start openvswitch
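
Optionally, verify that Open vSwitch is running and responding. On a fresh install, the command below prints an empty configuration along with the ovs_version.

ovs-vsctl show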

Step 10: Configure physical interfaces

Note: Figure 1 above shows a sample Neutron network configuration. Steps 10 through 14 are based on that configuration and describe how to combine the physical interfaces into a Linux bond and add VLAN interfaces for management, VXLAN/GRE tunneling, and storage traffic. You may not require all of these steps; follow only those that apply to your Neutron network configuration. For instance, if you do not plan to use VXLAN/GRE, you can skip the step that sets up the VXLAN/GRE tunneling interface.

We assume the interface names are eth0 and eth1.
Substitute eth0 and eth1 with the appropriate interface names for your setup when you run the commands in this step.
Similarly, we assume an MTU of 9000 (VXLAN requires an MTU of at least 1600).
Make sure all physical switches are configured to handle the MTU you choose, or you may run into connectivity problems.

Run the following commands to configure physical interfaces.

echo DEVICE=eth0 > /etc/sysconfig/network-scripts/ifcfg-eth0
echo ONBOOT=yes >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo BOOTPROTO=none >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo MTU=9000 >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo MASTER=bond0 >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo SLAVE=yes >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo DEVICE=eth1 > /etc/sysconfig/network-scripts/ifcfg-eth1
echo ONBOOT=yes >> /etc/sysconfig/network-scripts/ifcfg-eth1
echo BOOTPROTO=none >> /etc/sysconfig/network-scripts/ifcfg-eth1
echo MTU=9000 >> /etc/sysconfig/network-scripts/ifcfg-eth1
echo MASTER=bond0 >> /etc/sysconfig/network-scripts/ifcfg-eth1
echo SLAVE=yes >> /etc/sysconfig/network-scripts/ifcfg-eth1
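
For reference, the resulting /etc/sysconfig/network-scripts/ifcfg-eth0 should look like the following (ifcfg-eth1 is identical except for the DEVICE name).

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MTU=9000
MASTER=bond0
SLAVE=yes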

Step 11: Set up the Bond interface

Note: We are assuming bonding mode 6 (balance-alb). Refer to Bonding Types to learn more.

Run the following commands to set up the bond interface.

echo DEVICE=bond0 > /etc/sysconfig/network-scripts/ifcfg-bond0
echo ONBOOT=yes >> /etc/sysconfig/network-scripts/ifcfg-bond0
echo BONDING_MASTER=yes >> /etc/sysconfig/network-scripts/ifcfg-bond0
echo 'BONDING_OPTS="mode=6"' >> /etc/sysconfig/network-scripts/ifcfg-bond0
echo MTU=9000 >> /etc/sysconfig/network-scripts/ifcfg-bond0

Step 12: Set up the Management interface

Note: We are assuming VLAN 101 for the management network. Replace it with the correct VLAN ID per your setup.
We are assuming subnet 10.0.101.0/24 for the management network. Replace it with the correct subnet per your setup.

Run the following commands to set up the management interface.

echo DEVICE=bond0.101 > /etc/sysconfig/network-scripts/ifcfg-bond0.101
echo ONBOOT=yes >> /etc/sysconfig/network-scripts/ifcfg-bond0.101
echo BOOTPROTO=none >> /etc/sysconfig/network-scripts/ifcfg-bond0.101
echo TYPE=Vlan >> /etc/sysconfig/network-scripts/ifcfg-bond0.101
echo VLAN=yes >> /etc/sysconfig/network-scripts/ifcfg-bond0.101
echo IPADDR=10.0.101.11 >> /etc/sysconfig/network-scripts/ifcfg-bond0.101
echo NETMASK=255.255.255.0 >> /etc/sysconfig/network-scripts/ifcfg-bond0.101
echo GATEWAY=10.0.101.1 >> /etc/sysconfig/network-scripts/ifcfg-bond0.101
echo DNS1=10.0.0.5 >> /etc/sysconfig/network-scripts/ifcfg-bond0.101
echo DNS2=10.0.0.10 >> /etc/sysconfig/network-scripts/ifcfg-bond0.101

Step 13: Set up the VXLAN/GRE tunneling interface (Optional)

Note: We are assuming VLAN 102 for VXLAN/GRE tunneling. Please use the correct VLAN per your setup.
We are assuming subnet 10.0.102.0/24 for VXLAN/GRE tunneling. Please use the correct subnet per your setup.

Run the following commands to set up the VXLAN/GRE tunneling interface.

echo DEVICE=bond0.102 > /etc/sysconfig/network-scripts/ifcfg-bond0.102
echo ONBOOT=yes >> /etc/sysconfig/network-scripts/ifcfg-bond0.102
echo BOOTPROTO=none >> /etc/sysconfig/network-scripts/ifcfg-bond0.102
echo TYPE=Vlan >> /etc/sysconfig/network-scripts/ifcfg-bond0.102
echo VLAN=yes >> /etc/sysconfig/network-scripts/ifcfg-bond0.102
echo IPADDR=10.0.102.11 >> /etc/sysconfig/network-scripts/ifcfg-bond0.102
echo NETMASK=255.255.255.0 >> /etc/sysconfig/network-scripts/ifcfg-bond0.102
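
After networking is restarted in Step 15, you can optionally check that jumbo frames pass between hosts on the tunneling VLAN. The address 10.0.102.12 below is a placeholder for another host's tunneling IP; 8972 is the largest ICMP payload that fits in a 9000-byte MTU (9000 minus 20 bytes of IP header and 8 bytes of ICMP header).

ping -M do -s 8972 -c 3 10.0.102.12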

Step 14: Set up the Storage interface (Optional)

Note: We are assuming VLAN 103 for the storage network. Replace it with the VLAN per your setup.
We are assuming subnet 10.0.103.0/24 for the storage network. Replace it with the subnet per your setup.

Run the following commands to set up the storage interface.

echo DEVICE=bond0.103 > /etc/sysconfig/network-scripts/ifcfg-bond0.103
echo ONBOOT=yes >> /etc/sysconfig/network-scripts/ifcfg-bond0.103
echo BOOTPROTO=none >> /etc/sysconfig/network-scripts/ifcfg-bond0.103
echo TYPE=Vlan >> /etc/sysconfig/network-scripts/ifcfg-bond0.103
echo VLAN=yes >> /etc/sysconfig/network-scripts/ifcfg-bond0.103
echo IPADDR=10.0.103.11 >> /etc/sysconfig/network-scripts/ifcfg-bond0.103
echo NETMASK=255.255.255.0 >> /etc/sysconfig/network-scripts/ifcfg-bond0.103

Step 15: Restart Networking


Warning: Make sure you have console access to your host. You will be disconnected if the configuration is incorrect.

Run the following command to restart the network service.

systemctl restart network.service
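
Optionally, verify that the bond and the VLAN interfaces came up with the expected settings. The interface names below match the examples from Steps 10 through 12.

cat /proc/net/bonding/bond0
ip addr show bond0.101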

Step 16: Create OVS Bridges

The number of OVS bridges you need depends on how many physical networks your hosts connect to and what types of networks you will be creating.

Let us look at some basic networking terminology before creating the bridges.

Each physical network corresponds to a trunk or access port (an individual NIC, or a pair of NICs bonded together) on the host. An Open vSwitch bridge must be created for each physical network.

When configuring Platform9 OpenStack's Networking Config, each physical network is given a label as a name, and that label is mapped to a particular OVS bridge on the host during host authorization.

Let us look at two different examples of common host networking setups.

Example 1: Non-DVR setup with one external flat network, and trunk port for VLAN traffic

The following figure represents a non-DVR network setup with an external flat network, and a trunk port for VLAN traffic.


Figure 2. Non-DVR Network Setup

In Figure 2 above, the network has a trunk port consisting of eth0 and eth1 in a bond that carries the VLAN-based networks (tenant, provider), as well as a dedicated port (eth2) that connects to a separate external network. This is a legacy, non-DVR setup where external connectivity and L3 capability exist only on the network nodes. Hypervisor-only nodes carry just the tenant network traffic and need only one OVS bridge.

Run the following commands to add OVS bridges on the hypervisors. The steps below assume eth0/eth1 have already been configured in a Linux bond called "bond0". Please refer to steps 10-15 to set up your physical interfaces.

ovs-vsctl add-br br-vlan
ovs-vsctl add-port br-vlan bond0
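
Optionally, confirm that the bridge was created and that bond0 was attached to it.

ovs-vsctl list-ports br-vlan
ovs-vsctl show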

On our network node, we have a separate NIC that connects to a different physical network. For this, we need a separate OVS bridge.

Run the following commands to add an OVS bridge.

ovs-vsctl add-br br-ext
ovs-vsctl add-port br-ext eth2

Example 2: DVR setup with a pair of NICs in a bond

The following figure represents a DVR network setup with a pair of NICs in a bond.

Figure 3. DVR Network Setup

In the DVR setup shown in Figure 3 above, every host has external L3 connectivity. Here, we have only a pair of NICs in a bond, so this single OVS bridge can support one flat (untagged) network and as many VLAN-tagged networks as your networking infrastructure allows. In this example there are multiple VLAN-based external networks in addition to the tenant networks.

Run the following commands to add an OVS bridge on all hosts.

Note: The steps below assume eth0/eth1 have already been configured in a Linux bond called "bond0". Please refer to steps 10-15 to set up your physical interfaces.

ovs-vsctl add-br br-vlan
ovs-vsctl add-port br-vlan bond0

