Preparing Ubuntu for Neutron
This OpenStack tutorial describes the prerequisite steps for preparing your Ubuntu servers to run OpenStack Neutron.
For a general description of Neutron networking concepts, refer to the tutorial Networking with OpenStack Neutron Basic Concepts. Refer to Platform9 Managed OpenStack prerequisites for Linux/KVM for hardware requirements for Platform9 Managed OpenStack.
Prepare Your Linux/KVM Physical Servers for Neutron
The following image (Figure 1) represents three hypervisors connected in a Managed OpenStack Neutron network.
In order to run a Managed OpenStack Neutron network, each of your physical hypervisors and Neutron network nodes must be prepared with the following steps.
Step 1: Install, enable, and start the NTP Daemon
The NTP daemon is required for all components to have their time synchronized.
Run the following commands to install, enable and start the NTP daemon.
```bash
apt-get install -y ntp
service ntp start
```
Step 2: Install required packages and load the modules needed for Neutron
Run the following commands to install the required packages and load the kernel modules needed for Neutron.
```bash
apt-get update
apt-get install -y dnsmasq arping conntrack ifenslave vlan software-properties-common
modprobe bridge
modprobe br_netfilter
modprobe 8021q
modprobe bonding
echo bridge > /etc/modules-load.d/pf9.conf
echo 8021q >> /etc/modules-load.d/pf9.conf
echo bonding >> /etc/modules-load.d/pf9.conf
echo br_netfilter >> /etc/modules-load.d/pf9.conf
```
Step 3: Add sysctl options
Run the following commands to add sysctl options.
```bash
echo net.ipv4.conf.all.rp_filter=0 >> /etc/sysctl.conf
echo net.ipv4.conf.default.rp_filter=0 >> /etc/sysctl.conf
echo net.bridge.bridge-nf-call-iptables=1 >> /etc/sysctl.conf
echo net.ipv4.ip_forward=1 >> /etc/sysctl.conf
echo net.ipv4.tcp_mtu_probing=2 >> /etc/sysctl.conf
sysctl -p
```
Step 4: Add the Platform9 APT Repository
Run the following command to add the Platform9 APT repository, which contains the latest Open vSwitch version.

```bash
add-apt-repository 'deb [trusted=yes] http://platform9-neutron.s3-website-us-west-1.amazonaws.com ubuntu_latest/'
```
Alternatively, run the following command to add the repository that carries the LTS version.

```bash
add-apt-repository 'deb [trusted=yes] http://platform9-neutron.s3-website-us-west-1.amazonaws.com ubuntu/'
```
Step 5: Install Open vSwitch
Run the following command to install Open vSwitch.
```bash
# Ubuntu 14.04
apt-get -y --force-yes install openvswitch-switch

# Ubuntu 16.04 and higher
apt-get -y install openvswitch-switch
```
Step 6: Enable and start Open vSwitch
Run the following commands to enable and start Open vSwitch.
```bash
# Ubuntu 14.04
update-rc.d openvswitch-switch defaults
service openvswitch-switch start

# Ubuntu 16.04 and higher
systemctl enable openvswitch-switch.service
systemctl start openvswitch-switch.service
```
Step 7: Install Router Advertisement Daemon
Run the following command to install the Router Advertisement Daemon.
```bash
apt-get -y install radvd
```
Step 8: Configure physical interfaces
Figure 1 in the article represents a sample Neutron network configuration. Steps 8 through 12 are based on that configuration. They describe how to configure physical interfaces into a Linux bond and add VLAN interfaces for management, VXLAN/GRE network traffic, and storage. You may not require all of these steps; which ones apply depends on your Neutron network configuration. For instance, if you do not plan to use VXLAN/GRE, you can skip the step that sets up the VXLAN/GRE tunneling interface.
Add the following to /etc/network/interfaces to enslave the physical interfaces to the bond.

```bash
auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0
```
Step 9: Set up the Bond interface
Add the following to /etc/network/interfaces to create the bond.
```bash
auto bond0
iface bond0 inet manual
    bond-mode 802.3ad
    bond-lacp-rate 1
    bond-slaves eth0 eth1
```
Step 10: Set up the Management interface

Add the following to /etc/network/interfaces to create the management VLAN interface.

```bash
auto bond0.101
iface bond0.101 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
    dns-nameservers 192.0.2.100 192.0.2.200
    dns-search pf9.example
```
Step 11: Set up the VXLAN/GRE tunneling interface (Optional)
GRE and VXLAN add 24 bytes and 50 bytes of encapsulation overhead, respectively. Platform9 recommends a minimum MTU of 1600 to accommodate this overhead, with a 9000-byte MTU preferred.
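The arithmetic behind these numbers is simple. The following shell sketch (illustrative values only, not a step in this tutorial) shows the largest inner-packet MTU each tunnel type leaves at a 9000-byte physical MTU:

```shell
# Tunnel encapsulation consumes part of the physical MTU; what remains
# is the largest MTU the inner (encapsulated) packet can use.
PHYS_MTU=9000
GRE_OVERHEAD=24
VXLAN_OVERHEAD=50

echo "GRE inner MTU:   $((PHYS_MTU - GRE_OVERHEAD))"    # 8976
echo "VXLAN inner MTU: $((PHYS_MTU - VXLAN_OVERHEAD))"  # 8950
```

At the default 1500-byte physical MTU, the inner packet would be squeezed below 1500, which is why a larger physical MTU is recommended on the tunnel network.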
Add the following to /etc/network/interfaces to set up the VXLAN/GRE tunneling interface.

```bash
auto bond0.102
iface bond0.102 inet static
    address 198.51.100.10
    netmask 255.255.255.0
    post-up ifconfig bond0 mtu 9000
    post-up ifconfig bond0.102 mtu 9000
```
Step 12: Set up the Storage interface (Optional)
Add the following to /etc/network/interfaces to set up the storage interface.

```bash
auto bond0.103
iface bond0.103 inet static
    address 203.0.113.10
    netmask 255.255.255.0
    post-up ifconfig bond0 mtu 9000
    post-up ifconfig bond0.103 mtu 9000
```
Step 13: Restart Networking
Run the following command to restart the network service.
```bash
ifdown -a
ifup -a
ifup -a --allow=ovs
```
Step 14: Create OVS Bridges
The number of OVS bridges you need depends on how many physical networks your hosts connect to and what types of networks you will create.
Let us look at some basic networking terminology before creating the bridges.
- An access port represents a single “flat” physical network or VLAN, and carries untagged traffic.
- A trunk port logically groups together multiple VLANs. An 802.1Q “QTag” header is inserted into the Ethernet frame for all VLAN traffic. All untagged traffic is implicitly assigned a default, native VLAN per your data center’s switch configuration.
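In OVS terms, the distinction looks roughly like this. This is an illustrative sketch with hypothetical port names and VLAN IDs, not one of this tutorial's steps:

```bash
# Access port: carries untagged traffic for a single VLAN (tag 101 here).
ovs-vsctl add-port br-vlan eth2 tag=101

# Trunk port: carries 802.1Q-tagged traffic for a set of VLANs.
ovs-vsctl add-port br-vlan bond0 trunks=101,102,103
```

In the steps below, ports are added without `tag` or `trunks` options, which lets Neutron manage the tagging itself.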
When configuring Platform9 OpenStack's Networking Config, each physical network is given a label as its name, and that label is mapped to a particular OVS bridge on the host during host authorization.
Let us look at two different examples of common host networking setups.
Example 1: Non-DVR setup with one external flat network, and trunk port for VLAN traffic
The following figure (Figure 2) represents a non-DVR network setup with an external flat network, and a trunk port for VLAN traffic.
In Figure 2 above, the network has a trunk port consisting of eth0 and eth1 in a bond that carries our VLAN-based networks (tenant, provider), as well as a dedicated port (eth2) that connects to a separate external network. This is a legacy, non-DVR setup where external connectivity and L3 capability exist only on the network nodes. Hypervisor-only nodes carry just the tenant network traffic and need a single OVS bridge.
Run the following commands to add an OVS bridge on the hypervisors. The commands assume eth0 and eth1 have already been configured in a Linux bond called "bond0"; refer to Steps 8 through 12 to set up your physical interfaces.
```bash
ovs-vsctl add-br br-vlan
ovs-vsctl add-port br-vlan bond0
```
On the network node, we have a separate NIC that connects to a different physical network. For this, we need a separate OVS bridge.
Run the following commands to add an OVS bridge.
```bash
ovs-vsctl add-br br-ext
ovs-vsctl add-port br-ext eth2
```
Example 2: DVR setup with a pair of NICs in a bond
The following figure (Figure 3) represents a DVR network setup with a pair of NICs in a bond.
In the DVR setup shown in Figure 3 above, every host has external L3 connectivity. Here, each host has only a pair of NICs in a bond, so the OVS bridge can support one flat (untagged) network and as many VLAN-tagged networks as your networking infrastructure allows. There are multiple VLAN-based external networks in addition to the tenant networks.
Run the following commands to add an OVS bridge on all hosts.
```bash
ovs-vsctl add-br br-vlan
ovs-vsctl add-port br-vlan bond0
```
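Once br-vlan is mapped to a physical network label during host authorization, VLAN provider networks can be created against it. The following sketch uses the OpenStack CLI with a hypothetical label (`physnet1`) and VLAN ID (101); substitute the label and segment from your own Networking Config:

```bash
# Hypothetical example: create an external VLAN provider network
# backed by the physical network mapped to br-vlan.
openstack network create ext-net \
    --provider-network-type vlan \
    --provider-physical-network physnet1 \
    --provider-segment 101 \
    --external
```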