Prepare CentOS Server
This article describes how to prepare a CentOS physical server to be added as a host to a Platform9 Managed OpenStack (PMO) cloud.
For a description of PMO networking concepts, refer to the Networking Basic Concepts tutorial. Refer to PMO prerequisites for Linux/KVM for systems requirements and supported CentOS Operating System versions.
Supported Operating System Version
Platform9 Managed OpenStack supports CentOS 7+ (64-bit).
Step 1 - Install CentOS Operating System
Make sure that your server is configured appropriately with access to storage and physical networking. Download and install CentOS on your physical server. You can download CentOS distributions from here: http://wiki.centos.org/Download
It’s usually a good practice to get your system up to date with regard to the latest patches and updates.
sudo yum -y update
Step 2 - Ensure Virtualization is Enabled
Ensure that virtualization is enabled for your server by checking your server’s BIOS settings. If disabled, enable virtualization for the server to be able to act as a hypervisor within Platform9 Managed OpenStack.
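Before rebooting into the BIOS, a quick sketch like the following can check whether the CPU exposes hardware virtualization to the running OS. The `count_virt_flags` helper name is illustrative, not part of any tool; vmx is Intel VT-x and svm is AMD-V.

```shell
# Sketch: count CPU lines reporting a hardware-virtualization flag.
# A count of 0 means virtualization is disabled in the BIOS or absent.
count_virt_flags() {
    # $1: cpuinfo file to inspect (defaults to /proc/cpuinfo)
    grep -E -c '\b(vmx|svm)\b' "${1:-/proc/cpuinfo}"
}

count_virt_flags || true   # prints the number of logical CPUs with vmx/svm
```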
Step 3 - Ensure the System Clock is Synchronized
Your host will fail to authenticate with Platform9 services if its date and time settings are incorrect. Run
date
to verify that the current date and time are correct. If they are not, one possible fix is to enable the network time protocol daemon (ntpd) service:
sudo yum -y install ntp
sudo chkconfig --add ntpd
sudo service ntpd start
The following image (Figure 1) represents three hypervisors connected in a Managed OpenStack Neutron network.
Step 4 - Set SELinux to permissive
This is required for Open vSwitch (OVS) to be able to manage networking.
Run the following commands to set SELinux to permissive.
sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/selinux/config
setenforce 0
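To preview what the sed substitution does before touching the real /etc/selinux/config, here is a sketch against a scratch file (the temp file is illustrative only):

```shell
# Sketch: apply the same substitution to a scratch copy of the SELinux
# config so the effect can be inspected safely.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"

sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' "$cfg"

grep '^SELINUX=' "$cfg"    # SELINUX=permissive
rm -f "$cfg"
```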
Step 5 - Disable Firewalld and NetworkManager
This is required for KVM and OVS to be able to create iptables rules directly without firewalld getting in the way.
Run the following commands to disable firewalld and NetworkManager.
systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl stop NetworkManager
Step 6 - Enable Network
Run the following command to enable the legacy network service (used in place of NetworkManager).
systemctl enable network
Step 7 - Load the modules needed for Neutron
Run the following commands to load the modules needed for Neutron.
modprobe bridge
modprobe 8021q
modprobe bonding
modprobe tun
modprobe br_netfilter
echo bridge > /etc/modules-load.d/pf9.conf
echo 8021q >> /etc/modules-load.d/pf9.conf
echo bonding >> /etc/modules-load.d/pf9.conf
echo tun >> /etc/modules-load.d/pf9.conf
echo br_netfilter >> /etc/modules-load.d/pf9.conf
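After the echo commands above, /etc/modules-load.d/pf9.conf should contain one module name per line, so the same modules are loaded automatically on every boot:

```
bridge
8021q
bonding
tun
br_netfilter
```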
Step 8 - Add sysctl options
Run the following commands to add sysctl options.
echo net.ipv4.conf.all.rp_filter=0 >> /etc/sysctl.conf
echo net.ipv4.conf.default.rp_filter=0 >> /etc/sysctl.conf
echo net.bridge.bridge-nf-call-iptables=1 >> /etc/sysctl.conf
echo net.ipv4.ip_forward=1 >> /etc/sysctl.conf
echo net.ipv4.tcp_mtu_probing=2 >> /etc/sysctl.conf
sysctl -p
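For reference, the commands above append the following block to /etc/sysctl.conf (existing entries are left untouched). Setting rp_filter=0 disables reverse-path filtering so asymmetrically routed Neutron traffic is not dropped, bridge-nf-call-iptables=1 makes bridged traffic traverse iptables, ip_forward=1 enables routing between interfaces, and tcp_mtu_probing=2 helps TCP recover from path-MTU black holes:

```
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_mtu_probing=2
```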
Step 9 - Add the Platform9 YUM Repo
Run the following command to install the Platform9 YUM repository.
yum -y install https://s3-us-west-1.amazonaws.com/platform9-neutron/noarch/platform9-neutron-repo-1-0.noarch.rpm
Step 10 - Install Open vSwitch
Run the following command to get a list of available Open vSwitch versions.
yum --showduplicates list openvswitch
If you wish to install an available older/LTS version, run the following command.
yum install openvswitch-<version>
Alternatively, if you wish to install the latest version, run the following command.
yum install openvswitch
Step 11 - Enable and start Open vSwitch
Run the following commands to enable and start Open vSwitch.
systemctl enable openvswitch
systemctl start openvswitch
Step 12 - Install QEMU KVM EV
Run the following commands to install QEMU KVM EV.
yum install centos-release-qemu-ev
yum install qemu-kvm-ev
Step 13 - Install Router Advertisement Daemon
Run the following command to install radvd, the router advertisement daemon.
yum -y install radvd
Step 14 - Configure physical interfaces
Figure 1 in the article represents a sample Neutron network configuration. Steps 14 through 18 are based on that configuration: they describe configuring the physical interfaces into a Linux bond and adding VLAN interfaces for management, VXLAN/GRE network traffic, and storage. You may not require all of these steps; which ones apply depends on your Neutron network configuration. For instance, if you do not plan on using VXLAN/GRE, you can skip the step that sets up the VXLAN/GRE tunneling interface.
We assume the interface names to be eth0 and eth1.
Substitute eth0 and eth1 with the appropriate interface names per your setup, when you run the commands given for this step.
Similarly, we assume an MTU of 9000; VXLAN requires an MTU of at least 1600.
Make sure all physical switches along the path are configured to handle the MTU you choose (9000 in this example), or you may see connectivity problems with larger packets.
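Once the interfaces are up, one hedged way to confirm the path really carries the configured MTU is a don't-fragment ping. The peer address below is a placeholder you would replace with another host on the same network:

```shell
# Sketch: compute the largest ICMP payload that fits in a 9000-byte MTU.
# ICMP payload = MTU - 20 (IPv4 header) - 8 (ICMP header).
MTU=9000
PAYLOAD=$((MTU - 20 - 8))
echo "$PAYLOAD"    # 8972

# With the DF bit set (-M do), an undersized path rejects the probe
# instead of fragmenting it (replace <peer-ip> with a real host):
#   ping -M do -c 3 -s "$PAYLOAD" <peer-ip>
```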
Create (or edit) the interface configuration files /etc/sysconfig/network-scripts/ifcfg-eth0 and /etc/sysconfig/network-scripts/ifcfg-eth1 with the following contents.
ifcfg-eth0:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MTU=9000
MASTER=bond0
SLAVE=yes
ifcfg-eth1:
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
MTU=9000
MASTER=bond0
SLAVE=yes
Step 15 - Set up the Bond interface
Create /etc/sysconfig/network-scripts/ifcfg-bond0 with the following contents. Bonding mode 4 is 802.3ad (LACP) link aggregation, so the connected switch ports must be configured accordingly.
DEVICE=bond0
ONBOOT=yes
BONDING_MASTER=yes
BONDING_OPTS="mode=4"
MTU=9000
Step 16 - Set up the Management interface
Create /etc/sysconfig/network-scripts/ifcfg-bond0.101 with the following contents, substituting the IP addressing for your management network.
DEVICE=bond0.101
ONBOOT=yes
BOOTPROTO=none
TYPE=Vlan
VLAN=yes
IPADDR=192.0.2.10
NETMASK=255.255.255.0
GATEWAY=192.0.2.1
DNS1=192.0.2.100
DNS2=192.0.2.200
Step 17 - Set up the VXLAN/GRE tunneling interface (Optional)
Create /etc/sysconfig/network-scripts/ifcfg-bond0.102 with the following contents to set up the VXLAN/GRE tunneling interface.
DEVICE=bond0.102
ONBOOT=yes
BOOTPROTO=none
TYPE=Vlan
VLAN=yes
IPADDR=198.51.100.10
NETMASK=255.255.255.0
Step 18 - Set up the Storage interface (Optional)
Create /etc/sysconfig/network-scripts/ifcfg-bond0.103 with the following contents to set up the storage interface.
DEVICE=bond0.103
ONBOOT=yes
BOOTPROTO=none
TYPE=Vlan
VLAN=yes
IPADDR=203.0.113.10
NETMASK=255.255.255.0
Step 19 - Restart Networking
Run the following command to restart the network service.
systemctl restart network.service
Step 20 - Create OVS Bridges
The number of OVS bridges you need depends on how many physical networks your hosts connect to, and what types of networks you will be creating.
Let us look at some basic networking terminology before creating the bridges.
- An access port represents a single flat physical network or VLAN, and will carry untagged traffic.
- A trunk port logically groups together multiple VLANs. An 802.1Q VLAN tag is inserted into the Ethernet header for all VLAN traffic. Untagged traffic is implicitly assigned to a default, native VLAN per your data center’s switch configuration.
Each physical network corresponds to a trunk or access port (an individual NIC, or a pair of NICs bonded together) on the host. An Open vSwitch bridge must be created for each physical network.
When configuring Platform9 OpenStack’s Networking Config, each physical network is given a label as its name, and that label is mapped to a particular OVS bridge on the host during host authorization.
Let us look at two different examples of common host networking setups.
Example 1: Non-DVR setup with one external flat network, and trunk port for VLAN traffic
The following figure (Figure 2) represents a non-DVR network setup with an external flat network, and a trunk port for VLAN traffic.
In Figure 2 above, the network has a trunk port consisting of eth0 and eth1 in a bond that carries the VLAN-based networks (tenant, provider), as well as a dedicated port (eth2) that connects to a separate external network. This is a legacy, non-DVR setup where external connectivity and L3 capability exist only on the network nodes. Nodes that act only as hypervisors carry just the tenant network traffic and need a single OVS bridge.
Run the following commands to add OVS bridges on the hypervisors. The steps below assume eth0/eth1 have already been configured in a Linux bond called bond0. Refer to steps 14 and 15 to set up your physical interfaces.
ovs-vsctl add-br br-vlan
ovs-vsctl add-port br-vlan bond0
On our network node, we have a separate NIC that connects to a different physical network. For this, we need a separate OVS bridge.
Run the following commands to add an OVS bridge.
ovs-vsctl add-br br-ext
ovs-vsctl add-port br-ext eth2
Example 2: DVR setup with a pair of NICs in a bond
The following figure (Figure 3) represents a DVR network setup with a pair of NICs in a bond.
In the DVR setup shown in Figure 3 above, every host has external L3 connectivity. Here we have only a pair of NICs in a bond, so this single OVS bridge can support one flat (untagged) network and as many VLAN-tagged networks as your networking infrastructure allows. The external networks are VLAN-based, in addition to the tenant networks.
Run the following commands to add an OVS bridge on all hosts.
ovs-vsctl add-br br-vlan
ovs-vsctl add-port br-vlan bond0
At this point your CentOS server is ready to be added as a hypervisor to Platform9 Managed OpenStack (PMO).