# Qbert API

## Bootstrapping Cluster with Luigi NetworkOperator via API

### Qbert-API Calls

In PMK version 4.5, several new entries have been added to the qbert-api including:

**ipv6**, **networkPlugin** ("calico"), **deployLuigiOperator**, **containersCidr**, **servicesCidr**, **calicoIPv6PoolCidr**, **privileged**, **calicoIPv4**, **calicoIPv6**, **calicoIPv6PoolNatOutgoing**, and **calicoIPv6PoolBlockSize**. These are discussed in further detail below.

* **ipv6**: This is the most important parameter. It switches the cluster components (CoreDNS, kube-proxy, Canal, the API server, etc.) to IPv6 addressing. Valid values are 0/1 or false/true. Setting **ipv6** also requires setting **calicoIPv6** and **calicoIPv6PoolCidr** (more on this below).
* **deployLuigiOperator**: This boolean value deploys the cluster with the Luigi NetworkOperator installed.
* **networkPlugin**: Platform9 supports the Flannel and Calico network plugins; for IPv6, only Calico is supported.
* **containersCidr & servicesCidr**: Specify IPv6 CIDRs when setting "ipv6": 1. Additionally, if the ipv6 flag is set, the value populated in containersCidr must also be populated in calicoIPv6PoolCidr. Calico only supports prefixes in the range /112 to /123, so make sure the CIDR falls within that range. For example, fd00:101::/64 is an invalid value, but fd00:101::/112 is acceptable.
* **privileged**: Calico requires privileged mode to run, so turning ipv6 on turns this on automatically.
* **calicoIPv4 and calicoIPv6**: These are complementary. If the **ipv6** flag is set to true, set **calicoIPv4** to none and **calicoIPv6** to autodetect, and vice versa if **ipv6** is set to false. (Valid values are none and autodetect.)
* **calicoIPv6PoolNatOutgoing**: This is similar to the existing calicoNatOutgoing field. Turn it on if pod traffic leaving the host needs to be NATed. (Valid values are 0/1.)
* **calicoIPv6PoolBlockSize**: This is the block size to use for the IPv6 pool created at startup. The block size for IPv6 must be in the range 116-128.
* **calicoIPv4DetectionMethod & calicoIPv6DetectionMethod** options:
  * first-found= Use the first valid IP address on the first enumerated interface (commonly known exceptions, such as the docker bridge, are filtered out). Using this is not recommended if you have multiple external interfaces on your host.
  * can-reach= Use the interface determined by your host routing tables to reach the supplied destination IP or domain name.
  * interface= Use the first valid IP address found on an interface whose name matches one of the supplied interface name regexes. Regexes are separated by commas (e.g. eth.*,enp0s.*).
  * skip-interface= Use the first valid IP address on the first enumerated interface (same logic as first-found above) that does NOT match any of the specified interface name regexes. Regexes are separated by commas (e.g. eth.*,enp0s.*).
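Putting these rules together, the sketch below shows the IPv6-specific payload fields plus a pre-flight check of the pool prefix length. The CIDR values are illustrative examples, not defaults:

{% tabs %}
{% tab title="Python" %}

```python
import ipaddress

# Illustrative IPv6 additions to a cluster-create payload (example values)
ipv6_params = {
    "ipv6": 1,
    "containersCidr": "fd00:101::/112",
    "calicoIPv6PoolCidr": "fd00:101::/112",  # must match containersCidr
    "servicesCidr": "fd00:102::/112",
    "calicoIPv4": "none",                    # ipv6 on => calicoIPv4 set to none
    "calicoIPv6": "autodetect",
    "calicoIPv6PoolNatOutgoing": 1,
    "calicoIPv6PoolBlockSize": "122",        # must be within 116-128
    "privileged": True,
}

def valid_pool_cidr(cidr):
    """Check that a pool CIDR is IPv6 with a prefix between /112 and /123."""
    net = ipaddress.ip_network(cidr)
    return net.version == 6 and 112 <= net.prefixlen <= 123
```

{% endtab %}
{% endtabs %}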

{% hint style="info" %}
**Info**

To deploy the Luigi Operator as part of the bootstrap process via the qbert-api, the only **networkPlugin value that is allowed is calico**.
{% endhint %}

## Python Payload Example

{% tabs %}
{% tab title="Python" %}

```python
cluster_create_params = {
 "name":"test-call-NetworkOperator",
 "externalDnsName":"10.128.146.127",
 "containersCidr":"10.20.0.0/16",
 "servicesCidr":"10.21.0.0/16",
 "mtuSize":1440,
 "privileged":True,
 "appCatalogEnabled":False,
 "nodePoolUuid":"3361eead-a5ce-435d-b9f6-8f4dee0621aa",
 "calicoIpIpMode":"Always",
 "calicoNatOutgoing":True,
 "calicoV4BlockSize":"24",
 "calicoIPv4DetectionMethod":"first-found",
 "networkPlugin":"calico",
 "deployLuigiOperator":True,
 "runtimeConfig":"api/all=true",
 "etcdBackup":{"storageType":"local","isEtcdBackupEnabled":1,"storageProperties":{"localPath":"/etc/pf9/etcd-backup"},"intervalInMins":1440},
    "tags":{"pf9-system:monitoring":"true"}
}
```

{% endtab %}
{% endtabs %}

## Python Snippet to Bootstrap Cluster

### Prerequisites

The easiest way to use this script is by deploying a virtual environment in a docker container, so please follow the next steps to set up the environment.

{% tabs %}
{% tab title="Bash" %}

```bash
docker run -i -t --name python-bootstrapper centos:centos7 bash
```

{% endtab %}
{% endtabs %}

Inside the container update packages and install python3 and python3-pip

{% tabs %}
{% tab title="Bash" %}

```bash
yum update -y
yum install -y python3 python3-pip
```

{% endtab %}
{% endtabs %}

Create a virtual environment

{% tabs %}
{% tab title="Bash" %}

```bash
python3 -m venv pf9-cluster-virtualenv
```

{% endtab %}
{% endtabs %}

Activate virtual environment

{% tabs %}
{% tab title="Bash" %}

```bash
source pf9-cluster-virtualenv/bin/activate
```

{% endtab %}
{% endtabs %}

Requirements file

{% tabs %}
{% tab title="Bash" %}

```bash
cat <<EOF > requirements.txt
certifi==2020.6.20
chardet==3.0.4
click==7.1.2
debtcollector==2.2.0
httplib2==0.18.1
idna==2.10
importlib-metadata==1.7.0
importlib-resources==3.0.0
iso8601==0.1.12
keystoneauth1==4.2.1
msgpack==1.0.0
netaddr==0.8.0
netifaces==0.10.9
oauth2client==4.1.3
os-service-types==1.7.0
oslo.config==8.3.1
oslo.i18n==5.0.0
oslo.serialization==4.0.0
oslo.utils==4.5.0
packaging==20.4
pbr==5.5.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyparsing==2.4.7
python-keystoneclient==4.1.0
pytz==2020.1
PyYAML==5.3.1
qbertclient==0.0.12
requests==2.24.0
rfc3986==1.4.0
rsa==4.6
six==1.15.0
stevedore==3.2.1
urllib3==1.25.10
wrapt==1.12.1
zipp==3.1.0
EOF
```

{% endtab %}
{% endtabs %}

Install module requirements.

{% tabs %}
{% tab title="Bash" %}

```bash
pip install -r requirements.txt
```

{% endtab %}
{% endtabs %}

### Create python bootstrap script

Create a python deploy script and update the following parameters

* **DU\_NAME**
* **TENANT\_NAME**
* **TENANT\_ID**
* **USER**
* **PASSWORD**
* **NODE\_POOL**
* **MASTER\_NODE\_ID**
* **WORKER1\_NODE\_ID**
* **WORKER2\_NODE\_ID**

{% tabs %}
{% tab title="Bash" %}

```bash
cat <<EOF > cluster_deploy_with_luigi.py
from qbertclient import qbert
from qbertclient import keystone
import time
import requests
import json
du_fqdn = "DU_NAME"
username = "USER"
password = "PASSWORD"
project_name = "TENANT_NAME"
keystone = keystone.Keystone(du_fqdn, username, password, project_name)
token = keystone.get_token()
project_id = keystone.get_project_id(project_name)
headers = {'X-Auth-Token' : token,'Content-Type': "application/json"}
cluster_create_params = {
    "name":"test-cal",
 "externalDnsName":"10.128.146.127",
 "containersCidr":"10.20.0.0/16",
 "servicesCidr":"10.21.0.0/16",
 "mtuSize":1440,
 "privileged":True,
 "appCatalogEnabled":False,
 "nodePoolUuid":"NODE_POOL",
 "calicoIpIpMode":"Always",
 "calicoNatOutgoing":True,
 "calicoV4BlockSize":"24",
 "calicoIPv4DetectionMethod":"first-found",
 "networkPlugin":"calico",
 "deployLuigiOperator":True,
 "runtimeConfig":"api/all=true",
 "etcdBackup":{"storageType":"local","isEtcdBackupEnabled":1,"storageProperties":{"localPath":"/etc/pf9/etcd-backup"},"intervalInMins":1440},
    "tags":{"pf9-system:monitoring":"true"}
}
cluster_create = requests.post('https://DU_NAME/qbert/v3/TENANT_ID/clusters',data=json.dumps(cluster_create_params),headers = headers)
if cluster_create.status_code == 200:
     cc_json = json.loads(cluster_create.text)
     new_cluster_uuid = cc_json["uuid"]
     print(new_cluster_uuid)
node_list3 = [{
        "uuid": "MASTER_NODE_UUID",
        "isMaster": True
        },
         {
        "uuid": "WORKER1_NODE_UUID",
            "isMaster": False
        },
        {
        "uuid": "WORKER2_NODE_UUID",
            "isMaster": False
        }
    ]
attach_nodes = requests.post('https://DU_NAME/qbert/v3/{}/clusters/{}/attach'.format(project_id,new_cluster_uuid),data=json.dumps(node_list3),headers = headers)
print(attach_nodes.status_code)
if attach_nodes.status_code == 200:
    print("Nodes are attached")

EOF
```

{% endtab %}
{% endtabs %}
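The inline node list in the script above can also be built with a small helper, which keeps the master/worker split explicit and easy to test. The function name is illustrative, not part of qbertclient:

{% tabs %}
{% tab title="Python" %}

```python
def build_attach_payload(master_uuids, worker_uuids):
    """Build the qbert v3 node-attach payload: masters first, then workers."""
    payload = [{"uuid": u, "isMaster": True} for u in master_uuids]
    payload += [{"uuid": u, "isMaster": False} for u in worker_uuids]
    return payload

# Example: one master, two workers
node_list = build_attach_payload(
    ["MASTER_NODE_UUID"],
    ["WORKER1_NODE_UUID", "WORKER2_NODE_UUID"],
)
```

{% endtab %}
{% endtabs %}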

### Create New Cluster via API

{% tabs %}
{% tab title="Bash" %}

```bash
python cluster_deploy_with_luigi.py
```

{% endtab %}
{% endtabs %}
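After the script runs, the attached nodes take several minutes to converge. A small polling helper can wait for that; this is a sketch, not part of qbertclient, and it assumes the qbert cluster object reports a `status` field (verify against your qbert version):

{% tabs %}
{% tab title="Python" %}

```python
import time

def wait_for_cluster_ok(get_status, timeout=1800, interval=30):
    """Poll get_status() until it returns 'ok' or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status() == "ok":
            return True
        time.sleep(interval)
    return False

# Example wiring against the qbert API, reusing headers/ids from the script above:
# status = lambda: requests.get(
#     'https://DU_NAME/qbert/v3/{}/clusters/{}'.format(project_id, new_cluster_uuid),
#     headers=headers).json().get("status")
# wait_for_cluster_ok(status)
```

{% endtab %}
{% endtabs %}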

## Tips

## MacVLAN

When declaring the *network attach* definitions, the master section cannot use the same physical/virtual/VLAN interface as another network-attach-definition that is being used for IPvlan.

## IPvlan

In order for kubelet to create pods with IPvlan interface types, kernel version 4.1+ must be installed across all the nodes of the cluster. Follow these instructions to install kernel 4.1+ on CentOS 7:

{% tabs %}
{% tab title="Bash" %}

```bash
$ rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
$ rpm -Uvh https://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
$ yum list available --disablerepo='*' --enablerepo=elrepo-kernel
$ yum --enablerepo=elrepo-kernel install kernel-lt
$ reboot
# SELECT NEW INSTALLED KERNEL TO BOOT WITH
```

{% endtab %}
{% endtabs %}

{% tabs %}
{% tab title="Bash" %}

```bash
$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-4.4.237-1.el7.elrepo.x86_64 root=UUID=6cd50e51-cfc6-40b9-9ec5-f32fa2e4ff02 ro console=tty0 console=ttyS0,115200n8 crashkernel=auto net.ifnames=0 console=ttyS0 LANG=en_US.UTF-8 intel_iommu=on iommu=pt
```

{% endtab %}
{% endtabs %}

## Kube-sriov-device-plugin

A known issue with the sriov-device-plugin pod that runs on every node, is that if you make a change to a *hostconfig* object that matches a resource definition in your *sriov-config* map, that links to a sriov networkattachdefinition, the allocatable resources will not change. In order for the sriov-device-plugin pod to re-read the new VFs resources and update the networkattach definition allocatable resources, the sriov-device-plugin pod needs to be recreated by deleting the pod and letting the [daemonset take care of it](https://github.com/k8snetworkplumbingwg/sriov-network-device-plugin/issues/276).
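The pod deletion described above can be done with a label selector; the namespace and `app=sriov-device-plugin` label below are assumptions that vary by deployment, so adjust them to match yours:

{% tabs %}
{% tab title="Bash" %}

```bash
# Delete the plugin pods; the DaemonSet recreates them,
# and the new pods re-read the VF resources from the sriov-config map.
# (Namespace and label are assumptions -- adjust to your install.)
kubectl -n kube-system delete pod -l app=sriov-device-plugin
```

{% endtab %}
{% endtabs %}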

## SRIOV - DPDK

NetworkManager needs to be disabled, since NetworkManager uses auto DHCP for all the Virtual Functions.

Due to the way VFIO Driver works, there are certain limitations to which devices can be used with VFIO. Mainly it comes down to how the IOMMU groups work. Any Virtual Function device can be used with VFIO on its own, but physical devices will require either all ports bound to VFIO, or some of them bound to VFIO, while others not being bound to anything at all. If your device is behind a PCI-to-PCI bridge, the bridge will then become part of the IOMMU group in which your device is in. Therefore, the bridge driver should also be unbound from the bridge PCI device for VFIO to work with devices behind the bridge.
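To see which PCI devices share an IOMMU group (and therefore must be bound together for VFIO), the groups can be listed from sysfs on the host:

{% tabs %}
{% tab title="Bash" %}

```bash
# Each symlink maps an IOMMU group to the PCI devices it contains;
# devices in the same group must all be handled together for VFIO.
find /sys/kernel/iommu_groups/ -type l | sort
```

{% endtab %}
{% endtabs %}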

IPAM is not valid for DPDK-enabled networks; see the [SRIOV-CNI section](https://github.com/intel/sriov-cni) on DPDK:

In order for the test DPDK application to work successfully, you need *hugepages* enabled at the host level. Enable them on CentOS 7 by editing */etc/default/grub* and adding the following [kernel boot parameters](https://github.com/openshift/sriov-network-device-plugin/blob/master/docs/dpdk/README.md) to enable the IOMMU and reserve hugepages. The example below reserves sixteen 1 GB hugepages (16 GB total).

{% tabs %}
{% tab title="Bash" %}

```bash
GRUB_CMDLINE_LINUX="nofb nomodeset vga=normal iommu=pt intel_iommu=on default_hugepagesz=1G hugepagesz=1G hugepages=16"
#Rebuild grub.cfg
grub2-mkconfig -o /boot/grub2/grub.cfg && reboot
```

{% endtab %}
{% endtabs %}
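After rebooting, the hugepage reservation can be verified from */proc/meminfo*:

{% tabs %}
{% tab title="Bash" %}

```bash
# HugePages_Total / HugePages_Free and Hugepagesize report the reservation
grep Huge /proc/meminfo
```

{% endtab %}
{% endtabs %}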

## Repurpose Worker Nodes for a New Cluster

To repurpose a worker node once it has been dissociated from the cluster, run the following command to fully clean the node.

{% tabs %}
{% tab title="Bash" %}

```bash
yum erase -y pf9-hostagent && \
  rm -rf /opt/pf9 /etc/pf9 /var/opt/pf9 /var/log/pf9 \
         /tmp/* /var/spool/mail/pf9 /opt/cni /etc/cni \
         /var/log/messages-* /var/lib/docker && \
  yum clean all
```

{% endtab %}
{% endtabs %}

## References

## SR-IOV - DPDK Drivers

<https://doc.dpdk.org/guides/linux_gsg/linux_drivers.html>

<https://github.com/ceph/dpdk/blob/master/tools/dpdk-devbind.py>

## NetworkAttachDefinition Examples

<https://github.com/intel/sriov-network-device-plugin#configurations>
