# Luigi Network Operator Quickstart

### Prerequisites <a href="#prerequisites" id="prerequisites"></a>

* Kubernetes v1.17+
* Calico CNI
* Privileged Pods
* MetalLB is **not** deployed.
* Every worker node intended for IPVLAN/L2 or MACVLAN should have an additional NIC/port that is up but has no IP address assigned. For IPVLAN/L3, the additional interface should have an IP address assigned.

#### Luigi Installation <a href="#luigi-installation" id="luigi-installation"></a>

Download the Luigi Operator definition to your master node, or to any host from which you can run `kubectl` commands against the cluster. (A local copy of the manifest is at the end of this document.)

Then install it using the following command:

```
$ kubectl apply -f luigi-plugins-operator.yaml
```

#### Luigi Install Validation <a href="#luigi-install-validation" id="luigi-install-validation"></a>

After executing the command above, validate the installation using the following command. Note that some of the pods may take a few minutes to become fully ready.

{% tabs %}
{% tab title="Bash" %}

```
pf9-0102:~ carlos$  kubectl get all -n luigi-system
NAME                                            READY   STATUS    RESTARTS   AGE
pod/luigi-controller-manager-74bdbf9cc9-2g2tf   2/2     Running   0          41h

NAME                                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/luigi-controller-manager-metrics-service   ClusterIP   10.168.113.206   <none>        8443/TCP   41h

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/luigi-controller-manager   1/1     1            1           41h

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/luigi-controller-manager-74bdbf9cc9   1         1         1       41h
```

{% endtab %}
{% endtabs %}

### Luigi Networking Plugins <a href="#luigi-networking-plugins" id="luigi-networking-plugins"></a>

#### Luigi Networking Plugins Installation (hostPlumber) <a href="#luigi-networking-plugins-installation-hostplumber" id="luigi-networking-plugins-installation-hostplumber"></a>

The `sampleplugins.yaml` manifest deploys the CNI plugins. The components are deployed as DaemonSets and run on every node, including both workers and masters.

Create a namespace `hostplumber` for the resources related to the hostplumber pod.

```
kubectl create namespace hostplumber
```

Apply the following YAML (uncomment as applicable to your environment) to choose and install the required Luigi plugins.

{% tabs %}
{% tab title="sampleplugins.yaml" %}

```yaml
apiVersion: plumber.k8s.pf9.io/v1
kind: NetworkPlugins
metadata:
  name: networkplugins-sample11
spec:
  # Add fields here
  plugins:
    hostPlumber:
      namespace: hostplumber
      #hostPlumberImage: "platform9/luigi-plumber:v0.1.0"
    #nodeFeatureDiscovery: {}
    #multus: {}
    #whereabouts: {}
    # COMMENT/UNCOMMENT - requires all host networking/VFs to be configured first
    #sriov: {}
```

{% endtab %}
{% endtabs %}

As you may have observed, all of the plugins except **HostPlumber** are commented out. We will install the `HostPlumber` plugin first, then revisit this manifest and re-apply it with the remaining CNI plugins uncommented after creating our first **HostNetworkTemplate** object. `HostPlumber` is not required, but it provides a mechanism to configure SR-IOV and host networking, as well as to view each node's SR-IOV and interface state, via a K8s operator. If you are configuring host networking and SR-IOV VFs through other means, the `HostPlumber` plugin is not needed and can be skipped.

We deploy this first because the SR-IOV CNI requires VF and network configuration to be in place before the SR-IOV plugin is deployed.

Execute the following command to install the **HostPlumber** plugin:

```
$ kubectl apply -f sampleplugins.yaml
```

#### Luigi Networking Plugins Install Validation (hostPlumber) <a href="#luigi-networking-plugins-install-validation-hostplumber" id="luigi-networking-plugins-install-validation-hostplumber"></a>

After executing the command above, let’s review our work.

{% tabs %}
{% tab title="Bash" %}

```
pf9-0102:~ carlos$ kubectl get ds -n kube-system
NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                   AGE
calico-node                      4         4         4       4            4           kubernetes.io/os=linux          2d9h
hostconfig-controller-manager    4         4         4       4            4           <none>                          41h
```

{% endtab %}
{% endtabs %}

#### Luigi HostNetworkTemplate Object <a href="#luigi-hostnetworktemplate-object" id="luigi-hostnetworktemplate-object"></a>

A `HostNetworkTemplate` lets us enable the desired number of VFs on our nodes, as well as specify which driver is loaded on the newly enabled VFs.

The **nodeSelector** field restricts SR-IOV configuration to SR-IOV-capable nodes.

{% hint style="info" %}
**Note**: The NFD (Node Feature Discovery) plugin must be installed first in order to leverage the label **`feature.node.kubernetes.io/network-sriov.capable: "true"`**.
{% endhint %}

**For example:**

{% tabs %}
{% tab title="YAML" %}

```yaml
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetworkTemplate
metadata:
  name: sriovconfig-enp3s0f1
spec:
  # Add fields here
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
    testlabelA: "123"
  sriovConfig:
    - pfName: enp3s0f1
      numVfs: 8
      vfDriver: vfio-pci
      mtu: 9000
---
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetworkTemplate
metadata:
  name: sriovconfig-enp3s0f0
spec:
  # Add fields here
  sriovConfig:
    - pfName: enp3s0f0
      numVfs: 4
      vfDriver: ixgbevf
```

{% endtab %}
{% endtabs %}

In this definition, we create two `HostNetworkTemplate` objects: one for SR-IOV using the `vfio-pci` driver and the other for SR-IOV using the kernel driver. The first will only configure VFs on nodes that meet all **3** criteria: **SR-IOV capable, a PF named `enp3s0f1`, and a label with the key-value pair `testlabelA: "123"`**.

The second will attempt to configure 4 VFs with the `ixgbevf` driver across all nodes, regardless of their labels.

It is also possible to merge the two `sriovConfig` entries for the two interfaces into the same `HostNetworkTemplate` CRD, rather than creating a separate one for each PF. For example:

{% tabs %}
{% tab title="YAML" %}

```yaml
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetworkTemplate
metadata:
  name: hostnetworktemplate-kernel-enp3
spec:
  # Add fields here
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
    testlabelA: "123"
  sriovConfig:
    - pfName: enp3s0f1
      numVfs: 8
      vfDriver: ixgbevf
      mtu: 9000
    - pfName: enp3s0f0
      numVfs: 4
      vfDriver: ixgbevf
```

{% endtab %}
{% endtabs %}

#### Configuring via vendor and device ID <a href="#configuring-via-vendor-and-device-id" id="configuring-via-vendor-and-device-id"></a>

The following manifest searches for all interfaces matching vendor ID `8086` (Intel) and device ID `1528` (a particular model of NIC). It then creates 32 VFs on each matching device and binds all of them to `vfio-pci` (the DPDK driver). This is useful if you don't know the interface naming scheme or PCI addresses across your hosts, but want to target a particular NIC model by vendor and device ID.

{% tabs %}
{% tab title="YAML" %}

```yaml
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetworkTemplate
metadata:
  name: hostnetworktemplate-1528-dev
spec:
  # Add fields here
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
    testlabelA: "123"
  sriovConfig:
    - vendorId: "8086"
      deviceId: "1528"
      numVfs: 32
      vfDriver: vfio-pci
```

{% endtab %}
{% endtabs %}

#### Configuring via PCI address <a href="#configuring-via-pci-address" id="configuring-via-pci-address"></a>

{% tabs %}
{% tab title="YAML" %}

```yaml
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetworkTemplate
metadata:
  name: hostnetworktemplate-sample
spec:
  # Add fields here
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
    testlabelA: "123"
  sriovConfig:
    - pciAddr: 0000:03:00.0
      numVfs: 32
      vfDriver: vfio-pci
    - pciAddr: 0000:03:00.1
      numVfs: 32
      vfDriver: vfio-pci
```

{% endtab %}
{% endtabs %}

The above configures 32 VFs on the PF matching PCI address `0000:03:00.0` and 32 VFs on PCI address `0000:03:00.1`, for a total of 64 VFs, and binds each VF to the `vfio-pci` driver.

#### HostNetwork CRD <a href="#hostnetwork-crd" id="hostnetwork-crd"></a>

This is not to be confused with the `Status` section of the `HostNetworkTemplate` CRD. (In later phases, each node will append its `nodename` and the success or failure of applying the `HostNetworkTemplate` policy to the `Status` section.)

The `HostNetwork` CRD, by contrast, is not created by the user. It is intended to be read-only; the DaemonSet operator on each node discovers and populates various host settings into it:

* Created: when the HostPlumber plugin is first deployed.
* Updated: after each application of a `HostNetworkTemplate` CRD.
* For Phase 3 and later: monitored as a periodic task that discovers host changes and updates the CRD.

There will be one `HostNetwork` CRD automatically created for each node. Its `Name` corresponds to the node's name, which in PMK is the node's IP address.

{% tabs %}
{% tab title="Bash" %}

```
[root@arjunpmk-master ~]$ kubectl get HostNetwork -n luigi-system
NAME             AGE
10.128.144.14    28s
10.128.144.43    25s
10.128.237.203   27s
10.128.237.204   26s
```

{% endtab %}
{% endtabs %}

Fetching one of the nodes with `-o yaml` reveals:

{% tabs %}
{% tab title="Bash" %}

```
[root@arjunpmk-master ~]$ kubectl get HostNetwork 10.128.237.204 -n luigi-system -o yaml
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetwork
metadata:
  creationTimestamp: "2020-10-20T08:22:58Z"
  generation: 1
  name: 10.128.237.204
  namespace: luigi-system
  resourceVersion: "3751"
  selfLink: /apis/plumber.k8s.pf9.io/v1/namespaces/luigi-system/HostNetworks/10.128.237.204
  uid: 13bfa4ca-2546-43dc-8074-9943f054674e
spec:
  interfaceStatus:
  - currentSriovConfig:
      totalVfs: 63
    deviceId: "1528"
    mac: a0:36:9f:43:54:54
    mtu: 1500
    pciAddr: "0000:03:00.0"
    pfDriver: ixgbe
    pfName: enp3s0f0
    sriovEnabled: true
    vendorId: "8086"
  - currentSriovConfig:
      totalVfs: 63
    deviceId: "1528"
    mac: a0:36:9f:43:54:56
    mtu: 9000
    pciAddr: "0000:03:00.1"
    pfDriver: ixgbe
    pfName: enp3s0f1
    sriovEnabled: true
    vendorId: "8086"
status: {}
```

{% endtab %}
{% endtabs %}

The output above shows the node's two SR-IOV-capable interfaces before any VFs have been configured on them. Now let's look at the same node after the `HostNetworkTemplate` objects have been applied.

{% tabs %}
{% tab title="Bash" %}

```yaml
[root@arjunpmk-master ~]# kubectl get HostNetwork 10.128.237.204 -n luigi-system -o yaml
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetwork
metadata:
  creationTimestamp: "2020-10-20T08:25:29Z"
  generation: 1
  name: 10.128.237.204
  namespace: luigi-system
  resourceVersion: "4273"
  selfLink: /apis/plumber.k8s.pf9.io/v1/namespaces/luigi-system/HostNetworks/10.128.237.204
  uid: f8b51add-c014-4a91-893c-1c0016ce94de
spec:
  interfaceStatus:
  - currentSriovConfig:
      numVfs: 4
      totalVfs: 63
      vfs:
      - id: 0
        mac: "00:00:00:00:00:00"
        pciAddr: "0000:03:10.0"
        qos: 0
        spoofchk: true
        trust: false
        vfDriver: ixgbevf
        vlan: 0
      - id: 1
        mac: "00:00:00:00:00:00"
        pciAddr: "0000:03:10.2"
        qos: 0
        spoofchk: true
        trust: false
        vfDriver: ixgbevf
        vlan: 0
      - id: 2
        mac: "00:00:00:00:00:00"
        pciAddr: "0000:03:10.4"
        qos: 0
        spoofchk: true
        trust: false
        vfDriver: ixgbevf
        vlan: 0
      - id: 3
        mac: "00:00:00:00:00:00"
        pciAddr: "0000:03:10.6"
        qos: 0
        spoofchk: true
        trust: false
        vfDriver: ixgbevf
        vlan: 0
    deviceId: "1528"
    mac: a0:36:9f:43:54:54
    mtu: 1500
    pciAddr: "0000:03:00.0"
    pfDriver: ixgbe
    pfName: enp3s0f0
    sriovEnabled: true
    vendorId: "8086"
  - currentSriovConfig:
      numVfs: 4
      totalVfs: 63
      vfs:
      - id: 0
        mac: "00:00:00:00:00:00"
        pciAddr: "0000:03:10.1"
        qos: 0
        spoofchk: true
        trust: false
        vfDriver: vfio-pci
        vlan: 0
      - id: 1
        mac: "00:00:00:00:00:00"
        pciAddr: "0000:03:10.3"
        qos: 0
        spoofchk: true
        trust: false
        vfDriver: vfio-pci
        vlan: 0
      - id: 2
        mac: "00:00:00:00:00:00"
        pciAddr: "0000:03:10.5"
        qos: 0
        spoofchk: true
        trust: false
        vfDriver: vfio-pci
        vlan: 0
      - id: 3
        mac: "00:00:00:00:00:00"
        pciAddr: "0000:03:10.7"
        qos: 0
        spoofchk: true
        trust: false
        vfDriver: vfio-pci
        vlan: 0
    deviceId: "1528"
    mac: a0:36:9f:43:54:56
    mtu: 9000
    pciAddr: "0000:03:00.1"
    pfDriver: ixgbe
    pfName: enp3s0f1
    sriovEnabled: true
    vendorId: "8086"
status: {}
```

{% endtab %}
{% endtabs %}

There are 2 interfaces, each with 4 VFs. For each PF, and each VF underneath it, you can see the device and vendor IDs, PCI address, bound driver, allocated MAC address (if assigned to a Pod), VLAN, and other link-level information.

In future phases, more information will be reported, such as IP addresses, IP routes, and other networking-related host state.

#### Validate Luigi HostNetworkTemplate Object (hostPlumber)  <a href="#validate-luigi-hostnetworktemplate-object-hostplumber" id="validate-luigi-hostnetworktemplate-object-hostplumber"></a>

{% tabs %}
{% tab title="Bash" %}

```
pf9-0102:DPDK carlos$ kubectl get hostnetworktemplate
NAME                   AGE
sriovconfig-enp3s0f0   13s
sriovconfig-enp3s0f1   13s
```

{% endtab %}
{% endtabs %}

#### Luigi Networking Plugins Installation (multus, sriov, whereabouts, and nodeFeatureDiscovery) <a href="#luigi-networking-plugins-installation-multus-sriov-whereabouts-and-nodefeaturediscovery" id="luigi-networking-plugins-installation-multus-sriov-whereabouts-and-nodefeaturediscovery"></a>

Let’s revisit **`sampleplugins.yaml`**

Uncomment the rest of the lines as appropriate for your environment and reapply the manifest. This will install Multus, SR-IOV, and Whereabouts.

```
$ kubectl apply -f sampleplugins.yaml
```

{% tabs %}
{% tab title="YAML" %}

```yaml
apiVersion: plumber.k8s.pf9.io/v1
kind: NetworkPlugins
metadata:
  name: networkplugins-sample11
spec:
  # Add fields here
  plugins:
    hostPlumber: 
      namespace: hostplumber
      #hostPlumberImage: "platform9/luigi-plumber:v0.1.0"
    nodeFeatureDiscovery: {}
    multus: {}
    whereabouts: {}
    # COMMENT/UNCOMMENT - requires all host networking/VFs to be configured first
    sriov: {}
```

{% endtab %}
{% endtabs %}
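Once Multus and Whereabouts are running, secondary Pod networks are defined through `NetworkAttachmentDefinition` objects. The sketch below is purely illustrative (the attachment name, master interface, and subnet are assumptions, not values from this guide); it shows a MACVLAN attachment using Whereabouts for IPAM:

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-whereabouts   # illustrative name
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "enp3s0f0",
    "mode": "bridge",
    "ipam": {
      "type": "whereabouts",
      "range": "10.20.0.0/24"
    }
  }'
```

A Pod would then attach to this network by adding the annotation `k8s.v1.cni.cncf.io/networks: macvlan-whereabouts` to its metadata.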

#### Validate Luigi Networking Plugins Installation (Multus, SR-IOV, Whereabouts, and nodeFeatureDiscovery) <a href="#validate-luigi-networking-plugins-installation-multus-sriov-whereabouts-and-nodefeaturediscovery" id="validate-luigi-networking-plugins-installation-multus-sriov-whereabouts-and-nodefeaturediscovery"></a>

Let’s review our work by listing the new DaemonSets created in the **`kube-system`** namespace.

```
pf9-0102:DPDK carlos$ kubectl get ds -n kube-system
NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                   AGE
calico-node                      4         4         4       4            4           kubernetes.io/os=linux          2d10h
hostconfig-controller-manager    4         4         4       4            4           <none>                          42h
kube-multus-ds-amd64             4         4         4       4            4           kubernetes.io/arch=amd64        46h
kube-sriov-cni-ds-amd64          4         4         4       4            4           beta.kubernetes.io/arch=amd64   47h
kube-sriov-device-plugin-amd64   4         4         4       4            4           beta.kubernetes.io/arch=amd64   47h
whereabouts                      4         4         4       4            4           beta.kubernetes.io/arch=amd64   47h

pf9-0102:DPDK carlos$ kubectl get ds -n node-feature-discovery
NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
nfd-worker   3         3         3       3            3           <none>          66s
```

{% hint style="info" %}
**Note**: The **`kube-sriov-device-plugin-amd64`** pods will remain in the **ContainerCreating** or **Pending** state until the **`sriov-config-map`** is created. The **`sriov-config-map`** is created in the SR-IOV section of this document.
{% endhint %}
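For orientation only, the SR-IOV device plugin consumes its configuration as a ConfigMap in `kube-system` containing a `config.json` resource list. The sketch below is an illustrative example of that format, not the map used by this guide (the ConfigMap name and resource name here are assumptions); use the exact `sriov-config-map` given in the SR-IOV section of this document:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sriovdp-config        # illustrative; this guide's map is named sriov-config-map
  namespace: kube-system
data:
  config.json: |
    {
      "resourceList": [
        {
          "resourceName": "intel_sriov_vfio",
          "selectors": {
            "vendors": ["8086"],
            "drivers": ["vfio-pci"]
          }
        }
      ]
    }
```

Each entry in `resourceList` groups matching VFs into a named extended resource that Pods can then request in their resource limits.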
