# Enable Platform9 DHCP

{% hint style="info" %}
**Recommended for KubeVirt Installations**

Platform9 DHCP is the recommended DHCP solution (instead of whereabouts) for KubeVirt installations. When using whereabouts, an issue occurs during live migration: the virtual machine's IP address changes when the VM is migrated to the target host.
{% endhint %}

### What is Platform9 DHCP <a href="#what-is-platform9-dhcp" id="what-is-platform9-dhcp"></a>

Platform9 created an alternative to whereabouts for the KubeVirt use case: a DHCP server runs inside a pod and serves DHCP requests from the virtual machine instances (not from the pods themselves, in the case of KubeVirt). The Multus NetworkAttachmentDefinition uses this DHCP server, so there is no need to specify an IPAM plugin. The client/consumer VM needs a DHCP client such as dhclient to send DHCP requests.

### Prerequisites <a href="#prerequisites" id="prerequisites"></a>

* KubeVirt
* Advanced Networking Operator (Luigi)
* Kubemacpool

### Enabling the Platform9 DHCP Addon <a href="#enabling-the-platform9-dhcp-addon" id="enabling-the-platform9-dhcp-addon"></a>

* Using the Advanced Networking Operator (Luigi) add-on, enable the *dhcpController* plugin by creating a NetworkPlugins resource as shown:

```
apiVersion: plumber.k8s.pf9.io/v1
kind: NetworkPlugins
metadata:
  name: networkplugins-sample-nosriov
spec:
  plugins:
    ...
    dhcpController: {}
```
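Assuming the manifest above is saved as `networkplugins.yaml` (the filename is illustrative), it can be applied and the resulting components checked with standard kubectl commands:

```shell
# Apply the NetworkPlugins resource
kubectl apply -f networkplugins.yaml

# Verify that the DHCP controller components came up
# in the dhcp-controller-system namespace
kubectl get pods -n dhcp-controller-system
```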

* This creates the `dhcp-controller-system` namespace containing the controller for dnsmasq. It also installs Kubemacpool in the `dhcp-controller-system` namespace.
* Create a `HostNetworkTemplate` and a `NetworkAttachmentDefinition`:

```
apiVersion: plumber.k8s.pf9.io/v1
kind: HostNetworkTemplate
metadata:
  name: ovs-br03-config
spec:
  # Add fields here
  ovsConfig:
    - bridgeName: "ovs-br03"
      nodeInterface: "ens5"
```

```
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ovs-dnsmasq-test-woipam
  annotations:
    k8s.v1.cni.cncf.io/resourceName: ovs-cni.network.kubevirt.io/ovs-br03
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "ovs",
      "name": "ovs-dnsmasq-test-woipam",
      "bridge": "ovs-br03"
    }'
```
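After applying both manifests (for example with `kubectl apply -f <file>`), the new resources can be confirmed. The commands below are a sketch; the `hostnetworktemplate` resource name is assumed from the `HostNetworkTemplate` kind above:

```shell
# Confirm the host bridge configuration was accepted
kubectl get hostnetworktemplate ovs-br03-config

# Confirm the attachment definition exists (net-attach-def is the
# standard short name for NetworkAttachmentDefinition)
kubectl get net-attach-def ovs-dnsmasq-test-woipam -o yaml
```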

* A `NetworkAttachmentDefinition` used with Platform9 IPAM omits the `ipam` section from the config; that section is present when whereabouts is used as the IPAM. Platform9 IPAM also supports OVS-DPDK networks.
* Create a DHCPServer:

```
apiVersion: dhcp.plumber.k8s.pf9.io/v1alpha1 
kind: DHCPServer
metadata:
  name: dhcpserver-sample
spec:
  networks:
    - networkName: ovs-dnsmasq-test-woipam
      interfaceIp: 192.168.15.14/24
      leaseDuration: 10m
      vlanId: vlan1
      cidr:
        range: 192.168.15.0/24
        range_start: 192.168.15.30
        range_end: 192.168.15.100
        gateway: 192.168.15.1
```
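Once the DHCPServer resource is applied, the controller generates a dnsmasq configuration. A quick sanity check might look like the following (the filename is illustrative; the default namespace is assumed):

```shell
kubectl apply -f dhcpserver.yaml

# The generated dnsmasq configuration lives in a ConfigMap
# with the same name as the DHCPServer
kubectl get configmap dhcpserver-sample -o yaml
```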

* About the fields:
  * **name**: Name of the DHCPServer. The dnsmasq configuration is generated in a ConfigMap with the same name.
  * **networks**: List of all networks that this pod will serve.
    * **networkName**: Name of the NetworkAttachmentDefinition. It should not have the dhcp plugin enabled.
    * **interfaceIp**: IP address allocated to the pod. It must include a prefix length so that the proper routes are added.
    * **leaseDuration**: How long the offered leases remain valid. Use a format valid for dnsmasq (e.g. 10m, 5h). Defaults to 1h.
    * **vlanId**: Dnsmasq network identifier. Used as an identifier when restoring IPs.
    * **cidr**: `range` is compulsory; `range_start`, `range_end`, and `gateway` are optional. If `range_start` and `range_end` are provided, they are used in place of the default start and end.
* A ConfigMap is generated based on the DHCPServer; it holds the conf file for dnsmasq. It can be overridden by creating a valid ConfigMap with the same name as the DHCPServer.

{% hint style="info" %}
For any specific configurations, you can provide your own configmap. Create a configmap with valid dnsmasq.conf parameters. Along with this, dhcp-range **must** be in one of these two formats

1. `dhcp-range=<start_ip>,<end_ip>,<netmask>,<leasetime>`
2. `dhcp-range=<vlanID>,<start_ip>,<end_ip>,<netmask>,<leasetime>`
{% endhint %}
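As an illustration, an override ConfigMap could look like the sketch below. The option values are hypothetical, and the `dnsmasq.conf` data key is an assumption; any valid dnsmasq.conf parameters can be used as long as `dhcp-range` follows one of the two formats above.

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: dhcpserver-sample   # must match the DHCPServer name
  namespace: default
data:
  dnsmasq.conf: |
    port=0
    dhcp-range=vlan1,192.168.15.30,192.168.15.100,255.255.255.0,10m
```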

* Sample VM YAML to apply:

```
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: test-ovs-interface-1
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: testvm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
          - name: default
            masquerade: {}
          - bridge: {}
            name: ovs-br03
        resources:
          requests:
            memory: 1024M
      hostname: myhostname1
      networks:
      - name: default
        pod: {}
      - name: ovs-br03
        multus:
          networkName: ovs-dnsmasq-test-woipam
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/fedora-cloud-container-disk-demo:latest
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |-
              #cloud-config
              password: fedora
              chpasswd: { expire: False }
```
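Once the VM above is running, one way to confirm that it received a lease on the secondary interface is shown below (a sketch, assuming `virtctl` is installed and a DHCP client runs on that interface in the guest):

```shell
# Check that the VMI is running and inspect its reported interfaces
kubectl get vmi test-ovs-interface-1
kubectl get vmi test-ovs-interface-1 -o jsonpath='{.status.interfaces}'

# Or log in on the serial console and run the DHCP client manually
virtctl console test-ovs-interface-1
```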

* Sample StatefulSet YAML to apply:

```
---
apiVersion: v1
kind: Service
metadata:
  name: headless
spec:
  ports:
  - port: 80
    name: web
  selector:
    app: test2
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test2
  serviceName: headless
  template:
    metadata:
      labels:
        app: test2
      annotations:
        k8s.v1.cni.cncf.io/networks: '[
       { "name" : "ovs-dnsmasq-test-woipam"}
      ]'
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
      containers:
      - name: test2
        image: alpine
        ports:
        - containerPort: 80
          name: web
        command: ["/bin/sh"]
        args:
          - -c
          - >-
              udhcpc -i net1;
              tail -f /dev/null
        securityContext:
          runAsUser: 0
          #allowPrivilegeEscalation: false
          capabilities:
            add: ["NET_ADMIN"]
```
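After the StatefulSet pod starts and `udhcpc` runs, the lease on the secondary interface can be checked. The commands below are an illustrative sketch against the sample above:

```shell
# Confirm that the secondary Multus interface received a DHCP lease
kubectl exec test2-0 -- ip addr show net1

# The server-side lease should also appear as an IPAllocation object
kubectl get ipallocation
```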

* An IPAllocation is created for every lease stored by the server. It is used to restore leases back to the DHCPServer; leases are restored only to the vlanId mentioned. The lease expires at `leaseExpiry`. A sample IPAllocation looks like this:

```
apiVersion: dhcp.plumber.k8s.pf9.io/v1alpha1
kind: IPAllocation
metadata:
  creationTimestamp: "2022-11-09T12:18:58Z"
  generation: 1
  name: 192.168.15.90
  namespace: default
  resourceVersion: "189858"
  uid: 70ee31e1-d3b4-47f0-be92-0eb03cd33d57
spec:
  entityRef: test-ovs-interface-1
  leaseExpiry: "1667998138"
  macAddr: 1e:8d:f0:c4:6c:8e
  vlanId: vlan3
```
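The `leaseExpiry` field is a Unix epoch timestamp (seconds). It can be converted to a human-readable date with standard tools, for example:

```shell
# Convert the leaseExpiry value from the sample above to UTC (GNU date)
date -u -d @1667998138 +%Y-%m-%dT%H:%M:%SZ
# → 2022-11-09T12:48:58Z
```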
