# Delete Orphaned Virtual Machine Entries

When a virtual machine is deleted, migrated, or evacuated, its corresponding allocation on the source hypervisor host is sometimes not removed from the PCD database.

## Root Cause

* If the compute service on the source hypervisor host stops abruptly while a VM is being deleted, migrated, or evacuated, the VM's records in the PCD management database may not be fully deleted.
* While the service is down, the host-side agent cannot report the operation to nova-conductor on the controller. The details are never shared, so nova-compute may come back up still believing it hosts the virtual machine, leaving a stale allocation behind.

## Resolution

Use the following steps to locate and remove the orphaned allocations. Run the `nova-manage` commands from inside a nova-api pod (see step 1), and the `pcdctl` commands from a host where `pcdctl` is configured.

{% stepper %}
{% step %}

### 1. List stale allocations

Run `nova-manage placement audit` to list allocations belonging to virtual machines that have been deleted or moved to other hypervisor hosts.

{% code title="List stale allocations (run from a pod with nova-manage)" %}

```bash
kubectl exec -it deploy/nova_api_osapi -n <NS> -- bash
nova-manage placement audit --verbose
```

{% endcode %}

Review the output and note the instance UUIDs of the suspected orphaned allocations.
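
As a hypothetical sketch of triaging the report, the audit output can be saved to a file and the instance UUIDs pulled out with a regular expression. The sample lines below are illustrative only, not real audit output; the exact wording varies by release.

```shell
# Illustrative stand-in for `nova-manage placement audit --verbose > audit.txt`;
# the real wording differs, but the consumer (instance) UUIDs appear in the text.
cat > audit.txt <<'EOF'
Allocations for consumer 11111111-2222-3333-4444-555555555555 on provider hv-source-01 look orphaned
Allocations for consumer 11111111-2222-3333-4444-555555555555 on provider hv-source-01 look orphaned
EOF

# Extract the unique UUIDs so each one can be checked in the next step.
grep -oE '[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}' audit.txt | sort -u > suspect-uuids.txt
cat suspect-uuids.txt
```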
{% endstep %}

{% step %}

### 2. Verify VM existence / location

Confirm whether the VM still exists or is hosted on another hypervisor host.

{% code title="List servers on source host" %}

```bash
pcdctl server list --all-projects --host <Source Host>
```

{% endcode %}

{% code title="Show instance host" %}

```bash
pcdctl server show -c OS-EXT-SRV-ATTR:host <instance-ID>
```

{% endcode %}
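
To check every suspect at once, a small loop over the UUID list can generate the show command for each instance. This is a sketch that assumes a hypothetical `suspect-uuids.txt` file with one instance UUID per line; it echoes the commands rather than executing them.

```shell
# Hypothetical input: one instance UUID per line (the example UUID is made up).
printf '%s\n' 11111111-2222-3333-4444-555555555555 > suspect-uuids.txt

# Echo the verification command for each UUID; remove `echo` to actually run them.
while read -r uuid; do
  echo pcdctl server show -c OS-EXT-SRV-ATTR:host "$uuid"
done < suspect-uuids.txt
```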
{% endstep %}

{% step %}

### 3. Identify the resource provider ID (if needed)

List the resource providers to find the UUID of the provider (typically named after the hypervisor host) that holds the stale allocation.

{% code title="List resource providers" %}

```bash
pcdctl resource provider list
```

{% endcode %}
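
Provider names normally match hypervisor hostnames, so the provider UUID for the source host can be picked out of the saved list. The table rows and hostname below are a made-up sample of the list output; substitute your own host.

```shell
# Made-up sample rows of `pcdctl resource provider list` (uuid | name | generation).
cat > providers.txt <<'EOF'
| 99999999-8888-7777-6666-555555555555 | hv-source-01 | 42 |
| aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee | hv-dest-02   | 17 |
EOF

# Print the UUID of the provider whose name matches the source host.
awk -F'|' '$3 ~ /hv-source-01/ {gsub(/ /, "", $2); print $2}' providers.txt
```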
{% endstep %}

{% step %}

### 4. Delete orphaned allocations on a specific resource provider

After validating that the allocations are stale and the instances have been deleted or moved to another compute node, delete the orphaned allocations. The audit cannot target an individual instance; the narrowest scope is a single resource provider, selected with the UUID from step 3.

{% code title="Delete allocations on a specific provider" %}

```bash
kubectl exec -it <nova_api_osapi-pod_name> -n <NS> -- bash
nova-manage placement audit --verbose --delete --resource_provider <resource-provider-UUID>
```

{% endcode %}
{% endstep %}

{% step %}

### 5. Delete multiple/all orphaned allocations

If multiple orphaned allocations exist across providers, delete them all at once. Without a `--resource_provider` filter, `--delete` removes every orphaned allocation the audit finds.

{% code title="Delete all orphaned allocations" %}

```bash
nova-manage placement audit --verbose --delete
```

{% endcode %}
{% endstep %}

{% step %}

### 6. Heal allocations

After the deletions, run `heal_allocations` to recreate any missing Placement entries and keep them consistent with the instances Nova tracks. On recent Nova releases you can preview the changes first with the `--dry-run` flag.

{% code title="Heal allocations" %}

```bash
nova-manage placement heal_allocations
```

{% endcode %}
{% endstep %}

{% step %}

### 7. Validate there are no more orphaned allocations

Re-run the audit to confirm that no orphaned allocations remain.

{% code title="Validate no orphaned allocations remain" %}

```bash
nova-manage placement audit --verbose
```

{% endcode %}
{% endstep %}
{% endstepper %}
