How to Safely Remove a Node from a Proxmox Cluster
Step-by-step guide to removing a node from a Proxmox VE cluster. Covers VM migration, pvecm delnode, corosync cleanup, and expected votes adjustment.
When to Remove a Node from Your Cluster
There are several reasons you might need to remove a node from a Proxmox VE cluster: hardware retirement, downsizing your infrastructure, replacing a failed server, or restructuring your cluster layout. Whatever the reason, the process must be done carefully to avoid disrupting running workloads or breaking cluster quorum.
This guide walks you through every step, from migrating virtual machines off the node to cleaning up leftover configuration files. If you manage multiple Proxmox environments, tools like ProxmoxR can help you keep track of which nodes are active across your clusters.
Step 1: Migrate All VMs and Containers Off the Node
Before removing a node, every VM and container must be moved to another node. You can do this through the web UI or the command line.
# List all VMs on the node you want to remove
qm list
# List all containers
pct list
# Live-migrate a running VM to another node
qm migrate 100 pve2 --online
# Migrate a stopped container
pct migrate 200 pve2
# Migrate a running container in restart mode (stop, migrate, start)
pct migrate 200 pve2 --restart
# For local storage, you may need to stop the VM first
qm shutdown 101
qm migrate 101 pve2
Verify that no VMs or containers remain on the node you are removing before proceeding. If any guests use local storage that cannot be migrated online, shut them down first and migrate them offline.
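Migrating many guests one by one is tedious, so the commands above can be generated by a small script. This is a sketch, not an official tool: it assumes `pve2` as the destination and the usual `qm list` column layout (VMID in column 1, status in column 3), which you should verify on your version before use.

```shell
#!/bin/sh
# Sketch: turn `qm list` output into qm migrate commands.
# Assumptions: destination node pve2; VMID in column 1, status in column 3.
TARGET=pve2

plan_migrations() {
    # Reads `qm list`-style text on stdin; prints one command per VM.
    awk -v target="$TARGET" 'NR > 1 {
        cmd = "qm migrate " $1 " " target
        if ($3 == "running") cmd = cmd " --online"
        print cmd
    }'
}

# Dry run first: review the commands, then pipe them into sh to execute.
qm list 2>/dev/null | plan_migrations
```

The same pattern extends to containers by swapping in pct list and pct migrate.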
Step 2: Check HA Configuration
If you use Proxmox High Availability, remove any HA resources assigned to the node being decommissioned.
# List HA resources
ha-manager status
# Remove HA resource for a specific VM
ha-manager remove vm:100
# Alternatively, edit the HA group to exclude the node
ha-manager groupset mygroup --nodes pve1,pve3
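To find every HA resource still placed on the node being decommissioned, you can filter the ha-manager status output. A sketch, assuming the node is named `pve4` and that status lines follow the usual `service vm:100 (pve4, started)` shape:

```shell
#!/bin/sh
# Sketch: print an ha-manager remove command for each HA resource that is
# currently placed on NODE. Assumptions: NODE is pve4, and `ha-manager
# status` prints lines like "service vm:100 (pve4, started)".
NODE=pve4

plan_ha_removal() {
    awk -v node="$NODE" '$1 == "service" && index($0, "(" node ",") {
        print "ha-manager remove " $2
    }'
}

# Dry run: review the printed commands before piping them into sh.
ha-manager status 2>/dev/null | plan_ha_removal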
Step 3: Remove the Node from the Cluster
Once all workloads are migrated, you can remove the node. This must be done from a remaining node in the cluster, not from the node being removed.
# On a remaining node, check current cluster status
pvecm status
# Remove the node (use the exact hostname)
pvecm delnode pve4
# Verify the node is gone
pvecm nodes
If the node is still online and reachable, it is a good idea to stop the cluster services on it first:
# On the node being removed, stop cluster services
systemctl stop pve-cluster
systemctl stop corosync
Step 4: Clean Up /etc/pve/nodes
After running pvecm delnode, Proxmox may leave behind the node's directory in the cluster filesystem. You need to remove this manually.
# Check if the old node directory still exists
ls /etc/pve/nodes/
# Remove the leftover directory
rm -rf /etc/pve/nodes/pve4
# Verify it is gone
ls /etc/pve/nodes/
Only remove the directory after confirming all VMs and containers have been migrated. The directory may contain configuration files for workloads that still reference the old node.
Step 5: Verify and Adjust Corosync Configuration
After node removal, check your corosync configuration to ensure the removed node is no longer listed and that expected votes are correct.
# View current corosync config
cat /etc/pve/corosync.conf
# The node section for pve4 should be gone
# Verify expected_votes matches your remaining node count
# Example corosync.conf quorum section for 3 remaining nodes
# quorum {
# provider: corosync_votequorum
# expected_votes: 3
# }
If the expected votes value is wrong, you can edit the corosync configuration. Be careful when editing this file directly.
# Edit corosync config (increment config_version when editing)
nano /etc/pve/corosync.conf
# After editing, verify cluster health
pvecm status
pvecm expected 3
Step 6: Clean Up the Removed Node
If you plan to reuse the removed node as a standalone Proxmox server or join it to a different cluster, you need to reset it.
# On the removed node, stop services
systemctl stop pve-cluster
systemctl stop corosync
# Remove cluster configuration
pmxcfs -l
# Delete cluster files
rm /etc/pve/corosync.conf
rm /etc/corosync/*
rm -rf /etc/pve/nodes/*
rm /var/lib/corosync/*
# Restart services
systemctl start pve-cluster
# The node is now standalone again
Troubleshooting Common Issues
If pvecm delnode fails because the node is unreachable, you may see quorum issues. In that case, temporarily adjust the expected votes:
# Temporarily set expected votes to maintain quorum
pvecm expected 2
# Now retry delnode
pvecm delnode pve4
If you see errors about SSH keys or known hosts, remove the stale entries:
# Remove old SSH known host entry
ssh-keygen -R pve4
ssh-keygen -R 192.168.1.14
Verifying a Clean Removal
After completing all steps, run these checks to confirm the node has been fully removed:
# Check cluster membership
pvecm nodes
# Check cluster status
pvecm status
# Verify no leftover node directories
ls /etc/pve/nodes/
# Confirm quorum is healthy
pvecm expected
A clean node removal ensures your cluster remains healthy and stable. Taking the time to properly migrate workloads, clean up configuration files, and verify quorum will prevent unexpected issues down the line.
Take Proxmox management mobile
All the features discussed in this guide — accessible from your phone with ProxmoxR. Real-time monitoring, power control, firewall management, and more.