How to Change the IP Address of a Proxmox VE Node
Step-by-step guide to changing the IP address on a Proxmox VE node, including network interfaces, hosts file, certificates, and cluster node updates.
When Do You Need to Change the IP?
Changing the IP address of a Proxmox VE node is sometimes necessary when restructuring your network, moving to a new subnet, or fixing an initial misconfiguration. Unlike a regular Linux server, Proxmox ties its IP to several configuration files, SSL certificates, and (in clusters) the corosync ring. Changing the IP requires updating multiple files in the correct order to avoid breaking access to the web UI or cluster communication.
Standalone Node: Change the IP
For a standalone Proxmox node (not in a cluster), the process touches three things: the network configuration, the hosts file, and the SSL certificates.
Step 1: Update /etc/network/interfaces
# Edit the network configuration:
nano /etc/network/interfaces
# Find the vmbr0 bridge (or your management interface):
auto vmbr0
iface vmbr0 inet static
address 10.0.0.50/24 # Change to new IP
gateway 10.0.0.1 # Update gateway if needed
bridge-ports ens18
bridge-stp off
bridge-fd 0
Step 2: Update /etc/hosts
# Edit the hosts file:
nano /etc/hosts
# Change the old IP to the new one:
10.0.0.50 pve1.homelab.local pve1
# Make sure 127.0.0.1 is NOT mapped to your hostname
# (Proxmox requires the real IP for the hostname)
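The substitution is easy to fumble, so it helps to rehearse it on a scratch file first. This sketch uses the guide's example hostname and a hypothetical old address of 10.0.0.49; in a real run you would apply the same sed to /etc/hosts (ideally after taking a backup copy):

```shell
# Rehearse on a scratch file; the old address 10.0.0.49 is hypothetical.
printf '10.0.0.49 pve1.homelab.local pve1\n' > /tmp/hosts.demo
# Escape the dots so only the literal old address matches:
sed -i 's/10\.0\.0\.49/10.0.0.50/' /tmp/hosts.demo
cat /tmp/hosts.demo   # -> 10.0.0.50 pve1.homelab.local pve1
```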
Step 3: Regenerate SSL Certificates
# Regenerate the self-signed certificate so it includes the new IP:
pvecm updatecerts --force
# Restart the proxy service so it serves the new certificate:
systemctl restart pveproxy
# If the old certificate is still being served, remove it and regenerate
# (pvecm is part of pve-cluster and is available on standalone nodes too):
rm /etc/pve/local/pve-ssl.pem /etc/pve/local/pve-ssl.key
pvecm updatecerts --force
systemctl restart pveproxy
Step 4: Apply Changes
# Apply the new configuration in place (ifupdown2, default since Proxmox VE 7):
ifreload -a
# Or restart networking:
systemctl restart networking
# Or reboot for a clean state:
reboot
After reboot, access the web UI at https://NEW-IP:8006.
Cluster Node: Change the IP
Changing the IP of a cluster node is more involved because corosync uses fixed IP addresses for inter-node communication. The general approach:
Step 1: Update Network and Hosts Files
# On the node being changed, update interfaces and hosts:
nano /etc/network/interfaces
# Update the IP as shown above
nano /etc/hosts
# Update the IP for this node
# Example: 10.0.0.60 pve2.homelab.local pve2
Step 2: Update /etc/hosts on ALL Cluster Nodes
# On EVERY other node in the cluster, update /etc/hosts:
# Replace the old IP with the new IP for the changed node
ssh root@pve1 "sed -i 's/10\.0\.0\.51/10.0.0.60/' /etc/hosts"
ssh root@pve3 "sed -i 's/10\.0\.0\.51/10.0.0.60/' /etc/hosts"
Step 3: Update Corosync Configuration
# Edit the corosync config on one node:
nano /etc/pve/corosync.conf
# Find the node entry and update the ring address:
node {
name: pve2
nodeid: 2
ring0_addr: 10.0.0.60 # New IP
quorum_votes: 1
}
# IMPORTANT: Increment the config_version number:
config_version: 3 # Increase by 1
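Editing /etc/pve/corosync.conf in place is risky: the Proxmox documentation recommends transforming a working copy and moving it back so pmxcfs propagates one consistent version. The version bump can also be scripted. The sketch below rehearses both edits on a scratch file with this guide's example addresses; the awk step normalizes the line's indentation to two spaces, which corosync accepts:

```shell
# Scratch stand-in for a working copy of /etc/pve/corosync.conf; in a real
# run: cp /etc/pve/corosync.conf to a working file, transform, move back.
printf 'totem {\n  config_version: 2\n}\nnode {\n  ring0_addr: 10.0.0.51\n}\n' \
  > /tmp/corosync.conf.new
# Update the ring address (example addresses from this guide):
sed -i 's/10\.0\.0\.51/10.0.0.60/' /tmp/corosync.conf.new
# Bump config_version by one (indentation normalized to two spaces):
awk '$1 == "config_version:" { printf "  config_version: %d\n", $2 + 1; next } { print }' \
  /tmp/corosync.conf.new > /tmp/corosync.conf.bump
mv /tmp/corosync.conf.bump /tmp/corosync.conf.new
# Review the result before moving it into place on a real system:
grep -E 'config_version|ring0_addr' /tmp/corosync.conf.new
```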
Step 4: Apply and Restart
# On the changed node, restart networking:
systemctl restart networking
# Regenerate certificates:
pvecm updatecerts --force
# Restart corosync and pve services:
systemctl restart corosync
systemctl restart pvedaemon
systemctl restart pveproxy
# Verify cluster status from any node:
pvecm status
pvecm nodes
Verify the Change
# Check the node is accessible:
ping 10.0.0.60
# Verify web UI works:
curl -k https://10.0.0.60:8006
# Check cluster health (if applicable):
pvecm status
# Quorum should show all nodes as expected
# Verify the certificate matches the new IP:
openssl s_client -connect 10.0.0.60:8006 < /dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative"
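If you want to see what a healthy result looks like before checking the live node, the SAN inspection can be rehearsed offline. This sketch generates a throwaway self-signed certificate carrying an IP SAN (10.0.0.60 and the hostname are this guide's examples, and the file paths are scratch locations), then inspects it the same way you would pve-ssl.pem:

```shell
# Generate a throwaway key and cert with an IP SAN (requires OpenSSL 1.1.1+):
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.pem \
  -subj "/CN=pve2.homelab.local" \
  -addext "subjectAltName=IP:10.0.0.60,DNS:pve2.homelab.local"
# Inspect the SAN exactly as in the live check above; a correct cert lists
# the new IP under "X509v3 Subject Alternative Name":
openssl x509 -noout -text -in /tmp/demo.pem | grep -A1 "Subject Alternative"
```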
Common Issues
- Web UI unreachable: Check that /etc/hosts has the correct mapping and that pveproxy was restarted.
- Cluster communication broken: Verify corosync.conf has the new IP on all nodes and the config_version was incremented.
- Certificate warnings: Run pvecm updatecerts --force to regenerate certificates for the new IP.
- VM migration fails: Migration uses the IP in /etc/hosts. Ensure all nodes have the updated hosts file.
During an IP change, you may temporarily lose web UI access while services restart. ProxmoxR can connect to your Proxmox node using the new IP address immediately after the change, letting you verify that the API is responding and your VMs are still running — without needing to remember the new URL in a browser.
Summary
Changing the IP of a Proxmox VE node requires updating /etc/network/interfaces, /etc/hosts, and regenerating SSL certificates. For cluster nodes, you must also update /etc/hosts on all nodes and modify corosync.conf with the new address. Always plan IP changes during a maintenance window and have physical or out-of-band console access available in case something goes wrong.
Take Proxmox management mobile
All the features discussed in this guide — accessible from your phone with ProxmoxR. Real-time monitoring, power control, firewall management, and more.