How to Migrate VMs Between Proxmox Nodes

Complete guide to live migration and offline migration of VMs and containers between Proxmox VE nodes in a cluster.

Why Migrate VMs Between Proxmox Nodes?

Migration is one of the most important capabilities in a Proxmox cluster. Whether you're performing hardware maintenance, balancing workloads across nodes, or upgrading a server, being able to move VMs and containers between nodes without disruption is essential for any production environment.

Proxmox VE supports both online (live) migration and offline migration, each suited to different scenarios. This guide covers both methods, the requirements for each, common pitfalls, and how to handle bulk migrations efficiently.

Online (Live) Migration vs Offline Migration

Live Migration

Live migration moves a running VM from one node to another with minimal downtime, typically a pause of a few hundred milliseconds or less at the final switchover. The VM continues running throughout the process. This is ideal for production workloads where downtime is unacceptable.

Offline Migration

Offline migration moves a stopped VM or container. The VM must be shut down first, then its disk and configuration are transferred to the target node. This is simpler, works with local storage, and is useful for non-critical workloads or initial setup.

Quick Comparison

Feature                   | Live Migration                        | Offline Migration
VM state during migration | Running                               | Stopped
Downtime                  | Milliseconds                          | Minutes (depends on disk size)
Shared storage required   | Yes (unless migrating local disks)    | No
Data transferred          | RAM state only (with shared storage)  | Full disk contents
Local disk support        | Yes (with --with-local-disks)         | Yes

Migration Requirements

Before attempting a migration, ensure these requirements are met:

  • Both nodes must be in the same Proxmox cluster
  • The target node has sufficient resources (CPU, RAM, storage space)
  • CPU compatibility: The target node's CPU must support all features used by the VM (or use the x86-64-v2-AES CPU type for portability)
  • Network connectivity: Both nodes must be able to communicate over the migration network
  • Storage accessibility: For live migration, the VM's storage must be accessible from both source and target nodes
  • No local resources: PCI passthrough devices, local ISO mounts, or local backups will block live migration
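Several of these checks can be scripted before you hit the Migrate button. Below is a hypothetical pre-flight helper (not a built-in Proxmox tool); the function name and the grep patterns are assumptions chosen to match the config format that qm config prints:

```shell
#!/bin/bash
# Hypothetical pre-flight helper: scan a VM config (as printed by
# `qm config <vmid>`) for resources that commonly block live migration,
# i.e. PCI passthrough devices and CD-ROM drives that may reference
# local ISOs. Prints one line per suspect resource.

find_migration_blockers() {
    local config="$1"
    # hostpciN entries always block live migration; media=cdrom entries
    # need review, since an attached local ISO also prevents the move.
    printf '%s\n' "$config" | grep -E '^hostpci[0-9]+:|media=cdrom' || true
}

# On a real node you would run:
#   find_migration_blockers "$(qm config 100)"
```

An empty result does not guarantee success (CPU flags and storage availability still matter), but it catches the most common blockers before the migration task fails halfway through.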

Shared Storage vs Local Storage Migration

Migration with Shared Storage

When VMs are stored on shared storage (Ceph, NFS, iSCSI), live migration is fast because only the VM's RAM state needs to transfer. The disk data is already accessible from both nodes:

# Live migrate VM 100 to pve2 (shared storage)
qm migrate 100 pve2 --online

# The process:
# 1. RAM is copied iteratively to the target node
# 2. Final switchover pauses the VM for milliseconds
# 3. VM resumes on the target node

Migration with Local Storage

If your VM uses local storage (LVM, ZFS local, directory), you can still migrate by copying the disk data to the target node's local storage:

# Live migrate VM with local disks to pve2
qm migrate 100 pve2 --online --with-local-disks --targetstorage local-lvm

# Offline migrate (VM must be stopped first)
qm migrate 100 pve2 --targetstorage local-zfs

Local-to-local migration takes longer because the full disk must be transferred over the network. For large disks, this can take significant time.
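For a back-of-the-envelope estimate before a large local-to-local migration, you can derive a best-case transfer time from disk size and link speed. The numbers below are illustrative assumptions; real throughput is lower once SSH tunneling and storage overhead are factored in:

```shell
#!/bin/bash
# Rough lower bound for local-disk transfer time: size in GiB, link
# speed in Gbit/s. Real migrations are slower due to encryption and
# storage overhead, so treat the result as a best case.

estimate_transfer_seconds() {
    local disk_gib="$1" link_gbit="$2"
    # GiB -> bits (x 8 x 1024^3), divided by link speed in bits/s.
    echo $(( disk_gib * 8 * 1024 * 1024 * 1024 / (link_gbit * 1000000000) ))
}

estimate_transfer_seconds 500 10   # 500 GiB over 10GbE: ~429 s (about 7 minutes)
estimate_transfer_seconds 500 1    # same disk over 1GbE: ~4294 s (over an hour)
```

The tenfold gap between the two results is the practical argument for a dedicated 10GbE (or faster) migration network when local disks are involved.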

Migrating VMs (KVM)

Live Migration via CLI

# Basic live migration
qm migrate 100 pve2 --online

# With specific target storage
qm migrate 100 pve2 --online --targetstorage ceph-pool

# With local disk transfer
qm migrate 100 pve2 --online --with-local-disks --targetstorage local-lvm

# Force migration of a VM that uses local devices (root only)
qm migrate 100 pve2 --online --force

Offline Migration via CLI

# Stop the VM first
qm stop 100

# Migrate the stopped VM
qm migrate 100 pve2

# With target storage specification
qm migrate 100 pve2 --targetstorage local-zfs

# Start the VM on the new node
qm start 100

Migration via the Web UI

In the Proxmox web interface, right-click a VM and select Migrate. Choose the target node, select whether to use online migration, and optionally specify a target storage. The UI provides a progress window showing the migration status in real time.

Migrating Containers (LXC)

LXC container migration works similarly but uses the pct command:

# Restart migration of a running container (stop, transfer, start)
pct migrate 200 pve2 --restart

# Offline migration (container must be stopped)
pct stop 200
pct migrate 200 pve2
pct start 200

# Migrate to different storage
pct migrate 200 pve2 --targetstorage local-zfs

Note that LXC "live" migration is really restart migration: the container is stopped on the source node, its data is transferred, and it is started again on the target. The --restart flag automates this sequence, so unlike KVM live migration there is always a brief window of downtime.
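Because VMs and containers take different commands and flags, a small wrapper can build the right invocation from the guest type. This helper is purely illustrative; on a real node you would detect the type via qm list / pct list, while here it is passed in explicitly:

```shell
#!/bin/bash
# Illustrative wrapper: VMs migrate live with `qm`, while containers
# use `pct` restart-mode migration. Guest type is passed explicitly.

migrate_cmd() {
    local kind="$1" id="$2" target="$3"
    case "$kind" in
        vm) echo "qm migrate $id $target --online" ;;
        ct) echo "pct migrate $id $target --restart" ;;
        *)  echo "unknown guest type: $kind" >&2; return 1 ;;
    esac
}

# Print (or pipe to sh) the commands for a mixed batch:
migrate_cmd vm 100 pve2
migrate_cmd ct 200 pve2
```

Printing the commands first, rather than executing them directly, gives you a dry run you can review before evacuating a node.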

Bulk Migration

When you need to evacuate an entire node for maintenance, migrating VMs one at a time is tedious. Here are approaches for bulk migration:

Using a Shell Script

#!/bin/bash
# Migrate all running VMs from this node to pve2

TARGET="pve2"

# Get list of running VM IDs on this node
VMIDS=$(qm list | awk 'NR>1 && $3=="running" {print $1}')

for VMID in $VMIDS; do
    echo "Migrating VM $VMID to $TARGET..."
    qm migrate $VMID $TARGET --online
    if [ $? -eq 0 ]; then
        echo "VM $VMID migrated successfully"
    else
        echo "ERROR: VM $VMID migration failed"
    fi
done

Migrating All Containers

#!/bin/bash
# Migrate all containers from this node to pve2

TARGET="pve2"
CTIDS=$(pct list | awk 'NR>1 {print $1}')

for CTID in $CTIDS; do
    echo "Migrating container $CTID to $TARGET..."
    pct migrate $CTID $TARGET --restart
    if [ $? -eq 0 ]; then
        echo "Container $CTID migrated successfully"
    else
        echo "ERROR: Container $CTID migration failed"
    fi
done

Using the Proxmox Bulk Migration Feature

The Proxmox web interface also supports bulk actions. From the node's context menu, choose Bulk Actions > Migrate, select the guests to move and a target node, and the migrations are queued and processed sequentially.

Troubleshooting Migration Failures

Common Error: "Can't migrate VM with local disks"

# Solution: Use --with-local-disks and specify target storage
qm migrate 100 pve2 --online --with-local-disks --targetstorage local-lvm

Common Error: "Migration not possible - VM has PCI passthrough"

VMs with PCI passthrough devices (GPU, NIC, etc.) cannot be live migrated. You must remove the passthrough device, migrate, and re-add the device on the new node:

# Remove the PCI device from the VM config
qm set 100 --delete hostpci0

# Now migrate
qm migrate 100 pve2 --online

# Re-add PCI device on the new node (device address may differ)
qm set 100 --hostpci0 0000:01:00.0

Common Error: "Migration timeout"

This occurs with large RAM VMs where the memory keeps changing faster than it can be copied. Solutions include:

  • Route the migration over a dedicated network and lift the bandwidth cap: qm migrate 100 pve2 --online --migration_network 10.10.10.0/24 --bwlimit 0
  • Use a dedicated high-speed migration network (10GbE or faster)
  • Reduce the VM's write-heavy workload temporarily during migration
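Rather than passing --migration_network on every command, the migration network can be set cluster-wide in /etc/pve/datacenter.cfg so that all migrations use it by default. The subnet below is a placeholder for your own dedicated network:

```
# /etc/pve/datacenter.cfg
# Send all migration traffic over the dedicated 10.10.10.0/24 network.
# type=secure (the default) tunnels the transfer over SSH.
migration: type=secure,network=10.10.10.0/24
```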

Common Error: "Storage not available on target node"

# Check which storage is available on the target node
pvesh get /nodes/pve2/storage

# Specify an alternative target storage
qm migrate 100 pve2 --online --targetstorage ceph-pool

Migration Best Practices

  • Use shared storage whenever possible for fastest live migrations
  • Set up a dedicated migration network (10GbE) to avoid impacting VM traffic
  • Use a portable CPU type like x86-64-v2-AES to avoid CPU compatibility issues across heterogeneous nodes
  • Test migrations during low-traffic periods first to understand timing and impact
  • Monitor migrations through the task log in the Proxmox web UI or via ProxmoxR on your phone, which lets you track running tasks and confirm migrations complete successfully from wherever you are

Conclusion

VM migration is a foundational capability for maintaining a healthy Proxmox cluster. Live migration with shared storage provides near-zero downtime for production workloads, while offline migration with local storage handles scenarios where shared storage isn't available. For routine maintenance, bulk migration scripts save significant time. The key to reliable migrations is proper preparation: compatible CPU types, sufficient resources on the target node, and a dedicated migration network for large-scale operations.
