Proxmox Live Migration Failed: Fixing the Most Common Errors

Troubleshoot Proxmox VE live migration failures, including local disk errors, CPU incompatibility, network timeouts, and PCI passthrough restrictions.

How Live Migration Works in Proxmox

Live migration moves a running VM from one Proxmox node to another with minimal downtime. The process copies memory pages to the destination while the VM continues running, then performs a brief pause to transfer the final state. When it works, users barely notice. When it fails, you need to understand why and fix it quickly, especially during planned maintenance windows.
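
A toy model of that pre-copy loop makes the convergence condition concrete: remaining memory drains at the transfer speed minus the guest's dirty rate. All numbers below are illustrative, not values from a real migration:

```shell
# Toy pre-copy simulation (all numbers illustrative)
REMAINING=32768   # MB of RAM still to copy (a 32 GB VM)
SPEED=110         # MB/s the migration link can transfer
DIRTY=50          # MB/s the guest dirties while copying

rounds=0
while [ "$REMAINING" -gt 100 ]; do
    # Each one-second round copies SPEED MB, but the guest re-dirties DIRTY MB
    REMAINING=$(( REMAINING - SPEED + DIRTY ))
    rounds=$(( rounds + 1 ))
done
echo "reached final-pause threshold after ~$rounds seconds"   # 545 rounds with these numbers
```

If DIRTY were raised above SPEED, the loop would never terminate, which is exactly the non-convergence failure behind the timeout errors covered later in this guide.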

Error: "Can't Migrate VM with Local Disks"

This is the most common migration error. By default, Proxmox cannot live-migrate VMs that store their disks on local storage because the destination node does not have access to those disks.

# Check where the VM's disks are stored
qm config 100 | grep -E "scsi|virtio|ide|sata"

# Example output showing local storage:
# scsi0: local-lvm:vm-100-disk-0,size=32G

# Solution 1: Use --with-local-disks flag (copies disk over network)
qm migrate 100 pve2 --online --with-local-disks

# This works but can be slow for large disks
# Monitor progress in the task log

# Solution 2: Move the disk to shared storage first
qm move-disk 100 scsi0 shared-nfs

# Then migrate normally
qm migrate 100 pve2 --online

# Solution 3: Use shared storage from the start
# When creating VMs, store disks on NFS, iSCSI, or Ceph
# to enable seamless live migration

Migration with local disks transfers data over the cluster network. For a 100 GB disk on a 1 Gbps link, expect roughly 15-20 minutes of transfer time. Plan accordingly.
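
That estimate is simple arithmetic and worth redoing for your own disk sizes; a one-liner sketch, assuming ~110 MB/s of usable throughput on a mostly idle 1 Gbps link:

```shell
# Rough lower bound for disk transfer time
DISK_GB=100
THROUGHPUT_MBS=110   # practical ceiling of a 1 Gbps link
echo "$(( DISK_GB * 1024 / THROUGHPUT_MBS / 60 )) minutes minimum"   # prints "15 minutes minimum"
```

Real transfers take longer once snapshots, storage overhead, and competing traffic are factored in, which is why the article quotes 15-20 minutes rather than the bare minimum.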

Error: CPU Model Incompatible

If your cluster nodes have different CPU generations, a VM pinned to a specific CPU model on the source may not be able to run on the destination.

# Check current CPU model setting
qm config 100 | grep cpu

# Common problematic settings:
# cpu: host     <- exposes full host CPU, not portable
# cpu: Skylake-Server  <- requires Skylake or newer on destination

# Solution 1: Use a generic CPU type
qm set 100 --cpu x86-64-v2-AES
# or for broader compatibility:
qm set 100 --cpu kvm64

# Solution 2: Check what CPU types the destination supports
# On the destination node:
grep "model name" /proc/cpuinfo

# Solution 3: Use CPU type that matches lowest common denominator
# For mixed Intel generations, pick the oldest generation's type

# List the CPU types this node supports (via the API)
pvesh get /nodes/$(hostname)/capabilities/qemu/cpu

# If you must use 'host' type for performance, ensure ALL
# nodes in the cluster have the same CPU generation
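
The model name alone does not tell you which features differ between nodes; comparing the CPU flag lists in /proc/cpuinfo is more telling. A sketch that assumes bash and passwordless SSH to the other node (pve2 is a placeholder):

```shell
# Flags present on one node but missing on the other indicate
# CPU types that will not migrate cleanly (requires bash for <(...))
diff \
    <(grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort) \
    <(ssh pve2 "grep -m1 '^flags' /proc/cpuinfo" | tr ' ' '\n' | sort) \
    | grep '^[<>]'
```

An empty result means the two nodes expose identical CPU features and even `cpu: host` VMs should migrate between them.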

Error: Migration Timeout

Migration can time out when the VM's memory is being dirtied faster than it can be transferred to the destination, or when the network between nodes is too slow.

# Check migration network speed between nodes
iperf3 -s          # On destination node
iperf3 -c pve2     # On source node

# For a 1 Gbps link, maximum throughput is ~110 MB/s
# A VM writing memory at 120 MB/s will never converge

# Solution 1: Route migration traffic over a faster network
qm migrate 100 pve2 --online --migration_network 10.10.10.0/24

# Solution 2: Configure the migration network cluster-wide
# In /etc/pve/datacenter.cfg:
# migration: network=10.10.10.0/24,type=secure

# Solution 3: Increase the migration downtime tolerance
# This allows a longer pause at the final switchover
qm set 100 --migrate_downtime 1   # seconds; default is 0.1

# Solution 4: Temporarily reduce VM workload during migration
# If possible, stop heavy write operations in the guest

# Solution 5: Use a 10 Gbps network for migration traffic
# This is the recommended approach for production clusters

Error: Network Too Slow

This failure mode is related to timeouts, but occurs specifically when the migration network's bandwidth is insufficient for the VM's memory size and dirty rate.

# Monitor migration progress in real time
# In the web UI, watch the task viewer
# Or inspect the task index on disk (lists currently running tasks):
cat /var/log/pve/tasks/active

# Key metrics to watch:
# - "transferred" vs "remaining" memory
# - "dirty" rate (how fast memory is being modified)
# - "speed" (actual transfer rate)

# If dirty rate > transfer speed, migration will never complete

# Calculate minimum required bandwidth:
# VM RAM: 32 GB
# Dirty rate: 50 MB/s
# Minimum bandwidth needed: 50 MB/s + overhead ≈ 60 MB/s
# This requires at least a 1 Gbps dedicated link

Error: PCI Passthrough Blocks Migration

VMs with PCI or GPU passthrough devices cannot be live-migrated because the physical hardware is tied to the source node.

# Check for PCI passthrough devices
qm config 100 | grep hostpci

# Example:
# hostpci0: 0000:01:00.0

# There is no live migration with passthrough devices
# Your options are:

# Option 1: Stop the VM, migrate offline, restart
qm shutdown 100
qm migrate 100 pve2
# The VM now lives on pve2; start it from that node
qm start 100

# Option 2: Use mediated devices (vGPU) if your hardware supports it
# Some NVIDIA GPUs support vGPU which allows migration

# Option 3: Remove passthrough, migrate, re-add on destination
qm set 100 --delete hostpci0
qm migrate 100 pve2 --online
# Then re-add the equivalent device on the destination node
# (the PCI address may differ on the new host -- verify with lspci)
qm set 100 --hostpci0 0000:01:00.0

# Option 4: For USB passthrough, use SPICE USB redirection instead
# This is not tied to physical hardware and allows migration
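
Before a maintenance window, it helps to enumerate every VM on the node that carries a passthrough device, since each one will need one of the options above. A small sketch that parses `qm list` output (column layout assumed stable):

```shell
# Flag all local VMs whose config contains a hostpci entry
for vmid in $(qm list | awk 'NR>1 {print $1}'); do
    if qm config "$vmid" | grep -q '^hostpci'; then
        echo "VM $vmid has PCI passthrough -- live migration blocked"
    fi
done
```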

Other Migration Blockers

Several other conditions can prevent live migration. Checking VM configurations for migration compatibility before a maintenance window saves time, and a management tool such as ProxmoxR makes that review easier across nodes.

# Local ISO/CD-ROM mounted
qm config 100 | grep ide2
# Fix: unmount the CD
qm set 100 --ide2 none,media=cdrom

# VM has a snapshot with RAM state (savestate)
qm listsnapshot 100
# Fix: delete the snapshot or remove the RAM state

# Insufficient resources on destination
# Check available RAM on destination
pvesh get /nodes/pve2/status | grep memory

# Target storage does not exist on destination
pvesm status
# Ensure the same storage IDs are available on both nodes

# Different machine type between nodes
qm config 100 | grep machine
# Ensure both nodes run the same Proxmox version

Migration Troubleshooting Checklist

  • Verify VMs use shared storage or use --with-local-disks
  • Set CPU type to a portable model unless host-specific performance is required
  • Use a dedicated high-bandwidth network for migration traffic
  • Remove PCI passthrough devices before attempting live migration
  • Unmount local ISOs and CD-ROMs
  • Ensure both nodes have the same Proxmox version and storage configuration
  • Check destination node has sufficient free RAM and CPU capacity
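
Much of this checklist can be automated as a pre-flight script run before the maintenance window. A minimal sketch: the VM ID is a placeholder and the grep patterns cover only the config-level checks above, not destination resources:

```shell
#!/bin/bash
# Pre-flight migration check sketch (VMID is a placeholder)
VMID=100
CONF=$(qm config "$VMID")

echo "$CONF" | grep -E '^(scsi|virtio|ide|sata)[0-9]+:' | grep -q 'local' \
    && echo "WARN: local disk -- needs --with-local-disks or move-disk"

echo "$CONF" | grep -q '^hostpci' \
    && echo "BLOCK: PCI passthrough -- live migration impossible"

echo "$CONF" | grep -q '\.iso' \
    && echo "WARN: ISO attached -- eject before migrating"

echo "$CONF" | grep -q '^cpu: host' \
    && echo "WARN: CPU type 'host' -- portable only between identical CPUs"
```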

Most migration failures come down to storage locality, CPU compatibility, or network bandwidth. Address these three areas and the vast majority of your migration issues will disappear.
