Proxmox Cluster Quorum Explained: Votes, QDevice, and Two-Node Setups

Understand how Proxmox VE cluster quorum works. Learn about vote calculation, expected_votes, QDevice setup with corosync-qdevice, and two-node cluster considerations.


What Is Quorum and Why Does It Matter?

Quorum is the minimum number of votes a Proxmox VE cluster needs to function. It prevents split-brain scenarios where two halves of a broken cluster both think they are in charge, potentially causing data corruption. Without quorum, Proxmox will refuse to perform operations like starting VMs, modifying configuration, or writing to the cluster filesystem.

Understanding quorum is essential for anyone running a multi-node Proxmox environment. If you manage several clusters, a monitoring dashboard like ProxmoxR can alert you when quorum is at risk before it becomes a problem.

How Votes Are Calculated

Each node in a Proxmox cluster has one vote by default. Quorum is achieved when more than half of the expected votes are present. The formula is straightforward:

# Quorum formula
quorum = floor(expected_votes / 2) + 1

# Examples:
# 3 nodes: quorum = floor(3/2) + 1 = 2 (can lose 1 node)
# 4 nodes: quorum = floor(4/2) + 1 = 3 (can lose 1 node)
# 5 nodes: quorum = floor(5/2) + 1 = 3 (can lose 2 nodes)
# 7 nodes: quorum = floor(7/2) + 1 = 4 (can lose 3 nodes)

This is why odd-numbered clusters are preferred. A 3-node cluster tolerates 1 failure, just like a 4-node cluster, so the fourth node adds cost without improving fault tolerance.
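The arithmetic is easy to sanity-check with a short shell function (a sketch; `quorum` and `tolerance` are illustrative names, not Proxmox commands):

```shell
#!/bin/bash
# floor(votes / 2) + 1 -- bash integer division already floors
quorum()    { echo $(( $1 / 2 + 1 )); }

# How many votes can be lost while staying quorate
tolerance() { echo $(( $1 - ($1 / 2 + 1) )); }

for n in 2 3 4 5 7; do
    printf "%d nodes: quorum=%d, tolerates %d failure(s)\n" \
        "$n" "$(quorum "$n")" "$(tolerance "$n")"
done
```

Note that 2 nodes tolerate zero failures, which is exactly the two-node problem covered later in this guide.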

Checking Quorum Status

Use pvecm status to inspect your cluster's current quorum state.

# Check cluster status
pvecm status

# Key output fields to watch:
# Quorate:          Yes
# Expected votes:   3
# Highest expected: 3
# Total votes:      3
# Quorum:           2

# List individual node votes
pvecm nodes

# Example output:
# Nodeid  Votes  Name
#      1      1  pve1
#      2      1  pve2
#      3      1  pve3

If "Quorate" shows "No," the cluster is in a degraded state and will not allow most operations until quorum is restored.
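To see why a degraded minority stays blocked, consider a 5-node cluster splitting into partitions of 3 and 2 nodes (a sketch; the partition sizes are illustrative):

```shell
#!/bin/bash
# 5-node cluster: quorum threshold is floor(5/2) + 1 = 3
expected=5
threshold=$(( expected / 2 + 1 ))

# Only the partition that still holds >= threshold votes stays quorate
for partition in 3 2; do
    if [ "$partition" -ge "$threshold" ]; then
        echo "partition of $partition nodes: quorate"
    else
        echo "partition of $partition nodes: NOT quorate (blocked)"
    fi
done
```

The 3-node side keeps working; the 2-node side refuses writes until connectivity is restored.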

The expected_votes Setting

The expected_votes value in corosync.conf tells the cluster how many total votes to expect. This is normally set automatically when nodes join or leave the cluster, but there are situations where you may need to adjust it manually.

# View the quorum section of corosync.conf
grep -A 5 "quorum {" /etc/pve/corosync.conf

# Example output:
# quorum {
#     provider: corosync_votequorum
#     expected_votes: 3
# }

# Temporarily override expected votes (does not survive reboot)
pvecm expected 2

# To permanently change it, edit corosync.conf and increment config_version

Be very careful with pvecm expected. Setting it too low can allow a minority partition to become quorate, defeating the purpose of quorum protection. Only use this command when you understand the consequences.
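The permanent-edit workflow can be sketched on a scratch copy (Proxmox documentation recommends editing a copy of /etc/pve/corosync.conf and bumping config_version rather than editing the live file; the sample content below is illustrative):

```shell
#!/bin/bash
# Work on a scratch copy that mimics the totem section of
# corosync.conf (sample content, not a real cluster file)
conf=$(mktemp)
cat > "$conf" <<'EOF'
totem {
  cluster_name: demo
  config_version: 7
}
EOF

# Bump config_version so corosync will accept the edited file
# (awk rebuilds the line, so indentation is normalized)
awk '/config_version:/ { $2 = $2 + 1 } { print }' "$conf" > "$conf.new"

grep "config_version" "$conf.new"
# → config_version: 8
```

On a real cluster you would then move the edited copy back into place on /etc/pve, which replicates it to all nodes.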

Two-Node Cluster Quorum Problem

A two-node cluster has a fundamental quorum problem. With 2 expected votes, quorum requires 2 votes, meaning neither node can survive the loss of the other. If one node goes down, the remaining node loses quorum and stops functioning.

# Two-node cluster without QDevice
# expected_votes: 2
# quorum = floor(2/2) + 1 = 2
# Result: losing either node breaks the cluster

# Proxmox offers a workaround for 2-node clusters
# In corosync.conf:
# quorum {
#     provider: corosync_votequorum
#     expected_votes: 2
#     two_node: 1
# }
# This allows a single surviving node to keep quorum

However, the two_node: 1 flag has a caveat: if the two nodes lose contact with each other, both will claim quorum, which can lead to split-brain. Corosync also enables wait_for_all automatically alongside two_node, so after a cold start a node only becomes quorate once it has seen its peer at least once. The better solution is a QDevice.

Setting Up a QDevice (corosync-qdevice)

A QDevice is a lightweight service running on a third machine (it does not need to be a Proxmox node) that provides an additional vote to break ties. This is the recommended approach for two-node clusters and any even-numbered cluster.

# On the QDevice host (a Debian/Ubuntu machine, NOT a cluster node)
apt install corosync-qnetd

# On EACH Proxmox cluster node
apt install corosync-qdevice

# From one cluster node, set up the QDevice
pvecm qdevice setup 192.168.1.50

# Verify QDevice status on a cluster node
# (pvecm has only "qdevice setup" and "qdevice remove" subcommands)
corosync-qdevice-tool -s

# On the QDevice host, list the clusters connected to qnetd
corosync-qnetd-tool -l

# Check that votes now include the QDevice
pvecm status
# Expected votes should now be 3 (2 nodes + 1 QDevice)
# Quorum: 2

The QDevice host needs minimal resources. A small VM or container with 512 MB of RAM is sufficient. It should be on a reliable network segment but ideally on separate infrastructure from your cluster nodes to provide true independence.

Removing a QDevice

If you add a third node to your cluster, you may want to remove the QDevice since odd-numbered clusters handle quorum naturally.

# Remove QDevice from the cluster
pvecm qdevice remove

# Verify it is removed
pvecm status

Monitoring Quorum Health

Regularly checking quorum status should be part of your cluster maintenance routine. Set up monitoring to alert you when expected votes change or when quorum is at risk.

#!/bin/bash
# Quick quorum health check script (the shebang must be the first line)
QUORATE=$(pvecm status 2>/dev/null | awk '/^Quorate:/ {print $2}')
if [ "$QUORATE" != "Yes" ]; then
    echo "WARNING: Cluster is NOT quorate!"
    pvecm status
    exit 1
fi
echo "Cluster quorum is healthy"
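Saved somewhere like /usr/local/bin/quorum-check.sh (an example path, not a Proxmox convention), the script can be driven by cron. A hypothetical /etc/cron.d entry:

```shell
# /etc/cron.d/quorum-check -- example schedule: every 5 minutes.
# logger writes a syslog entry on failure; swap in mail or
# webhook tooling of your choice.
*/5 * * * * root /usr/local/bin/quorum-check.sh || logger -t quorum-check "Proxmox cluster not quorate"
```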

Quorum is the foundation of cluster reliability. Taking the time to understand how votes work, configuring QDevices for even-numbered clusters, and monitoring quorum status will help you avoid the most common cluster outage scenarios.
