
Setting Up Ceph Storage in Proxmox VE: A Complete Guide

Step-by-step instructions for deploying Ceph in Proxmox VE, including monitors, OSDs, pools, CephFS, RBD storage, and CRUSH rules for a resilient distributed storage cluster.


Why Ceph on Proxmox?

Ceph is a distributed storage system that provides block (RBD), file (CephFS), and object storage in a single unified cluster. Proxmox VE integrates Ceph directly, letting you deploy a hyper-converged infrastructure where compute and storage run on the same nodes. This eliminates the need for a separate SAN or NAS while providing automatic data replication, self-healing, and horizontal scaling.

A minimum Ceph cluster requires three nodes, each contributing at least one OSD (Object Storage Daemon). This allows for the default replication factor of three, meaning your data survives even if an entire node goes down.

Installing Ceph

From the Proxmox web UI, navigate to your node, then Ceph, and click Install Ceph. Alternatively, install via the command line on each node:

pveceph install --repository no-subscription

This installs the Ceph packages matched to your Proxmox VE version. Once installed, initialize Ceph on the first node:

pveceph init --network 10.10.10.0/24

The --network flag sets the Ceph public network, which carries monitor and client traffic; OSD replication also uses this network unless a separate cluster network is configured. Keeping Ceph traffic on a subnet separate from your VM and management traffic is strongly recommended to avoid contention.
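To fully separate replication traffic from public Ceph traffic, pveceph init also accepts a --cluster-network flag alongside --network. The resulting /etc/pve/ceph.conf then contains an excerpt roughly like the following — note that 10.10.11.0/24 is a hypothetical second subnet used for illustration:

```ini
# /etc/pve/ceph.conf (excerpt) — 10.10.11.0/24 is a hypothetical second subnet
[global]
    public_network = 10.10.10.0/24
    cluster_network = 10.10.11.0/24
```

With this split, client and monitor traffic stays on the public network while OSD replication and recovery traffic uses the cluster network.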

Creating Monitors and Managers

Ceph monitors (MONs) maintain the cluster map and consensus. Deploy a monitor on each of your three nodes:

# On each node
pveceph mon create

Similarly, create a manager daemon (MGR) on each node for telemetry and the Ceph dashboard:

pveceph mgr create

Verify the cluster health:

ceph status

You should see three MONs and three MGRs with the cluster in HEALTH_OK (or HEALTH_WARN if no OSDs exist yet).

Adding OSDs

Each OSD maps to a physical disk. Identify available disks on each node:

lsblk
ceph-volume lvm list

Create an OSD on a clean, unpartitioned disk:

# Simple OSD with bluestore (default)
pveceph osd create /dev/sdb

# OSD with a dedicated NVMe device for the WAL/DB (recommended for HDD+NVMe setups)
pveceph osd create /dev/sdb --db_dev /dev/nvme0n1

Repeat for each disk on each node. A typical three-node cluster with four disks per node gives you twelve OSDs. After all OSDs are created, check the cluster again:

ceph osd tree

This shows the CRUSH hierarchy: root, hosts, and OSDs.
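As a back-of-the-envelope capacity check: under three-way replication, usable space is roughly raw space divided by three. A sketch using the twelve-OSD example above (the 4 TB disk size is an assumed figure):

```shell
# 3 nodes × 4 disks × 4 TB each — the 4 TB disk size is an assumption
raw_tb=$(( 3 * 4 * 4 ))
# Each object is stored three times with size=3
usable_tb=$(( raw_tb / 3 ))
echo "${raw_tb} TB raw, ${usable_tb} TB usable"
```

In practice, plan to stay well below full: Ceph warns at the near-full ratio (85% by default) and recovery needs free headroom to re-replicate after a failure.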

Creating Pools

Pools are logical partitions within Ceph. Create a pool for VM disk images (RBD):

pveceph pool create vmpool --pg_autoscale_mode on --size 3 --min_size 2

Key parameters:

  • --size 3 – three replicas of each object.
  • --min_size 2 – minimum replicas required for I/O to continue (allows one node down).
  • --pg_autoscale_mode on – automatically adjusts placement group count as the cluster grows.
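The autoscaler picks placement group counts for you, but the classic rule of thumb remains useful for sanity-checking its decisions: about 100 PGs per OSD, divided by the replica count, rounded up to a power of two. A quick sketch:

```shell
# Rule of thumb: (OSDs × 100) / replicas, rounded up to a power of two
osds=12; replicas=3
target=$(( osds * 100 / replicas ))
pg=1
while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
echo "$pg"
```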

The pool automatically appears as an available storage target under Datacenter > Storage for RBD disk images.
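Behind the scenes, this registration corresponds to an entry in /etc/pve/storage.cfg along these lines (the exact fields on your system may differ):

```
rbd: vmpool
    content images,rootdir
    krbd 0
    pool vmpool
```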

Setting Up CephFS

CephFS provides a POSIX-compliant shared filesystem, useful for container bind mounts, ISO storage, or shared application data. CephFS requires at least one MDS (Metadata Server) daemon, which you must create yourself before creating the filesystem:

# On one or more nodes
pveceph mds create

Then create the metadata and data pools along with the filesystem itself:

pveceph fs create --pg_num 64 --add-storage

The --add-storage flag registers the CephFS mount in the Proxmox storage configuration.

Verify the filesystem is active:

ceph fs status
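Proxmox mounts the filesystem on cluster nodes automatically. To mount it from an external machine with the kernel CephFS client, an /etc/fstab entry along these lines works (the monitor addresses and secret-file path are placeholders):

```
# /etc/fstab — monitor addresses and secret path are illustrative
10.10.10.1,10.10.10.2,10.10.10.3:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,_netdev  0  0
```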

CRUSH Rules for Data Placement

CRUSH rules control how Ceph distributes data. The default rule replicates across different hosts. For clusters with mixed disk types, create custom rules:

# Create a rule that only places data on SSD-class OSDs
ceph osd crush rule create-replicated ssd-only default host ssd

You must first tag your OSDs with the correct device class. Ceph usually detects this automatically (hdd, ssd, or nvme), but you can set it manually. If a class was already assigned, remove it before setting a new one:

ceph osd crush rm-device-class osd.0 osd.1 osd.2
ceph osd crush set-device-class ssd osd.0 osd.1 osd.2

Then assign the rule to your pool:

ceph osd pool set vmpool crush_rule ssd-only
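To confirm that the class tags and rule assignment took effect, a few read-only commands help (vmpool and ssd-only are the names used above):

```shell
ceph osd crush class ls               # device classes known to the cluster
ceph osd crush rule ls                # should now list ssd-only
ceph osd pool get vmpool crush_rule   # rule currently assigned to the pool
```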

Monitoring and Dashboard

Enable the Ceph dashboard module for a browser-based monitoring interface:

ceph mgr module enable dashboard
ceph dashboard create-self-signed-cert
ceph dashboard ac-user-create admin -i /dev/stdin administrator
# Type your password and press Ctrl+D

The dashboard is accessible on port 8443 of the active MGR node (standby managers redirect there). It provides real-time views of OSD status, pool usage, I/O metrics, and cluster health.

For day-to-day monitoring from the command line:

ceph -w          # watch cluster events in real time
ceph df          # pool usage summary
ceph osd perf    # OSD latency statistics
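For unattended clusters, a minimal health check like the following can run from cron and alert on degradation. This is a sketch rather than a full monitoring setup, and the mail recipient is a placeholder:

```shell
#!/bin/sh
# Mail an alert whenever the cluster is not HEALTH_OK
# (admin@example.com is an illustrative recipient)
status=$(ceph health 2>/dev/null)
if [ "$status" != "HEALTH_OK" ]; then
    printf 'Ceph health: %s\n' "$status" | mail -s "Ceph alert on $(hostname)" admin@example.com
fi
```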

Ceph on Proxmox gives you enterprise-grade distributed storage without separate hardware. Managing Ceph health across multiple nodes becomes much easier with ProxmoxR, which lets you monitor OSD status and pool utilization alongside your VM workloads from a unified remote interface.
