SSD and NVMe Optimization for Proxmox VE
Optimize SSD and NVMe performance in Proxmox VE with TRIM configuration, I/O scheduler tuning, over-provisioning, and drive health monitoring.
Why SSDs Need Tuning in Proxmox
Proxmox VE runs well on SSDs out of the box, but the default configuration leaves performance and longevity on the table. Without proper TRIM support, your SSD's write performance degrades over time as the controller loses track of which blocks are free. Without the right I/O scheduler, you may be adding unnecessary latency. And without wear monitoring, a failing drive can take your VMs down without warning.
This guide covers the essential optimizations that keep your SSDs running fast and lasting longer in a Proxmox environment.
Enabling TRIM and Discard
TRIM tells the SSD which blocks are no longer in use so the controller can erase them proactively. Without TRIM, the drive must erase old blocks at write time, causing write amplification and performance drops. There are two approaches to TRIM in Proxmox: continuous discard and periodic TRIM via fstrim.
Option 1: Periodic TRIM with fstrim (Recommended)
Periodic TRIM is the safer and more performant approach. Instead of issuing discard commands on every delete operation, you run a batch TRIM on a schedule:
# Enable the fstrim timer (runs weekly by default)
systemctl enable fstrim.timer
systemctl start fstrim.timer
# Verify it is active
systemctl status fstrim.timer
# Run a manual trim
fstrim -av
The -a flag trims all mounted filesystems that support it, and -v shows how much space was trimmed. You will typically see output like:
/: 12.4 GiB (13314818048 bytes) trimmed on /dev/sda2
/var/lib/vz: 45.2 GiB (48544210944 bytes) trimmed on /dev/nvme0n1p1
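The weekly default suits most workloads. For write-heavy hosts you can run the timer daily with a systemd drop-in; this is a sketch of the override, assuming the stock fstrim.timer unit:

```ini
# /etc/systemd/system/fstrim.timer.d/override.conf
# (create with: systemctl edit fstrim.timer)
[Timer]
# Clear the default weekly schedule, then set a daily one
OnCalendar=
OnCalendar=daily
```

Run systemctl daemon-reload afterwards so the override takes effect.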
Option 2: Continuous Discard
If you prefer real-time TRIM, at the cost of some extra latency on delete-heavy workloads, add the discard mount option to your filesystems in /etc/fstab:

/dev/nvme0n1p1 /var/lib/vz ext4 defaults,discard 0 2
For ZFS, enable autotrim on the pool:
zpool set autotrim=on rpool
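You can verify the property took effect, and it is still worth running an occasional full trim, since autotrim skips small free regions; a sketch, assuming a pool named rpool:

```shell
# Confirm the pool property
zpool get autotrim rpool

# Occasional full trim catches the small free regions autotrim skips
zpool trim rpool

# -t shows per-vdev TRIM progress
zpool status -t rpool
```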
Passing TRIM to VM Disks
For VMs to send TRIM commands to the underlying storage, you need to enable discard on the virtual disk; this works with the VirtIO SCSI controller, which is the recommended choice for new VMs. In the Proxmox web UI, edit the VM's hard disk and check the "Discard" option. On the command line:
qm set 100 -scsi0 local-lvm:vm-100-disk-0,discard=on,ssd=1
The discard=on option is what actually passes TRIM commands through to the host storage. The ssd=1 flag presents the disk to the guest as a non-rotational device, which prompts most guest operating systems to enable their own TRIM and SSD-aware I/O behavior.
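From inside the guest you can confirm the discard plumbing works end to end; a quick check:

```shell
# Non-zero DISC-GRAN / DISC-MAX values mean the virtual disk accepts
# discard, so the guest's TRIM commands will reach host storage
lsblk --discard

# A manual trim from the guest should then report space trimmed
fstrim -av
```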
I/O Scheduler Tuning
The Linux I/O scheduler reorders and merges disk requests to improve performance. However, SSDs and NVMe drives do not benefit from the same schedulers as spinning disks. The optimal scheduler depends on your drive type:
| Drive Type | Recommended Scheduler | Why |
|---|---|---|
| NVMe | none | NVMe drives have deep internal queues; any host-side scheduling adds latency without benefit |
| SATA/SAS SSD | mq-deadline | Provides fairness guarantees with minimal overhead for single-queue devices |
| HDD | mq-deadline or bfq | Seek optimization still matters for spinning platters |
Check your current scheduler:
# For NVMe
cat /sys/block/nvme0n1/queue/scheduler
# For SATA
cat /sys/block/sda/queue/scheduler
The active scheduler is shown in brackets, e.g., [none] mq-deadline kyber bfq. To change it temporarily:
echo "none" > /sys/block/nvme0n1/queue/scheduler
To make the change persistent across reboots, create a udev rule:
# /etc/udev/rules.d/60-io-scheduler.rules
# Set NVMe drives to none
ACTION=="add|change", KERNEL=="nvme[0-9]*", ATTR{queue/scheduler}="none"
# Set SATA SSDs to mq-deadline
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="mq-deadline"
Over-Provisioning
Over-provisioning reserves a portion of the SSD's capacity for the controller's internal garbage collection and wear leveling. Enterprise SSDs typically come with 10-28% over-provisioning from the factory, but consumer drives often have minimal reserves.
You can manually over-provision by leaving unpartitioned space on the drive. A common approach for consumer SSDs in Proxmox:
# Leave ~10% of a 1 TB drive unpartitioned
# A 1 TB drive reports roughly 931 GiB usable, so partition only ~840 GiB
# The remaining space stays unallocated for the controller to use
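The arithmetic is easy to script; a sketch for a hypothetical 1 TB drive (about 931 GiB usable), with the actual partitioning left to sgdisk or parted:

```shell
# Work out a ~10% over-provisioning reserve
total_gib=931        # usable capacity reported for a 1 TB drive
reserve_pct=10
part_gib=$(( total_gib * (100 - reserve_pct) / 100 ))
echo "Partition ${part_gib} GiB, leave $(( total_gib - part_gib )) GiB unallocated"

# Then create a single partition of that size, e.g. (hypothetical /dev/sdb):
#   sgdisk -n 1:0:+837G /dev/sdb
```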
Alternatively, many drives support setting over-provisioning via hdparm:
# Check current capacity
hdparm -N /dev/sda
# Set the max addressable sector count (reduces usable capacity).
# This is destructive to any data beyond the new limit, so hdparm
# demands an explicit confirmation flag:
hdparm -Np<sector_count> --yes-i-know-what-i-am-doing /dev/sda
For NVMe drives, the nvme-cli tool provides namespace management for the same purpose, though most users find that leaving unpartitioned space is simpler and equally effective.
Wear Monitoring
SSDs have a finite number of write cycles. Monitoring wear helps you replace drives before they fail. Install smartmontools if not already present:
apt install smartmontools
Check SSD health for SATA drives:
smartctl -a /dev/sda | grep -E "(Wear_Leveling|Media_Wearout|Percentage_Used|Power_On_Hours)"
For NVMe drives:
smartctl -a /dev/nvme0n1 | grep -E "(Percentage Used|Data Units Written|Power On Hours)"
# Or use nvme-cli directly
nvme smart-log /dev/nvme0n1
Key metrics to watch:
- Percentage Used: How much of the drive's rated write endurance has been consumed. Replace before it reaches 100%.
- Data Units Written: Total data written over the drive's lifetime. Compare against the manufacturer's TBW (Terabytes Written) rating.
- Reallocated Sector Count: Non-zero values on SATA SSDs indicate the drive is remapping bad cells — an early warning sign.
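To compare Data Units Written against a TBW rating, note that one NVMe data unit is 512,000 bytes (1,000 sectors of 512 bytes); a sketch with made-up numbers:

```shell
# Hypothetical values: data_units from `nvme smart-log`,
# tbw_rating from the drive's datasheet
data_units=120000000
tbw_rating=600                  # rated endurance in TB

# One NVMe "data unit" = 512,000 bytes
tb_written=$(( data_units * 512000 / 1000000000000 ))
pct=$(( tb_written * 100 / tbw_rating ))
echo "${tb_written} TB written (~${pct}% of rated endurance)"
```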
Set up automatic email alerts by enabling the smartd daemon. Edit /etc/smartd.conf, then restart the service with systemctl restart smartd:
# /etc/smartd.conf
# -a monitors all attributes; -m sets the alert address
# -s (S/../.././02|L/../../7/03) schedules a short self-test daily at
# 02:00 and a long self-test every Sunday at 03:00
# (-o and -S enable offline testing and attribute autosave, ATA only)
/dev/sda -a -o on -S on -s (S/../.././02|L/../../7/03) -m admin@example.com
/dev/nvme0n1 -a -s (S/../.././02|L/../../7/03) -m admin@example.com
For Proxmox environments with multiple nodes, monitoring drive health across every server individually can be tedious. Centralized monitoring through tools like ProxmoxR helps you keep track of storage health across your entire infrastructure without logging into each node separately.
Take Proxmox management mobile
All the features discussed in this guide — accessible from your phone with ProxmoxR. Real-time monitoring, power control, firewall management, and more.