Proxmox NFS Storage: Server Setup, Mounting, and Performance Tuning
Set up an NFS server for Proxmox VE storage, configure exports and permissions, add NFS shares in Proxmox, and tune performance for VM disk images and backups.
Why NFS for Proxmox Storage?
NFS (Network File System) is one of the simplest shared storage options for Proxmox VE. It requires no special hardware, works over standard Ethernet, and supports all Proxmox content types: VM disk images, container templates, ISO files, backups, and snippets. Because NFS provides shared storage accessible by all cluster nodes simultaneously, it enables live migration of VMs between hosts without needing to copy disk images.
NFS is particularly popular in homelabs and small-to-medium deployments where the complexity of Ceph is not warranted but shared storage is still needed.
Setting Up the NFS Server
You can run the NFS server on a dedicated machine, a NAS appliance, or even on one of your Proxmox nodes (though a dedicated server is preferred). These instructions assume a Debian or Ubuntu NFS server.
Install the NFS server packages:
apt update
apt install nfs-kernel-server -y
Create the directory to export:
mkdir -p /srv/nfs/proxmox
chown nobody:nogroup /srv/nfs/proxmox
chmod 0777 /srv/nfs/proxmox
Configure /etc/exports
Edit /etc/exports to define which networks can access the share and with what permissions:
/srv/nfs/proxmox 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
Key export options explained:
- rw – read-write access (required for VM disks and backups).
- sync – writes are committed to disk before the server replies. Safer but slightly slower than async.
- no_subtree_check – disables subtree checking for better performance and reliability.
- no_root_squash – allows the Proxmox host (which connects as root) to write files with correct ownership. Without this, Proxmox cannot create VM disk images.
For multiple subnets or stricter access control:
/srv/nfs/proxmox 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash) 10.10.10.0/24(rw,sync,no_subtree_check,no_root_squash)
Apply the exports:
exportfs -rav
Enable and start the NFS server:
systemctl enable --now nfs-server
Verify the export is active:
showmount -e localhost
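Before adding the storage in Proxmox, it is worth confirming the export is reachable from a Proxmox node. A quick sketch, assuming the server address 192.168.1.50 used later in this guide:

```shell
# Run on a Proxmox node (the NFS client tools ship with Proxmox VE).
showmount -e 192.168.1.50      # export list should include /srv/nfs/proxmox

# Optional: test-mount manually before letting Proxmox manage it.
mkdir -p /mnt/nfs-test
mount -t nfs -o vers=4.2 192.168.1.50:/srv/nfs/proxmox /mnt/nfs-test
umount /mnt/nfs-test && rmdir /mnt/nfs-test
```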
Adding NFS Storage in Proxmox
In the Proxmox web UI, navigate to Datacenter > Storage > Add > NFS. Fill in the fields:
- ID: a name like nfs-shared
- Server: IP or hostname of your NFS server
- Export: /srv/nfs/proxmox
- Content: select what you want to store (Disk image, ISO, Backup, Container template, Snippets)
Alternatively, add it via the command line, which writes to /etc/pve/storage.cfg:
pvesm add nfs nfs-shared \
--server 192.168.1.50 \
--export /srv/nfs/proxmox \
--content images,iso,backup,vztmpl,snippets \
--options vers=4.2
The storage immediately appears on all cluster nodes because /etc/pve/storage.cfg is a cluster-wide configuration file.
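For reference, the command above produces a stanza in /etc/pve/storage.cfg roughly like the following (a sketch; Proxmox derives the mount path /mnt/pve/<ID> automatically):

```
nfs: nfs-shared
        export /srv/nfs/proxmox
        path /mnt/pve/nfs-shared
        server 192.168.1.50
        content images,iso,backup,vztmpl,snippets
        options vers=4.2
```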
Testing the Mount
Verify the NFS share is mounted on each Proxmox node:
df -h | grep nfs
mount | grep nfs
You should see the share mounted at /mnt/pve/nfs-shared. Try creating a test file:
touch /mnt/pve/nfs-shared/test-file && echo "NFS write OK" && rm /mnt/pve/nfs-shared/test-file
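A quick sequential write gives a rough throughput baseline. This is a sanity check, not a benchmark (fio is the better tool for measuring IOPS):

```shell
# Write 1 GiB through the NFS mount; oflag=direct bypasses the client
# page cache so the reported rate reflects the network and server.
dd if=/dev/zero of=/mnt/pve/nfs-shared/dd-test bs=1M count=1024 oflag=direct
rm /mnt/pve/nfs-shared/dd-test
```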
Performance Tuning
Default NFS settings work fine for ISOs and backups, but for VM disk images you should tune several parameters for better IOPS and throughput.
Use NFS v4.2
NFS 4.2 supports features like server-side copy and sparse files. Specify the version in your Proxmox storage options or mount settings:
pvesm set nfs-shared --options vers=4.2
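After the share remounts, you can confirm which protocol version was actually negotiated on a Proxmox node:

```shell
# Per-mount NFS options, including vers=, as reported by the client.
nfsstat -m
# Alternative: inspect the kernel mount table directly.
grep nfs-shared /proc/mounts       # look for vers=4.2 in the options
```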
Increase NFS Server Threads
The default number of NFS server threads (typically 8) may bottleneck under heavy workloads. On the NFS server, increase it:
# Edit /etc/default/nfs-kernel-server
RPCNFSDCOUNT=32
# On newer releases that read /etc/nfs.conf, set threads=32
# in the [nfsd] section of that file instead
# Restart the service
systemctl restart nfs-server
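You can verify the change took effect on the server:

```shell
# Current nfsd thread count (requires nfs-server to be running).
cat /proc/fs/nfsd/threads
# The "th" line shows thread usage statistics.
grep ^th /proc/net/rpc/nfsd
```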
Use a Dedicated Storage Network
Separate NFS traffic from VM and management traffic using a dedicated VLAN or physical interface. This prevents storage I/O from competing with user traffic.
Enable Jumbo Frames
If your network supports it, set the MTU to 9000 on both the NFS server and Proxmox nodes for the storage network interfaces:
# On both server and client interfaces
ip link set ens19 mtu 9000
# Make permanent in /etc/network/interfaces
auto ens19
iface ens19 inet static
address 10.10.10.10/24
mtu 9000
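To confirm jumbo frames work end-to-end, send an unfragmentable ping at the largest payload that fits in a 9000-byte MTU. The target 10.10.10.1 below is a placeholder for your NFS server's storage-network address:

```shell
# 8972 = 9000 (MTU) - 20 (IP header) - 8 (ICMP header).
# -M do forbids fragmentation, so this fails if any hop has a smaller MTU.
ping -M do -s 8972 -c 3 10.10.10.1
```

If this ping fails while a normal ping succeeds, a switch or interface along the path is still using a 1500-byte MTU.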
Async vs Sync Exports
If you have a UPS protecting the NFS server, using async instead of sync in your exports can significantly improve write performance. The trade-off is a small risk of data loss if the server crashes mid-write.
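As a sketch, the corresponding async export line would be:

```
/srv/nfs/proxmox 192.168.1.0/24(rw,async,no_subtree_check,no_root_squash)
```

Re-run exportfs -ra afterward to apply the change.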
NFS remains a reliable and straightforward shared storage option for Proxmox clusters. For administrators managing NFS-backed VMs across multiple nodes, ProxmoxR provides a convenient way to monitor storage usage and VM status without needing to log into each node individually.