Beyond GPU: NIC, HBA, NVMe, and USB Controller Passthrough in Proxmox
Guide to PCI passthrough in Proxmox VE beyond GPUs, covering NIC passthrough with SR-IOV, HBA passthrough for TrueNAS, NVMe passthrough, and USB controller passthrough.
PCI Passthrough Is Not Just for GPUs
While GPU passthrough gets the most attention, Proxmox PCI passthrough works with almost any PCI device. Passing through network cards, HBA controllers, NVMe drives, and USB controllers gives VMs direct hardware access with near-native performance. This is essential for workloads like TrueNAS (which needs direct disk access), high-performance networking, and VMs that require specific USB devices permanently attached.
NIC Passthrough with SR-IOV
SR-IOV (Single Root I/O Virtualization) lets a single physical NIC present multiple virtual functions (VFs) that can each be passed through to different VMs. This gives each VM direct hardware network access without sharing a virtual bridge:
# Check if your NIC supports SR-IOV:
lspci -v -s 03:00.0 | grep -i "sr-iov"
# Enable SR-IOV VFs (e.g., 4 virtual functions on Intel X710):
echo 4 > /sys/class/net/ens1f0/device/sriov_numvfs
# Make it persistent across reboots (on Debian-based Proxmox, /etc/rc.local
# needs a #!/bin/sh shebang line and the executable bit to run at boot):
echo "echo 4 > /sys/class/net/ens1f0/device/sriov_numvfs" >> /etc/rc.local
chmod +x /etc/rc.local
# List the new virtual functions:
lspci | grep "Virtual Function"
# Example output:
# 03:02.0 Ethernet controller: Intel Corporation Ethernet Virtual Function
# 03:02.1 Ethernet controller: Intel Corporation Ethernet Virtual Function
# Pass a VF through to a VM:
qm set 100 --hostpci0 03:02.0,pcie=1
SR-IOV makes far better use of hardware than whole-NIC passthrough: one physical port is shared among multiple VMs, while each VM still gets dedicated hardware queues and near-native throughput.
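Since /etc/rc.local is only a compatibility mechanism on modern Debian-based systems, a one-shot systemd unit is a more robust way to recreate the VFs at boot. A minimal sketch, assuming the interface name ens1f0 from above and a hypothetical unit file at /etc/systemd/system/sriov-vfs.service:

```ini
# /etc/systemd/system/sriov-vfs.service (hypothetical name)
[Unit]
Description=Create SR-IOV virtual functions on ens1f0
# Create the VFs before networking comes up so bridges and VMs see them:
Before=network.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo 4 > /sys/class/net/ens1f0/device/sriov_numvfs'
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

Enable it once with systemctl daemon-reload && systemctl enable sriov-vfs.service.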
HBA Passthrough for TrueNAS
TrueNAS (and ZFS in general) works best with direct access to physical disks. Passing through an HBA (Host Bus Adapter) controller gives the TrueNAS VM full control over the attached drives, including SMART data, disk identification, and proper ZFS resilver behavior:
# Find your HBA controller:
lspci | grep -i "sas\|lsi\|hba\|raid"
# Example: 04:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS3008
# Check its IOMMU group:
find /sys/kernel/iommu_groups/ -type l | grep "04:00.0"
# Bind to vfio-pci (get the device ID first):
lspci -nn -s 04:00.0
# Example: 04:00.0 ... [1000:00c9]
echo "options vfio-pci ids=1000:00c9" >> /etc/modprobe.d/vfio.conf
update-initramfs -u -k all
reboot
# Add to VM config:
qm set 100 --hostpci0 04:00.0,pcie=1
Make sure the HBA is in IT mode (not RAID mode) for TrueNAS. LSI controllers like the 9207-8i and 9300-8i are popular choices that work well with passthrough.
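After the reboot, it is worth confirming that the vfio-pci binding actually took effect before starting the TrueNAS VM. A quick check, using the example address from above:

```shell
# Confirm the HBA is now claimed by vfio-pci rather than the mpt3sas driver:
lspci -k -s 04:00.0
# The "Kernel driver in use:" line should read vfio-pci.
# If mpt3sas still owns the card, re-check the IDs in /etc/modprobe.d/vfio.conf
# and rerun update-initramfs -u -k all.
```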
NVMe Drive Passthrough
Passing through an entire NVMe drive gives a VM direct access to the SSD with native NVMe performance, bypassing the virtio-scsi layer entirely. This is ideal for database servers or workloads that need maximum IOPS:
# Find the NVMe controller:
lspci | grep -i nvme
# Example: 06:00.0 Non-Volatile memory controller: Samsung Electronics
# Check IOMMU group:
/usr/local/bin/iommu-groups.sh | grep -A1 "06:00.0"
# Bind to vfio-pci:
lspci -nn -s 06:00.0
# Example: [144d:a808]
echo "options vfio-pci ids=144d:a808" >> /etc/modprobe.d/vfio.conf
update-initramfs -u -k all
reboot
# Add to VM:
qm set 100 --hostpci0 06:00.0,pcie=1
Note that you cannot pass through the NVMe drive that Proxmox is installed on. If you want NVMe passthrough, you need a separate NVMe drive for the VM and a different drive (SATA SSD, another NVMe, or USB) for Proxmox itself.
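Because of that constraint, double-check which drive hosts the Proxmox root filesystem before binding anything to vfio-pci. A quick sanity check, assuming the example namespace nvme0n1:

```shell
# Which device backs the root filesystem?
findmnt -n -o SOURCE /        # e.g. /dev/mapper/pve-root (LVM on the boot disk)
lsblk -o NAME,MODEL,SIZE      # map NVMe namespaces to physical drives
# Trace a namespace back to its PCI address via sysfs:
readlink -f /sys/block/nvme0n1/device/device
# The resolved path ends in the controller's PCI address, e.g. .../0000:06:00.0
```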
USB Controller Passthrough
Rather than passing individual USB devices (which can be unreliable with hot-plug), you can pass through an entire USB controller. This gives the VM native control over all devices plugged into that controller's ports:
# Find USB controllers:
lspci | grep -i usb
# Example output:
# 00:14.0 USB controller: Intel Corporation Cannon Lake USB 3.1 xHCI
# 05:00.0 USB controller: Renesas Technology Corp. uPD720202
# The second controller (add-in card) is ideal for passthrough
# Check its IOMMU group:
/usr/local/bin/iommu-groups.sh | grep -A2 "05:00.0"
# If it is in its own group, pass it through:
qm set 100 --hostpci0 05:00.0,pcie=1
This is particularly useful for home automation VMs (Zigbee/Z-Wave USB sticks), VMs that need USB printers, or any scenario where USB device passthrough is flaky. A PCIe USB card is inexpensive and usually lands in its own IOMMU group.
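Before committing to a controller, it helps to confirm which physical ports belong to it: plug a device in and trace it through sysfs. A quick sketch, assuming USB bus 3 sits on the add-in card:

```shell
# Show the USB topology; each root hub at the top corresponds to one controller:
lsusb -t
# Map a root hub (here bus 3) back to its PCI controller:
readlink -f /sys/bus/usb/devices/usb3
# The resolved path contains the controller's PCI address, e.g.
# /sys/devices/pci0000:00/0000:05:00.0/usb3
```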
General Tips for Non-GPU Passthrough
- Always check IOMMU groups first. Use the IOMMU groups script before attempting any passthrough.
- Use PCIe add-in cards for passthrough when possible. Onboard controllers often share IOMMU groups with critical chipset devices.
- Set pcie=1 in the hostpci line for modern devices that support PCIe (rather than legacy PCI).
- Machine type q35 is recommended for PCIe passthrough. The older i440fx machine type only supports legacy PCI.
# Recommended VM settings for passthrough:
qm set 100 --machine q35
qm set 100 --bios ovmf # UEFI boot
qm set 100 --hostpci0 04:00.0,pcie=1
Managing multiple VMs with various passthrough configurations can be complex. ProxmoxR gives you quick mobile access to check which VMs are running and restart them when needed — helpful when a passthrough device causes a VM to hang and you need to intervene quickly.
Summary
PCI passthrough in Proxmox extends far beyond GPUs. SR-IOV NIC passthrough delivers hardware-accelerated networking to multiple VMs from a single port. HBA passthrough gives TrueNAS direct disk access for reliable ZFS operation. NVMe passthrough provides native SSD performance. USB controller passthrough gives VMs stable access to USB peripherals. The key to success with any passthrough is clean IOMMU groups, VFIO-PCI binding, and the q35 machine type.
Take Proxmox management mobile
All the features discussed in this guide — accessible from your phone with ProxmoxR. Real-time monitoring, power control, firewall management, and more.