Proxmox Network Configuration: Complete Guide
Master Proxmox VE networking. Covers bridges, bonds, VLANs, Open vSwitch, and troubleshooting common network issues.
Understanding Proxmox VE Networking
Networking is the backbone of any Proxmox VE deployment. Whether you are running a simple homelab with a few VMs or a production cluster with dozens of virtual machines and containers, getting your network configuration right is essential. Proxmox VE uses standard Linux networking under the hood, which means you have full control over bridges, bonds, VLANs, and routing directly from /etc/network/interfaces or the web GUI.
This guide walks through every major networking concept in Proxmox VE, from basic Linux bridges to advanced bonding and NAT configurations, with practical examples you can apply immediately.
Linux Bridges in Proxmox
A Linux bridge is the default networking model in Proxmox VE. When you install Proxmox, the installer automatically creates a bridge called vmbr0 and attaches your physical NIC to it. This bridge acts like a virtual switch: every VM or container connected to vmbr0 can communicate with each other and with the physical network.
Default Bridge Configuration
After a fresh Proxmox installation, your /etc/network/interfaces file will look something like this:
auto lo
iface lo inet loopback
iface eno1 inet manual
auto vmbr0
iface vmbr0 inet static
address 192.168.1.100/24
gateway 192.168.1.1
bridge-ports eno1
bridge-stp off
bridge-fd 0
Key settings to understand:
- bridge-ports - The physical interface attached to the bridge. Without this, the bridge is internal-only.
- bridge-stp off - Spanning Tree Protocol is disabled by default. Enable it only if you have redundant bridge paths to prevent loops.
- bridge-fd 0 - Forward delay set to zero for faster link-up times.
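Once the bridge is up, you can confirm its state and attached ports from the shell. A quick diagnostic sketch using standard iproute2 tools (run on the Proxmox node itself):

```shell
# Show the bridge interface and its state
ip link show vmbr0

# List the interfaces enslaved to the bridge (eno1 should appear here)
ip link show master vmbr0

# Show per-port bridge details (iproute2's replacement for brctl)
bridge link show
```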
Creating Additional Bridges
You can create multiple bridges to isolate network segments. For example, a second bridge for an internal-only network:
auto vmbr1
iface vmbr1 inet static
address 10.10.10.1/24
bridge-ports none
bridge-stp off
bridge-fd 0
VMs connected to vmbr1 can talk to each other but have no direct path to the physical network unless you configure routing or NAT.
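VLANs, mentioned in the introduction, are most commonly handled in Proxmox with a VLAN-aware bridge: the bridge carries all tags, and you set a VLAN tag per VM network device in the GUI. A sketch of the usual configuration, assuming the same management address and NIC as the default example:

```
auto vmbr0
iface vmbr0 inet static
address 192.168.1.100/24
gateway 192.168.1.1
bridge-ports eno1
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094
```

With bridge-vlan-aware yes in place, each VM's network device gains a VLAN Tag field in the web GUI, while untagged management traffic continues to use the bridge address above.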
Network Bonding
Bonding combines multiple physical NICs into a single logical interface for redundancy, increased throughput, or both. Proxmox supports all standard Linux bonding modes.
Common Bonding Modes
- balance-rr (mode 0) - Round-robin. Packets are transmitted in sequential order across all interfaces. Provides load balancing and fault tolerance but requires a switch that supports EtherChannel or similar.
- active-backup (mode 1) - Only one interface is active at a time. If it fails, another takes over. No special switch configuration required. This is the safest choice for most homelabs.
- 802.3ad / LACP (mode 4) - Link Aggregation Control Protocol. Requires switch support for LACP. Provides both load balancing and fault tolerance and is the standard in production environments.
Bond Configuration Example (Active-Backup)
auto bond0
iface bond0 inet manual
bond-slaves eno1 eno2
bond-miimon 100
bond-mode active-backup
bond-primary eno1
auto vmbr0
iface vmbr0 inet static
address 192.168.1.100/24
gateway 192.168.1.1
bridge-ports bond0
bridge-stp off
bridge-fd 0
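It is worth verifying that failover actually works before relying on it. A hedged test sequence (run from the node's local console, not over SSH through the same link, since the primary NIC goes down briefly):

```shell
# Show which slave is currently carrying traffic
grep "Currently Active Slave" /proc/net/bonding/bond0

# Simulate a failure of the primary NIC
ip link set eno1 down

# The bond should now report eno2 as the active slave
grep "Currently Active Slave" /proc/net/bonding/bond0

# Restore the link; with bond-primary eno1 set, traffic fails back
ip link set eno1 up
```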
Bond Configuration Example (LACP)
auto bond0
iface bond0 inet manual
bond-slaves eno1 eno2
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer3+4
auto vmbr0
iface vmbr0 inet static
address 192.168.1.100/24
gateway 192.168.1.1
bridge-ports bond0
bridge-stp off
bridge-fd 0
The bond-xmit-hash-policy layer3+4 setting distributes traffic based on source/destination IP and port, which usually gives the best spread across links for workloads with many concurrent flows. Note that a single flow still uses only one link, so one connection never exceeds the speed of a single NIC.
Open vSwitch (OVS)
For more advanced networking scenarios, Proxmox supports Open vSwitch as an alternative to Linux bridges. OVS provides features like per-port VLAN configuration, SPAN/mirror ports, traffic shaping, and OpenFlow support.
To use OVS, install it first:
apt update
apt install openvswitch-switch
Then create an OVS bridge in /etc/network/interfaces:
auto vmbr0
iface vmbr0 inet static
address 192.168.1.100/24
gateway 192.168.1.1
ovs_type OVSBridge
ovs_ports eno1
For most users, standard Linux bridges are simpler and perform well. Consider OVS only if you need its advanced features such as OpenFlow, SPAN ports, or centralized SDN management.
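After applying the configuration, the OVS database should reflect the new bridge. Two standard ovs-vsctl commands for checking:

```shell
# Print the full OVS configuration: bridges, ports, and interfaces
ovs-vsctl show

# List bridge names only
ovs-vsctl list-br
```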
NAT Configuration for VMs
If you want VMs on a private network to reach the internet through the Proxmox host, you need to configure NAT (masquerading). This is common in homelabs where you have a limited number of public or routable IPs.
# Enable IP forwarding
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
sysctl -p
# Add the internal bridge
auto vmbr1
iface vmbr1 inet static
address 10.10.10.1/24
bridge-ports none
bridge-stp off
bridge-fd 0
post-up iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o vmbr0 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s 10.10.10.0/24 -o vmbr0 -j MASQUERADE
VMs on vmbr1 should use 10.10.10.1 as their gateway. They will be able to access the internet, but external hosts cannot reach them directly unless you add port forwarding rules.
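To expose a service on an internal VM, add DNAT rules alongside the masquerade rules in the same vmbr1 stanza. A sketch that forwards TCP port 2222 on the host to SSH on a hypothetical VM at 10.10.10.50 (both the VM address and the port choice are placeholders for your environment):

```
post-up iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.50:22
post-down iptables -t nat -D PREROUTING -i vmbr0 -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.50:22
```

With these rules in place, connecting to port 2222 on the host's 192.168.1.100 address reaches the VM's SSH daemon.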
Applying Network Changes
After editing /etc/network/interfaces, apply changes with:
# Reload all changed interfaces (requires ifupdown2, the default on recent Proxmox VE)
ifreload -a
# Or restart networking entirely (brief downtime)
systemctl restart networking
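With ifupdown2 you can also compare the running state against the file before applying, which is a useful sanity check when editing over SSH. A sketch, assuming ifupdown2's ifquery is available:

```shell
# Compare running interface state against /etc/network/interfaces
ifquery -a --check

# Then apply all pending changes
ifreload -a
```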
You can also apply pending network changes from the Proxmox web GUI by navigating to the node's Network panel and clicking Apply Configuration. If you manage your Proxmox nodes remotely using a tool like ProxmoxR, you can review network status from your phone to verify changes took effect without needing to be at a desktop.
Troubleshooting Common Network Issues
VMs Cannot Reach the Network
- Verify the VM is connected to the correct bridge in its hardware settings.
- Check that the bridge has a physical port attached: brctl show vmbr0 (or bridge link show if bridge-utils is not installed).
- Confirm the VM has a valid IP address and gateway configured inside the guest OS.
- Look for firewall rules blocking traffic: iptables -L -n.
Bonding Not Working
- Ensure all slave interfaces are up: ip link show eno1.
- For LACP, confirm your switch port is configured for 802.3ad.
- Check bond status: cat /proc/net/bonding/bond0.
Slow Network Performance
- Use virtio network drivers in your VMs for the best performance. The default e1000 or rtl8139 models are significantly slower.
- Check for packet errors: ip -s link show vmbr0.
- Verify MTU settings are consistent across the bridge, physical NIC, and switch.
Summary
Proxmox VE gives you full control over Linux networking primitives. For most setups, a single Linux bridge with a physical NIC is all you need. As your environment grows, bonding adds redundancy, VLANs provide segmentation, and NAT enables isolated private networks. Take time to plan your network topology before deploying VMs, and always test configuration changes before applying them to production nodes.
Take Proxmox management mobile
All the features discussed in this guide — accessible from your phone with ProxmoxR. Real-time monitoring, power control, firewall management, and more.