
Diagnosing Office Network Speed with iperf3 — 1G vs 2.5G Real Measurements

"The network feels slow" tells you nothing about the cause. iperf3 measures actual bandwidth between servers, giving you real numbers instead of guesses. We compared 1Gbps and 2.5Gbps NIC performance in our 16-server office infrastructure, and share the results along with a step-by-step bottleneck diagnosis method.

Why iperf3?

Internet speed test sites (like Speedtest) measure bandwidth to your ISP. To measure LAN speed, you need to send traffic directly between internal servers. iperf3 is the standard tool for measuring pure network bandwidth in a server-client model.

  • Pure bandwidth measurement: No disk I/O or application overhead — just network throughput.
  • Bidirectional testing: Use -R for reverse or --bidir for simultaneous bidirectional tests.
  • Segment-by-segment diagnosis: Narrow down whether the bottleneck is the NIC, cable, or switch port.

Test Environment and Method

Item            | Server A (Sender)               | Server B (Receiver)
NIC (1G test)   | Intel I225-V 2.5GbE (1GbE mode) | Intel I225-V 2.5GbE (1GbE mode)
NIC (2.5G test) | Realtek RTL8125BG 2.5GbE        | Realtek RTL8125BG 2.5GbE
Switch          | Managed switch (2.5GbE capable)
Cable           | Cat5e (existing) / Cat6 (replaced)
OS              | Ubuntu 24.04 LTS

iperf3 Basic Usage

# Install (Ubuntu/Debian)
sudo apt install iperf3

# Server mode (run on receiver)
iperf3 -s

# Client mode (run on sender)
iperf3 -c 10.x.x.x                # Default 10-second test
iperf3 -c 10.x.x.x -t 30          # 30-second test
iperf3 -c 10.x.x.x -R             # Reverse (server→client) test
iperf3 -c 10.x.x.x --bidir        # Simultaneous bidirectional test
iperf3 -c 10.x.x.x -P 4           # 4 parallel streams
iperf3 -c 10.x.x.x -t 30 -i 5     # 30 seconds, report every 5s
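
For scripted or repeated measurements, iperf3 can also emit machine-readable JSON with -J. A minimal sketch of pulling out the receiver-side throughput (assumes jq is installed; 10.x.x.x is a placeholder address as above):

# JSON output for scripting
iperf3 -c 10.x.x.x -t 30 -J > result.json

# Receiver-side throughput in Mbps (requires jq)
jq '.end.sum_received.bits_per_second / 1e6' result.json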

1Gbps NIC Measured Results

The theoretical maximum for 1Gbps is 1,000Mbps, but protocol overhead (Ethernet framing plus TCP/IP headers) limits real-world TCP throughput to around 940Mbps.

Test | Direction     | Cable            | Speed          | Notes
1    | A → B         | Cat6             | 937 Mbps       | Near theoretical limit
2    | B → A         | Cat6             | 932 Mbps       | Good
3    | A → B         | Cat5e (3m)       | 925 Mbps       | Minimal difference at 1G
4    | Bidirectional | Cat6             | ~470 Mbps each | Full-duplex confirmed
5    | A → B         | Cat5e (20m, old) | 610 Mbps       | Degraded cable performance

Key finding: With Cat6 cables and short distances, 1GbE reaches 937Mbps — near the theoretical ceiling. At 1Gbps, the difference between Cat5e and Cat6 is minimal. However, aging 20m Cat5e cables can drop to 610Mbps. If you see large variance, suspect the cable first.
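
A quick way to expose that variance is to run several short tests back to back and compare the numbers. A sketch (10.x.x.x is a placeholder; the jq step assumes jq is installed):

# Five consecutive 10-second tests, printing Mbps per run
for i in 1 2 3 4 5; do
    iperf3 -c 10.x.x.x -J | jq '.end.sum_received.bits_per_second / 1e6'
done
# Stable ~930-940 → healthy link; swings between 600 and 900 → suspect the cable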

2.5Gbps NIC Measured Results

After upgrading to 2.5GbE NICs, cable quality becomes critical. The difference between Cat5e and Cat6 grows dramatic over longer or aging runs.

Test | Direction     | Cable            | Speed           | Notes
1    | A → B         | Cat6             | 2.33 Gbps       | Near theoretical limit
2    | B → A         | Cat6             | 2.31 Gbps       | Good
3    | A → B         | Cat5e (3m)       | 2.17 Gbps       | Short Cat5e still works
4    | A → B         | Cat5e (20m, old) | 1.41 Gbps       | Cat5e bottleneck is clear
5    | Bidirectional | Cat6             | ~1.16 Gbps each | Full-duplex confirmed

Cat5e is officially rated for 1Gbps; the 2.5GBASE-T standard (IEEE 802.3bz) does allow Cat5e, but with little real-world margin. Short runs can reach 2.17Gbps, yet the aging 20m cable dropped to 1.41Gbps, negating the 2.5GbE upgrade. If you run 2.5GbE, Cat6 cables are mandatory.
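
Before blaming the cable, it is worth confirming that both NICs actually advertise 2.5G. ethtool lists the supported and advertised link modes, so a quick check looks like this (eth0 is the interface name used throughout this article):

# 2500baseT/Full should appear under both supported and advertised link modes
ethtool eth0 | grep 2500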

Bottleneck Diagnosis Method

When speed falls below expectations, narrow down the problem by testing the NIC, cable, and switch, in that order.

Step 1: Check NIC Link Speed

# Check NIC link speed
ethtool eth0 | grep Speed
# Expected: Speed: 2500Mb/s  ← Normal
# Problem:  Speed: 100Mb/s   ← Auto-negotiation failed!

# Check NIC driver
ethtool -i eth0 | grep driver
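
To check every interface at once, the negotiated speed is also exposed through sysfs. A small sketch (interfaces that are down, and virtual interfaces, report an error or -1, hence the fallback):

# Print the link speed of every interface via sysfs
for dev in /sys/class/net/*; do
    echo "$(basename "$dev"): $(cat "$dev/speed" 2>/dev/null || echo n/a) Mb/s"
done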

Step 2: Cable Bypass Test

# Connect directly with a short new cable (bypass switch)
# Server A ← Cat6 1m → Server B

iperf3 -c 10.x.x.x -t 30
# If speed is normal → existing cable or switch is the cause
# If speed is still low → NIC or driver issue
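
One gotcha with a direct server-to-server connection: there is no DHCP on the link, so both ends need temporary static addresses. A sketch using a hypothetical unused subnet:

# Assign temporary addresses on both ends (192.168.250.0/24 is a placeholder subnet)
sudo ip addr add 192.168.250.1/24 dev eth0    # on Server A
sudo ip addr add 192.168.250.2/24 dev eth0    # on Server B

# Then run the test from Server A
iperf3 -c 192.168.250.2 -t 30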

Step 3: Check Switch Port

# Check port link speed in switch web interface
# If a 2.5G switch port shows 1G:
# 1. Try replacing the cable
# 2. Connect to a different port
# 3. Reset NIC auto-negotiation

sudo ethtool -s eth0 speed 2500 duplex full autoneg on
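
After forcing renegotiation, confirm what the driver actually negotiated. The kernel log records every link transition, though the exact message format varies by driver:

# Driver log lines look roughly like "eth0: Link is Up - 2500Mbps/Full"
sudo dmesg | grep -i 'link' | tail -3

# Or simply re-check the negotiated speed
ethtool eth0 | grep Speed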

Symptom Diagnosis Table

Symptom                      | Likely Cause                         | Diagnosis Method
Only 100Mbps                 | NIC auto-negotiation failure         | Check link speed with ethtool
High variance (600-900Mbps)  | Loose or aging cable                 | Replace cable and re-test
Slow in one direction only   | Half-duplex mode                     | Check duplex mode with ethtool
Intermittent near-zero speed | Packet drops, switch buffer overflow | Check switch logs, netstat -s
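
For the packet-drop row, two quick checks on the servers themselves complement the switch logs (counter names in ethtool -S vary by driver):

# TCP retransmissions on the host
netstat -s | grep -i retrans

# NIC-level drop and error counters
ethtool -S eth0 | grep -iE 'drop|err'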

Troubleshooting Case: The Slow Server

A real case from our infrastructure: one server had unusually slow file transfer speeds. Here is the diagnosis process.

Symptom: File transfer speed of only 10MB/s on a 1GbE server.

Step 1 — iperf3 measurement: Only 93Mbps. A 1GbE NIC should deliver ~940Mbps.

Step 2 — ethtool check: Speed: 100Mb/s. The NIC had auto-negotiated down to 100Mbps.

Step 3 — Root cause: Some conductors in the Ethernet cable had poor contact. 1000BASE-T (1Gbps) uses all four wire pairs (8 conductors), while 100BASE-TX needs only two pairs. With faulty conductors, auto-negotiation silently fell back to 100Mbps.
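
Incidentally, some NICs can test the cable pairs electrically. Kernel 5.8+ exposes this through ethtool, but support is driver-dependent, so treat this as an option to try rather than a given:

# TDR cable diagnostics (only some drivers support this)
sudo ethtool --cable-test eth0
# Reports per-pair results such as OK or Open Circuit, with an estimated fault distance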

Fix: Replaced the cable with a new Cat6 cable. Speed recovered to 937Mbps — file transfer speed jumped to ~110MB/s.

Lesson: Most network issues originate at the physical layer (cables, connectors, NICs). Checking link speed with ethtool should always be your first step.

Summary Checklist

  • Use iperf3 -c for basic bandwidth, -R for reverse, --bidir for bidirectional
  • 1Gbps NIC: ~937Mbps on Cat6 is normal. Below 800Mbps needs investigation
  • 2.5Gbps NIC: Cat6 is mandatory. Cat5e varies 1.4-2.2Gbps depending on distance
  • Check NIC link speed with ethtool first — 100Mbps means cable fault
  • For high variance: replace cable first, then try different switch ports
  • Use bidirectional tests to verify full-duplex vs half-duplex mode
  • Measure regularly to maintain a baseline for comparison (see the sketch below)
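
For the last point, a minimal baseline-script sketch (the target address, log path, and schedule are placeholders; it assumes jq is installed and iperf3 -s runs permanently on the target):

#!/bin/bash
# net-baseline.sh: append one measurement per run, e.g. from a daily cron job
TARGET=10.x.x.x                        # placeholder receiver address
LOG=/var/log/iperf3-baseline.csv
BPS=$(iperf3 -c "$TARGET" -t 10 -J | jq '.end.sum_received.bits_per_second')
echo "$(date -Is),$TARGET,$BPS" >> "$LOG"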

Network problems demand numbers, not guesses. iperf3 gives you exact server-to-server bandwidth measurements, and combined with ethtool, you can quickly narrow down the bottleneck to the physical layer, switch, or NIC.