TCP BBR — Quick Field Notes
Switched from CUBIC to BBR on this VPS after noticing throughput drops during concurrent transfers. Here are the actual numbers and a few gotchas.
Enabling BBR
echo 'net.core.default_qdisc=fq' | sudo tee /etc/sysctl.d/99-bbr.conf
echo 'net.ipv4.tcp_congestion_control=bbr' | sudo tee -a /etc/sysctl.d/99-bbr.conf
sudo sysctl --system
Requires kernel ≥4.9; Ubuntu 24.04 ships 6.8, so there are no compatibility concerns.
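To confirm the change took effect, a quick sanity check (on stock Ubuntu the tcp_bbr module auto-loads when the sysctl is applied):

# Algorithms the running kernel offers
sysctl net.ipv4.tcp_available_congestion_control

# Active defaults; should print bbr and fq
sysctl net.ipv4.tcp_congestion_control
sysctl net.core.default_qdisc

# Per-connection view; each socket's info line starts with the congestion control in use
ss -ti | head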
The fq_codel interaction
By default, Ubuntu sets default_qdisc=fq_codel. The two aren't incompatible, and since kernel 4.13 BBR can fall back to TCP-internal pacing, but plain fq (fair queuing without the CoDel active-queue-management layer) provides the packet pacing the BBR paper assumes and is what Google runs in production. Switch explicitly rather than relying on the distribution default.
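Note that default_qdisc only applies to qdiscs attached after the change (i.e. interfaces brought up later), so it's worth checking what's actually installed on the live interface. A quick check, with eth0 standing in for whatever your interface is called:

# What's currently attached
tc qdisc show dev eth0

# Switch the live interface without waiting for a reboot
sudo tc qdisc replace dev eth0 root fq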
Observed differences
- Single-stream throughput: roughly equal on a low-BDP path.
- Multiple concurrent streams: shorter flows got starved noticeably less under BBR than under CUBIC.
- Packet loss tolerance: BBR kept throughput up at 2–3% synthetic loss where CUBIC dropped sharply (rough reproduction sketch below).
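One way to reproduce the loss numbers is synthetic loss via netem plus iperf3. A minimal sketch, assuming eth0 and an iperf3 server you control on the far end (both are placeholders; iperf3's -C flag for selecting the congestion control is Linux-only):

# Inject 2% random loss; this replaces the root qdisc, so put fq back afterwards
sudo tc qdisc replace dev eth0 root netem loss 2%

# Run the same transfer under each congestion control
iperf3 -c <server> -t 30 -C bbr
iperf3 -c <server> -t 30 -C cubic

# Restore fq when done
sudo tc qdisc replace dev eth0 root fq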
None of this is surprising — it matches the paper. Worth doing on any VPS with non-trivial RTT.