
Does performance tuning for Wi-Fi adapters matter?

On desktops, other than disabling features, can anything fix the problems with the i210 and i225 Ethernet controllers? Those seem to be the two most common NICs nowadays.

I don't really understand why common networking hardware and drivers are so flawed. There is a lot of attention paid to RISC-V. How about starting with a fully open and correct NIC? Board makers will shove it in there if it's cheaper than an i210. Or maybe that's impossible.



> Does performance tuning for Wi-Fi adapters matter?

If you're willing to potentially sacrifice 10-20% of maximum local-network throughput, you can drastically improve Wi-Fi fairness, improve ping times, and reduce bufferbloat (random ping spikes will still happen on Wi-Fi, though).

There's a huge thread https://forum.openwrt.org/t/aql-and-the-ath10k-is-lovely/590... that covers enabling and tuning AQM/AQL, and some of the tradeoffs between throughput and latency.
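On kernels with AQL support, the knobs that thread discusses live in mac80211's debugfs. A rough sketch, assuming a radio named phy0 (adjust for your hardware; the limit values are illustrative, not recommendations):

```shell
# Show the current per-AC AQL queue limits (low/high, in microseconds
# of estimated airtime); phy0 is a placeholder for your radio.
cat /sys/kernel/debug/ieee80211/phy0/aql_txq_limit

# Lower the limits for one access class to trade peak throughput for
# latency. Format: <AC> <low limit> <high limit>.
echo "0 5000 12000" > /sys/kernel/debug/ieee80211/phy0/aql_txq_limit

# The fq_codel parameters for the Wi-Fi queues are exposed nearby.
cat /sys/kernel/debug/ieee80211/phy0/aqm
```

Changes made this way don't persist across reboots; on OpenWrt you'd put them in a startup script.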


i225 is just broken, but I get excellent performance from the i210. 1 Gb/s is hardly challenging for a contemporary CPU, and the i210 offers four queues. What's your beef with the i210?
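(For anyone curious, the queue count is easy to verify with ethtool; enp1s0 is a placeholder interface name:)

```shell
# Show the pre-set maximum and currently configured queue counts.
# On an i210 the "Combined" maximum should read 4.
ethtool -l enp1s0
```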


There are a lot of problems with the i210. Here’s a sample:

https://www.google.com/search?q=i210+proxmox+e1000e+disable

Most people don't really use their NICs "all the time" "with many hosts." The i210 in particular will hang after a few months of, e.g., etcd cluster traffic on 9th- and 10th-gen Intel platforms, which are common in SFF PCs.
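The workarounds that search turns up mostly amount to disabling features until the hang stops. A sketch of the usual suspects, assuming the NIC is enp1s0 (placeholder) on the igb/igc driver:

```shell
# Disable Energy-Efficient Ethernet, a commonly reported trigger for
# i225 link drops.
ethtool --set-eee enp1s0 eee off

# Turn off the offloads most often implicated in igb/igc unit hangs.
ethtool -K enp1s0 tso off gso off gro off

# After a stall, look for the driver's "unit hang" reports.
dmesg | grep -iE "igb|igc|hang"
```

None of this fixes the underlying hardware; it just reduces the surface area that triggers it.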

On Windows, the NDIS driver doesn't work much better: there are many disconnects under a similar traffic load as on Linux, and features like receive-side coalescing are broken. Intel also doesn't provide proper INFs for Windows Server editions, just because.

I assume Intel does all of this on purpose. I don’t think their functionally equivalent server SKUs are this broken.

Apparently the 10GigE patents are expiring very soon. That will make Realtek's, Broadcom's, and Aquantia's chips a lot cheaper. IMO, motherboards should be much smaller, ship with a BMC, and offer far more rational I/O: SFP+, M.2 22110, OCuLink, U.2, and PCIe slots spaced for Infinity Fabric and NVLink. Everyone should be using LVFS for firmware; NVMe firmware, despite having a standardized update mechanism, is a complete mess, with bugs on every major controller.
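For what it's worth, where vendors do publish to the LVFS, updating is already a short sequence with fwupd's CLI:

```shell
# List the devices fwupd can see (NVMe drives, NICs, UEFI, etc.).
fwupdmgr get-devices

# Pull fresh metadata from the LVFS, then apply any pending updates.
fwupdmgr refresh
fwupdmgr update
```

The gap is vendor participation, not tooling.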

I share all of this as someone with experience in operating commodity hardware at scale. People are so wasteful with their hardware.


I like your better I/O idea.

Many systems that cost more than a good car still ship with the Broadcom 5719 (tg3), a design lineage dating back to 1999. It has a single transmit queue, and the driver is full of workarounds. It's a complete joke that these are still supplied today.

SFP would be great but I'd settle for an onboard NIC chipset which was made in the last 10 years.


There are three revisions of the i225, and Intel essentially gave up on it and launched the i226. That one also seems to be problematic [1]. Why is it so much harder to make a 2.5 Gb/s NIC when their 1 Gb/s NICs (i210 and i211) have worked well? Shouldn't scaling to 2.5x be trivial? They seem to make good 10 Gb/s NICs, so I wouldn't expect 2.5 Gb/s to need a fifth try from Intel.

[1] - https://shorturl.at/esCNP


The bugs I'm aware of are on the PCIe side: the i225 will lock up the bus if it attempts PTM to support PTP. That's a pretty serious bug. You'd think Intel had this nailed, since they invented PCIe (and PCI, for that matter). Apparently not. Maybe they outsourced it.


This is really interesting for me to read. I encountered a DMA lockup in an Ethernet MAC implementation on an ARM chip, a Synopsys DesignWare MAC. It would lock up specifically when PTP was enabled; from my testing, it seemed to happen when some internal queue was overrun. That's speculation on my part, because it would only lock up if I tried to enable timestamping on all packets. It seemed to work all right when the hardware filter was used to timestamp only PTP packets. That can be a significant limitation, though, as it can prevent PTP from working with VLANs or DSA switch tags, since the hardware can't identify PTP packets behind those extra prefixes.

The PTP timestamps would arrive as a separate DMA transaction after the packet's DMA transaction. It could well have been poor integration into the ARM SoC, but your PTP-specific issue on x86 makes me wonder.


I'm interested in clicking your link, just not through a shortener. Perhaps that's just my own bias, but I figured I'd surface the reaction here. It's much more useful to see the domain I'll end up on.



I didn't realise that was a URL shortener. It slipped my mind that I could have just pasted the original AnandTech link instead.


I have a motherboard with an i225 onboard.

I bought a PCIe I350. That's solved the problem.



