
Yes, we were surprised by just how prevalent middleboxes that dislike gaps in the sequence space are. This significantly influenced the design options for MPTCP.

Another problem is that TCP is a bytestream protocol. Apps that stream over TCP don't usually add packet-oriented framing and resync points, so if you lose a packet, the receiver will often need to discard quite a bit of data after the missing packet before it can start decoding again. In effect, this multiplies the loss rate. In the extreme, there's the potential for congestion collapse, where lots of packets are being delivered but none of them are useful, so they're all discarded at the receiver.
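To make the loss amplification concrete, here's a toy sketch (a hypothetical framing scheme, not any real protocol): naive length-prefixed frames with no resync points. If a hole appears in the delivered byte stream, as it can with a partially reliable transport, the decoder loses framing and everything after the gap is garbage:

```python
import struct

def encode(payloads):
    """Encode payloads as naive length-prefixed frames (no resync markers)."""
    return b"".join(struct.pack("!H", len(p)) + p for p in payloads)

def decode(stream):
    """Decode length-prefixed frames; returns (frames, leftover bytes)."""
    out, i = [], 0
    while i + 2 <= len(stream):
        (n,) = struct.unpack_from("!H", stream, i)
        if i + 2 + n > len(stream):
            break  # incomplete (or misframed) trailing data
        out.append(stream[i + 2 : i + 2 + n])
        i += 2 + n
    return out, stream[i:]

payloads = [b"frame-%d" % k for k in range(5)]
wire = encode(payloads)

ok, _ = decode(wire)
assert ok == payloads  # a clean stream decodes fine

# Simulate a hole in the stream (one missing byte, standing in for a
# lost chunk): frame 0 survives, frame 1 is corrupted, and the bogus
# length field it produces derails the decoder for frames 2-4 too.
bad, _ = decode(wire[:12] + wire[13:])
assert bad[0] == payloads[0]
assert len(bad) < len(payloads)
```

One lost byte cost us four of the five frames: that's the "effective loss rate" multiplication in miniature.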

Edit: I should add - middleboxes often resegment the datastream, merging multiple packets or splitting large ones. So even if the sender added a header to each segment it sent, those headers may not be at the beginning of the segment when it arrives. After a loss, you may not be able to reliably find the next header again.
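A rough sketch of the resync-point idea (again a toy scheme, not what any real protocol uses): each frame starts with a magic marker, so a receiver that loses framing can scan forward to the next marker instead of discarding everything:

```python
MAGIC = b"\xde\xad\xbe\xef"

def frame(payload):
    # marker + 1-byte length + payload (so payloads are limited to 255
    # bytes here; a real scheme would also add a checksum)
    return MAGIC + bytes([len(payload)]) + payload

def decode_resync(stream):
    """Scan for the magic marker to (re)synchronize, then parse frames."""
    out, i = [], 0
    while True:
        j = stream.find(MAGIC, i)
        if j < 0 or j + len(MAGIC) + 1 > len(stream):
            break
        n = stream[j + len(MAGIC)]
        start = j + len(MAGIC) + 1
        if start + n > len(stream):
            break  # incomplete trailing frame
        out.append(stream[start : start + n])
        i = start + n
    return out

wire = b"".join(frame(p) for p in [b"aa", b"bbbb", b"cc"])
# Simulate resegmentation plus a lost chunk: bytes vanish mid-stream,
# taking out the middle frame's header and part of its payload.
got = decode_resync(wire[:8] + wire[12:])
assert got == [b"aa", b"cc"]  # decoding resumes at the next marker
```

Only the mangled frame is lost; the decoder resynchronizes at the next marker. (A real design has to handle the marker bytes appearing inside payloads, e.g. by escaping or COBS-style encoding, which this toy skips.)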

By the way, that web server at UCL may well be the oldest on the Internet. It's probably the only server left proudly running CERN/3.0 on Sparc hardware since 1994.



Your point about resegmentation (in combination with loss or out-of-order delivery) is why a framing layer, like the one in Minion, is necessary. The Minion paper [3] does a good job of illustrating the problem.

[3]: https://arxiv.org/abs/1103.0463



