> [..] around 70% of all HTTPS requests Firefox issues get HTTP/2 [...]
Frequent use of Google probably puts this number on the higher end without revealing much information about general adoption.
Personally, I am waiting for HTTP/5, since the speed for new protocol versions seems to be set on "suddenly very fast".
That said, I think HTTP/2 was a good addition to the protocol.
On the other hand, a lot of over-engineered protocols fail or are a giant pain to use. I think we will only see adoption if there is a real, tangible benefit to upgrading infrastructure.
QUIC doesn't really convince me yet. It is certainly advantageous in some cases, but the benefit isn't obvious to me. Yes, non-blocking parallel streams are certainly great... 0-RTT? Hm, I don't think the speed advantages are worth the reduced security if it's used with a payload. Maybe for Google and similar services, but otherwise? QUIC needs to re-implement TCP's error checking and moves these mechanisms outside of kernel space. Let's hope we don't see other shitty proprietary protocols that are "similar" to HTTP.
0-RTT is one of those features where the decision was that it's better to build it and have nobody use it in the end (because it's so dangerous) than to not build it and later wish we had it, because then we'd need an entirely new protocol to get it.
Protocols that live on top of a transport (QUIC or TLS 1.3 itself) that offers 0-RTT are supposed to explicitly define whether and how it's used. HTTP is drafting such advice.
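To make that concrete, here's a minimal sketch of the approach the HTTP draft (which became RFC 8470) takes: a front-end marks requests that arrived as 0-RTT early data with an `Early-Data: 1` header, and the origin answers 425 (Too Early) for anything it isn't willing to risk having replayed. The function name and the exact policy are my illustration; the header and status code are from the spec.

```python
# Sketch of RFC 8470-style handling of requests received as 0-RTT early
# data. "Safe" (side-effect-free) methods may be processed; anything else
# gets 425 (Too Early), telling the client to retry after the handshake.

SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}

def route_early_data(method, headers):
    """Decide whether a possibly-replayable request may be processed."""
    if headers.get("Early-Data") == "1" and method not in SAFE_METHODS:
        return 425  # Too Early: ask the client to resend after 1-RTT
    return 200      # safe to process normally

print(route_early_data("POST", {"Early-Data": "1"}))  # 425
print(route_early_data("GET",  {"Early-Data": "1"}))  # 200
```

The point is that the replay decision is pushed to the layer that knows which requests are idempotent, instead of being guessed at by the transport.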
You should definitely avoid software that "magically" uses 0-RTT today without that definition being completed, particularly client software. Because of how TLS works, if you never use client software that can do 0-RTT, nothing you send can be replayed, so you're safe. The danger only sneaks in if you run client software that does 0-RTT _and_ the server has dangerous behaviour. Well, you can't tell about the server, but you can easily choose not to run that client.
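The replay danger is easy to see with a toy model (no real TLS here, just the shape of the problem): an attacker who records the encrypted 0-RTT bytes can resend them, and a server with side effects can't tell the copy from the original.

```python
# Toy illustration of why replaying 0-RTT early data is dangerous when
# the request is not idempotent. All names here are hypothetical.

balance = 100

def handle_request(body):
    """A naive server that applies any request it receives."""
    global balance
    if body.startswith("WITHDRAW "):
        balance -= int(body.split()[1])
    return balance

# The client sends a request as 0-RTT data; an attacker on the path
# records the bytes and replays them. The server processes both copies,
# so the side effect happens twice.
captured = "WITHDRAW 10"
handle_request(captured)  # legitimate request -> balance 90
handle_request(captured)  # replayed copy      -> balance 80
print(balance)            # 80, not 90
```

With a full handshake the server's random contribution makes the recorded bytes useless; 0-RTT gives that up, which is exactly the trade being discussed.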
No popular TLS 1.3 clients (e.g. Firefox, Chrome) do 0-RTT today. They've talked about it, and I can imagine it sneaking in for specific jobs where nobody can see how it causes problems, but I do not expect them to screw up and start doing 0-RTT GET /money-transfer?dollars=1million, because they've been here before and they know what will happen when some idiot builds a server that acts on it.
In client software libraries it's a bit scarier. So if you use an HTTP library and one day it announces "Yay, now we do 0-RTT to make everything faster", that's probably going to warrant some stern words in a bug report.
> No popular TLS 1.3 clients (e.g. Firefox, Chrome) do 0-RTT today.
This was wrong: 0-RTT is enabled in current Firefox builds. I haven't been able to determine under what circumstances Mozilla now chooses to do 0-RTT, but you can switch it off if you're concerned; it is controlled by the pref security.tls.enable_0rtt_data
And with required authenticated encryption, one would hope an intervening switch or router couldn't accidentally forge a message that's supposed to be hard to forge even when you're trying.
The important part of QUIC is that lost packets will not block delivery of all other data being delivered over the same connection, but only the data from any affected streams (for example, a single HTTP request/response will usually be one stream).
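A toy model (nothing like real QUIC internals, just the delivery rule) makes the contrast clear: each stream delivers in order up to its own first gap, so a lost packet stalls only the stream it carried data for.

```python
# Per-stream in-order delivery: a lost packet blocks only its own stream.
# Packets are (stream_id, sequence_number, data) triples; "lost" packets
# simply never arrive.

def deliverable(packets, lost):
    """Return, per stream, the data deliverable before the first gap."""
    streams = {}
    for sid, seq, data in packets:
        if (sid, seq) in lost:
            continue  # this packet never arrived
        streams.setdefault(sid, {})[seq] = data
    out = {}
    for sid, chunks in streams.items():
        buf, seq = "", 0
        while seq in chunks:   # deliver contiguously, stop at a gap
            buf += chunks[seq]
            seq += 1
        out[sid] = buf
    return out

packets = [(1, 0, "GET /a"), (1, 1, " HTTP/3"),
           (2, 0, "GET /b"), (2, 1, " HTTP/3")]

# Packet (1, 1) is lost: stream 1 stalls at its gap, stream 2 completes.
print(deliverable(packets, lost={(1, 1)}))
# {1: 'GET /a', 2: 'GET /b HTTP/3'}
```

Under TCP's single byte stream, that same gap would hold back everything sent after it, including all of stream 2; that's the head-of-line blocking QUIC avoids.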
(I am no web or network developer.)