
I think this is the right decision. I looked at HTTP/2 push in 2017 and the design is very confusing, and the implementations are pretty bad. https://jakearchibald.com/2017/h2-push-tougher-than-i-though....

Chrome's implementation was best, but the design of HTTP/2 push makes it really hard to do the right thing. Not just in terms of pushing resources unnecessarily, but also in delaying the delivery of higher-priority resources.

<link rel="preload"> is much simpler to understand and use, and can be optimised by the browser.

Disclaimer: I work on the Chrome team, but I'm not on the networking team, and wasn't involved in this decision.



Do you happen to know whether adoption rates are similar in HTTP/3?

As someone who implemented an almost feature-complete HTTP/1.1 [1], I think the real problem on the web is the lack of a test suite.

There needs to be a test suite for HTTP that both clients and servers can test against, one that isn't tied to the internal code of a web browser.

I say this because even a simple part of the spec, like a 206 response to a multi-range request, is practically never supported correctly by any web server. Even Nginx, Apache, and Google's own DNS-over-HTTPS servers behave differently when the request headers call for multiple response bodies. Let alone the unpredictability of chunked encodings, which is another nightmare.
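For context, this is roughly the shape of the multipart/byteranges body a server is supposed to produce for a multi-range 206 per the range-request spec (RFC 7233 / RFC 9110). A minimal sketch; the helper name, boundary string, and content type are invented for illustration:

```python
def multipart_byteranges(resource: bytes, ranges, boundary: str = "RANGE_SEP") -> bytes:
    """Build the body of a 206 (Partial Content) response that carries
    multiple byte ranges, in multipart/byteranges format."""
    total = len(resource)
    parts = []
    for start, end in ranges:  # start/end are inclusive byte positions, per the spec
        parts.append(
            (
                f"--{boundary}\r\n"
                "Content-Type: application/octet-stream\r\n"
                f"Content-Range: bytes {start}-{end}/{total}\r\n"
                "\r\n"
            ).encode("ascii")
            + resource[start : end + 1]
            + b"\r\n"
        )
    # closing delimiter
    parts.append(f"--{boundary}--\r\n".encode("ascii"))
    return b"".join(parts)

# e.g. a client sent "Range: bytes=0-9,50-59" against a 100-byte resource
body = multipart_byteranges(b"0123456789" * 10, [(0, 9), (50, 59)])
```

The server would pair this body with `Content-Type: multipart/byteranges; boundary=RANGE_SEP`; it's exactly this pairing of per-part `Content-Range` headers and boundaries that many servers get wrong or skip entirely.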

I really think there should be an official test suite that is maintained to reflect the specifications, similar in intent to the (now outdated) Acid tests.

Adoption of new HTTP versions will always stay low if you can spend months implementing exactly what the spec says, only to find out that no real-world web server actually implements it the same way.

[1] https://github.com/tholian-network/stealth/blob/X0/stealth/s...


+1. HTTP is now so complex that it's probably prohibitively risky to roll your own implementation.


Agree 100%. I never saw a use for server push. I'm glad to see unused features go; web browsers are far too bloated already.

The web would be fine if they stopped adding features today for the next 10 years. The massive complexity of the browser will eventually be a detriment to the platform


When Server Push became a thing, I liked the idea quite a lot, and I find it somewhat sad to see it disappear again, but realistically speaking, it wasn't used that much, so it might be for the best to just let it go.


The big issue with it, IMO, is that it simply doesn't work with load balancers. You HAVE to have sticky sessions, which unfortunately limits some of the biggest benefits of HTTP/2.


thanks for that post. that and some other resources convinced me then that my time is better spent elsewhere.

in the post it says less than 0.1% of connections in Chrome receive a push event. some people will always try out the cutting edge, but the fact that it hasn't spread after several years is a pretty good indicator that it's not producing the expected results.

I don't know why things that are "nice to have, but not essential", and that don't really work, need to be kept just because they're in a standard. if it were essential I'd view it differently, but in this case I hope it gets dropped.


The web has always been about backwards compatibility. You should be able to contact an HTTP web server deployed in 1995 from a modern web browser.

Server push is different, because it's supposed to be an invisible optimization, so it could be dropped without anyone noticing. But most things are not invisible.


> You should be able to contact an HTTP web server deployed in 1995 from a modern web browser.

AFAIK, a web server deployed in 1995 would probably be using HTTP/0.9, and I think modern web browsers don't support any HTTP older than HTTP/1.0 anymore.


I tend to disagree. Server push was a cool way to implement streaming, as in HLS, particularly when you have constrained devices that would otherwise suffer from request latencies.

However, IMHO the Internet has mostly degraded into a huge CDN. HTTP/2 is often not even handled by the final endpoint. Decentralized caching and proactive cache control have become a niche.

Having said that, I still dream of a world in which browsers just care about rendering, rather than de facto shaping the future architecture of the net on all layers (DoH, HTTPS cert policies, QUIC MTUs, ...).


Note that this is also available via the `Link:` HTTP header. This means that you can get the preload hint to the browser quite early so there shouldn't be too much delay before it makes the request.

Of course if your HTML is small it may still be slightly slower than push. However the advantage is that you don't push cached resources over and over again.
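For illustration, a minimal sketch of what such a header value might look like, using the RFC 8288 link syntax with the `preload` relation. The resource paths and `as` types here are invented examples:

```python
# Build a Link response-header value carrying preload hints for
# several resources (hypothetical paths, for illustration only).
resources = [("/static/app.css", "style"), ("/static/app.js", "script")]
link_value = ", ".join(
    f"<{path}>; rel=preload; as={as_type}" for path, as_type in resources
)
# The server then sends:  Link: </static/app.css>; rel=preload; as=style, ...
```

Because this travels in the response headers, the browser can start fetching the listed resources before it has parsed any of the HTML body.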


Unfortunately, as far as I'm aware, Link is only an RFC (5988, 8288), and nobody has actually implemented it.



