by aciswhat on 11/12/20, 12:37 AM with 133 comments
by dsign on 11/12/20, 9:21 AM
- The interaction of HTTP/2 push with browser caches was left mostly unspecified, and browsers implemented different ad-hoc policies.
- Safari in particular was pretty bad.
- Since HTTP/2 Push worked at a different layer than the rest of a web application, our offering centered on reverse-engineering traffic patterns with the help of statistics and machine learning. We would find the resources that were most often not cached, and push those.
- HTTP/2 Push, when well implemented, offered reductions in time to DOMContentLoaded on the order of 5 to 30%. However, web traffic is noisy and visitors fall into many different buckets by network connection type and latency, so finding that 5% to 30% gain required looking at those buckets. And DOMContentLoaded doesn't include image loading, which dominated the overall page loading time.
- As the size of, say, the JavaScript increases, the gains from using HTTP/2 Push asymptotically tend to zero.
- The PUSH_PROMISE frames could indeed increase loading time, because they had to be sent while the TCP connection was still cold. At that point each byte costs more, latency-wise. (A minimal sketch of how a server initiates a push is at the end of this comment.)
- If a pushed resource was not matched or not needed, loading time increased again.
Being a tiny company, we eventually moved on and found other ways of decreasing loading times that were easier for us to implement and maintain and also easier to explain to our customers.
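For readers who haven't touched this: initiating a push is only a few lines of server code. A minimal sketch in Go (asset path and cert filenames are made up; net/http exposes push via its Pusher interface, and only over HTTP/2, i.e. TLS):

    package main

    import (
        "io"
        "log"
        "net/http"
    )

    // Sketch only: push a stylesheet before writing the HTML response.
    // Push is best-effort; we ignore the error it returns when the
    // client has push disabled or the connection isn't HTTP/2.
    func handler(w http.ResponseWriter, r *http.Request) {
        if pusher, ok := w.(http.Pusher); ok {
            _ = pusher.Push("/static/app.css", nil)
        }
        w.Header().Set("Content-Type", "text/html")
        io.WriteString(w, `<html><head><link rel="stylesheet" href="/static/app.css"></head>...</html>`)
    }

    func main() {
        http.HandleFunc("/", handler)
        // HTTP/2 (and therefore push) requires TLS; cert paths are placeholders.
        log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
    }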
by jaffathecake on 11/12/20, 6:40 AM
Chrome's implementation was best, but the design of HTTP/2 push makes it really hard to do the right thing. Not just when it comes to pushing resources unnecessarily, but also in delaying the delivery of higher-priority resources.
<link rel="preload"> is much simpler to understand and use, and can be optimised by the browser.
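For anyone who hasn't used it: the hint can live in the document head as <link rel="preload" href="/static/app.css" as="style">, or be sent as a Link response header so a server or CDN can add it without touching the markup. A rough sketch in Go, with a made-up asset path:

    package main

    import (
        "io"
        "log"
        "net/http"
    )

    // Sketch only: a preload hint sent as a Link header on the main
    // document response. The browser can fetch the asset early, at a
    // priority of its own choosing, with none of push's cache pitfalls.
    func handler(w http.ResponseWriter, r *http.Request) {
        w.Header().Add("Link", "</static/app.css>; rel=preload; as=style")
        io.WriteString(w, `<html>...</html>`)
    }

    func main() {
        http.HandleFunc("/", handler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }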
Disclaimer: I work on the Chrome team, but I'm not on the networking team, and wasn't involved in this decision.
by xyzzy_plugh on 11/12/20, 1:13 AM
With the variety of streaming options available now, it really seems antiquated.
by xeeeeeeeeeeenu on 11/12/20, 1:42 AM
by simscitizen on 11/12/20, 1:25 AM
Maybe it is useful outside of the browser context, e.g. in gRPC.
by yc12340 on 11/12/20, 6:33 AM
I tried to use it once, and the hassle of distinguishing between first-time visits and repeat visits is simply not worth it. Even the hassle of using <link rel="preload"> is usually not worth it in large apps; if you have time for that, it's better spent on reducing the size of your assets.
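For context, the workaround people usually reach for is a cookie that marks a visitor as having (probably) warmed their cache, and only pushing when it is absent. A rough Go sketch, with a made-up cookie name and asset path:

    package main

    import (
        "io"
        "log"
        "net/http"
    )

    // Sketch only: push on the presumed first visit, skip on repeat
    // visits. The cookie is just a guess about the browser cache state,
    // which is exactly the hassle described above.
    func handler(w http.ResponseWriter, r *http.Request) {
        if _, err := r.Cookie("assets-cached"); err != nil {
            if pusher, ok := w.(http.Pusher); ok {
                _ = pusher.Push("/static/app.css", nil)
            }
            http.SetCookie(w, &http.Cookie{Name: "assets-cached", Value: "1", Path: "/"})
        }
        io.WriteString(w, `<html>...</html>`)
    }

    func main() {
        http.HandleFunc("/", handler)
        // HTTP/2 (and therefore push) requires TLS; cert paths are placeholders.
        log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
    }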
by randomtree on 11/12/20, 9:25 AM
I use it on my pet project website, and it gives a remarkably fast first page load.
And I don't have to resort to all these old-school tricks, like inlining CSS & JS.
HTTP/2 Push makes website development so much more pleasant. You can have hundreds of images on a page, and normally you'd be too latency-bound to load them all in a reasonable amount of time. The old-school way to solve that is to merge them all into one big image and use CSS to display parts of it instead of separate image URLs. That's an ugly solution to a latency problem. Push is so much better!
The fact that 99% of people are too lazy to learn a new trick shouldn't force everyone else back onto 30-year-old tricks just to get decent latency!
by est31 on 11/12/20, 2:02 AM
Server push is most useful in cases where latency is high, i.e. when server and client are at opposite ends of the globe, because it reduces the round trips needed to load a website. But any good CDN has nodes at the most important locations, so latency to the server will be low and server push won't be as helpful.
by FeepingCreature on 11/12/20, 6:49 AM
Also, this is very Google: "Well, few people have adopted it over five years, time to remove it." HTTPS is almost as old as HTTP and is only now starting to become universal. Google has no patience, seriously.
by kdunglas on 11/12/20, 10:10 AM
The key point for performance is to send relations in parallel, in separate HTTP streams. Even without Server Push, Vulcain-like APIs are still faster than APIs relying on compound documents, thanks to Preload links and to HTTP/2 / HTTP/3 multiplexing.
Using Preload links also fixes the over-pushing problem (pushing a relation already in a server-side or client-side cache) and some limitations regarding authorization (by default most servers don't propagate the Authorization HTTP header or cookies in the push request), and it is easier to implement.
(By the way Preload links were supported from day 1 by the Vulcain Gateway Server.)
However, using Preload links introduces a bit more latency than using Server Push. Is the theoretical performance gain worth the added complexity? To be honest, I don't know. I guess it isn't.
Using Preload links combined with Early Hints (the 103 status code - RFC 8297) may totally remove the need for Server Push. And Early Hints are way easier than Server Push to implement (it's even possible in PHP!).
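For the curious, the server side of Early Hints really is tiny. A rough sketch in Go, assuming a net/http version that can write 1xx informational responses (the asset path is made up):

    package main

    import (
        "io"
        "log"
        "net/http"
    )

    // Sketch only: send a 103 Early Hints response carrying a preload
    // Link header, do the slow work, then send the real response.
    func handler(w http.ResponseWriter, r *http.Request) {
        w.Header().Add("Link", "</static/app.css>; rel=preload; as=style")
        w.WriteHeader(http.StatusEarlyHints) // 103: the browser may start fetching now

        // ... generate the page (database queries, templating, etc.) ...

        w.WriteHeader(http.StatusOK)
        io.WriteString(w, `<html>...</html>`)
    }

    func main() {
        http.HandleFunc("/", handler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }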
Unfortunately browsers don't support Early Hints yet.
- Chrome bug: https://bugs.chromium.org/p/chromium/issues/detail?id=671310
- Firefox bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1407355
For the API use case, it would be nice if Blink added support for Early Hints before killing Server Push!
by colinclerk on 11/12/20, 2:23 AM
The serverless/edge technologies becoming available at CDNs make it easy to imagine that "automatic push" could come soon.
Any chance there are folks from Vercel or Netlify here who can shed light on why push hasn't been implemented in their platforms (or whether it has)? At first glance, it seems like Next.js in particular (server rendering) is ripe for automatic push.
by rektide on 11/12/20, 1:54 AM
> Chrome currently supports handling push streams over HTTP/2 and gQUIC, and this intent is about removing support over both protocols. Chrome does not support push over HTTP/3 and adding support is not on the roadmap.
I am shocked & terrified that Google would consider not supporting a sizable chunk of HTTP in their user agent. I understand that uptake has been slow. That this is not popular. But I do not see support as optional. This practice of picking & choosing what to implement of our core standards, of deciding to drop core features that were agreed upon by consensus because 5 years have passed & we're not sure how to use them well yet, is something I bow my head to & just hope, hope we can keep on through.
by _bjh on 11/12/20, 2:40 AM
by francislavoie on 11/12/20, 4:10 AM
by The_rationalist on 11/12/20, 1:25 AM
I once read a technical comment saying that HTTP/2 push was superior to WebSockets, but I can't remember why. Also, what's the difference between push and server-sent events?
by Town0 on 11/12/20, 1:48 AM
by seanwilson on 11/12/20, 1:19 AM
Crafting a fast website is going to be messy and difficult for a good while still.
by tannhaeuser on 11/12/20, 7:44 AM
by bullen on 11/12/20, 11:16 AM
It's simple, debuggable, inherently avoids cache misses, and scales (if you use non-blocking IO and a concurrency-capable language with OS threads).
It also avoids HTTP/TCP head-of-line blocking, because you're using a separate socket for your pushes.