from Hacker News

Progressive JSON

by kacesensitive on 6/1/25, 12:58 AM with 235 comments

  • by goranmoomin on 6/1/25, 2:20 AM

    Seems like some people here are taking this post literally, as in the author (Dan Abramov) is proposing a format called Progressive JSON — he is not.

    This is more of a post explaining the idea of React Server Components, where they represent component trees as JavaScript objects and then stream them on the wire with a format similar to the blog post's (with similar features, though AFAIK it's bundler/framework-specific).

    This allows React to have holes (that represent loading states) in the tree to display fallback states on first load, and then display the loaded component tree only once the server can actually provide the data (which means you can display the fallback spinner and the skeleton much faster, with more fine-grained loading).

    (This comment is probably wrong in various ways if you get pedantic, but I think I got the main idea right.)

  • by jatins on 6/1/25, 5:46 AM

    I have seen Dan's "2 computers" talk and read some of his recent posts trying to explore RSC and their benefits.

    Dan is one of the best explainers in the React ecosystem, but IMO if one has to work this hard to sell/explain a tech, there are two possibilities: 1/ there is no real need for the tech, 2/ it's a flawed abstraction.

    #2 seems somewhat true because most frontend devs I know still don't "get" RSC.

    Vercel has been aggressively pushing this on users, and most of the adoption of RSC is due to Next.js emerging as the default React framework. Even among Next.js users, most devs don't really seem to understand the boundaries of server components and are cargo culting.

    That, coupled with the fact that React wouldn't even merge the PR that mentions Vite as a way to create React apps, makes me wonder if the whole push for RSC is really meant for users/devs or just a way for vendors to push their hosting platforms. If you could just ship an SPA from S3 fronted with a CDN, clearly that's not great for the Vercels and Netlifys of the world.

    In hindsight, Vercel hiring a lot of OG React team members was a way to control the future of React, and not just a talent play.

  • by hyfgfh on 6/1/25, 5:17 AM

    The thing I have seen in performance is people trying to shave ms loading a page while they fetch several MBs and do complex operations in the FE, when in reality writing a BFF, improving the architecture, and leaner APIs would be a more productive solution.

    We tried to do that with GraphQL, HTTP/2, ... and arguably failed. Until we can properly evolve web standards we won't be able to fix the main issue. Novel frameworks won't do it either.

  • by usrbinbash on 6/2/25, 8:31 AM

    Or here is a different approach:

    We acknowledge that streaming data is not a problem that JSON was intended, or designed, to solve, and ... not do that.

    If an application has a use case that necessitates sending truly gigantic JSON objects across the wire, to the point where such a scheme seems like a good idea, the much better question to ask is "why is my application sending ginormous JSON objects again?"

    And the answer is usually this:

    Fat clients using bloated libraries and ignoring REST, trying to shoehorn JSON into a "one size fits all" solution, sending first data, then data + metadata, then data + metadata + metadata describing the interface, because finally we came full circle and re-invented a really really bad version of REST that requires several MB of minified JS for the browser to use.

    Again, the solution is not to change JSON, the solution is to not do the thing that causes the problem. Most pages don't need a giant SPA framework.

  • by 3cats-in-a-coat on 6/1/25, 2:04 AM

    I'll try to explain why this is a solution looking for a problem.

    Yes, breadth-first is always an option, but JSON is a heterogeneous structured data source, so assuming that breadth-first will help the app start rendering faster is often a poor assumption. The app will need a subset of the JSON, but it's not simply the depth-first or breadth-first first chunk of the data set.

    So for this reason what we do is include URLs in JSON or other API continuation identifiers, to let the caller choose where in the data tree/graph they want to dig in further, and then the "progressiveness" comes from simply spreading your fetch operation over multiple requests.
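
    As a minimal sketch of that pattern (the endpoint paths, field names, and render hooks below are made up for illustration), the response embeds URLs for the deeper levels and the client follows only the branches it needs:

        // Hypothetical response shape: children are referenced by URL, not inlined.
        interface PostSummary {
          id: string;
          title: string;
          commentsUrl: string; // continuation: fetched only if the UI needs comments
        }

        async function loadPost(url: string): Promise<void> {
          const post: PostSummary = await (await fetch(url)).json();
          render(post); // the header/body can render immediately

          // The "progressiveness" comes from a second, smaller request,
          // issued only when (and if) the comments are actually needed.
          const comments: unknown = await (await fetch(post.commentsUrl)).json();
          renderComments(comments);
        }

        declare function render(post: PostSummary): void;        // placeholder UI hooks
        declare function renderComments(comments: unknown): void;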

    Also, oftentimes JSON is deserialized to objects, so depth-first or breadth-first doesn't matter, as the object needs to be "whole" before you can use it. Hence again: multiple requests, smaller objects.

    In general when you fetch JSON from a server, you don't want it to be so big that you need to EVEN CONSIDER progressive loading. HTML needs progressive loading because a web page can be, historically especially, rather monolithic and large.

    But that's because a page is (...was) static. Thus you load it as a big lump and you can even cache it as such, and reuse it. It can't intelligently adapt to the user and their needs. But JSON, and by extension the JavaScript loading it, can adapt. So use THAT, and do not over-fetch data. Read only what you need. Also, JSON is often not cacheable as the data source state is always in flux. One more reason not to load a whole lot in big lumps.

    Now, I have a similar encoding with references, which results in a breadth-first encoding. Almost by accident. I do it for another reason and that is structural sharing, as my data is shaped like a DAG not like a tree, so I need references to encode that.

    But even though I have breadth-first encoding, I never needed to progressively decode the DAG as this problem should be solved in the API layer, where you can request exactly what you need (or close to it) when you need it.

  • by xelxebar on 6/1/25, 1:32 AM

    Very cool point, and it applies to any tree data in general.

    I like to represent tree data with parent, type, and data vectors along with a string table, so everything else is just small integers.

    Sending the string table and type info as upfront headers, we can follow with a stream of parent and data vector chunks, batched N nodes at a time. The depth- or breadth-first streaming then becomes a choice of ordering on the vectors.
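
    A rough sketch of that encoding, with invented field names (the real layout would be whatever producer and consumer agree on): the string table and type tags go up front, then node chunks arrive in whichever order the producer picks.

        // Header sent up front: string table plus a type tag per node id.
        interface TreeHeader {
          strings: string[];  // string table; nodes refer to it by index
          types: number[];    // type tag for each node id
        }

        // Chunks streamed afterwards, N nodes at a time.
        interface TreeChunk {
          nodeIds: number[];  // which nodes this chunk describes
          parents: number[];  // parent vector: parents[i] is the parent of nodeIds[i]
          data: number[];     // data vector: indexes into header.strings
        }

        // Consumer side: rebuild the tree incrementally as chunks arrive.
        function applyChunk(
          tree: Map<number, { parent: number; value: string }>,
          header: TreeHeader,
          chunk: TreeChunk,
        ): void {
          chunk.nodeIds.forEach((id, i) => {
            tree.set(id, { parent: chunk.parents[i], value: header.strings[chunk.data[i]] });
          });
        }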

    I'm gonna have to play around with this! Might be a general way to get snappier load time UX on network bound applications.

  • by Velorivox on 6/1/25, 1:41 AM

    99.9999%* of apps don't need anything nearly as 'fancy' as this, if resolving breadth-first is critical they can just make multiple calls (which can have very little overhead depending on how you do it).

    * I made it up - and by extension, the status quo is 'correct'.

  • by jerf on 6/1/25, 2:27 PM

    There are at least two other alternatives I'd reach for before this.

    Probably the simplest one is to refactor the JSON to not be one large object. A lot of "one large objects" have the form {"something": "some small data", "something_else": "some other small data", results: [vast quantities of identically-structured objects]}. In this case you can refactor this to use JSON Lines. You send the "small data" header bits as a single object. Ideally this incorporates a count of how many other objects are coming, if you can know that. Then you send each of the vast quantity of identically-structured objects as one line each. Each of them may have to be parsed in one shot, but many times each individual one is below the size of a single packet, at which point streamed parsing is of dubious helpfulness anyhow.

    This can also be applied recursively if the objects are then themselves large, though that starts to break the simplicity of the scheme down.
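
    A sketch of that refactor, assuming a response of the common {header, results: [...]} shape and a runtime where the response body is async-iterable (e.g. Node 18+); the field names are invented. The first line is the small header object, and every following line is one result object that parses in a single small shot:

        // Wire format, one JSON document per line:
        //   {"something": "some small data", "result_count": 25000}
        //   {"id": 1, ...}
        //   {"id": 2, ...}
        async function* readJsonLines(res: Response): AsyncGenerator<unknown> {
          const decoder = new TextDecoder();
          let buffer = "";
          for await (const chunk of res.body as unknown as AsyncIterable<Uint8Array>) {
            buffer += decoder.decode(chunk, { stream: true });
            let newline: number;
            while ((newline = buffer.indexOf("\n")) >= 0) {
              const line = buffer.slice(0, newline).trim();
              buffer = buffer.slice(newline + 1);
              if (line) yield JSON.parse(line); // each line is a complete document
            }
          }
        }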

    The other thing you can consider is guaranteeing order of attributes going out. JSON attributes are unordered, and it's important to understand that when no guarantees are made you don't have them, but nothing stops you from specifying an API in which you, the server, guarantee that the keys will be in some order useful for progressive parsing. (I would always shy away from specifying incoming parameter order from clients, though.) In the case of the above, you can guarantee that the big array of results comes at the end, so a progressive parser can be used and you will guarantee that all the "header"-type values come out before the "body".

    Of course, in the case of a truly large pile of structured data, this won't work. I'm not pitching this as The Solution To All Problems. It's just a couple of tools you can use to solve what is probably the most common case of very large JSON documents. And both of these are a lot simpler than any promise-based approach.

  • by turtlebits on 6/1/25, 2:33 AM

    Progressive JPEG makes sense, because it's a media file and by nature is large. Text/HTML, on the other hand, not so much. This seems like a solution to a self-inflicted problem: JS bundles are giant, and now we're creating more complexity by streaming them.
  • by atombender on 6/1/25, 11:35 AM

    You could stream incrementally like this without explicitly demarcating the "holes". You can simply send the unfinished JSON (with empty arrays as the holes), then compute the next iteration and send a delta, then compute the next and send a delta, and so on.

    A good delta format is Mendoza [1] (full disclosure: I work at Sanity where we developed this), which has Go and JS/TypeScript [2] implementations. It expresses diffs and patches as very compact operations.

    Another way is to use binary diffing. For example, zstd has some nifty built-in support for diffing where you can use the previous version as a dictionary and then produce a diff that can be applied to that version, although we found Mendoza to often be as small as zstd. This approach also requires treating the JSON as bytes and keeping the previous binary snapshot in memory for the next delta, whereas a Mendoza patch can be applied to a JavaScript value, so you only need the deserialized data.

    This scheme would force you to compare the new version for what's changed rather than plug in exactly what's changed, but I believe React already needs to do that? Also, I suppose the Mendoza applier could be extended to return a list of keys that were affected by a patch application.

    [1] https://github.com/sanity-io/mendoza

    [2] https://github.com/sanity-io/mendoza-js
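
    A sketch of the general delta-streaming shape described above (this is not the actual Mendoza API, whose details live in the linked repos; applyDelta and the frame shape are stand-ins): send the unfinished value first, then keep applying compact patches to the previously decoded value.

        // Stand-in for a real patch applier such as Mendoza's; the name and
        // signature here are assumptions for illustration only.
        declare function applyDelta<T>(previous: T, delta: unknown): T;
        declare function render(value: unknown): void; // placeholder UI hook

        async function consumeDeltas<T>(
          frames: AsyncIterable<{ kind: "snapshot" | "delta"; body: unknown }>,
        ): Promise<T | undefined> {
          let current: T | undefined;
          for await (const frame of frames) {
            if (frame.kind === "snapshot") {
              current = frame.body as T; // unfinished JSON, with empty arrays as the holes
            } else if (current !== undefined) {
              current = applyDelta(current, frame.body); // patch applied to the decoded value
            }
            render(current); // the client re-renders; diffing what changed stays client-side
          }
          return current;
        }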

  • by camgunz on 6/1/25, 9:32 AM

    I don't mean to be dismissive, but haven't we solved this by using different endpoints? There are so many virtues: you avoid head-of-line blocking; you can implement better filtering (e.g. "sort comments by most popular"); you can do live updates; you can iterate on the performance of individual objects (caching, etc.).

    ---

    I broadly see this as the fallout of using a document system as an application platform. Everything wants to treat a page like a doc, but applications don't usually work that way, so lots of code and infra gets built to massage the one into the other.

  • by jarym on 6/1/25, 3:34 AM

    This appears conceptually similar to something like line-delimited JSON with JSON Patch[1].

    Personally I prefer that sort of approach - parsing a line of JSON at a time and incrementally updating state feels easier to reason about and work with (at least in my mind).

    [1] https://en.wikipedia.org/wiki/JSON_Patch
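
    A sketch of that approach: each line on the wire is one RFC 6902 JSON Patch operation applied to a single accumulating state object. The tiny applier below only handles add/replace on object keys, which is enough to show the shape.

        type PatchOp = { op: "add" | "replace"; path: string; value: unknown };

        // Apply one JSON Pointer path (e.g. "/post/comments/0") to a plain object tree.
        function applyOp(state: any, { path, value }: PatchOp): void {
          const keys = path
            .split("/")
            .slice(1)
            .map(k => k.replace(/~1/g, "/").replace(/~0/g, "~")); // JSON Pointer unescaping
          const last = keys.pop()!;
          let target = state;
          for (const key of keys) target = target[key] ??= {};
          target[last] = value;
        }

        // Consume one patch per received line.
        function onLine(state: any, line: string): void {
          applyOp(state, JSON.parse(line) as PatchOp);
        }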

  • by rictic on 6/1/25, 4:40 PM

    If you've got some client side code and want to parse and render JSON progressively, try out jsonriver: https://github.com/rictic/jsonriver

    Very simple API, takes a stream of string chunks and returns a stream of increasingly complete values. Helpful for parsing large JSON, and JSON being emitted by LLMs.

    Extensively tested and performance optimized. Guaranteed that the final value emitted is identical to passing the entire string through JSON.parse.
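
    A usage sketch based on that description; the exact import name and signature are assumptions here, so check the linked repo for the real API.

        // Assumed API shape: parse(chunks) yields increasingly complete values.
        import { parse } from "jsonriver";

        declare function render(value: unknown): void; // placeholder UI hook

        async function renderProgressively(res: Response): Promise<void> {
          // Turn the byte stream into a stream of string chunks.
          const chunks = (res.body as ReadableStream<Uint8Array>).pipeThrough(new TextDecoderStream());
          for await (const partial of parse(chunks)) {
            render(partial); // each value is a more complete version of the final one
          }
        }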

  • by aljow on 6/1/25, 1:35 AM

    If it has to be mangled to such an extent to do this, then it seems reasonable to assume JSON is the wrong format for the task.

    Better to rethink it from scratch instead of trying to put a square peg in a round hole.

  • by harrall on 6/1/25, 4:45 AM

    I don’t think progressive loading is innovative.

    What is innovative is trying to build a framework that does it for you.

    Progressive loading is easy, but figuring out which items to progressively load and in which order without asking the developer/user to do much extra config is hard.

  • by jimmcslim on 6/1/25, 1:53 AM

    I understand that GraphQL has fallen out of favour somewhat, but wasn't it intended to solve for this?
  • by yen223 on 6/1/25, 2:26 AM

    I've never really thought about how all the common ways we serialise trees in text (JSON, s-expressions, even things like tables of content, etc), serialise them depth-first.

    I suppose it's because doing it breadth-first means you need to come up with a way to reference items that will arrive many lines later, whereas you don't have that need with depth-first serialisation.
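
    Roughly the contrast, using the placeholder-and-comment notation from the post (not a standard format): depth-first finishes each subtree before moving on, while breadth-first sends shallow rows first and later rows fill in the forward references.

        // Depth-first: a subtree is fully serialized before its siblings.
        const depthFirst =
          `{"header": "Intro", "post": {"content": "...", "comments": ["..."]}, "footer": "Bye"}`;

        // Breadth-first: the top level ships first, with "$n" forward references
        // that later rows resolve.
        const breadthFirstRows = [
          `{"header": "$1", "post": "$2", "footer": "$3"}`,
          `/* $1 */ "Intro"`,
          `/* $2 */ {"content": "...", "comments": "$4"}`,
          `/* $3 */ "Bye"`,
          `/* $4 */ ["..."]`,
        ];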

  • by pjungwir on 6/1/25, 12:26 PM

    Does this scheme give a way to progressively load slices of an array? What I want is something like this:

        ["foo", "bar", "$1"]
    
    And then we can consume this by resolving the Promise for $1 and splatting it into the array (sort of). The Promise might resolve to this:

        ["baz", "gar", "$2"]
    
    And so on.

    And then a higher level is just iterating the array, and doesn't have to think about the promise. Like a Python generator or Ruby enumerator. I see that Javascript does have async generators, so I guess you'd be using that.

    The "sort of" is that you can stream the array contents without literally splatting. The caller doesn't have to reify the whole array, but they could.

    EDIT: To this not-really-a-proposal I propose adding a new spread syntax, ["foo", "bar", "...$1"]. Then your progressive JSON layer can just deal with it. That would be awesome.
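
    A sketch of the consuming side with an async generator, so callers iterate items without ever seeing the "$1" continuations (resolveChunk is a hypothetical helper that fetches the next slice):

        // Hypothetical: resolves a "$n" continuation marker to the next chunk.
        declare function resolveChunk(marker: string): Promise<unknown[]>;

        async function* streamArray(first: unknown[]): AsyncGenerator<unknown> {
          let chunk = first; // e.g. ["foo", "bar", "$1"]
          while (true) {
            const last = chunk[chunk.length - 1];
            const hasMore = typeof last === "string" && last.startsWith("$");
            // Yield everything except a trailing continuation marker.
            for (const item of hasMore ? chunk.slice(0, -1) : chunk) yield item;
            if (!hasMore) return;
            chunk = await resolveChunk(last as string); // e.g. ["baz", "gar", "$2"]
          }
        }

        // Usage: for await (const item of streamArray(["foo", "bar", "$1"])) { ... }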

  • by roxolotl on 6/1/25, 1:54 AM

    This feels very similar to JSON:API links[0]. This is a great way to handle resolving those links on the frontend, though.

    0: https://jsonapi.org/format/#document-links

  • by tills13 on 6/1/25, 6:18 AM

    Holy, the pomp in this thread. It would perhaps help for some people here to have the context that this isn't some random person on the internet but Dan Abramov -- probably one of the most influential figures in building React (if not one of the creators, iirc).
  • by jongjong on 6/1/25, 3:41 PM

    This is an interesting idea. I solved this problem in a different way by loading each resource/JSON individually, using foreign keys to link them on the front end. This can add latency/delays with deeply nested child resources but it was not a problem for any of the use cases I came across (pages/screens rarely display parent/child resources connected by more than 3 hops; and if they do, they almost never need them to be loaded all at once).

    But anyway, this is a different custom framework which follows the principle of resource atomicity, a totally different direction from the GraphQL approach, which follows the principle of aggregating all the data into a big nested JSON. The big JSON approach is convenient, but it's not optimized for this kind of lazy-loading flexibility.

    IMO, resource atomicity is a superior philosophy. Field-level atomicity is a great way to avoid conflicts when supporting real-time updates. Unfortunately nobody has shown any interest or is even aware of its existence as an alternative.

    We have yet to figure out that maybe the real issue with REST is that it's not granular enough (it should be field granularity, not whole resources)... Everyone knows HTTP has heavy header overhead, hence you can't load fields individually (there would be too many heavy HTTP requests)... This is not a limitation for WebSockets, however... But still, people are clutching onto HTTP, a transfer protocol originally designed for hypertext content, as their data transport.

  • by nixpulvis on 6/1/25, 1:24 AM

    I've always liked the idea of putting latency requirements in to API specifications. Maybe that could help delimit what is and is not automatically inlined as the author proposes.
  • by bob1029 on 6/1/25, 2:24 AM

    > We can try to improve this by implementing a streaming JSON parser.

    In .NET land, Utf8JsonReader is essentially this idea. You can parse up until you have everything you need and then bail on the stream.

    https://learn.microsoft.com/en-us/dotnet/standard/serializat...

  • by bilater on 6/1/25, 7:45 PM

    This is something I've been thinking about ever since I saw BAML. Progressive streaming for JSON should absolutely be a first class thing in Javascript-land.

    I wonder if Gemini Diffusion (and that class of models) will really popularize this concept, as the tokens streamed in won't be from top to bottom.

    Then we can have a skeleton response that checks these chunks, updates those values and sends them to the UI.

  • by aperturecjs on 6/1/25, 4:55 AM

    I previously wrote a prototype of streaming a JSON tree this way:

    https://github.com/rgraphql/rgraphql

    But it was too graphql-coupled and didn't really take off, even for my own projects.

    But it might be worth revisiting this kind of protocol again someday; it can tag locations within a JSON response and send updates to specific fields (streaming changes).

  • by inglor on 6/1/25, 9:15 AM

    I am not sure the wheel can be rediscovered many more times, but definitely check out Kris's work from around 2010-2012 on q-connection and streaming/RPC of chunks of data. Promises themselves have roots in this, and there are better formats for this.

    Check out Mark Miller's E stuff and thesis - this stuff goes all the way back to the 80s.

  • by uncomplete on 6/1/25, 1:29 AM

    jsonl is JSON objects separated by newline characters. Used in Bedrock batch processing.
  • by jsnelgro on 6/3/25, 3:53 AM

    I feel like this is a great practical example showcasing the value of static types and designing your data model up front. If you know the structure of the object, you can put in placeholders while the real thing loads in. Then you get to choose how it loads in. Ideally you can stream the patches via serialized actions. This is basically Redux/Elm in a nutshell where the action objects can come from events sent via SSE/websockets/polling/etc.

    Reducing a change log is about as stateless, functional, and elegant as it gets. I love these sorts of designs but reality seems to complicate them in unexpected ways unfortunately. Still worth striving for though!
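
    A minimal sketch of that reducer shape, with an invented patch-style action: whatever arrives over SSE/websockets/polling just gets folded into the previous state.

        type Patch = { path: string[]; value: unknown }; // invented action shape
        type State = Record<string, unknown>;

        // Pure reducer: fold one streamed patch into the previous state.
        function reduce(state: State, patch: Patch): State {
          const next: any = structuredClone(state);
          let cursor = next;
          for (const key of patch.path.slice(0, -1)) cursor = cursor[key] ??= {};
          cursor[patch.path[patch.path.length - 1]] = patch.value;
          return next;
        }

        // Replaying a change log is then just: patches.reduce(reduce, initialState)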

  • by alganet on 6/2/25, 2:24 AM

    This breaks JSON. Now we need a different JSON that escapes the $ sign, and it is incompatible with other JSON parsers.

    Also, not a single note about error handling?

    There is already a common practice around streaming JSON content. One JSON document per line. This also breaks JSON (removal of newline whitespace), but the resulting documents are backwards compatible (a JSON parser can read them).

    Here's a simpler protocol:

    Upon connecting, the first line sent by the server is a JavaScript function that accepts 2 nullable parameters (a, b) followed by two new lines. All the remaining lines are complete JSON documents, one per line.

    The consuming end should read the JavaScript function followed by two new lines and execute it once passing a=null, b=null.

    If that succeeds, it stores the return value and moves to the next line. Upon reading a complete JSON, it executes the function passing a=previousReturn, b=newDocument. Do this for every line consumed.

    The server can indicate the end of a stream by sending an extra new line after a document. It can reuse the socket (send another function, indicating new streamed content).

    Any line that is not a JavaScript function, JSON document or empty is considered an error. When one is found by the consuming end, it should read at most 1024 bytes from the server socket and close the connection.

    --

    TL;DR just send one JSON per line and agree on a reduce function between the producer and consumer of objects.
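
    A consumer sketch of that protocol (using new Function to materialize the reducer purely for illustration; you would only ever do that with a fully trusted server):

        // lines: the server's output split on "\n", received in order.
        function consume(lines: string[]): unknown {
          // First line: a JavaScript function taking two nullable parameters (a, b),
          // followed by a blank line (the protocol's double newline).
          const reducer = new Function(`return (${lines[0]})`)() as (a: unknown, b: unknown) => unknown;
          let acc = reducer(null, null); // initial call with a=null, b=null

          for (const line of lines.slice(2)) {    // skip the separating blank line
            if (line === "") break;               // extra blank line: end of this stream
            acc = reducer(acc, JSON.parse(line)); // a=previous return, b=new document
          }
          return acc;
        }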

  • by PetahNZ on 6/1/25, 10:54 AM

    Reminds me of Oboe.js

    https://oboejs.com/

  • by KronisLV on 6/1/25, 1:36 AM

    I feel like in an ideal world, this would start in the DB: your query would reference objects and the order to return them in (so not just a bunch of wide rows, nor multiple separate queries), and as the data arrives, the back end could then pass it on to the client.
  • by sriku on 6/1/25, 7:23 AM

    Would a stream where each entry is a list of kv-pairs work just as well? The parser is then expected to apply the kv-pairs to the single JSON object as it is receiving them. The key would describe a JSON path in the tree - like 'a.b[3].c'.
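
    A sketch of that applier, with a simplified parser for paths like 'a.b[3].c' (error handling and escaping omitted):

        // Apply one streamed ("a.b[3].c", value) pair to the object under construction.
        function applyPath(root: any, path: string, value: unknown): void {
          // Split "a.b[3].c" into ["a", "b", 3, "c"].
          const keys: (string | number)[] = path.split(".").flatMap(part => {
            const name = part.replace(/\[\d+\]/g, "");
            const indices = [...part.matchAll(/\[(\d+)\]/g)].map(m => Number(m[1]));
            return [name, ...indices];
          });

          let cursor = root;
          for (let i = 0; i < keys.length - 1; i++) {
            // Create a missing container: an array if the next key is an index, else an object.
            cursor = cursor[keys[i]] ??= typeof keys[i + 1] === "number" ? [] : {};
          }
          cursor[keys[keys.length - 1]] = value;
        }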
  • by Aeolun on 6/1/25, 5:14 AM

    I think the problem with this is that it makes a very simple thing a lot harder. I don’t want to try and debug a JSON stream that can fail at any point. I just want to send a block of text (which I generate in 2ms anyway) and call it a day.
  • by nesarkvechnep on 6/1/25, 7:22 PM

    In my opinion, REST, proper, hypertext driven, solves the same problems. When you have small, interlinked, cacheable resources, the client decides how many relations to follow.
  • by techpression on 6/1/25, 10:32 AM

    Reading this makes me even happier I decided on Phoenix LiveView a while back. React has become a behemoth requiring vendor specific hosting (if you want the bells and whistles) and even a compiler to overcome all the legacy.

    Most of the time nobody needs this, make sure your database indexes are correct and don’t use some under powered serverless runtime to execute your code and you’ll handle more load than most people realize.

    If you’re Facebook scale you have unique problems; most of us don’t.

  • by geokon on 6/1/25, 3:03 PM

    This is outside my realm of experience, but isn't this part of the utility of a triple store? Isn't that the canonical way to flatten tree data to a streamable sequence?

    I think you'd also need to have some priority mechanism for which order to send your triple store entries (so you get the same "breadth first" effect) .. and correctly handle missing entries.. but that's the data structure that comes to mind to build off of

  • by rabiescow on 6/3/25, 3:33 AM

    I think the main issue is that this goes against the fundamental core principle that a JSON object has no internal order. If it were an ordered object it would be very different and maybe feasible, but you would have to change the complete JSON standard. It makes no sense speaking of an object without order as if it has order, in terms of a "header" etc...
  • by aaronvg on 6/1/25, 7:56 PM

    You might also find Semantic Streaming interesting. It's the same concept but applied to LLM token streaming. It's used in BAML (the AI framework). https://www.boundaryml.com/blog/semantic-streaming

    I'm one of the developers of BAML.

  • by dejj on 6/1/25, 7:26 AM

    Reminds me of Aftertext, which uses backward references to apply markup to earlier parts of the data.

    Think about how this could be done recursively, and how scoping could work to avoid spaghetti markup.

    Aftertext: https://breckyunits.com/aftertext.html

  • by polyomino on 6/1/25, 1:21 AM

    We encountered this problem when converting audio only LLM applications to visual + audio. The visuals would increase latency by a lot since they need to be parsed completely before displaying, whereas you can just play audio token by token and wait for the LLM to generate the next one while audio is playing.
  • by tarasglek on 6/1/25, 8:19 AM

    People put so much effort into streaming JSON parsing, whereas we have a format called YAML which takes up fewer characters on the wire and happens to work incrementally out of the box, meaning that you can reparse the stream as it's coming in without having to actually do any incremental parsing.
  • by OrangeMusic on 6/5/25, 8:21 AM

    GraphQL has a defer mode that looks like this. You receive the fast pieces first, and the slower pieces of the JSON come later, with the path to where they should be attached.
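
    Roughly what that looks like; the exact incremental payload shape varies across servers and spec revisions, so treat the commented responses as an approximation rather than the canonical format.

        // The deferred fragment makes the fast fields arrive first.
        const query = /* GraphQL */ `
          query Post($id: ID!) {
            post(id: $id) {
              title
              body
              ... @defer {
                comments { author text }
              }
            }
          }
        `;

        // Approximate incremental-delivery responses:
        //   1) { "data": { "post": { "title": "...", "body": "..." } }, "hasNext": true }
        //   2) { "incremental": [ { "path": ["post"], "data": { "comments": [ ... ] } } ],
        //        "hasNext": false }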
  • by 65 on 6/1/25, 3:03 PM

    Here's a random, crazy idea:

    What if instead of streaming JSON, we streamed CSV line by line? That'd theoretically make it way easier to figure out what byte to stream from and then parse the CSV data into something usable... like a Javascript object.

  • by metalrain on 6/1/25, 8:25 AM

    It feels like this idea needs Cap'n Proto-style request inlining so the client can choose what parts to stream instead of getting everything asynchronously.

    https://capnproto.org/

  • by anonzzzies on 6/1/25, 7:34 AM

    So is there a library / npm package to do this? Even his 'not good' example would do: just produce partial JSON to parse all the time. I don't care if it's just the top with things missing, as long as it always parses as legal JSON.
  • by Existenceblinks on 6/1/25, 8:21 AM

    It's useless, as data is not just some graphical semantics; it has relations and business rules on top, and isn't ready to interact with unless all of it is ready and loaded.
  • by methods21 on 6/6/25, 11:58 PM

    Bring back XML or a better format than both, or at least a DTD for JSON.
  • by sillyboi on 6/2/25, 1:03 PM

    Honestly, this approach feels like it adds a lot of unnecessary complexity. It introduces a custom serialization structure that can easily lead to subtle UI bugs and a nightmare of component state tracking. The author seems to be solving two issues at once: large payloads and stream-structured delivery. But the latter only really arises because of the former.

    For small to medium JSON responses, this won't improve performance meaningfully. It’s hard to imagine this being faster or more reliable than simply redesigning the backend to separate out the heavy parts (like article bodies or large comment trees) and fetch them independently. Or better yet, just use a proper streaming response (like chunked HTTP or GraphQL @defer/@stream).

    In practice, trying to progressively hydrate JSON this way may solve a niche problem while creating broader engineering headaches.

  • by tacone on 6/2/25, 11:54 AM

    I don't really like the use of comments to mark variables; an ad-hoc syntax would probably be a better idea.
  • by behnamoh on 6/1/25, 1:19 AM

    I think the pydantic library has something similar that involves validating streaming JSON from large language models.
  • by ChrisMarshallNY on 6/1/25, 1:57 AM

    This would be good.

    I got really, really sick of XML, but one thing that XML parsers have always been good at, is realtime decoding of XML streams.

    It is infuriating, waiting for a big-ass JSON file to completely download, before proceeding.

    Also JSON parsers can be memory hogs (but not all of them).

  • by yencabulator on 6/1/25, 7:42 PM

    SvelteKit has something like this to facilitate loading data where some of the values are Promises. I don't think the format is documented for external consumption, but it basically does this: placeholders for values where the JSON value at that point is still loading, replaced by streaming the results as they complete.

    https://svelte.dev/docs/kit/load#Streaming-with-promises

  • by K0IN on 6/2/25, 9:44 AM

    For me this seems overcomplicated, or am I missing something?

    Any benefits using this over jsonl + json patch?

  • by jto1218 on 6/1/25, 4:46 PM

    Typically, if we need to lazy-load parts of the data model, we make multiple calls to the backend for those pieces. And our Redux state has indicators for loading/loaded so we can show placeholders. Is the idea that that kind of setup is inefficient?
  • by efitz on 6/1/25, 3:08 AM

    What if we like, I don’t know, you know, separate data from formatting?
  • by its-summertime on 6/1/25, 2:17 AM

    Why send the footer above the comments? Maybe it's not a footer then, but a sidebar? Should it be treated as a sidebar then? Besides, this could all kinda be solved by still using plain streaming JSON and sending .comments last?
  • by 1vuio0pswjnm7 on 6/2/25, 2:05 AM

    "Because the format is JSON, you're not going to have a valid object tree until the last byte loads. You have to wait for the entire thing to load, then call JSON.parse, and then process it.

    I have a filter I wrote that just reformats JSON into line-delimited text that can be processed immediately by line-oriented UNIX utilities. No waiting.

    "The client can't do anything with JSON until the server sends the last byte."

    "Would you call [JSON] good engineering?"

    I would not call it "engineering". I would call it design.

    IMO, djb's netstrings^1 is a better design. It inspired similar designs such as bencode.^2

    1. https://cr.yp.to/proto/netstrings.txt (1997)

    2. https://wiki.theory.org/BitTorrentSpecification (2001)

    "And yet [JSON's] the status quo-that's how 99.9999%^* of apps send and process JSON."

    Perhaps "good" does not necessarily correlate with status quo and popularity.

    Also, it is worth considering that JSON was created for certain popular www browsers. It could piggyback on the popularity of that software.

  • by izger on 6/1/25, 12:28 PM

    Interesting idea. Another way to implement the same thing without breaking JSON framing is to just send {progressive: "true"} {a: "value"} {b: "value b"} ... {progressive: "false"}

    and end up with

    { progressive: "false", a: "value", b: "value b", ... }

    On top of that, add some flavor of message_id or message_no (or some other, to your taste) and you will have a protocol to consistently update multiple objects at a time.

  • by philippta on 6/1/25, 7:26 AM

    This sounds suspiciously similar to CSV.
  • by defraudbah on 6/1/25, 2:54 PM

    nah, I got enough with TCP
  • by EugeneOZ on 6/1/25, 10:37 PM

    Just don't resurrect HATEOAS monster, please.
  • by creatonez on 6/1/25, 8:55 PM

    This HN thread is fascinating. A third of the commenters here only read 1/3 of the article, another third read 2/3 of the article, and another third actually read the whole article. It's almost like the people in this thread linearly loaded the article and stopped at random points.

    Please, don't be the next clueless fool with a "what about X" or "this is completely useless" response that is irrelevant to the point of the article and doesn't bother to cover the use case being proposed here.

  • by okasaki on 6/1/25, 1:59 AM

    Reinventing pagination

    ?page=3&size=100

  • by slt2021 on 6/1/25, 3:44 AM

    if you ever feel the need to send progressive JSON - just zip it and don't bother solving a fake problem at the wrong abstraction layer
  • by yawaramin on 6/1/25, 4:54 AM

    > I’d like to challenge more tools to adopt progressive streaming of data.

    It's a solved problem. Use HTTP/2 and keep the connection open. You now have effectively a stream. Get the top-level response:

        {
          header: "/posts/1/header",
          post: "/posts/1/body",
          footer: "/posts/1/footer"
        }
    
    Now reuse the same connection to request the nested data, which can all have more nested links in them, and so on.
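
    A follow-up sketch of that flow: the linked sub-resources are fetched in parallel over the same kept-alive connection, and each can contain further links (the field names follow the example above; render is a placeholder):

        declare function render(page: unknown): void; // placeholder UI hook

        async function loadPage(url: string): Promise<void> {
          const top = await (await fetch(url)).json(); // { header: "/posts/1/header", ... }

          // HTTP/2 multiplexes these requests over the one open connection.
          const [header, post, footer] = await Promise.all(
            [top.header, top.post, top.footer].map(async (link: string) => (await fetch(link)).json()),
          );

          render({ header, post, footer });
        }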