by Igor_Wiwi on 12/25/24, 9:35 PM with 150 comments
by kdunglas on 12/25/24, 10:43 PM
At the core of Mercure is the hub. It is a standalone component that maintains persistent SSE (HTTP) connections to the clients, and it exposes a very simple HTTP API that server apps and clients can use to publish. POSTed updates are broadcast to all connected clients using SSE. This makes SSE usable even with technologies that cannot maintain persistent connections, such as PHP and many serverless providers.
Mercure also adds nice features to SSE, such as a JWT-based authorization mechanism, the ability to subscribe to several topics over a single connection, an event history, and automatic state reconciliation in case of network issues.
I maintain an open-source hub written in Go (technically, a module for the Caddy web server) and a SaaS version is also available.
Docs and code are available on https://mercure.rocks
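For reference, publishing to a Mercure hub is a plain form-encoded POST authorized with a JWT bearer token. A minimal Python sketch; the hub URL, topic, and token below are placeholders:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_publish_request(hub_url: str, topic: str, data: str, jwt: str) -> Request:
    """Build the form-encoded POST that publishes an update to a Mercure hub."""
    body = urlencode({"topic": topic, "data": data}).encode()
    return Request(
        hub_url,
        data=body,
        headers={
            "Authorization": f"Bearer {jwt}",
            "Content-Type": "application/x-www-form-urlencoded",
        },
        method="POST",
    )

# The hub then broadcasts the update over SSE to every subscriber of the topic.
req = build_publish_request(
    "https://example.com/.well-known/mercure",  # the hub's default path
    "https://example.com/books/1",
    '{"status": "available"}',
    "<publisher JWT>",
)
```

Sending it is just `urlopen(req)`; the hub takes care of fanning the update out to connected subscribers.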
by dugmartin on 12/25/24, 10:20 PM
“Warning: When not used over HTTP/2, SSE suffers from a limitation to the maximum number of open connections, which can be especially painful when opening multiple tabs, as the limit is per browser and is set to a very low number (6).”
by piccirello on 12/25/24, 10:45 PM
by apitman on 12/25/24, 10:27 PM
For my use cases the main limitations of SSE are:
1. Text-only, so if you want to send binary data you need something like base64
2. Browser connection limits for HTTP/1.1, i.e. you can only have ~6 connections per domain[0]
Connection limits aren't a problem as long as you use HTTP/2+.
Even so, I don't think I would reach for SSE these days. For applications that are less sensitive to latency and data usage, I would just use long polling.
For things that are more performance-sensitive, I would probably use fetch with ReadableStream body responses. On the server side I would prefix each message with a 32-bit integer (or maybe a variable-length integer of some sort) that gives the size of the message. This is far more flexible (it allows binary data) and has less overhead than SSE, which adds 7 bytes ("data:" + "\n\n") of overhead for each message.
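The length-prefixed framing described above is easy to sketch. An illustrative Python version (function names are mine):

```python
import struct

def frame(message: bytes) -> bytes:
    """Prefix a message with its length as a 32-bit big-endian integer."""
    return struct.pack(">I", len(message)) + message

def deframe(buffer: bytes):
    """Split a byte stream into complete messages; returns (messages, leftover)."""
    messages = []
    while len(buffer) >= 4:
        (size,) = struct.unpack(">I", buffer[:4])
        if len(buffer) < 4 + size:
            break  # incomplete message: wait for more bytes
        messages.append(buffer[4:4 + size])
        buffer = buffer[4 + size:]
    return messages, buffer

# Binary-safe, with only 4 bytes of overhead per message and no base64.
stream = frame(b"\x00\x01binary") + frame(b"second")
msgs, rest = deframe(stream)
```

On the browser side, the same `deframe` logic would run over chunks read from the fetch response's ReadableStream.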
by Tiberium on 12/25/24, 11:08 PM
For example, my friend used an LLM proxy that sends keepalive/queue data as SSE comments (just for debugging mainly), but it didn't work for Gemini, because someone at Google decided to parse SSE with a regex: https://github.com/google-gemini/generative-ai-js/blob/main/... (and yes, if the regex doesn't match the complete line, the library will just throw an error)
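For contrast, the spec's parsing rules are line-based and handle comments trivially. A minimal, illustrative Python parser (data fields only):

```python
def parse_sse(text: str):
    """Parse an SSE stream into a list of event data strings.

    Lines starting with ':' are comments (often keepalives) and must be
    ignored; a regex that expects every line to be a field gets this wrong.
    """
    events, buf = [], []
    for line in text.split("\n"):
        if line == "":
            if buf:  # a blank line dispatches the buffered event
                events.append("\n".join(buf))
                buf = []
        elif line.startswith(":"):
            continue  # comment line: skip it, don't error
        elif line.startswith("data:"):
            value = line[5:]
            if value.startswith(" "):
                value = value[1:]  # the spec strips one leading space
            buf.append(value)
        # a full parser would also handle event:, id:, and retry: fields
    return events

stream = ": keepalive\ndata: hello\ndata: world\n\n"
```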
by recursivedoubts on 12/25/24, 10:11 PM
It was developed using Go & NATS as backend technologies, but works with any SSE implementation.
Worth checking out if you want to explore SSE and what can be achieved w/it more deeply. Here is an interview with the author:
by deaf_coder on 12/26/24, 4:21 PM
> SSE works seamlessly with existing HTTP infrastructure:
I'd be careful with that assumption. I have tried using SSE through a third-party load balancer at my work and it doesn't work that well. Because SSE responses are long-lived and don't normally close immediately, this load balancer kept buffering bytes from the server and wouldn't forward them until the server closed the connection, effectively making SSE useless. I had to use WebSockets instead to get around this limitation of the load balancer.
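For what it's worth, with nginx-style reverse proxies (not necessarily the load balancer above) the usual fix is to disable response buffering for the SSE endpoint; the upstream name here is a placeholder:

```nginx
location /events {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffering off;    # flush SSE chunks to the client immediately
    proxy_cache off;
    proxy_read_timeout 1h;  # keep the long-lived stream open
}
```

Applications can also request this per-response by sending an `X-Accel-Buffering: no` header, which nginx honors.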
by hamandcheese on 12/26/24, 12:13 AM
It turns out Firefox counts SSE connections against the six-connections-per-host limit, and gives absolutely no useful feedback that it is blocking subsequent requests because of that limit (I don't remember the precise error code and message anymore, but it left me clueless for a while). It only clicked when I stared at the lack of corresponding server-side logs.
I don't know if this same problem happens with websockets or not.
by RevEng on 12/26/24, 3:05 PM
The suggestions I've found online for dealing with the newline issue are to fold consecutive newlines together, but this loses the formatting of some documents and means there is no way to transmit data verbatim. That might be fine for HTML or other text formats where newlines are pretty much optional, but it's a problem for other data types.
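A common workaround is to make each payload a single line before it hits the data field, e.g. by JSON-encoding it, so newlines round-trip verbatim. An illustrative Python sketch:

```python
import json

def to_sse(payload: str) -> str:
    """Serialize a payload as single-line JSON inside one data field."""
    return f"data: {json.dumps(payload)}\n\n"  # dumps escapes raw newlines

def from_sse(field_value: str) -> str:
    """Recover the original payload from a received data field value."""
    return json.loads(field_value)

original = "line one\n\nline three\t(verbatim)"
event = to_sse(original)
```

The cost is a layer of escaping on the wire, but the payload survives the protocol's line-oriented framing untouched.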
I'm happy to have something like SSE but the protocol needs more time to cook.
by ramon156 on 12/25/24, 10:12 PM
Currently at work I'm having issues because:
- Auth between an embedded app and JavaScript's EventSource is not working, so I have to resort to a Microsoft package which doesn't always work.
- Not every tunnel is fond of keep-alive (Cloudflare), so I had to switch to ngrok (until I found out they have a limit of 20k requests).
I know this isn't the protocol's fault, and I'm sure there's something I'm missing, but my god is it frustrating.
by _caw on 12/26/24, 3:53 AM
This is false. SSE is not supported on many proxies, and isn't even supported on some common local proxy tooling.
by anshumankmr on 12/26/24, 9:59 AM
It was a pain to figure out how to get it working in a ReactJS codebase I was working on at the time; from what I remember, Axios didn't support it then, so I had to use the native fetch to get it to work.
by schmichael on 12/25/24, 11:02 PM
by cle on 12/26/24, 5:32 PM
I wouldn't characterize this as "automatic", you have to do a lot of manual work to support reconnection in most cases. You need to have a meaningful "event id" that can be resumed on a new connection/host somewhere else with the Last-Event-Id header. The plumbing for this event id is the trivial part IMO. The hard part is the server-side data synchronization, which is left as an exercise for the reader.
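To make that plumbing concrete, here is an illustrative Python sketch of the server side: monotonically increasing ids plus a bounded history, replayed for whatever a reconnecting client reports in Last-Event-Id. The hard synchronization problems (multiple hosts, durable storage) are exactly the part this glosses over:

```python
from collections import deque

class EventLog:
    """Bounded event history keyed by monotonically increasing ids."""

    def __init__(self, capacity: int = 1000):
        self.events = deque(maxlen=capacity)  # (id, data) pairs
        self.next_id = 1

    def publish(self, data: str) -> int:
        """Record an event and return its id (sent as the SSE id: field)."""
        event_id = self.next_id
        self.next_id += 1
        self.events.append((event_id, data))
        return event_id

    def replay_after(self, last_event_id: int):
        """Events a reconnecting client missed, per its Last-Event-Id header."""
        return [(i, d) for i, d in self.events if i > last_event_id]

log = EventLog()
for msg in ("a", "b", "c"):
    log.publish(msg)
missed = log.replay_after(1)  # a client that last saw id 1
```

If the client's last-seen id has already fallen out of the bounded history, the server has no choice but to signal a full resync, which is where the real work starts.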
Also, God help you if your SSE APIs have side effects. If the API call is involved in a sequence of side-effecting steps then you'll enter a world of pain by using SSE. Use regular HTTP calls or WebSockets. (Mostly b/c there's no cancellation ack, so retries are often racy.)
by programmarchy on 12/25/24, 10:14 PM
by upghost on 12/25/24, 11:58 PM
Secondly, on iOS mobile, I've noticed that the EventSource seems to fall asleep at some point and not wake up when you switch back to the PWA. Does anyone know what's up with that?
by wlonkly on 12/27/24, 11:03 PM
Also, surprised nobody's brought up PointCast[1] yet. Dotcom bubble, or ahead of their time?
(Aside: while looking for a good reference link for Pointcast, I found an All Things Considered episode[2] about it from 1996!)
[1] https://www.ecommerce-digest.com/early-dot-com-failure-case-...
by henning on 12/26/24, 12:03 AM
by lakomen on 12/26/24, 11:53 AM
https://developer.mozilla.org/en-US/docs/Web/API/Server-sent...
Edit: I see someone already posted about that
by yu3zhou4 on 12/25/24, 10:47 PM
by aniketchopade on 12/26/24, 12:43 PM
by est on 12/26/24, 6:37 AM
For FastAPI if you want some hooks when client disconnects aka nginx 499 errors, follow this simple tip
https://github.com/encode/starlette/discussions/1776#discuss...
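The gist of that pattern, with the Starlette specifics abstracted away (`is_disconnected` below stands in for `request.is_disconnected()`; everything else is illustrative):

```python
import asyncio

async def sse_stream(is_disconnected, queue):
    """Yield SSE frames until the client goes away.

    Polling the disconnect check between messages is what lets
    cleanup code run when the client drops (nginx's 499 case).
    """
    while not await is_disconnected():
        try:
            msg = await asyncio.wait_for(queue.get(), timeout=1.0)
        except asyncio.TimeoutError:
            continue  # periodic wakeup just to re-check the connection
        yield f"data: {msg}\n\n"

async def demo():
    state = {"gone": False}

    async def is_disconnected():
        return state["gone"]

    queue = asyncio.Queue()
    await queue.put("hello")
    frames = []
    async for frame in sse_stream(is_disconnected, queue):
        frames.append(frame)
        state["gone"] = True  # simulate the client dropping after one frame
    return frames

frames = asyncio.run(demo())
```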
by ksajadi on 12/26/24, 4:39 AM
by fitsumbelay on 12/26/24, 8:35 AM
I have a hard time imagining the tech's limits outside of testing scenarios so some of the examples brought up here are interesting
by junto on 12/27/24, 7:55 PM
by sbergjohansen on 12/25/24, 10:57 PM
https://news.ycombinator.com/item?id=30403438 (100 comments)
by benterix on 12/26/24, 12:13 PM
by Tiberium on 12/25/24, 11:20 PM
by whatever1 on 12/25/24, 11:20 PM
by crowdyriver on 12/26/24, 9:38 AM
by condiment on 12/25/24, 10:12 PM
I’m not sure this makes sense in 2024. Pretty much every web server supports websockets at this point, and so do all of the browsers. You can easily impose the constraint on your code that communication through a websocket is mono-directional. And the capability to broadcast a message to all subscribers is going to be deceptively complex, no matter how you broadcast it.