by FragrantRiver on 1/26/22, 10:28 AM with 42 comments
by detaro on 1/26/22, 11:44 AM
This same blog already had a copied submission yesterday, so it seems it regularly does this: https://news.ycombinator.com/item?id=30061113
by yawniek on 1/26/22, 11:18 AM
by mindwok on 1/26/22, 11:04 AM
Also, on the comment about Redis not being persistent: Redis has lots of persistence options that let you choose trade-offs between durability and performance, with the strictest setting using an append-only log of every write command.
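To make that concrete, here is a minimal sketch of enabling Redis's strictest durability mode with the redis-py client; host, port, and the idea of flipping it at runtime are assumptions, and in practice you would set these keys in redis.conf:

```python
import redis  # redis-py client, assumed available

r = redis.Redis(host="localhost", port=6379)

# Enable the append-only file and fsync it after every write command.
# "always" is the strictest (and slowest) of the appendfsync modes;
# "everysec" and "no" trade durability for throughput.
r.config_set("appendonly", "yes")
r.config_set("appendfsync", "always")
```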
Decent 1000 mile view of these solutions, but some more depth would have been nice.
by cebert on 1/26/22, 11:13 AM
This approach has been working well for us in our serverless apps, without the need for Kafka. I'm curious if anyone is doing something similar.
by coffeeri on 1/26/22, 11:21 AM
[0] https://nats.io/
[1] https://github.com/nats-io/nats-server
[2] https://nats.io/download/#nats-clients
by Lio on 1/26/22, 11:16 AM
It's what I use when I don't know what I should use.
(In fact, these days I might even be tempted to start with a simple PostgreSQL-based queue and only swap to Redis later if it becomes clear that's what's needed.)
I guess a better approach might be to carefully analyse requirements up front, but if those requirements aren't known when you start the project, it's useful just to get you going.
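A minimal sketch of the kind of PostgreSQL-based queue mentioned above, using FOR UPDATE SKIP LOCKED so concurrent workers don't block each other; the table name, columns, and connection string are assumptions:

```python
import psycopg2  # assumed driver; any Postgres client works the same way

conn = psycopg2.connect("dbname=app")

def dequeue_one():
    """Claim and complete one pending job, or return None if the queue is empty."""
    with conn, conn.cursor() as cur:  # the connection context commits or rolls back
        cur.execute(
            """
            SELECT id, payload FROM jobs
            WHERE status = 'pending'
            ORDER BY id
            FOR UPDATE SKIP LOCKED   -- other workers skip rows we've locked
            LIMIT 1
            """
        )
        row = cur.fetchone()
        if row is None:
            return None
        job_id, payload = row
        # ... do the work here, then mark it done in the same transaction ...
        cur.execute("UPDATE jobs SET status = 'done' WHERE id = %s", (job_id,))
        return payload
```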
by jpgvm on 1/26/22, 12:23 PM
The most important features are normally not message liveliness or throughput (complex routing can definitely be a defining feature, however). This is because both are fairly easy to solve for, especially in the absence of other constraints.
Much more important are durability, ordering, partitioning, acknowledgement models, fencing/isolation/failure of both brokers and consumers, etc.
These are all very nuanced things but ultimately determine which systems can be used for which applications.
A lot of people will rush to recommend Kafka, but it's actually a rather narrow solution. Its distributed log model is definitely the right way to persist and replicate messages, but its fetch and consumer group APIs are essentially hot garbage for anything except strict streaming or other ordered processing cases.
This is the major sharp edge of Kafka that people don't understand and end up pigeonholed into patching themselves: strict cumulative acknowledgement. It leads to head-of-line blocking, and the only solutions involve tracking acknowledgements yourself, either by not using consumer groups at all or by layering some inefficient mechanism on top that only advances the offset appropriately and properly skips already-processed messages when recovering/rebalancing.
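A rough sketch of that workaround: track completion per message yourself and only commit the highest contiguous offset, since committing offset N cumulatively acknowledges everything before it. The broker address, topic, and group names are assumptions, and a single partition is assumed for brevity:

```python
from confluent_kafka import Consumer, TopicPartition

def process(msg):
    # Placeholder: in real code this would hand work to a pool and may
    # complete out of order, which is when the bookkeeping below matters.
    pass

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "workers",
    "enable.auto.commit": False,   # we manage offsets ourselves
})
consumer.subscribe(["jobs"])

done = set()            # offsets that finished, possibly out of order
next_to_commit = None   # lowest offset not yet known to be complete

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    if next_to_commit is None:
        next_to_commit = msg.offset()
    process(msg)
    done.add(msg.offset())
    # Advance only across a contiguous run of completed offsets, so a stuck
    # message is never implicitly acknowledged by a later commit.
    while next_to_commit in done:
        done.remove(next_to_commit)
        next_to_commit += 1
    consumer.commit(
        offsets=[TopicPartition(msg.topic(), msg.partition(), next_to_commit)],
        asynchronous=False,
    )
```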
An alternative this article misses is Apache Pulsar, which is much better suited to the role of "general purpose messaging system": it can just as easily function as a worker queue where ordering isn't important, and it supports various models of ordered consumption depending on your requirements.
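For illustration, a small sketch with the Pulsar Python client: a Shared subscription behaves like a worker queue with per-message acks, while KeyShared gives per-key ordering instead. The service URL, topic, and subscription names are assumptions:

```python
import pulsar

def handle(data):  # placeholder for real processing
    pass

client = pulsar.Client("pulsar://localhost:6650")

# Shared: messages fan out across consumers and are acked individually -- a
# worker queue with no ordering guarantee. Swap in pulsar.ConsumerType.KeyShared
# for ordered consumption per key.
worker = client.subscribe(
    "jobs",
    subscription_name="workers",
    consumer_type=pulsar.ConsumerType.Shared,
)

msg = worker.receive()
try:
    handle(msg.data())
    worker.acknowledge(msg)            # per-message ack, no cumulative offset
except Exception:
    worker.negative_acknowledge(msg)   # message will be redelivered

client.close()
```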
I was also going to suggest LogDevice but it appears it's been abandoned/archived sadly.
Regardless, ignore fluff articles like this. Understand the caveats of the Kafka API before going all-in; if your problem fits, it's a very simple and cost-effective solution, so it's worth it if the constraints don't bother you and you aren't annoyed by Confluent's stewardship.
Otherwise I would prefer Pulsar; it's the more flexible option and one you are unlikely to grow out of. Even as you get big, it's natively multi-tenant, geo-replicated, etc.
by phoe-krk on 1/26/22, 11:15 AM
[0] https://blog.crunchydata.com/blog/message-queuing-using-nati...
by NiekvdMaas on 1/26/22, 11:04 AM
by pards on 1/26/22, 11:30 AM
4. Redelivery
Are messages redelivered if there's a failure in the consumer? This is critical to many systems using older messaging platforms like MQ.
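As one illustration of consumer-side redelivery (using RabbitMQ's pika client here purely as an example, not necessarily the platform the parent has in mind): a message that isn't acknowledged, or is negatively acknowledged with requeue, goes back on the queue for another attempt. Queue name and connection details are assumptions:

```python
import pika  # RabbitMQ client, used only to illustrate redelivery on failure

def handle(body):  # placeholder for real processing
    pass

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.queue_declare(queue="work", durable=True)

def on_message(channel, method, properties, body):
    try:
        handle(body)
        channel.basic_ack(delivery_tag=method.delivery_tag)
    except Exception:
        # requeue=True puts the message back so it is redelivered,
        # to this consumer or another one.
        channel.basic_nack(delivery_tag=method.delivery_tag, requeue=True)

ch.basic_consume(queue="work", on_message_callback=on_message)
ch.start_consuming()
```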
by humpydumpy on 1/26/22, 11:23 AM