by qianli_cs on 3/18/25, 1:04 PM with 86 comments
by davedx on 3/22/25, 2:59 PM
Most apps I’ve worked on could have been a monolith on Postgres, but they never are once I’m not the sole engineer.
by localghost3000 on 3/22/25, 3:58 PM
I’ve built a bunch of distributed architectures. In every case, we would have been better served by a monolith architecture and a single relational DB like Postgres. In fact, I’ve only worked on one system with the kind of scale that would justify the additional complexity of a distributed architecture. Ironically, that system was a monolith with Postgres.
by geophile on 3/22/25, 3:39 PM
So something goes wrong, and you need to back out an update to one of your microservices. But that back-out attempt goes wrong. Or it happens after real-world actions have already been taken based on the update you need to back out. Or the problem that caused the backout was transient, everything turns out to be fine, but now your backout is making its way across the microservices. Back out the backout? What if that goes wrong? The "or"s never end.
Just use a centralized relational database, use transactions, and be done with it. People not understanding what can go wrong, and how RDB transactions can deal with a vast subset of those problems -- that's like the 21st century version of how to safely use memory in C.
Yes, of course, centralized RDBs with transactions are sometimes the wrong answer, due to scale, or genuinely non-atomic update requirements, or transactions spanning multiple existing systems. But I have the sense that they are often rejected for nonsensical reasons, or not even considered at all.
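To make that concrete, here's a minimal sketch (hypothetical schema, using psycopg2): every statement commits or rolls back as one unit, so there's no backout to orchestrate, let alone a backout of the backout.

    # Minimal sketch, hypothetical schema: all three writes commit or
    # roll back together, so no compensating "backout" logic exists.
    import psycopg2

    conn = psycopg2.connect("dbname=app")
    try:
        with conn:  # commits on success, rolls back on any exception
            with conn.cursor() as cur:
                cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s", (100, 1))
                cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s", (100, 2))
                cur.execute("INSERT INTO audit_log (note) VALUES (%s)", ("transfer 100: 1 -> 2",))
    finally:
        conn.close()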
by ptx on 3/22/25, 3:46 PM
This doesn't seem like a correct description of events. Distributed systems existed in the '90s, and there was, e.g., Microsoft Transaction Server [0], which was intended to do exactly this. It's not a new problem.
And the article concludes:
> This manages the complexity of a distributed world, bringing the complexity of a microservice RPC call or third-party API call closer to that of a regular function call.
Ah, just like DCOM [1] then, just like in the 90s.
[0] https://en.wikipedia.org/wiki/Microsoft_Transaction_Server
[1] https://en.wikipedia.org/wiki/Distributed_Component_Object_M...
by mmastrac on 3/22/25, 1:45 PM
You still have four layers, it's just that one is hidden with annotations.
by dventimi on 3/22/25, 2:27 PM
Immediately, I see problems. Martin Fowler's "Patterns of Enterprise Application Architecture" was first published in 2002, a year that I think most people will agree was not in "the 90's." Also, was that the motivation? Are we sure? Who had that motivation? Were there any other motivations at play?
by bazizbaziz on 3/22/25, 4:40 PM
IMO the next big advance in this space is improving the authoring experience. In short, when it comes to workflows, we are basically still writing assembly code.
Writing workflows today means either a totally separate language (AWS Step Functions), function-level annotations (Temporal, DBOS, etc.), or event/reconciliation loops that read state from the DB/queue. In all cases, devs must manually determine when state should be written back to the persistence layer. This adds a level of complexity most devs aren't used to and shouldn't have to reason about.
Personally, I think the ideal here is writing code in any structure the language supports, and having the language runtime automatically persist program state at appropriate times. The runtime should understand when persistence is needed (i.e. which API calls are idempotent and for how long) and commit the intermediate state accordingly.
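For contrast, here's what the annotation style looks like today. The @workflow/@step decorators below are illustrative no-op stand-ins for frameworks like Temporal or DBOS, where the developer hand-picks every persistence boundary:

    # Illustrative sketch: @workflow/@step are no-op stand-ins for the
    # annotation style of Temporal, DBOS, etc. Every @step marks a point
    # where the developer decided state must be written back.
    def workflow(fn):
        return fn

    def step(fn):
        return fn

    @step
    def create_account(user_id: str) -> dict:
        return {"user_id": user_id, "status": "active"}

    @step
    def send_welcome_email(account: dict) -> None:
        print(f"emailing {account['user_id']}")

    @workflow
    def onboard_user(user_id: str) -> None:
        account = create_account(user_id)   # checkpoint chosen by the dev
        send_welcome_email(account)         # another hand-picked checkpoint

    onboard_user("u123")

In the runtime-managed ideal described above, none of those decorators would exist: the runtime would infer the checkpoints itself.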
by itsthecourier on 3/22/25, 2:07 PM
We had an overly complex architecture: over 30 microservices, over USD 20k in monthly cloud fees.
We rewrote the thing into a monolith in 6 months. Development team cut in half, server costs down 80-90%, latency down over 60%.
Newer is not better. Each microservice must be born from a real necessity grounded in usage stats, server stats, and cost analysis, not adopted by default from tutorials.
by whilenot-dev on 3/22/25, 6:06 PM
Isn't that a bit... unoptimized? The orchestrator domain doesn't seem to be demanding on compute, so why aren't they making proper use of asyncio here in the first place? And why aren't they outsourcing their runtime to an independent process?
EDIT:
So "To manage this complexity, we believe that any good solution to the orchestration problem should combine the orchestration and application tiers." (from the article) means that your application runtime will also become the orchestrator for its own workflow steps. Is that a good solution?
EDIT2:
Are they effectively just shifting any uptime responsibility (delivery guarantees included) to the application process?
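Concretely, here's a rough sketch (illustrative names) of what that combined-tier design implies: the application process drives its own workflow steps, so it is also its own orchestrator.

    # Rough sketch, illustrative names: no separate orchestration tier.
    import asyncio

    async def charge_card(order_id: str) -> None:
        await asyncio.sleep(0.1)  # stand-in for a payment API call

    async def ship_order(order_id: str) -> None:
        await asyncio.sleep(0.1)  # stand-in for a fulfillment API call

    async def order_workflow(order_id: str) -> None:
        await charge_card(order_id)
        # If this process crashes here, no external orchestrator exists to
        # resume the workflow; the application must recover it on restart.
        await ship_order(order_id)

    asyncio.run(order_workflow("order-42"))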
by mrkeen on 3/22/25, 3:10 PM
    program = email all customers
    failure = throttled by mailchimp
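A sketch of what surviving that failure takes (all names illustrative): checkpoint per-customer progress, so a resumed run retries the throttled send without re-emailing anyone.

    # Illustrative sketch: resumable, idempotent "email all customers".
    import time

    sent: set[str] = set()  # stand-in for a persisted "already emailed" table

    class Throttled(Exception):
        pass

    def send_email(customer: str) -> None:
        ...  # stand-in for the provider call, which may raise Throttled

    def email_all(customers: list[str]) -> None:
        for customer in customers:
            if customer in sent:
                continue  # emailed in a previous attempt: skip, don't resend
            while True:
                try:
                    send_email(customer)
                    sent.add(customer)  # checkpoint progress per customer
                    break
                except Throttled:
                    time.sleep(30)  # back off, then retry just this one

    email_all(["a@example.com", "b@example.com"])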
by gizzlon on 3/22/25, 6:04 PM
Reads like the setup for a sales pitch, which came at the end.
by Trasmatta on 3/22/25, 1:44 PM
*citation needed
We continue to make things much more complex than they need to be. Even better when NON "enterprise" applications also buy into the insane complexity because they feel like they have to (but they have nowhere near the resources to manage that complexity).