by nexo-v1 on 5/8/25, 1:23 PM with 263 comments
by asim on 5/8/25, 3:32 PM
Basically this. Microservices are a design pattern for organisations rather than for technology. Sounds wrong, but the technology change should follow the organisational breakout into multiple teams delivering separate products or features. And this isn't a first step. You'll have a monolith; it might break out into frontend, backend, and a separate service for async background jobs (e.g. PDF creation is often a background task because of how long it takes). Anyway, after that you might end up with more services, and then you have this sprawl of things where you start to think about standardisation, architecture patterns, etc. Before that it's a death sentence, and if your business survives I'd argue it didn't survive because of microservices but in spite of them. The dev time lost in the beginning, say sub-200 engineers, is significant.
by mindcrash on 5/8/25, 3:43 PM
They ignored me and went the microservices way.
Guess what?
2 years later the rebuild of the old codebase was done.
3 years later and they are still fighting delivery and other issues they would never have had if they hadn't ignored me and had just gone for the "lame" monolith.
Moral of this short story: I can personally say everything this article says is pretty much true.
by jihadjihad on 5/8/25, 3:36 PM
> grug wonder why big brain take hardest problem, factoring system correctly, and introduce network call too
> seem very confusing to grug
by didip on 5/8/25, 4:23 PM
It’s a tool to solve people issues. They can remove bureaucratic hurdles and allow devs to somewhat be autonomous again.
In a small startup, you really don't gain much from them, unless the domain really necessitates them, e.g. the company uses Elixir but all of the AI tooling is written in Python/Go.
by jerf on 5/8/25, 3:59 PM
I can't prove this scales up forever but I've been very happy with making sure that things are carefully abstracted out with dependency injection for anything that makes sense for it to be dependency-injected, and using module boundaries internally to a system as something very analogous to microservices, except that it doesn't go over a network. This goes especially well with using actors, even in a non-actor-focused language, because actors almost automatically have that clean boundary between them and the rest of the world, traversed by a clean concept of messages. This is sometimes called the Modular Monolith.
Done properly, should you later realize something needs to be a microservice, you get clean borders to cut along and clean places to deal with the consequences of turning it into a network service. It isn't perfect, but it's a rather nice cost/benefit tradeoff. I've cut, oh, 3 or 4 microservices out of monoliths in the past 5 years or so. It's not something I do every day, and I'm not optimizing my modular monoliths for that purpose... I do modular monoliths because it is also just a good design methodology... but it is a nice bonus to harvest sometimes. It's one of the rare times when someone comes and quite reasonably expects that extracting something into a shared service will take months, and you can be like "would you like a functioning prototype of it next week?"
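For illustration, here is a minimal Python sketch of that kind of boundary (all the names — `BillingPort`, `InMemoryBilling`, `checkout` — are invented for the example): callers depend on a narrow injected interface and pass messages across it, so extracting the module into a network service later only means writing a new adapter.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass(frozen=True)
class Invoice:
    """The 'message' that crosses the module boundary."""
    customer_id: str
    amount_cents: int


class BillingPort(Protocol):
    """The only surface other modules are allowed to see."""
    def charge(self, invoice: Invoice) -> bool: ...


class InMemoryBilling:
    """In-process implementation; an HTTP/gRPC adapter could replace it."""
    def __init__(self) -> None:
        self.charged: list[Invoice] = []

    def charge(self, invoice: Invoice) -> bool:
        self.charged.append(invoice)
        return True


def checkout(billing: BillingPort, customer_id: str, cents: int) -> bool:
    # Dependency injection: checkout never imports billing internals.
    return billing.charge(Invoice(customer_id, cents))
```

Because `checkout` only knows about `BillingPort`, swapping `InMemoryBilling` for a network-backed adapter doesn't touch any call sites.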
by xcskier56 on 5/8/25, 4:32 PM
- You need to use a different language than your core application. E.g. we build Rails apps but need to use R for a data pipeline and 100% could not build this in ruby.
- You have one service that has vastly different scaling requirements than the rest of your stack. Then splitting that part off into its own service can help
- You have a portion of your data set that has vastly different security and lifecycle requirements. E.g. you're getting healthcare data from medicare.
Outside of those, and maybe a few other edge cases, I see basically no reason why a small startup should ever choose microservices... you're just setting yourself up for more work for little to no gain.
by hosh on 5/8/25, 5:42 PM
> Microservices only pay off when you have real scaling bottlenecks, large teams, or independently evolving domains.
The BEAM language platform can cover scaling bottlenecks (at least within certain ranges of scale) and independently evolving domains, but has many of the advantages of working with a monolith when the team is small and searching for product-fit.
Like anything, there are tradeoffs. The main one is that you'd have to learn how to write code with immutable data structures, and you have to be more thoughtful about how concurrent processes talk to each other and what kind of failure modes you want to design into things. Many teams also don't know how to hire Erlang or Elixir developers.
by siliconc0w on 5/8/25, 4:18 PM
by mikeocool on 5/8/25, 4:44 PM
Though, if you’re on a small team and really want to use micro services two places I have found it to be somewhat advantageous:
* wrapping particularly bad third-party APIs or integrations — you're already forced into having a network boundary, so adding a service at the boundary doesn't increase complexity all that much. Basically this lets you isolate the big chunk of crappy code involved in integrating with the 3rd party, and give it a nice API your monolith can interact with.
* wrapping particularly hairy dependencies — if you’ve got a dependency with a complex build process that slows down deployments or dev setup — or the dependency relies on something that conflicts with another dependency — wrapping it in its own service and giving it a nice API can be a good way to simplify things for the monolith.
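As a hypothetical Python sketch of the first point (the vendor field names here are invented), the code that tames a messy third-party payload can be as small as a normalization function; whether it lives in-process or behind its own service, the rest of the system only ever sees the clean shape:

```python
def normalize_vendor_user(raw: dict) -> dict:
    """Translate the vendor's inconsistent payload into our schema."""
    return {
        # The vendor is inconsistent about the key casing, so try both.
        "id": str(raw.get("UserID") or raw.get("user_id") or ""),
        # Emails arrive with stray whitespace and mixed case.
        "email": (raw.get("EmailAddr") or "").strip().lower(),
        # Status is a free-form string; collapse it to a boolean.
        "active": str(raw.get("Status", "")).upper() == "ACTIVE",
    }
```

Keeping every such translation in one place is most of the value; putting a network boundary around it is then a deployment decision, not a design change.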
by roguecoder on 5/8/25, 4:14 PM
You can get the architectural benefits of microservices by using message-passing-style object-oriented programming. It requires the discipline not to reach directly into the database, but assuming you just Don't Do That, a well-encapsulated "object" is a microservice that runs in the same virtual machine as the other microservices.
Java is the most mainstream language that supports that: whenever you find yourself reaching for a microservice, instead create a module, namespace the database tables, and then expose only the smallest possible public interface to other modules. You can test them in isolation, monitor the connections between them, and bonus: it is trivial to deploy changes across multiple "services" at the same time.
by no_wizard on 5/8/25, 3:34 PM
For example, we have an authentication microservice at work. It makes sense that it lives outside of the main application: it's used in multiple different contexts, the service boundary allows it to be more responsive to changes, upgrades, and security fixes than having it be part of the main app, and it deploys differently than the application. It also adds enough intentional friction that we don't accidentally put logic where it doesn't belong as part of the user authentication process. It has helped keep the code focused on only primary concerns.
That said, you can't apply any of these patterns blindly, as is so often the case. A good technical leader should push back when the benefits don't actually exist. The real issue is lack of experience making technical decisions on merits.
This includes high level executive leaders in the organization. At a startup especially, they are still often involved in many technical decisions. You'd be surprised (well maybe not!) how the highest leadership in a company at a startup will mandate things like using microservices and refuse to listen to anything running counter to such things.
by dkarl on 5/8/25, 4:32 PM
In monoliths, they generally don't.
There's no logical reason why you couldn't pay as much attention to decomposition and API design between the modules of a monolith. You could have the benefit of good design without all the architectural and operational challenges of microservices. Maybe some people succeed at this. But in practice I've never seen it. I've seen people handle the challenges of microservices successfully, and I've never seen a monolith that wasn't an incoherent mess internally.
This is just my experience, one person's observations offered for what they're worth.
In practice, in the context of microservices, I've seen an entire team work together for two weeks to break down a problem coherently, holding off on starting implementation because they knew the design wasn't good enough and it was worth the time to get it right. I've seen people escalate issues with others' designs because they saw a risk and wanted to address it.
In the context of monoliths, I've never seen someone delay implementation so much as a day because they knew the design was half-baked. I rarely see anyone ask for design feedback or design anything as a team until they've screwed something up so badly that it can't be avoided. People sometimes make major design decisions in a split second while coding. What kind of self-respecting senior developer would spend a week getting input on an internal code API before starting to implement? People sometimes aren't even aware that the code they wrote that morning has implications for code that will be written later.
Theoretically this is okay because refactoring is easy in a monolith. Right? ... It is, right?
I'm basically sold on microservices because I know how to get developers to take design seriously when it's a bunch of services talking to each other via REST or grpc, and I don't know how to get them to take the internal design of a monolith seriously.
by bitcurious on 5/8/25, 4:51 PM
1. You get to minimize devops/security/admin work. Really a consequence of using serverless tooling, but you land on something like a microservices architecture if you do.
2. You can break out work temporally. This is the big one: when you're a small team supporting multiple products, you often don't have continuity of work. You have one project for a few months, then a completely unrelated product for another few months. Microservice architectures are easier to build and maintain in that environment.
by Ensorceled on 5/8/25, 4:15 PM
In the Q&A afterward, another local startup CTO asked about problems their company was having with their microservices.
The successful CTO asked two questions: "How big is your microservices tooling team?" and "How big is your Dev Ops Team?"
His point was: if your development team is not big enough to afford dedicated teams for tooling and dev ops, it's not big enough to afford microservices.
by monero-xmr on 5/8/25, 4:01 PM
by bob1029 on 5/8/25, 4:34 PM
One should consider if they can dive even deeper into the monolithic rabbit hole. For example, do you really need an external hosted SQL provider, or could you embed SQLite?
From a latency & physics perspective, monolith wins every time. Making a call across the network might as well take an eternity by comparison to a local method. Arguments can be made that the latency can be "hidden", but this is generally only true for the more trivial kinds of problems. For many practical businesses, you are typically in a strictly serialized domain which means that you are going to be forced to endure every microsecond of delay. Assuming that a transaction was not in conflict doesn't work at the bank. You need to be sure every time before the caller is allowed to proceed.
The tighter the latency domain, the less you need to think about performance. Things can be so fast by default that you can actually focus on building what the customer is paying for. You stop thinking about the sizes of VMs, who's got the cheapest compute per dollar and other distracting crap.
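As a minimal illustration of the embedding suggestion above, Python's stdlib `sqlite3` gives you a real SQL database in-process, with no network hop at all:

```python
import sqlite3

# ":memory:" keeps everything in-process; use a file path for durability.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('a1', 100)")
conn.commit()

# A query is a local function call into the embedded engine, not an RPC.
(balance,) = conn.execute(
    "SELECT balance FROM accounts WHERE id = 'a1'"
).fetchone()
```

Every query here completes in the caller's latency domain, which is exactly the "so fast by default" property the comment describes.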
by addisonj on 5/8/25, 5:26 PM
What this article doesn't cover... and where a good chunk of my career has been, is when companies are driven to break out into services, which might be due to scale, team size, or becoming a multi-product company. Whatever the reason, it can kill velocity during the transition. In my experience, if this is being done to support becoming multi-product, this loss in velocity comes at the worst time and can sink even very competent teams.
As an industry, the gap between what makes sense for startups and what makes sense for scale can be a huge chasm. To be clear, I don't think it means you should invest in microservices on the off-chance you need to hit scale (which I think is the trap many talk themselves into), nor does it mean that you should always head to microservices even when you hit those forcing functions (scaling monoliths is possible!)
That said, modularity, flexibility, and easy evolution are super important as companies grow, and I really do think the next generation of tools and platforms will benefit from suiting themselves better to evolution and flexibility than they do today. One idea I have held for some time is platforms that "feel" like a monolith, but are 1) more concrete in building firmer interfaces between subsystems and 2) flexible in how calls happen between these interfaces (imagine being able to run a subsystem embedded, or transparently move calls over an RPC interface). Certainly that is "possible" with well-structured code in today's platforms... but it isn't always natural.
I am not sure what the answer is, but I really hope the next 10 years of my career has fewer massive chasms crossed via huge multi-year painful efforts, and more cautious, careful evolution enabled by well-considered tools and platforms.
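One hedged sketch of how a "feels like a monolith, but can move over RPC" seam might look in Python (all names here — `Transport`, `EmbeddedTransport`, `make_inventory` — are invented): callers go through a transport interface, so a subsystem can run embedded today and move behind RPC later without touching call sites.

```python
from typing import Callable, Protocol


class Transport(Protocol):
    """The seam: callers only know method names and dict payloads."""
    def call(self, method: str, payload: dict) -> dict: ...


class EmbeddedTransport:
    """Runs the subsystem in-process: just a function dispatch."""
    def __init__(self, handlers: dict[str, Callable[[dict], dict]]):
        self._handlers = handlers

    def call(self, method: str, payload: dict) -> dict:
        return self._handlers[method](payload)


# A hypothetical RpcTransport would serialize `payload` and send it to
# the subsystem's endpoint; callers would never know the difference.

def make_inventory() -> Transport:
    """Wire up a toy inventory subsystem behind the seam."""
    stock = {"sku-1": 3}
    return EmbeddedTransport({
        "reserve": lambda p: {"ok": stock.get(p["sku"], 0) > 0},
    })
```

The cost is that every cross-subsystem call is shaped like a remote call even when it isn't, which is exactly the discipline the comment says isn't natural in today's platforms.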
by gleenn on 5/8/25, 3:53 PM
by 4ndrewl on 5/8/25, 6:50 PM
Microservice architecture is a deployment strategy.
If you have a problem with deployments (eg large numbers of teams, perhaps some external suppliers running at different cadences, or with different tech stacks) the microservices are a fine solution to this.
by andreygrehov on 5/9/25, 2:13 AM
by alaithea on 5/8/25, 4:40 PM
by rglover on 5/8/25, 5:45 PM
1. Start with a monolith
2. If necessary, set up a job server that can be vertically/horizontally scaled and then give it a private API, or, give it access to the same database as the monolith.
For an overwhelming number of situations, this works great. You separate the heavy compute workloads from the customer-facing CRUD app and can scale the two independent of one another.
The whole microservices thing always seemed like an attempt by cloud providers to just trick you into using their services. The first time I ever played with serverless/lambda, I had a visceral reaction to the deployment process and knew it would end in tragedy.
by parpfish on 5/8/25, 4:33 PM
My current job insists that they have a “simple monolith” because all the code is in a single repo. But that repo has code to build dozens of python packages and docker containers. Tons of deploy scripts. Different teams/employees are isolated to particular parts of the codebase.
It feels a lot like microservices, but I don’t know what the defining feature of microservices is supposed to be
by phodge on 5/8/25, 7:08 PM
For example you may be forced to split out some components into separate services because they require a different technology stack to the monolith, but that doesn't strictly require a separate source code repository.
by johncoltrane on 5/8/25, 4:14 PM
by karmakaze on 5/8/25, 5:10 PM
- Use one-way async messaging. Making a UserService that everything else uses synchronously via RPC/REST/whatever is a very bad idea and an even worse time. You'll struggle for even 2-nines of overall system uptime (because they don't average, they multiply down).
- 'Bounded context' is the most important aspect of microservices to get right. Don't make <noun>-services. You can make a UserManagementService that has canonical information about users. That information is propagated to other services which can work independently each using the eventually consistent information they need about users.
There's other dumb things that people do like sharing a database instance for multiple 'micro'-services and not even having separately accessible schemas. In the end if done well, each microservice is small and pleasant to work on, with coordination between them being the challenging part both technically and humanly.
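The uptime point in the first bullet is easy to verify: availabilities along a synchronous call chain multiply, they don't average. A quick sketch:

```python
def chain_availability(per_service: float, n_services: int) -> float:
    """Overall availability of a strictly synchronous call chain:
    every service must be up, so the probabilities multiply."""
    return per_service ** n_services


# Ten services at 99.9% each: the chain lands around 99.0%,
# i.e. barely two nines despite each hop having three.
overall = chain_availability(0.999, 10)
```

One-way async messaging sidesteps this because a consumer being down delays work instead of failing the caller's request.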
by bossyTeacher on 5/8/25, 3:52 PM
by jmyeet on 5/8/25, 4:12 PM
Every service boundary you have to cross is a point of friction and a potential source of bugs and issues, so by having more microservices you just have more that can go wrong, by definition.
A service needs to maintain an interface for compatibility reasons. Each microservice needs to do that and do integration testing with every service it interacts with. If you can't deploy a microservice without also updating all its dependencies, then you don't have an independent service at all. You just have a more complicated deployment with more bugs.
The real problem you're trying to solve is deployment. If a given service takes 10 minutes to restart, then you have a problem. Ideally that should be seconds. But more ideally, you should be able to drain traffic from it then replace it however long it takes and then slowly roll it out checking for canary changes. Even more ideally, this should be largely automated.
Another factor: build times. If a service takes an hour to compile, that's going to be a huge impediment to development speed. What you need is a build system that caches hermetic artifacts so this rarely happens.
With all that above, you end up with what Google has: distributed builds, automated deployment and large, monolithic services.
by bilbo-b-baggins on 5/9/25, 12:09 AM
Do you have standardization and reuse of things like linting, formatting, ci/cd pipelines, version stability, deployment patterns, monitoring integrations, integration and end to end testing, etc.? If you’re doing those things bespoke per repo/deployment, or if you don’t have roles dedicated to the support and maintenance, you’re not going to have a good time with microservices.
Do you have actual issues of scale where API hot paths are dominating your runtime? Are they horizontally scalable, or bottlenecked on downstream dependencies (databases)? You can't solve scale issues by just spinning up microservices willy-nilly (e.g. by domain topic).
Is your development environment sophisticated enough to actually run a stack? Or do you have supporting clusters that allow for local binding of services? If not, you’re going to struggle with microservice local development, and pay for a slow QA in staging.
Does all that require supporting roles and expertise? Yeap. If you’re a 5 person startup you probably don’t have that. If you’re a 150 person startup, you might.
I’ve seen Java monoliths with 11M lines of code that represented 80% of the production cost to run and the gradual break out of targeted APIs to microservices halved that while the monolith still lived on. I’ve seen queued microservice architectures ripping through tens of millions of events/requests a minute with less than a thousand pods (across the services) and a fraction of the resources of the monolith.
Ultimately there’s no free lunch in software and you shouldn’t pursue any design without understanding the tradeoffs.
by gad21034 on 5/9/25, 1:50 AM
- monolith to start with very little time spent on code architecture patterns like DDD (although these days with llms I would say go for it and use DDD patterns in your prompts)
- optimize code cleanliness by adhering to better code architecture patterns
- when it feels like you are doing weird things to scale a process on the monolith (e.g. scheduling background tasks where you could break out a pub/sub-to-function service and defend your uptime while coordinating on a shared DB), drop any religion around avoiding microservices.
by CharlieDigital on 5/8/25, 4:50 PM
> ... conflate logical boundaries (how code is written) with physical boundaries (how code is deployed)
It's very easy to read and digest, and I think it's a great paper that makes the case for building "modular monoliths". I think many teams do not have a practical guide on how to achieve this. Certainly, Google's solution in this case is far too complex for most teams. But many teams can achieve the 5 core benefits that they mentioned with a simpler setup. I wrote about this in a blog post, A Practical Guide to Modular Monoliths with .NET [1], with a GitHub repo showing how to achieve this [2] as well as a video walkthrough [3]
This approach has proven (for me) to be easy to implement, package, deploy, and manage and is particularly good for startups with all of the qualities mentioned in the Google paper without much complexity added.
[0] https://dl.acm.org/doi/pdf/10.1145/3593856.3595909
[1] https://chrlschn.dev/blog/2024/01/a-practical-guide-to-modul...
by PathOfEclipse on 5/8/25, 6:48 PM
The article does mention "invest in modularity", but to be honest, if you're in frantic startup mode dumping code into a monolith, you're probably not caring about modularity either.
Lastly, I would imagine it's easier to start with microservices, or multiple mid-sized services if you're relying on advanced cloud infra like AWS, but that has its own costs and downsides.
by duxup on 5/8/25, 4:28 PM
For large orgs where each service has a dedicated team it starts to make sense... but then it becomes clear that microservices are an organizational solution.
by haburka on 5/8/25, 10:35 PM
by utmb748 on 5/8/25, 4:42 PM
CI/CD: infra can be defined as code and shared across services, K8s port-forward works for local development, you get better resource utilization, multiple envs and so on, and the available tooling, if set up correctly, usually keeps working.
An unmentioned plus: usually smaller merge requests, features can be split up and better estimated, fewer conflicts during work or testing... plus the possibility to share code in packages.
Also, if there are no tests, it doesn't matter whether it's a monorepo or microservices; you can break things easily or spend more time.
You should afford tests and documentation, and keep working on tech debt.
Another common issue I see: too big a tech stack because something is popular.
by Cthulhu_ on 5/8/25, 3:47 PM
The other one was a microservice architecture in front of the real problem, a Java backend service that hid the real real problem, one or more mainframes. But the consultants got to play in their microservices garden, which was mostly just a REST API in front of a Postgres database that would store blobs of JSON. And of course these microservices would end up needing to talk to each other through REST/JSON.
I've filed this article in my "microservices beef" bookmarks folder if I ever end up in another company that tries to do microservices. Of course, that industry has since moved on to lambdas, which is microservices on steroids.
by root_axis on 5/8/25, 4:20 PM
by vjvjvjvjghv on 5/8/25, 3:43 PM
by metalrain on 5/8/25, 5:34 PM
by abhisek on 5/8/25, 3:53 PM
A few cases where microservices probably make sense: when we have a small and well-bounded use case like webhooks management, notifications, or maybe read scaling on some master dataset
by bzmrgonz on 5/8/25, 4:36 PM
by lenerdenator on 5/8/25, 4:33 PM
by mvdtnz on 5/8/25, 8:21 PM
(It's no coincidence that this company was largely loaded up with ex-Googlers in the early days).
by cedws on 5/9/25, 12:20 PM
by ngrilly on 5/8/25, 7:28 PM
by yawnxyz on 5/8/25, 4:41 PM
Using them makes it easy to build endpoints for things like WhatsApp and other integrations
by stevebmark on 5/8/25, 5:36 PM
Love this quote, it should be a poster on the wall of any dev who pushes Domain Driven Design on an engineering team.
by nottorp on 5/8/25, 5:01 PM
The catch is to keep them all in mind and use them in moderation.
Like everything else in life.
by sisve on 5/8/25, 5:00 PM
Context and nuances
by mountainriver on 5/8/25, 3:52 PM
Just use regular sized services
by sergiotapia on 5/8/25, 4:27 PM
it takes skill and taste to use only enough of each. unfortunately a lot of VC $$$ has been spent by cloud companies and a whole generation or two of devs are permasoiled by the micro$ervice bug.
don't do it gents. monolith, until you literally cannot go further, then potentially, maybe, reluctantly, spin out a separate service to relieve some pressure.
by Havoc on 5/8/25, 4:34 PM
Stuff like k8s works fine as a Docker delivery vehicle
by goji_berries on 5/8/25, 4:35 PM
by mgaunard on 5/8/25, 4:30 PM
by nicman23 on 5/8/25, 4:49 PM
by demarq on 5/8/25, 4:34 PM
by httpz on 5/8/25, 6:14 PM
by swisniewski on 5/8/25, 6:35 PM
I think this is the wrong way to frame it. The advice should be "just do the scrappy thing".
This distinction is important. Sometimes, creating a separate service is the scrappy thing to do, sometimes creating a monolith is. Sometimes not creating anything is the way to go.
Let's consider a simple example: adding a queue poller. Let's say you need to add some kind of asynchronous processing to your system. Maybe you need to upload data from customer S3 buckets, or you need to send emails or notifications, or some other thing you need to "process offline".
You could add this to your monolith, by adding some sort of background pollers that read an SQS queue, or a table in your database, then do something.
But that's actually pretty complicated, because now you have to worry about how much capacity to allocate to serving your API and how much capacity to allocate to your pollers, and you have to scale them all up at the same time. If you need more polling, you need more API servers. It becomes a giant pain really quickly.
It's much simpler to just separate them than it is to try to figure out how to jam them together.
Even better, though, is to not write a queue poller at all. You should just write a Lambda and point it at your queue.
This is particularly true if you are me, because I wrote the Lambda Queue Poller, it works great, and I have no real reason to want to write it a second time. And I don't even have to maintain it anymore, because I haven't worked at AWS since 2016. You should do this too, because my poller is pretty good, and you don't need to write one, and some other schmuck is on the hook for on-call.
Also you don't really need to think about how to scale at all, because Lambda will do it for you.
Sure, at some point, using Lambda will be less cost-effective than standing up your own infra, but you can worry about that much, much, much later. And chances are there will be other growth opportunities that are much more lucrative than optimizing your compute bill.
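For what it's worth, the "point a Lambda at your queue" setup really is small. A hedged sketch (the `process` body is a stand-in for your real work): with an SQS event source mapping configured, AWS invokes the handler with batches of messages and owns the polling and scaling.

```python
import json


def process(order: dict) -> None:
    """Stand-in for the actual offline work (emails, uploads, etc.)."""
    print("processing", order.get("id"))


def handler(event, context):
    # SQS delivers events shaped like {"Records": [{"body": "..."}]}.
    for record in event["Records"]:
        process(json.loads(record["body"]))
    # Empty batchItemFailures tells Lambda the whole batch succeeded.
    return {"batchItemFailures": []}
```

No capacity planning for pollers, no coupling to API server fleet size: the event source mapping scales concurrent invocations with queue depth.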
There are other reasons why it might be simpler to split things. Putting your control plane and your data plane together just seems like a headache waiting to happen.
If you have things that happen every now and then ("CreateUser", "CreateAccount", etc.) and things that happen all the time ("CaptureCustomerClick", or "UpdateDoorDashDriverLocation", etc.), you probably want to separate those. Trying to keep them together will just end up causing you pain.
I do agree, however, that having a "Users" service and an "AccountService" and a "FooService" and "BarService" or whatever kind of domain driven nonsense you can think of is a bad idea.
Those things are likely to cause pain and high change correlations, and lead to a distributed monolith.
I think the advice shouldn't be "Use a Monolith", but instead should be "Be Scrappy". You shouldn't create services without good reason (and "domain driven design" is not a good reason). But you also shouldn't "jam things together into a monolith" when there's a good reason not to. N sets of crud objects that are highly related to each other and change in correlated ways don't belong in different services. But things that work fundamentally differently (a queue poller, a control-plane crud system, the graph layer for grocery delivery, an llm, a relational database) should be in different services.
This should also be coupled with "don't deploy stuff you don't need". Managing your own database is waaaaaaay more work than just using DynamoDB or DSQL or Bigtable or whatever....
So, "don't use domain driven design" and "don't create services you don't need" is great advice. But "create a monolith" is not really the right advice.
by mannyv on 5/8/25, 3:38 PM
If you don't understand the benefit of xyz then don't do it.
Our microservice implementation is great. It scales with no maintenance, and when you have three people that makes a difference.