by PretzelFisch on 1/3/23, 12:35 PM with 660 comments
by barrkel on 1/3/23, 1:28 PM
There are two technical problems that microservices purport to solve: modularization (separation of concerns, hiding implementation, documented interfaces and all that good stuff) and scalability (being able to increase the amount of compute, memory and IO available to the specific modules that need it).
The first problem, modules, can be solved at the language level. Modules can do that job, and that's the point of this blog post.
The second problem, scalability, is harder to solve at the language level in most languages outside those designed to be run in a distributed environment. But most people need it a lot less than they think. Normally the database is your bottleneck and if you keep your application server stateless, you can just run lots of them; the database can eventually be a bottleneck, but you can scale up databases a lot.
The real reason that microservices may make sense is because they keep people honest around module boundaries. They make it much harder to retain access to persistent in-memory state, harder to navigate object graphs to take dependencies on things they shouldn't, harder to create PRs with complex changes on either side of a module boundary without a conversation about designing for change and future proofing. Code ownership by teams is something you need as an organization scales, if only to reduce the amount of context switching that developers need to do if treated as fully fungible; owning a service is more defensible than owning a module, since the team will own release schedules and quality gating.
I'm not so positive on every microservice maintaining its own copy of state, potentially with its own separate data store. I think that usually adds more ongoing complexity in synchronization than it saves by isolating schemas. A better rule is for one service to own writes for a table, and other services can only read that table, and maybe even then not all columns or all non-owned tables. Problems with state synchronization are one of the most common failure modes in distributed applications, where queues get backed up, retries of "bad" events cause blockages and so on.
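A minimal Java sketch of that ownership rule, with a hypothetical Orders service and table; the write surface simply never leaves the owning service:

    import java.util.List;

    // Read-only surface: the only interface other services ever see.
    public interface OrdersReader {
        record Order(long id, long customerId, String status) {}
        Order findById(long id);
        List<Order> findByCustomer(long customerId);
    }

    // Write surface: package-private, so only the owning Orders service
    // can compile against it; other services get the reader only.
    interface OrdersWriter extends OrdersReader {
        void insert(Order order);
        void updateStatus(long orderId, String status);
    }

At the database level, the same rule can be backed up with plain read-only grants for the non-owning services' credentials.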
by c-fe on 1/3/23, 1:28 PM
This experience has strongly shaped my view of microservices; for all personal projects I develop in the future, I will stick with a monolith until much later rather than starting with microservices.
by gunnarmorling on 1/3/23, 2:25 PM
* they force alignment on one language or at least runtime
* they force alignment of dependencies and their versions (yes, you can have different versions e.g. via Java classloaders, but that's getting tricky quickly, you can't share them across module boundaries, etc.)
* they can require lots of RAM if you have many modules with many classes (semi-related fun fact: I remember a situation where we hit the maximum number of class files a JAR loaded into WebLogic could have)
* they can be slow to start (again, classloading takes time)
* they may be limiting in terms of technology choice (you probably don't want to have connections to an RDBMS and Neo4j and MongoDB in one process)
* they don't provide resource isolation between components: a busy loop in one module eating up lots of CPU? Bad luck for other modules.
* they can take a long time to rebuild and redeploy, unless you apply a large degree of discipline and engineering excellence to only rebuild changed modules while making sure no API contracts are broken
* they can be hard to test (how does DB set-up of that other team's component work again?)
I am not saying that most of these issues cannot be overcome; on the contrary, I would love to see monoliths being built in a way where these problems don't exist. I've worked on massive monoliths which were extremely well modularized. The practical issues above were what killed productivity and developer joy in those contexts.
Let's not pretend that large monoliths don't pose specific challenges, or that folks moved to microservices over the last 15 years without good reason.
by surprisetalk on 1/3/23, 12:52 PM
It's okay to not organize your code. It's okay to have files with 10,000 lines. It's okay not to put "business logic" in a special place. It's okay to make merge conflicts.
The time devs spend worrying about code organization may vastly exceed the time lost floundering in messy programs.
Microservices aren't free, and neither are modules.
[1] Jonathan Blow rant: https://www.youtube.com/watch?v=5Nc68IdNKdg&t=364s
[2] John Carmack rant: http://number-none.com/blow/john_carmack_on_inlined_code.htm...
by Pamar on 1/3/23, 1:15 PM
I.e. the idea that each microservice has direct access to its own, dedicated, maybe duplicated storage schema/instance (or if it needs to know, for example, the country name for ISO code "UK" it is supposed to ... invoke another microservice that will provide the answer for this).
I always worked in pretty boring stuff like managing reservations for cruise ships, or planning production for the next six weeks in an automotive plant.
The idea of having a federation of services/modules constantly cross-calling each other in order to just write "Your cruise departs from Genova (Italy) at 12:15 on May 24th, 2023" is not really a good fit for this kind of problem.
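For contrast, a minimal Java sketch of the boring, local version (names and table contents are illustrative):

    import java.util.Map;

    final class Countries {
        // Plain in-process reference data; no network hop required.
        private static final Map<String, String> BY_ISO =
                Map.of("IT", "Italy", "UK", "United Kingdom");

        static String name(String iso) {
            return BY_ISO.getOrDefault(iso, iso);
        }

        public static void main(String[] args) {
            System.out.println("Your cruise departs from Genova ("
                    + Countries.name("IT") + ") at 12:15 on May 24th, 2023");
        }
    }

One function call, versus a service discovery lookup, an HTTP round trip, retries and a timeout policy for the same string.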
Maybe it is time to accept that not everyone has to work on the next version of Instagram, and that microservices are probably a good strategy... for a not-so-big subset of the problems we use computers for?
by leidenfrost on 1/3/23, 1:52 PM
And the virtue of microservices is that they create hard boundaries no matter your skill and seniority level. Any unsupervised junior will probably dissolve the module boundaries. But they can't simply dissolve the hard boundary of having a service in another location.
by Guid_NewGuid on 1/3/23, 3:14 PM
Microservices are a solution to a problem. TDD is a solution to a problem, the same problem. Both are solutions that themselves create more, and worse, problems. Thanks to the hype driven nature of software development the blast radius of these 'solutions' and their associated problems expands far beyond the people afflicted by the original problem.
That problem? Not using statically typed languages.
TDD attempts to reconstruct a compiler, poorly. And Microservices tries to reconstruct encapsulation and code structure, poorly. Maybe if you don't use a language which gets hard to reason about beyond a few hundred lines you won't need to keep teams and codebases below a certain size. Maybe if you can't do all kinds of dynamic nonsense with no guardrails you don't have to worry so much about all that code being in the same place. The emperor has no clothes, he never had.
Edit: to reduce the flame bait nature of the above a bit. Not in all cases, I'm sure there are a very few places and scales where microservices make sense. If you have one of those and you used this pattern correctly, that's great.
And splitting out components as services is not always bad and can make a lot of sense. It's the "micro" part of "microservices" that marks out this dreadful hype/trend pattern I object to. It's clearly a horrible hack to paper over the way dynamically typed codebases become much harder to reason about and maintain at scale, adding a bunch of much harder problems (distributed transactions, networks, retries, distributed state, etc.) in order to preserve teams' sanity instead of just using tooling that can enforce some sense of order.
by lucasyvas on 1/3/23, 3:20 PM
Good LUCK getting that property with a monolithic or modular system. QE can never be certain (and let's be honest, they should be skeptical) that something modified in the same codebase as something else does not directly break another unrelated feature entirely. It makes their life very difficult when they can't safely draw lines.
Two different "modules" sharing even a database when they have disparate concerns is just waiting to break.
There are a lot of articles lately dumping on microservices, and they're all antiquated. News flash: there is no universal pattern that wins all the time.
Sometimes a monolith is better than modules is better than microservices. If you can't tell which is better or you are convinced one is always better, the problem is with you, not the pattern.
Microservices net you a lot of advantages at the expense of way higher operational complexity. If you don't think that trade off is worth it (totally fair), don't use them.
Since we are talking about middle ground, one I'd like to see one day is a deploy that puts all services in one "pod", so they all talk over a Unix socket, removing the network boundary. This allows you to have one deploy config and specify the version of each service separately, and therefore deploy whenever you want. It doesn't have the scalability part as much, but you could add the network boundary back later.
by PathOfEclipse on 1/3/23, 2:20 PM
1. Deployment. Being able to deploy code rapidly and independently is lost when everything ships as a monolith.
2. Isolation. My process is GC-spiraling. Which team's code change is responsible for it? Since the process is shared across teams, a perf bug from one team now impacts many teams.
3. Operational complexity. People working on the system have to deal with the fact that many teams' modules are running in the same service. Debugging and troubleshooting get harder. Logging and telemetry also tend to get more complicated.
4. Dependency coupling. Everyone has to use the exact same versions of everything and everyone has to upgrade in lockstep. You can work around this with module systems that allow dependency isolation, but IMO this tends to lead to its own complexity issues that make it not worthwhile.
5. Module API boundaries. In my experience, developers have an easier time handling service APIs than library APIs. The API surface area is smaller, and it's more obvious that you need to handle backwards compatibility and how. There is also less opportunity to "cheat", or break encapsulation, with service boundaries compared to library boundaries.
In practice, for dividing up code, libraries and modules are the less popular solution for server-side programming compared to services for good reasons. The downsides are not worth the upsides in most cases!
by dist1ll on 1/3/23, 1:37 PM
Why do we care about network latency, when we JUST established that microservices are about scaling large development teams? I have no problem with hackers ranting about slow, bloated and messy software architecture...but this is not the focus of discussion as presented in the article.
And then this conclusion:
> The key is to establish that common architectural backplane with well-understood integration and communication conventions, whatever you want or need it to be.
...so, like gRPC over HTTP? Last time I checked, gRPC is pretty well understood from an integration perspective. Much better than Enterprise Java Beans from the last century. Isn't this ironic? And where are the performance considerations for this backplane? Didn't we criticize microservices before because they have substandard performance?
by ilitirit on 1/3/23, 2:00 PM
Take for example Gartner's definition:
> A microservice is a service-oriented application component that is tightly scoped, strongly encapsulated, loosely coupled, independently deployable and independently scalable.
That's not too controversial. But... as a team why and when would you want to implement something like this? Again, let's ask Gartner. Here are excerpts from "Should your Team be using Microservice Architectures?":
> In fact, if you aren’t trying to implement a continuous delivery practice, you are better off using a more coarse-grained architectural model — what Gartner calls “Mesh App and Service Architecture” and “miniservices.”
> If your software engineering team has already adopted miniservices and agile DevOps and continuous delivery practices, but you still aren’t able to achieve your software engineering cadence goals, then it may be time to adopt a microservices architecture.
For Gartner, the strength of Microservice Architecture lies in delivery cadence (and it shouldn't even be the first thing you look at to achieve this). For another institution it could be something else. My point is that when people talk about things like Microservices they are often at cross-purposes.
by mkl95 on 1/3/23, 1:19 PM
Another way to put it is that teams that share parts of the same codebase introduce low level bugs that affect each other, and most organizations are clueless about preventing it and in some cases do not even detect it.
by ChrisMarshallNY on 1/3/23, 1:24 PM
Part of the reason is that anything that leaves the application package increases the error potential exponentially.
Also, modules "bottleneck" functionality, and allow me to concentrate work into one portion of the codebase.
I'm in the middle of "modularizing" the app I've been developing for some time.
I found that a great deal of functionality was spread throughout the app, as it had been added "incrementally," as we encountered issues and limitations.
The new module refines all that functionality into one discrete codebase. This allows us to be super-flexible with the actual UI (the module is basically the app "engine").
We have a designer proposing a UX, and I found myself saying "no" too often. These "nos" came from the limitations of the app structure.
I don't like saying "no," but I won't make promises that I can't keep.
BTW: The module encompasses interactions with two different servers. It's just that I wrote those servers.
by college_physics on 1/3/23, 5:18 PM
Most likely the question is not well defined in the first place.
by jcadam on 1/3/23, 5:08 PM
by hinkley on 1/3/23, 5:51 PM
> ... the Fallacies of Distributed Computing.
I feel like I’m taking crazy pills, but at least I’m not the only one. I think the only reason this fallacy has survived so long this cycle is because we currently have a generation of network cards that is so fast that processes can’t keep up with them. Which is an architectural problem, possibly at the application layer, the kernel layer, or the motherboard design. Or maybe all three. When that gets fixed there will be a million consultants to show the way to migrate off of microservices because of the 8 Fallacies.
by matt_s on 1/3/23, 1:35 PM
When a critical vulnerability comes out for whatever language you are using, you now have to patch, test and deploy X apps/repos vs much fewer if they are consolidated repositories written modularly. Same can be said for library/framework upgrades, breaking changes in versions, deprecated features, taking advantage of new features, etc.
Keeping the definition of the runtimes as modular as the code is can be instrumental in keeping a bunch of related modules/features in one application/repository. One way is with k8s deployments and init params, where the app starts only specific modules, which then lends itself to being scaled differently. I'm sure there are home-grown ways to do this without k8s too.
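A sketch of that idea, assuming a hypothetical ENABLED_MODULES init param; each k8s Deployment runs the same artifact but enables a different subset:

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    public final class Main {
        public static void main(String[] args) {
            // e.g. ENABLED_MODULES=web,batch in one Deployment, report in another
            Set<String> enabled = new HashSet<>(Arrays.asList(
                    System.getenv().getOrDefault("ENABLED_MODULES", "web").split(",")));
            if (enabled.contains("web"))    startWebModule();
            if (enabled.contains("batch"))  startBatchModule();
            if (enabled.contains("report")) startReportingModule();
        }

        private static void startWebModule()       { /* bind HTTP port, routes */ }
        private static void startBatchModule()     { /* start schedulers */ }
        private static void startReportingModule() { /* start report workers */ }
    }

Each module can then be scaled (replicas, CPU, memory) independently while still living in one repository.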
by vinay_ys on 1/4/23, 1:40 PM
by kovac on 1/4/23, 7:02 AM
I'm glad that this post was written so that we can look at widely accepted ideas a little more critically.
by tail_exchange on 1/3/23, 5:40 PM
Monoliths are way slower to deploy than microservices, and when you have hundreds or thousands of changes going out every day, this means lots of changes being bundled together in the same deployment and, as a consequence, lots of bugs. Having to stop a deployment and roll it back every time a defect is sent to production would just make the whole thing completely undeployable.
Microservices have some additional operational overhead, but they do allow much faster deployments and rollbacks without affecting the whole org.
Maybe I am biased, but I would love an explanation from the monolith-evangelist crowd on how to keep a monolith able to deploy multiple times a day and capable of rolling back changes when people are pushing hundreds of PRs every day.
by LeZuse on 1/4/23, 12:24 PM
People just read up on whatever seems to be the newest, coolest thing. The issue is that microservices articles usually come from FAANG/ex-FAANG engineers. These companies are solving problems that 99% of others do not have.
As engineers we should be looking for the most effective solutions to a given business problem. Sadly, I see engineers with senior/staff titles just throwing cool tech terms/libs around. Boring tech club ftw
by danielovichdk on 1/3/23, 7:33 PM
The article gets it right, in my opinion.
1. It has a lot to do with organisational constraints.
2. It has a lot to do with service boundaries. If services are chatty they should be coupled.
3. What a service does must be specified in terms of which data it takes in and which data it outputs. This data can and should be events.
4. Services should rely on and work together through messaging: queues, topics, streams, etc.
5. Services are often data enrichment services, where one service enriches some data based on an event/data (see the sketch after this list).
6. You never test more than one service at a time.
7. Services should not share code which is volatile or short-lived in terms of being updated frequently.
8. Conquer and divide. Start by developing a small monolith for what you expect could become multiple services. Then divide the code, and divide it so that each future service owns its own implementation, with no code shared between them.
9. Infrastructure as code is important. You should be able to push a deploy and have a service set up with all of its infrastructure dependencies.
10. Domain boundaries are important. Structure teams around them based on a certain capability, e.g. Customers, Bookings, Invoicing. Each team owns a capability and its underlying services.
11. Make it possible for other teams to read all your data. They might need it for something they are solving.
12. Don't use Kubernetes or any other orchestrator unless you can't do what you want with a cloud provider's PaaS. Kubernetes is a beast and will put you to the test.
13. Services will not solve your problems if you do not understand how things communicate, fail and recover.
14. Everything is eventually consistent. Getting used to that mindset will take time.
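A minimal Java sketch of the enrichment pattern from points 4 and 5, with a deliberately generic queue interface standing in for whatever transport (Kafka, SQS, streams) is actually used; all names are hypothetical:

    interface MessageQueue<T> {
        T take() throws InterruptedException;  // blocking read
        void publish(T message);
    }

    record OrderPlaced(long orderId, long customerId) {}
    record OrderEnriched(long orderId, long customerId, String segment) {}

    final class EnrichmentLoop implements Runnable {
        private final MessageQueue<OrderPlaced> in;
        private final MessageQueue<OrderEnriched> out;

        EnrichmentLoop(MessageQueue<OrderPlaced> in, MessageQueue<OrderEnriched> out) {
            this.in = in;
            this.out = out;
        }

        @Override public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    OrderPlaced event = in.take();
                    // Enrich with data this service owns, then emit a new event.
                    out.publish(new OrderEnriched(event.orderId(),
                            event.customerId(), segmentFor(event.customerId())));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }

        private String segmentFor(long customerId) {
            return customerId % 2 == 0 ? "retail" : "business";  // stand-in logic
        }
    }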
A lot more...
by iddan on 1/3/23, 2:38 PM
by onehair on 1/3/23, 2:13 PM
Well, in our 20+ product teams, all serving different workflows for 3 different user types, the separate microservices are doing wonders for us for exactly the things you've listed.
My comment should just stop here to be honest.
by revskill on 1/3/23, 2:05 PM
Microservices and modularity are orthogonal; they're not the same thing.
Modularity is a business concept; microservices are an infrastructure concept.
For example, I could have a module A which is deployed as microservices A1 and A2. In this case, A is an almost abstract concept.
And of course, I could deploy all modules A, B, C as one big service (a monolith).
Moreover, I could share one microservice X across all modules.
All the confusion around microservices comes from the misconception that microservice = module.
Worse, most of the "expert advice" I've encountered actually ties Domain-Driven Design to microservices. They're not related, again.
Microservices, to me, are about scale: scaling the infrastructure, and scaling the team (a management concept).
by mypalmike on 1/3/23, 3:32 PM
by ChicagoDave on 1/3/23, 1:19 PM
by wodenokoto on 1/3/23, 1:06 PM
Then you can have a mono repo that deploys to multiple microservices / cloud functions / lambdas as needed depending on code changes, and programmers don't have to worry about RPC or JSON when communicating between modules and can just call the damn function normally.
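A sketch of what that buys you, with hypothetical names: modules code against a plain interface, and only the deployment wiring decides whether the call stays in-process or crosses the network:

    interface PricingModule {
        double quote(String sku, int quantity);
    }

    // Same deployable: a normal, direct function call.
    final class LocalPricing implements PricingModule {
        public double quote(String sku, int quantity) {
            return 9.99 * quantity;  // stand-in business logic
        }
    }

    // Split out later: same interface, RPC underneath.
    final class RemotePricing implements PricingModule {
        public double quote(String sku, int quantity) {
            // an HTTP/gRPC client call would go here; callers are unchanged
            throw new UnsupportedOperationException("wire up the RPC client");
        }
    }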
by Zvez on 1/3/23, 6:21 PM
But the author clearly avoided the real reasons why you actually need to split stuff into separate services:
1. Some processes shouldn't be mixed in the same runtime. A simple example: batch/streaming vs. 'realtime'. Or important and unimportant.
2. Some things need different stacks, runtimes, frameworks. And it is much easier to separate them than to try to make them coexist.
And regarding the 'it was already in the Simpsons' argument, I don't think it should even be considered an argument. If you are old enough to remember EJB, you don't need it explained why it was a bad idea from the start, or why services built on EJB were never scalable or maintainable. So even if EJB claimed to cover the same features as microservices do now, I'm pretty sure EJB won't be the framework of choice for anybody today.
Obviously considering microservices as the only _right_ solution is stupid. But same goes for pretty much any technology out there.
by fennecfoxy on 1/3/23, 5:40 PM
If I run a monolith and one least-used module leaks memory real hard, the entire process crashes even though the most-used/more important modules were fine.
Of course it's possible to run modularised code such that it's sandboxed and resources are controlled - but at that point it's like... isn't this all the same concept? A managed monolith with modules vs microservices on something like k8s.
I feel like rather than microservices or modules or whatever we need a concept for a sliding context, from one function->one group of functions->one feature->one service->dependent services->all services->etc.
With an architecture like that it would surely be possible to run each higher tier of context in any way we wanted; as a monolith, containerised, as lambdas. And working on it would be a matter of isolating yourself to the context required to get your current task done.
by ftlio on 1/3/23, 5:32 PM
It’s old. The examples are in PL/I. But his framework for identifying “Functional Strength” and Data Coupling is something every developer should have. Keep in mind this is before functional programming and OOP.
I personally think it could be updated around his concepts of data homogeneity. Interfaces and first class functions are structures he didn't have available to him, but don’t require any new categories on his end, which is to say his critique still seems solid.
Overall, most Best Practices stuff seems either derivative of or superfluous to this guy just actually classifying modules by their boundaries and data.
I should note, I haven't audited his design methodologies, which I'm sure are quite dated. His taxonomy concerning modules was enough for me.
"The Art of Software Testing" is another of his. I picked up his whole corpus concerning software on thriftbooks for like $20.
by EGreg on 1/4/23, 4:01 AM
Having said that, if you want to eke out another 3x throughput improvement, then by all means, grab your OpenSwoole or ReactPHP or AMPHP and go to town. But PHP already has Fibers, while OpenSwoole still has coroutines. Oh yeah, and the try/catch in OpenSwoole is broken so good luck catching stuff.
by dools on 1/4/23, 2:51 AM
Queues are awesome; just use queues almost all the time, and then either build microservices or don't.
by Sparkyte on 1/3/23, 7:53 PM
It is about where the shoe fits. If you become too heavily dependent on modules you risk module incompatibility due to version changes. If you are not the maintainer of your dependent module you hold a lot of risk. You don't get that with microservices.
If you focus too much on microservices you introduce virtualized bloat that adds too much complexity, and complexity is bad.
Modules are like someone saying it is great to be monolithic. No one should outright justify an overly complicated application, or a monolithic one.
The solution is to build common modules that are maintainable. You follow that up with multi-container pods and have them talk at a low level to each other.
Striking that exact balance is what is needed, not striking odd justifications for failed models. It is about asking "What does my application do?" and answering with whichever design benefits it the most.
by CraigJPerry on 1/3/23, 1:15 PM
by lucasfcosta on 1/3/23, 1:50 PM
The way I usually describe my preferred heuristic to decide between modules and microservices is:
If you need to deploy the different parts of your software individually, and there's an opportunity cost in simply adopting a release-train approach, go for microservices.
Otherwise, isolated modules are enough in the vast majority of cases.
by alkonaut on 1/3/23, 3:33 PM
A failed lookup of a function is greeted by "Do you want to import X so you can call foo()?". Having a battery of architectural unit tests or linters ensuring that module foo doesn't use module bar feels like a crutch.
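For what it's worth, that crutch is cheap to build in the Java world; a sketch using the ArchUnit library, with hypothetical package names:

    import com.tngtech.archunit.core.domain.JavaClasses;
    import com.tngtech.archunit.core.importer.ClassFileImporter;
    import com.tngtech.archunit.lang.ArchRule;

    import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

    public class ModuleBoundaryTest {
        @org.junit.jupiter.api.Test
        void fooMustNotDependOnBar() {
            JavaClasses classes = new ClassFileImporter().importPackages("com.example");
            ArchRule rule = noClasses().that().resideInAPackage("..foo..")
                    .should().dependOnClassesThat().resideInAPackage("..bar..");
            rule.check(classes);  // fails the build on a boundary violation
        }
    }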
Now, it might seem like making microservices just to accomplish modularization is massive overkill and a huge overhead for what should be accomplished at the language level - and you'd be right.
But that leads to the second largest appeal, which is closely related. The one thing that kills software is the big ball of mud where you can't really change that dependency, move to the next platform version or switch a database provider. Even in well-modularized code, you still share dependencies. You build all of it on React, or all the data uses Entity Framework or Postgres. Because why not? Why would you want multiple hassles when one hassle is enough? But this also means that when something is a poor fit for a new module, you shoehorn that module into using whatever all the other modules use (Postgres, Entity Framework, React...). With proper microservices, at least in theory, you should be able to use multiple versions of the same frameworks, or different frameworks altogether.
It should also be said that "modules vs microservices" is also a dichotomy that mostly applies in one niche of software development: Web/SaaS development. Everywhere else, they blur into one and the same, but sometimes surfacing e.g. in a desktop app offloading some task to separate processes for stability or resource usage (like a language server in an IDE).
by throwawaaarrgh on 1/3/23, 1:45 PM
But in general,just write some damn code. Presumably you have to write code, because building a software engineering department is one of the most difficult things you can do in order to solve a business problem. Even with the smartest engineers in the world (which you don't have), whatever you ship is inevitably going to end up an overly complex, expensive, bug-riddled maintenance nightmare, no matter what you do. Once you've made the decision to write code, just write the damn code, and plan to replace it every 3-5 years, because it probably will be anyway.
by anankaie on 1/3/23, 1:21 PM
Someday I will find a way to untangle this wishlist enough to turn into a design.
by rglover on 1/3/23, 7:11 PM
Push the monolith to production, monitor performance, and if and when performance spikes in an unpleasant way, "offload" the performance-intensive work to a separate job server that's vertically scaled (or a series of vertically scaled job servers that reference a synced work queue).
It's simple, predictable, and insanely easy to maintain. Zero dependency on third party nightmare stacks, crazy configs, etc. Works well for 1 developer or several developers.
A quote I heard recently that I absolutely love (from a DIY construction guy, Jeff Thorman): "everybody wants to solve a $100 problem with a $1000 solution."
by bayesian_horse on 1/3/23, 1:02 PM
by ladyattis on 1/3/23, 7:38 PM
by EdSharkey on 1/3/23, 4:56 PM
Ideally we could flexibly deploy services/components in the same way as WebLogic EJB. Discovery of where components live could be handled by the container and if services/components were deployed locally to one another, calls would be done locally without hitting the TCP/IP stack. I gather that systems like Kubernetes offer a lot of this kind of deployment flexibility/discovery, but I'd like to see it driven down into the languages/frameworks for maximum payoff.
Also, the right way to do microservices is for services to "own" all their own data and not call downstream services to get what they need. No n+1 problem allowed! This requires "inverting the arrows"/"don't call me, I'll call you" and few organizations have architectures that work that way - hence the fallacies of networked computing reference. Again, the services language/framework needs to prescribe ways of working that seamlessly establish (*and* can periodically/on-demand rebroadcast) data feeds that our upstreams need so they don't need to call us n+1-style.
Microservices are great to see; even with all the problems, they DO solve organizational scaling problems and let teams that hate each other work together productively. But we have an industry immaturity problem with the architectures and software, one that is not in any big player's interest to solve, because they like renting moar computers on the internet.
I have no actual solutions to offer, and there is no money in tools unless you are lucky and hellbent on succeeding like JetBrains.
by btbuildem on 1/3/23, 1:13 PM
One thing I've observed in a microservice-heavy shop before was that there was the Preferred Language and the Preferred Best Practices, and they were the same or very similar across the multiple teams responsible for different things. It led to a curious phenomenon where, despite the architectural modularity, the overall SAAS solution built upon these services felt very monolithic. It seemed counter-productive, because it weakened the motivation to keep separation across boundaries.
by ibejoeb on 1/3/23, 3:43 PM
If I want to calculate the price of a stock option, that's an excellent candidate to package into a module rather than to expose as a microservice. Even if I have to support different runtimes as presented in the article, it's trivial.
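For instance, a Black-Scholes call price is a pure function of its inputs; a minimal sketch, using a standard polynomial approximation of the normal CDF:

    public final class BlackScholes {
        /** Call price for spot s, strike k, rate r, volatility sigma, maturity t (years). */
        public static double call(double s, double k, double r, double sigma, double t) {
            double d1 = (Math.log(s / k) + (r + sigma * sigma / 2) * t)
                    / (sigma * Math.sqrt(t));
            double d2 = d1 - sigma * Math.sqrt(t);
            return s * cdf(d1) - k * Math.exp(-r * t) * cdf(d2);
        }

        // Abramowitz & Stegun 26.2.17 approximation of the standard normal CDF.
        private static double cdf(double x) {
            double u = 1 / (1 + 0.2316419 * Math.abs(x));
            double poly = u * (0.319381530 + u * (-0.356563782
                    + u * (1.781477937 + u * (-1.821255978 + u * 1.330274429))));
            double nd = 1 - Math.exp(-x * x / 2) / Math.sqrt(2 * Math.PI) * poly;
            return x >= 0 ? nd : 1 - nd;
        }
    }

No I/O, no state, no infrastructure: exactly the kind of code that embeds anywhere.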
A different class of problem that doesn't modularize well, in shared library terms, is something with a significant timing component, or something with transient states. Perhaps I need to ingest some data, wait for some time, then process the data, and then continue. This would likely benefit from being an isolated service, unless all of the other system components have similar infrastructure capabilities for time management and ephemeral storage.
by kjcondon on 1/3/23, 7:30 PM
I'm sure I could gain many of the advantages of microservices through an OSGi monolith, however, an OSGi monolith is not the hot thing of the day, I'm likely to be poorly supported if I go down this route.
Ideally I also want some of my developers to be able to write their server on the node ecosystem - if they so choose, and don't want updating the language my modules run on (in this case the JVM) to be the biggest pain of the century.
Besides, once my MAU is in the hundreds of thousands, I probably want to scale the different parts of my system independently anyway - so different concerns come in to play.
by danabrams on 1/3/23, 1:39 PM
Yes, the same often happens in microservices, but the extra complexity of a distributed system provides a slightly stronger nudge to decouple, which means some teams at the margin do achieve something modular.
I’m something of a skeptic on modular monoliths until we as an industry adopt practices that encourage decoupling more than we currently do.
Yes, in theory they’re the same as microservices, without the distributed complexity, but in practice, microservices provide slightly better friction/incentives to decouple.
by lowbloodsugar on 1/4/23, 5:41 AM
I've done monoliths and microservices. I've worked in startups, SMEs and at FAANGS. As usual, nothing in this article demonstrates that the person has significant experience of running either in production. They may have experience of failure.
In my experience, microservices are simply one possible scaling model for modules. Another way to scale a monolith is to just make the server bigger: the One Giant Server model.
If you have a fairly well defined product that needs a small (<30) number of engineers, then the One Giant Server model might be best for you. If you have a wide feature set that requires >50 engineers, then microservices are probably the way to go.
There is no noticeable transition from a well implemented monolith with a small team into a well implemented Giant Server with a small team. Possibly some engineers are worrying about cold start times for 1TB of RAM, but that's something that can happen well ahead of time, and hardware qualification is something one needs to do for microservices too. Some of the best examples of Giant Servers are developed by small teams of very well paid developers.
The transition from a monolith to a set of microservices, however is a very different affair. Unfortunately, the kind of projects that need to go microservices are often in a very poor state. Many such adventures one reads about are having to go to microservices because they've already gone to One Giant Server and those are now unable to handle the load. Usually these stories are accompanied by a history of blog posts about moving fast and breaking things, yolo, or whatever is cool. The transition to microservices is difficult because there are no meaningful modules: no modules, or modules that all use each other's classes and functions.
I don't believe that, once a particular scale is reached, microservices are a choice. They are either required or they are not. Either you can scale successfully with One Giant Server, or you can't.
The problem is that below a certain scale, microservices are a drag. And without microservices, it's very easy for inexperienced teams to fail to keep module discipline.
by game_the0ry on 1/3/23, 7:42 PM
I like that architecture - the services are abstracted with clearly defined boundaries and they are easy to navigate / discover. Not sure if Java modules satisfy the concerns of the author or other HN users, but I liked it.
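For reference, the boundary a Java module draws is only a few lines of declaration; a sketch with hypothetical module and package names:

    // module-info.java: the compiler enforces the boundary, not code review.
    module com.example.billing {
        requires com.example.common;      // dependencies are explicit
        exports com.example.billing.api;  // only the public surface escapes
        // com.example.billing.internal stays invisible to every other module
    }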
by danias on 1/4/23, 8:56 AM
by gillh on 1/3/23, 7:07 PM
Going back to systems thinking, flow control (concurrency and rate limiting) and API scheduling (weighted fair queuing) are needed to make these architectures work at any scale. Open source projects such as Aperture[0] can help tackle some of these issues.
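The underlying flow-control idea is small, even if production systems (Aperture included) are far more sophisticated; a generic token-bucket sketch, not Aperture's API:

    final class TokenBucket {
        private final double capacity, refillPerNano;
        private double tokens;
        private long last = System.nanoTime();

        TokenBucket(double capacity, double refillPerSecond) {
            this.capacity = capacity;
            this.tokens = capacity;
            this.refillPerNano = refillPerSecond / 1_000_000_000.0;
        }

        synchronized boolean tryAcquire() {
            long now = System.nanoTime();
            tokens = Math.min(capacity, tokens + (now - last) * refillPerNano);
            last = now;
            if (tokens >= 1) { tokens -= 1; return true; }
            return false;  // shed load instead of queueing without bound
        }
    }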
by discussDev on 1/3/23, 5:18 PM
by bob1029 on 1/3/23, 4:42 PM
Monolith is better in almost every way. The only thing that still bothers us is the build/iteration time. We could resolve this by breaking our gigantic dll into smaller ones that can be built more incrementally. This is on my list for 2023.
by deterministic on 1/4/23, 1:55 AM
I have worked on a monolith that solved the exact same problem. And it was straightforward to maintain and upgrade.
I feel sorry for future developers who will have to take over and maintain micro-services systems created today. Teams that can’t create maintainable, well designed monoliths, will create an even bigger cluster f** using micro-services.
by UltraViolence on 1/3/23, 4:33 PM
You only need to agree on an API between the modules and you're good to go!
Microservices suck dick and I hate the IT industry for jumping on this bandwagon (hype) without thoroughly discussing the benefits and drawbacks of the method. Debugging in itself is a pain with Microservices. Developing is a pain since you need n binaries running in your development environment.
by sergiomattei on 1/3/23, 1:08 PM
by tabtab on 1/3/23, 6:32 PM
by mcculley on 1/3/23, 1:12 PM
by smrtinsert on 1/3/23, 5:44 PM
by imwillofficial on 1/3/23, 1:03 PM
by qaq on 1/3/23, 11:08 PM
by pshirshov on 1/3/23, 12:55 PM
We delivered many talks on that subject and implemented an ultimate tool for that: https://github.com/7mind/izumi (the links to the talks are in the readme).
The library is for Scala, though all the principles may be reused in virtually any environment.
One of the notable mainstream (but dated) approaches is OSGi.
by maxrev17 on 1/3/23, 2:15 PM
by michaelfeathers on 1/3/23, 2:11 PM
https://michaelfeathers.silvrback.com/microservices-and-the-...
by banq on 1/4/23, 12:40 AM
by fedeb95 on 1/3/23, 1:13 PM
by techthumb on 1/3/23, 3:29 PM
I've often noticed that these boundaries are not considered when carving out microservices.
Subsequently, workarounds are put in place that tend to be complicated, as they attempt to implement two-phase commits.
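A common alternative to hand-rolled two-phase commit is the transactional outbox: the business write and the outgoing event commit atomically in one local transaction, and a relay publishes outbox rows afterwards. A minimal JDBC sketch with hypothetical table names:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    final class OrderWriter {
        void placeOrder(Connection conn, long orderId, long customerId) throws SQLException {
            conn.setAutoCommit(false);
            try (PreparedStatement order = conn.prepareStatement(
                         "INSERT INTO orders(id, customer_id) VALUES (?, ?)");
                 PreparedStatement outbox = conn.prepareStatement(
                         "INSERT INTO outbox(topic, payload) VALUES ('order_placed', ?)")) {
                order.setLong(1, orderId);
                order.setLong(2, customerId);
                order.executeUpdate();
                outbox.setString(1, "{\"orderId\":" + orderId + "}");
                outbox.executeUpdate();
                conn.commit();  // both rows or neither; no 2PC required
            } catch (SQLException e) {
                conn.rollback();
                throw e;
            }
        }
    }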
by jhoelzel on 1/3/23, 2:05 PM
It's on the same page as "yes, we could have written this in assembler better" or "this could simply be a daemon, why is it a container?"
As if an agile, gitops-based, rootlessly built, microservice-oriented, worldwide clustered app will magically solve all your problems :D
If I learned anything, it's to expect problems and build a stack that is dynamic enough to react. And that any modern stack includes the people managing it just as much as the code.
But yes, back when ASP.NET MVC came out, I too wanted to rebuild the world using C# modules.
by d--b on 1/3/23, 1:02 PM
by newbieuser on 1/3/23, 4:29 PM
by ndemir on 1/6/23, 4:00 AM
by jpswade on 1/3/23, 7:19 PM
by martythemaniak on 1/3/23, 2:36 PM
by AcerbicZero on 1/3/23, 5:13 PM
by z3t4 on 1/3/23, 2:41 PM
by ascendantlogic on 1/4/23, 12:56 AM
by BulgarianIdiot on 1/3/23, 1:57 PM
And this is how I work all the time.
by codesnik on 1/3/23, 2:52 PM
by jimmont on 1/3/23, 6:01 PM
by heavenlyblue on 1/3/23, 2:07 PM
by jmartrican on 1/3/23, 4:05 PM
by HardlyCurious on 1/3/23, 4:59 PM
by boricj on 1/3/23, 5:42 PM
In Fuchsia, the device driver stack can be roughly split into three layers:
* Drivers, which are components (~ libraries with added metadata) that both ingest and expose capabilities,
* A capability-oriented IPC layer that works both inside and across processes,
* Driver hosts, which are processes that host driver instances.
The system then has the mechanism to realize a device driver graph by creating device driver instances and connecting them together through the IPC layer. What is interesting, however, is that there's also a policy that describes how the system should create boundaries between device driver instances [2].
For example, the system could have a policy where everything is instantiated inside the same driver host to maximize performance by eliminating inter-process communication and context switches, or where every device driver is instantiated into its own dedicated driver host to increase security through process isolation, or some middle ground compromise depending on security vs performance concerns.
For me, it feels like the Docker-like containerization paradigm is essentially an extension of good old user processes and IPC that stops at the process boundary, without any concern about what's going on inside it. It's like stacking premade Lego sets together into an application. What if we could start from raw Lego bricks instead and let an external policy dictate how to assemble them at run-time into a monolithic application running on a single server, micro-services distributed across the world or whatever hybrid architecture we want, with these bricks being none the wiser?
Heck, if we decomposed operating systems into those bricks, we could even imagine policies that would also enable composing from the ground up, with applications sitting on top of unikernels, microkernels or whatever hybrid we desire...
[1] https://fuchsia.dev/fuchsia-src/development/drivers/concepts...
[2] https://fuchsia.dev/fuchsia-src/development/drivers/concepts...
by hgsgm on 1/3/23, 1:04 PM
by exabrial on 1/4/23, 5:57 AM
Just because you can, doesn't mean you should.
by pulse7 on 1/3/23, 2:36 PM
by orose_ on 1/3/23, 1:49 PM