from Hacker News

That's not an abstraction, that's a layer of indirection

by fagnerbrack on 12/25/24, 5:06 AM with 225 comments

  • by voidhorse on 12/28/24, 5:57 AM

    The best way to achieve a good abstraction is to recall what the word meant before computer science: namely, something closer to generalization.

    In computing, we emphasize the communicational (i.e. interface) aspects of our code, and, in this respect, tend to focus on an "abstraction"'s role in hiding information. But a good abstraction does more than simply hide detail, it generalizes particulars into a new kind of "object" that is easier to reason about.

    If you keep this in mind, you'll realize that having a lot of particulars in which to identify shared properties is a prerequisite. The best abstractions I've seen have always come into being only after a significant amount of particularized code had already been written. It is only then that you can identify the actual common properties and patterns of use. By contrast, abstractions built upfront that do little more than hide details, or that account for potential rather than actually existing complexity, are typically far more confusing and poorly designed.

  • by rauljara on 12/28/24, 2:46 PM

    I wish articles like this had more examples in them. In between “this thin wrapper adds no value but a lot of complexity”, and “this thin wrapper clarified the interface and demonstrably saved loads of work last time requirements changed” is an awful lot of grey area and nuance.

    I did like the advice that if you peek under the abstraction a lot, it’s probably a bad one, though even this could use some nuance. I think if you need to change things in lots of places, that’s a sign of a bad abstraction. If there is some tricky bit of complexity with changing requirements, you might find yourself “peeking under the hood” a lot. How could it be otherwise? But if you find yourself only debugging the one piece of code that handles the trickiness, and building up an isolated test for that bit of code, well, that sounds like you built a wonderful abstraction despite it being peeked at quite a bit.

  • by danparsonson on 12/28/24, 3:14 AM

    Perhaps this is a minor nitpick, but

    > Abstractions are also the enemy of simplicity. Each new abstraction is supposed to make things simpler—that’s the promise, right?

    Not exactly, no. The purpose of abstraction is to hide implementation detail, and thereby insulate one part of the codebase/application/system from variations in another. Graphics APIs for example - yes your code may be simpler for not having to deal with the register-level minutiae of pushing individual triangles, but the core benefit is that the same code should work on multiple different hardware devices.

    Good abstractions break a codebase up into compartments - if you drop a grenade in one (change the requirements for example), then the others are unaffected and the remedial work required is much less.

  • by noduerme on 12/28/24, 4:43 AM

    I got a piece of advice writing UI code a long time ago: Don't marry your display code to your business logic.

    I'd like to say this has served me well. It's the reason I never went for JSX or other frameworks that put logical code into templates or things like that. That is one abstraction I found unhelpful.

    However, I've come around to not taking that advice as literally as I used to. Looking back over 25 years of code, I can see a lot of times I tried to abstract away display code in ways that made it exceedingly difficult to detect why it only failed on certain pieces of data. Sometimes this came from gradually whittling tightly bound code into generic routines, and sometimes it was planned that way. This is another type of abstraction that adds cognitive load: one where instead of writing wrappers for a specific use case, you try to generalize everything you write to account for all possible use cases in advance.

    There's some sort of balance that has to be struck between these two poles. The older I get, though, the more I suspect that whatever balance I strike today I'll find unsatisfactory if I have to revisit the code in ten years.

  • by Darmani on 12/28/24, 5:06 AM

    TCP is great. Long chains of one-line functions that just permute the arguments really suck. These both get called abstraction, and yet they're quite different.

    But then you hear people describe abstraction ahem abstractly. "Abstraction lets you think at a higher level," "abstraction hides implementation detail," and it's clear that neither of those things really is an abstraction.

    As the OP mentions, we have a great term for those long chains of one-line functions: indirection. But what is TCP? TCP is a protocol. It is not just giving a higher-level way to think about the levels underneath it in the 7-layer networking model. It is not just something that hides the implementations of the IP or Ethernet protocols. It is its own implementation of a new thing. TCP has its own interface and its own promises made to consumers. It is implemented using lower-level protocols, yes, but it adds something that was fundamentally not there before.

    I think things like TCP, the idea of a file, and the idea of a thread are best put into another category. They are not simply higher level lenses to the network, the hard drive, or the preemptive interrupt feature of a processor. They are concepts, as described in Daniel Jackson's book "The Essence of Software," by far the best software design book I've read.

    There is something else that does match the way people talk about abstraction. When you say "This function changes this library from the uninitialized state to the initialized state," you have collapsed the exponentially-large number of settings of bits it could actually be in down to two abstract states, "uninitialized" and "initialized," while claiming that this simpler description provides a useful model for describing the behavior of that and other functions. That's the thing that fulfills Dijkstra's famous edict about abstraction, that it "create[s] a new semantic level in which one can be absolutely precise." And it's not part of the code itself, but rather a tool that can be used to describe code.
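    A minimal sketch of that state collapse, in Python (the class and its names are invented for illustration): the concrete object can be in exponentially many bit-level states, but callers reason about only two abstract ones.

```python
class Codec:
    """Stand-in library with many concrete states, two abstract states.

    Concretely, _table can hold arbitrarily many different values;
    abstractly, callers only care whether the codec is "uninitialized"
    or "initialized".
    """

    def __init__(self):
        self._table = None               # concrete detail, many possible values

    def init(self, size=256):
        self._table = list(range(size))  # one of exponentially many settings

    @property
    def initialized(self):
        # The two-state abstract view that other functions are
        # described against: the simpler model, not the bits.
        return self._table is not None

codec = Codec()
assert not codec.initialized   # abstract state: "uninitialized"
codec.init()
assert codec.initialized       # abstract state: "initialized"
```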

    It takes a lot more to explain true abstraction, but I've already written this up (cf.: https://news.ycombinator.com/item?id=30840873 ). And I encourage anyone who still wants to understand abstraction more deeply to go to the primary sources and try to understand abstract interpretation in program analysis or abstraction refinement in formal verification and program derivation.

  • by jongjong on 12/28/24, 3:24 AM

    This is a great point. Most modern software is riddled with unnecessary complexity that adds mental load and forces you to learn new concepts as complex as, or more complex than, the logic they claim to abstract away.

    I find myself saying this over and over again; if the abstraction does not bring the code closer to the business domain; if it does not make it easier for you to explain the code to a non-technical person, then it's a poor abstraction.

    Inventing technical constructs which simply shift the focus away from other technical constructs adds no value at all. Usually such reframing of logic only serves the person who wrote 'the abstraction' to navigate their own biased mental models, it doesn't simplify the logic from the perspective of anyone else.

  • by pdpi on 12/28/24, 9:07 AM

    > Think of a thin wrapper over a function, one that adds no behavior but adds an extra layer to navigate. You've surely encountered these—classes, methods, or interfaces that merely pass data around, making the system more difficult to trace, debug, and understand. These aren't abstractions; they're just layers of indirection.

    “No added behaviour” wrapper functions add a lot of value, when done right.

    First off, they’re a good name away from separating what you’re doing from how you’re doing it.

    Second, they’re often part of a set. E.g. using a vector for a stack, push(x) can be just a call to append(x), but pop() needs to both read and delete the end of the vector. Push in isolation looks like useless indirection, but push/pop as a pair are a useful abstraction.
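    The push/pop pair described above can be sketched in Python (a hypothetical Stack over a plain list): push alone is a one-line pass-through, but the pair together forms the abstraction.

```python
class Stack:
    """LIFO abstraction over a plain list.

    push() in isolation is just a one-line wrapper around append(),
    but push/pop together promise LIFO access and hide the backing
    representation, which is what makes the pair an abstraction.
    """

    def __init__(self):
        self._items = []          # backing "vector"; never exposed to callers

    def push(self, x):
        self._items.append(x)     # looks like useless indirection on its own

    def pop(self):
        return self._items.pop()  # reads and deletes the end in one step

    def __len__(self):
        return len(self._items)
```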

    A consequence of adding these two points together is that, if you have a good abstraction, and you have a good implementation that maps well to the abstraction, it looks like useless indirection.

    Another consequence is that those pass-through wrapper functions tell you how I think the implementation maps to the domain logic. In the presence of a bug, it helps you determine whether I got the sequence of steps wrong, or got the implementation wrong for one of the steps.

    Ultimately, the two aren’t completely independent: indirection is one of the tools we have available to us to build abstractions with. Yes, people misuse it, and abuse it, and we should be more careful with it in general. But it’s still a damned useful tool.

  • by havkom on 12/28/24, 1:04 PM

    I have seen tons of ”abstractions” in recently created code bases from ”senior developers” which are in actual fact only a titanic-grade mess of complicated ”indirection”. Many people nowadays are unfortunately not fit to work in software development.
  • by ChrisMarshallNY on 12/28/24, 5:20 AM

    > a bad one turns every small bug into an excavation.

    I find that I need to debug my abstractions frequently, while I’m first writing my code, then I never need to dig into them, ever again, or they do their job, and let me deal with adding/removing functionality, in the future, while not touching most of the code.

    That’s why I use them. That’s what they are supposed to do.

    Because they are abstractions, this initial debugging is often a lot harder than it might be for “straight-through” code, but is made easier, because the code architecture is still fresh in my mind; where it would be quite challenging, coming at it without that knowledge.

    If I decide it’s a “bad abstraction,” because of that initial debugging, and destroy or perforate it, then what happens after, is my own fault.

    I’ve been using layers, modules, and abstractions, for decades.

    Just today, I released an update to a shipping app, that adds some huge changes, while barely affecting the user experience (except maybe, making it better).

    I had to spend a great deal of time testing (and addressing small issues, far above the abstractions), but implementing the major changes was insanely easy. I swapped out an entire server SDK for the “killer feature” of the app.

  • by toolslive on 12/28/24, 8:46 PM

    I don't consider TCP an abstraction at all. The abstraction is the unix API over it, and then again, the

    > ssize_t send (int socket, const void *buffer, size_t size, int flags)

    is not a nice one. When was the last time you had the data in a buffer, wanted to send it over to the peer at the other side, but didn't mind that it's not sent in its entirety? So you have to write a loop over it. Also, is the call blocking or not? (Well, you'll have to read the code that created the socket to know, so that's no fun either.)
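    As a sketch of that loop, here is the Python equivalent; sock.send has the same partial-write contract as the C call (and the stdlib ships socket.sendall for exactly this reason). The fake socket is invented so the example runs without a network:

```python
def send_all(sock, data: bytes) -> None:
    """Call send() until the whole buffer is transmitted.

    Like the C send(), sock.send() may write only a prefix of the
    buffer, so the caller must loop over the unsent remainder.
    """
    view = memoryview(data)
    while view:
        sent = sock.send(view)  # may transmit fewer bytes than requested
        view = view[sent:]      # retry with what's left

# A fake socket that transmits at most 3 bytes per call,
# to demonstrate why the loop is necessary.
class ShortWriteSocket:
    def __init__(self):
        self.wire = b""
    def send(self, buf):
        chunk = bytes(buf[:3])
        self.wire += chunk
        return len(chunk)
```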

    However, it does prove the point the author is trying to make: good abstractions are hard to find!

    Anyway, I tried to think of a better example of a good abstraction and found the "Sequence" that's available in plenty of programming languages. You don't need to know what the exact implementation is (is it a list, a tree? Don't care!) to be able to use it. Other examples I found were Monoid and Monad, but those are tied to the functional paradigm, so you lose most of the audience.

  • by mrcsd on 12/28/24, 11:38 AM

    Just thinking on my feet as to how I separate abstractions from indirections and it seems to me that there's a relatively decent rule of thumb to distinguish them: When layer A of code wraps layer B, then there are a few cases:

        1) If A is functionally identical to B, then A is a layer of indirection
        2) If A is functionally distinct from B, then A is likely an abstraction
        3) If A is functionally distinct from B, but B must be considered when 
           handling A, then A is a leaky abstraction.
    
    The idea is that we try to identify layers of indirection by the fact that they don't provide any functional "value".
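    A toy illustration of cases 1 and 2, in Python (both functions are invented for the example):

```python
def append_item(lst, x):
    # Case 1: functionally identical to list.append, so it is a pure
    # layer of indirection; it provides no functional "value".
    lst.append(x)

def push_unique(lst, x):
    # Case 2: functionally distinct. It adds a no-duplicates guarantee
    # that the underlying list does not provide, so by the rule of
    # thumb above it is likely an abstraction.
    if x not in lst:
        lst.append(x)
```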
  • by noodletheworld on 12/28/24, 6:30 AM

    Pretty easy to give generic advice without examples.

    “Write more tests, but not too many”

    “Use good abstractions where appropriate?”

    “The next time you reach for an abstraction, ask yourself: Is this truly simplifying the system? Or is it just another layer of indirection?”

    It’s easy to create a strawman here (the FactoryAdaptorMapper or whatever) but in reality this kind of generic advice doesn’t help anyone.

    Of course people want to use good abstractions.

    That’s not the problem.

    The problem is being able to tell the difference between generic arbitrary advice (like this post) and how your specific code base needs to use abstractions.

    …and bluntly, the only way to know, is to either a) get experience in the code base or b) read the code that others have left there before you.

    If it’s a new project, and you’re not familiar with the domain you’ll do it wrong.

    Every. Single. Time.

    So, picking “good” abstractions is a fools game.

    You’ll pick the wrong ones. You’ll have to refactor.

    That’s the skill; the advice to take away; how to peel back the wrong abstraction and replace it with your next best guess at a good one. How to read what’s there and understand what the smart folk before did and why.

    …so, I find this kind of article sort of arrogant.

    Oh, you want to be a great programmer?

    Just program good code. Use good abstractions. Don’t leave any technical debt. Job done!

    …a few concrete examples would go a long way here…

  • by Voultapher on 12/28/24, 10:42 AM

    > That’s the sign of a great abstraction. It allows us to operate as if the underlying complexity simply doesn't exist.

    While I generally agree with the sentiment that current-day software development is too indirection-heavy, I'm not sure I agree with that point. All abstractions are leaky, and sure, good abstractions allow you to treat them like a black box, but at some point you'd benefit from knowing how the sauce is made, and in other cases you'll be stuck with some intractable problem if you lack knowledge of the underlying layers.

  • by globular-toast on 12/28/24, 9:57 AM

    While I too like to marvel at the TCP/IP stack as an example of abstraction done right, it would be unwise to think an abstraction is only "good" if you get it right first time.

    The real point of abstraction is to enable software that is adaptable. If you are ever sure you can write a program the first time and get it perfect then you don't need to bother with any of this thinking. We do that all the time when writing ad hoc scripts to do particular tasks. They do their job and that's that.

    But if you ever think software will continue to be used then you can almost guarantee that it will need to change at some point. If it is just a tiny script it's no problem to write it again, but that's not going to be acceptable for larger programs.

    So this necessarily means that some layer or layers of your well-architected application will have to change. That does not mean it was a bad abstraction.

    Abstraction is not about hiding things, it's about building higher levels of language. It enables you to work on individual layers or components without breaking the rest of the system. It very much should not be hiding things, because those things are likely to need to change. The bits that really don't change much, like TCP, are rarely written into application code.

  • by mightyham on 12/28/24, 3:33 AM

    I forget which programming talk I watched which pointed this out, but one extremely common example of this in Java is recreating subsets of the Collections API. I've done this before, heck even the Java standard library is guilty of this problem. When a class has a full set of get/put/has/remove methods, it is often not actually hiding the complexity of its component data structures.
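    The same shape can be shown in Python rather than Java (the class is invented for illustration): a wrapper that mirrors a map's interface method-for-method hides nothing.

```python
class Registry:
    """Recreates a map's get/put/has/remove surface one-for-one.

    Every method forwards to the underlying dict unchanged, so the
    class hides no complexity and adds no behavior: a layer of
    indirection rather than an abstraction.
    """

    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value

    def has(self, key):
        return key in self._data

    def remove(self, key):
        del self._data[key]
```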
  • by getnormality on 12/28/24, 3:37 AM

    How did "abstraction" and "hiding complexity" become perceived as such fundamental virtues in software development? There are actual virtues in that ballpark - reusable, reliable, flexible - but creating abstractions and hiding complexity does not necessarily lead to these virtues. Abstraction sounds no more virtuous to me than indirection.
  • by mannyv on 12/28/24, 6:17 AM

    Abstraction hides detail, but at what cost?

    A network close call at a high level closes a network socket. But at the TCP level there's a difference between close and reset. Which do you want? Your API has removed that choice from you, and if you look, you will have no idea whether the close API does a close or a reset.

    Is the difference important? It depends. If you have a bunch of half-open sockets and run out of file descriptors, then it becomes very important.

    Another example: you call read() on a file, and read 10k bytes. Did you know your library was reading 1 byte at a time unbuffered? This abstraction will/can cause massive performance problems.

    My favorite one is when a programmer iterates over an ORM-enabled array. Yes, let's do 50,000 queries instead of one because databases are too complicated to learn.
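    The N+1 pattern being mocked, reduced to a runnable sketch with sqlite3 (the table and numbers are invented); both approaches return the same rows, but one issues a query per row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i * 10) for i in range(100)])

# ORM-loop style: one query per row (the "50,000 queries" anti-pattern).
totals_slow = [
    conn.execute("SELECT total FROM orders WHERE id = ?", (i,)).fetchone()[0]
    for i in range(100)
]

# Set-based style: a single query returns the same data in one pass.
totals_fast = [row[0] for row in
               conn.execute("SELECT total FROM orders ORDER BY id")]

assert totals_slow == totals_fast  # same result, 100 round trips vs. 1
```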

    Just like any tool, abstraction has costs and benefits. The problem is that lots of people ignore the cost, and assume the benefit.

  • by johnfn on 12/28/24, 8:41 PM

    I found this to be a pretty poor article. The article lacks concrete examples and speaks in generalities to explain its core thesis. But without specificity, a reader can nod along, sure in the knowledge that they already are wise and follow this advice and it's everyone else who's out there mucking up code bases with layers upon layers of garbage. I mean,

    > The next time you reach for an abstraction, ask yourself: Is this truly simplifying the system? Or is it just another layer of indirection?

    Is anyone reading this truly going to alter their behavior? If I could recognize that my abstraction was "just another layer of indirection" that easily, obviously I wouldn't have added it in the first place!

  • by phtrivier on 12/28/24, 8:47 PM

    Let's not forget about a particularly frustrating kind of "level of abstraction": the bespoke interface to a part of the code that has side effects, and that has exactly two implementations: one in the production code, and one in the tests.

    If I were to create a language tomorrow, that's the one aspect where I would try something that I have not yet found elsewhere : can you make it so that you can "plug" test doubles only for test, but keep the production path completely devoid of indirection or late binding.

    (I'm curious if you know of a language that already does that. I suppose you can hack something in C with #ifdef, of course...)

  • by alexvitkov on 12/28/24, 2:29 PM

    While I wholeheartedly agree with the premise, the article doesn't really say anything other than "TCP good, your abstraction bad, don't use abstraction".
  • by mixermachine on 12/28/24, 7:08 AM

    Reminds me of an old Java Android project I encountered.

    EVERY class implemented an interface. 98% of interfaces had one implementation.

    Every programmer was applying a different programming pattern. A lot of abstractions seemed incomplete and did not work.

    Proguard (mostly used for code obfuscation in Android apps) definitions were collected in the top module even though the project had multiple modules. Half of the definitions were no longer needed and the code was badly obfuscated. Problems were solved by continuously adding classes and seeing what sticks.

    The UI was controlled by a state machine. State transitions were scattered everywhere in the code, with lots of conditions in unforeseen places.

    Legacy code was everywhere because no one wanted to risk a very long debugging session after an unforeseen change.

    No API definitions. Just Maps that get sent via REST to URLs.

    My biggest mistake was to not rewrite this project directly when I joined the team. We did after one year.

  • by ericflo on 12/28/24, 6:51 AM

  • by psychoslave on 12/28/24, 10:00 AM

    The article seems to go with the premise that abstractions are most often carelessly introduced when there is an obvious alternative that is simpler and more performant.

    Yes, abstractions have a cost that will accumulate as they are layered.

    But simple, elegant solutions are not free. They are hard to come up with, so they often need a large amount of dedication ahead of any coding. And as long as we don't deliver anything, we have no clue what actual requirements we miss in our assumptions.

    The road to reach the nice simple solutions is more often than not to go through some clunky, ugly solutions.

    So rather than to conclude with "before running to abstraction think wisely", I would rather recommend "run, and once you'll have some idea of what was the uncharted territory like, think about how to make it more practical for future walks."

  • by aktenlage on 12/28/24, 8:33 AM

    Interesting read, although I don't agree with everything. I like the distinction between different qualities of abstractions, made in the beginning. The following bashing of abstractions is too generalized for my taste.

    The best part comes close to the end:

    > Asymmetry of abstraction costs

    > There’s also a certain asymmetry to abstraction. The author of an abstraction enjoys its benefits immediately—it makes their code look cleaner, easier to write, more elegant, or perhaps more flexible. But the cost of maintaining that abstraction often falls on others: future developers, maintainers, and performance engineers who have to work with the code. They’re the ones who have to peel back the layers, trace the indirections, and make sense of how things fit together. They’re the ones paying the real cost of unnecessary abstraction.

  • by marginalia_nu on 12/28/24, 3:39 PM

    While I think it's good that the pendulum is swinging toward a more restrictive approach to abstractions, we've (and I've) certainly been leaning a bit too much toward solving every problem by adding a layer of indirection around it, and such onion-layered designs tend to (as the metaphor implies) cause a lot of tears when you cut through them. That said, it's not like abstraction itself is bad.

    A big part of the problem is arguably that IDEs make code navigation easier, which has us adding all these indirections and discovering only when it's too late what a horrible maze we've built. Being more judicious about adding indirection really does help force better designs.

  • by thomasjudge on 12/28/24, 4:51 AM

    I wish this article had more examples/details; as it is, it is kind of... abstract
  • by lifeisstillgood on 12/28/24, 9:43 AM

    I suggest there are three types of layer that one passes through

    Abstraction - this thing of rare beauty

    Decision - often confused for abstraction and wrapper, this is best thought of as a case statement in a function. They are wildly better in my opinion than lots of classes

    Wrapper - either fluff like getters and setters or placeholders for later decisions (acceptable) or weird classes and instances that the language affords but tend to be confusing - what is called indirection in the article

    Tools, utils, libraries - these I classify as handles / affordances for other code to use - maybe they add layers, but they add a single way in to the nice abstraction above.

  • by mitch-crn on 12/28/24, 1:41 PM

    This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface. -Doug McIlroy
  • by Garlef on 12/28/24, 11:10 AM

    Wow... That was a lot of text without much depth to it.

    [Edit] To make my criticism more precise: the text mostly rephrases its central point a few times and presents these rephrasings as arguments.

  • by anonytrary on 12/28/24, 7:11 AM

    I'm not sure what it's called (abstraction vs. indirection) but I dislike when everything needs a class/object with some odd combination of curried functions. Some programming languages force this on you more than others I think? As a contrived example "StringManager.SlicingManager.sliceStringMaker(0)(24)(myStr)", I've seen code that reminds me of this and wonder why anyone uses a language where this not only an acceptable idiom, but a preferred one.
  • by ozim on 12/28/24, 8:50 AM

    Lots of CRUD apps add a 3-tier architecture that ends up as something that could have been 2-tier.

    People add it just in case, but the case never materializes. For some it probably does, but not for the ones I worked on.

  • by ciwchris on 12/30/24, 5:30 PM

    I can't help but wonder whether the problem is subjective, to each person and what they need to accomplish at the time. What is cognitive load and indirection to one at one time is a simple abstraction to another at another time.

    And so I wonder if a solution to this is for editors to be able to represent the code differently, depending on what the person's needs are at the time.

  • by sltr on 12/28/24, 2:58 PM

    When discussing the definition of abstraction, Koppel's article "Abstraction: Not What You Think It Is" offers a helpful framing and disambiguation.

    https://www.pathsensitive.com/2022/03/abstraction-not-what-y...

  • by kristiandupont on 12/28/24, 9:38 AM

    Indirection serves a purpose as well. One that is related to, but not the same as abstractions. When you add a layer of indirection, you make it easier to, say, delete or change every item X instead of iterating through everything.

    Unnecessary or redundant levels of indirection are bad, just like unnecessary or wrong abstractions are. But when applied correctly, they are useful.

  • by freetonik on 12/28/24, 11:22 AM

    Shameless plug: a while ago I made a video explaining the idea of abstraction in computer science, and it seems to be helpful for beginners: https://youtu.be/_y-5nZAbgt4
  • by nwmcsween on 12/28/24, 11:40 AM

    The goal of an abstraction should be to make reasoning about the code easier. Generally that means hiding complexity, but hiding complexity shouldn't itself be the goal.

    In my opinion a common issue in programming is premature abstraction without understanding the interactions as a whole.

  • by weMadeThat on 12/29/24, 9:48 AM

    GangBang-Abstractions would be a good name. Abusive, damaging the respective micro-biome ( parts of the code/system ) almost irreversibly and passing around the data for brutal telemetric exploitation ...
  • by oalae5niMiel7qu on 12/29/24, 1:30 AM

    > When was the last time you had to debug TCP at the level of packets? For most of us, the answer is never.

    Who are you people who never have to debug TCP problems? I've had to do it on multiple occasions.

  • by sixthDot on 12/28/24, 10:44 AM

    Another criticism would be the time spent compiling those abstractions, even if, in the end, the "zero cost" goal _at runtime_ is reached.
  • by wvlia5 on 1/1/25, 1:42 AM

    Chuck Moore, the creator of Forth, was famous for implementing solutions without layers of abstraction.
  • by scotty79 on 12/28/24, 11:10 AM

    Every problem can be solved by adding a layer of abstraction. Except for the problem of having too many layers of abstraction.
  • by mrkeen on 12/28/24, 12:07 PM

    It starts with a strong point that abstraction is not indirection, but then slips back into using the terms interchangeably.
  • by Gehinnn on 12/28/24, 1:17 PM

    A good abstraction shouldn't make its usage shorter, it should make the proof that the usage is correct shorter.

    This usually means the total amount of assumptions needed to prove everything correct is decreased. (when the code is not actually formally verified, think of "proof length" as mental capacity needed to check that some unit of code behaves as intended)

  • by herdcall on 12/28/24, 3:07 PM

    To me, the value of abstraction is more about making the code flexible than hiding complexity.
  • by nmilo on 12/29/24, 12:22 AM

    The author misses the whole point of abstractions: a layer B that covers layer A so completely that no one using layer B needs to know how layer A works at all, except for those working on the layer A/B bridge.

    For example, binary logic is a perfect abstraction over semiconductor physics. No one doing computer science needs to understand the complexities of voltages and transistors anymore. TCP is a perfect abstraction over IP. Memory as a big array of bytes is a perfect abstraction over the intricacies of timing DRAM refreshes.

    And that's about it. No one reading this post has written an abstraction ever. (a leaky abstraction is not an abstraction). So yes, actually, abstractions are free, and they don't leak. That's the whole point. The problem is that what you call an abstraction isn't an abstraction.

    Computer science's complete failure to create a new abstraction since like TCP intrigues me. Why don't we have a system X that abstracts over memory accesses so well that no one needs to know anymore how caches or memory locality works? Why aren't there entire subfields studying this?

  • by theGnuMe on 12/28/24, 12:21 PM

    Nice to see hacker news return to its roots.
  • by feverzsj on 12/28/24, 4:17 AM

    What about java?
  • by PaulHoule on 12/28/24, 2:50 PM

    Without more details his position rubs me the wrong way.

    As somebody who has done a huge amount of "fix this bug" and "add this feature" on existing code bases I think excessive use of cut and paste is the worst problem in the industry. Cut-and-paste is the devil's own "design pattern" as it is a practice that gets repeated throughout a code base to solve various problems.

    When it comes to bugs repetition means a bug might have 13 copies throughout the code and you could easily get the ticket sent back 2 or 3 times because you didn't find all the copies at first.

    Repetition (together with poorly chosen abstractions) also causes complexity that should grow additively to instead grow multiplicatively: if I have 3 versions of a function and now something can vary 5 ways, I have 15 functions. In a good design you might pass one of 8 functions to a 9th function. Repeat this a few times and one developer has 98 functions where the other would have had 13,200, if he'd been able to get that far.
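    The arithmetic above can be made concrete in Python (the functions are invented for the example): instead of hand-writing 3 x 5 = 15 combined functions, pass the varying step as an argument and maintain 3 + 5 separate definitions.

```python
# The "9th function": a shared skeleton that takes the varying step
# as a parameter instead of being copy-pasted once per variant.
def transform_records(records, transform):
    return [transform(r) for r in records]

# The "one of 8 functions": each variant is a small, separate piece
# that composes with the skeleton rather than multiplying it.
variants = {
    "upper": str.upper,
    "lower": str.lower,
    "title": str.title,
}

result = transform_records(["ad hoc", "code"], variants["upper"])
```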

    Granted the speed demon won't like all that function calling, right now I am thinking about writing a big switch statement for a CPU emulator, I get it, for you there is code generation.

    It is also healthy to have "fear of framework planets". A horrible example is react-router, which has gotten up to incompatible version 7 because (i) it's the kind of thing you can write in an afternoon (it would take more time to write good documentation but... take a look at that documentation) and (ii) the authors never liked any of the frameworks they created. More than once I have dug into a busted app written by a fresher where there was a copy of react-router, and some imports from it were in use, but they had bypassed react-router and parsed document.location directly to figure out what to display. The very existence of a bad framework creates a kind of helplessness.

    Those folks will say various versions of react-router support React Native, SSR, etc. We don't use any of those where I work, I don't care.

    It is a very good prop bet that you can dramatically speed up so-and-so's program by switching from AoS to SoA.

    https://en.wikipedia.org/wiki/AoS_and_SoA

    (If it's a Java program, you eliminate the overhead of N objects to start with)

    but it's tricky to implement arbitrary algorithms that way. My mental model for doing it is to build programs out of relational operators (even in my head or on paper). SQL is one of the greatest abstractions of all time, as I can write a SQL query and have it be run AoS or SoA or some hybrid, as well as take advantage of SIMD, SMT, GPU and MP parallelism. 10 years ago I would have said I could beat any SQL engine with hand-optimized code; today, products like

    https://duckdb.org/

    would make that harder.
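    The AoS/SoA switch mentioned above can be sketched in plain Python (the real speedup needs contiguous memory and SIMD, e.g. via numpy; this only shows the two layouts):

```python
# Array of Structures: one record per point; fields interleaved in memory.
aos = [{"x": i, "y": 2 * i} for i in range(4)]
sum_x_aos = sum(point["x"] for point in aos)

# Structure of Arrays: one array per field; each field is contiguous,
# which is what cache- and SIMD-friendly code wants to iterate over.
soa = {"x": [i for i in range(4)], "y": [2 * i for i in range(4)]}
sum_x_soa = sum(soa["x"])

assert sum_x_aos == sum_x_soa  # same answer, different memory layout
```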

  • by VirusNewbie on 12/28/24, 4:42 AM

    >There’s a well-known saying: "All abstractions leak." It’s true. No matter how good the abstraction, eventually, you’ll run into situations where you need to understand the underlying implementation details

    This is false. One can read up on "Theorems for Free" by Wadler to see that not all abstractions are leaky.