from Hacker News

“Clean” code, horrible performance

by eapriv on 2/28/23, 5:55 AM with 907 comments

  • by mabbo on 2/28/23, 9:02 PM

    I think the author is taking general advice and applying it to a niche situation.

    > So by violating the first rule of clean code — which is one of its central tenets — we are able to drop from 35 cycles per shape to 24 cycles per shape

    Look, most modern software is spending 99.9% of the time waiting for user input, and 0.1% of the time actually calculating something. If you're writing a AAA video game or high-performance calculation software, then sure, go crazy, get those improvements.

    But most of us aren't doing that. Most developers are doing work where the biggest problem is adding the umpteenth feature that Product has planned (but hasn't told us about yet). Clean code optimizes for improving time-to-market for those features, not for the CPU doing less work.

  • by GrumpySloth on 2/28/23, 11:08 PM

    This thread is, predictably, another demonstration of conflating optimisation with being aware of performance.

    The presented transformation of the code away from “clean” code had nothing to do with optimisation. In fact, it made the code more readable IMO. Then it demonstrated that most of those “clean” code commandments are detrimental to performance. So obviously when people saw the word “performance”, they immediately jumped to “omg, you’re optimising, stop immediately!”

    Another irritating reaction here is the straw man of optimising every last instruction: the course so far has been about demonstrating how much performance is even on the table with reasonable code, to build up an intuition about what orders of magnitude are possible. Casey repeated several times that the right level of performance depends on you and your situation. But you should be aware of what’s possible, of the multipliers you get from all those decisions.

    And of course people bring up profilers: no profiler will tell you whether a function is optimal or not — only what portion of runtime is spent where. And if all your life you’ve been programming in Python, then your intuition about performance often is on the level of “well I guess in C it could be 5-10 times faster; I’ll focus on something else”, which always comes up in response to complaints about Python. Not even close.

  • by stunpix on 2/28/23, 3:52 PM

    So he puts polymorphic function calls into enormous loops, simulating a heavy load over a huge amount of data, to conclude "we have a 20x loss in performance everywhere"? He is either a huge troll or he exhibits the typical premature-optimization fallacy: if we called this virtual method 1 billion times we would lose hours per day, but if we optimize, it will take less than a second! The real situation: a virtual method is called only a few hundred times and is barely visible in profiling tools.

    No one is working with a huge amount of data in big loops, using virtual methods to take every element out of a huge dataset, like he is showing. That's a false premise he is trying to debunk. Polymorphic classes/structs are used to represent some business logic of an application, or structured data: a few hundred such objects that keep some state and a small amount of other data, so they are never involved in intensive computations like he shows. In real projects, such "horrible" polymorphic calls never pop up under profiling and usually occupy a fraction of a percent overall.

  • by thom on 2/28/23, 11:30 AM

    There is no doubting Casey's chops when he talks about performance, but as someone who has spent many hours watching (and enjoying!) his videos, as he stares puzzled at compiler errors, scrolls up and down endlessly at code he no longer remembers writing, and then - when it finally does compile - immediately has to dig into the debugger to work out something else that's gone wrong, I suspect the real answer to programmer happiness is somewhere in the middle.

  • by fredrikholm on 2/28/23, 7:28 AM

    I've worked in projects where no one seemed to know SQL, where massive speed improvements were made by picking very low-hanging fruit: removing SELECT * queries, adding naive indexes, removing N+1 queries, etc.

    Likewise, I've worked in code bases where performance had been dreadful, yet there were no obvious bottlenecks. Little by little, replacing iterators with loops, objects/closures with enum-backed structs/tables, adding early exits and so on, the changes accumulated to the point where speed-ups ranged from 2x to 50x without changing algorithms (outside of fixing basic mistakes like not preallocating vectors).

    Always fun to see these videos. I highly recommend his `Performance Aware Programming` course linked in the description. It's concise and to the point, which is a nice break from his more casual videos/streams which tend to be long-winded/ranty.

  • by vore on 2/28/23, 9:11 AM

    This guy is so dogmatic about it that it hurts. I would argue that clean code is a spectrum of how flexible vs how rigid you want your abstractions to be. If your abstractions are too flexible for good performance, dial them back when you see the issue. If your abstractions are too rigid for your software to be extendable, then introduce indirection.

    We can all write code that glues a very fixed set of things end to end and squeeze every last CPU cycle of performance out of it, but as we all know, software requirements change, and things like polymorphism allow for much better composition of functionality.

  • by guhcampos on 3/1/23, 9:13 AM

    I don't like most of these "principles", as anyone can verify by looking at my previous comments, but this article is cherry-picking to its utmost level of unfairness.

    These "clean code" principles should not, and generally are not, ever used at performance critical code, in particular computer graphics. I've never seen anyone seriously try to write computer graphics while "keeping functions small" and "not mixing levels of abstraction". We can go further: you won't be going anywhere in computer graphics by trying to "write pure functions" or "avoiding side effects".

    These "clean code principles" are, however, rather useful for large, corporate systems, with loads of business process rules maintained by different teams of people, working for multiple third parties with poor job retaining. You don't need to think about vector performance for processing credit card payments. You don't need to think about input latency for batch processing data warehouse jobs, but you need this types of applications to work reliably. Way more reliably than a videogame or a streaming service.

    Right tools for the right jobs, people need to stop trying to hammer everything into the same tools. This is not only a bad practice in software, it's a bad practice in life, the search for an ever elusive silver bullet, a panacea, a miracle. Just drop it and get real.

  • by lumb63 on 2/28/23, 10:58 PM

    I don’t understand why there is still the false dichotomy between performance and speed of development/readability. Arguments on HN and in other software circles suggest performant code cannot be well organized, and that well organized code cannot be performant. That’s false.

    In my experience, writing the code with readability, ease of maintenance, and performance all in mind gets you 90% of each of the benefits you’d have gotten focusing on only one of the above. For instance, instead of pretending that an O(n^2) algorithm is any “cleaner” than an O(n log n) algorithm because it was easier for you to write, just use the better algorithm. Or, instead of pretending Python is more readable or easier to develop in than Rust (assuming developers are skilled in both), just write it in Rust. Or, instead of pretending that you had to write raw assembly to eke out the last drop of performance in your function, target the giant mess elsewhere in your application where 80% of the time is spent.

    A lot of the “clean” vs “fast” argument is, as I’ve said above, pretending. People on both sides pretend you cannot have both, ever, when in actuality you can have almost all of what is desired in 95% of cases.

  • by ysavir on 2/28/23, 9:11 PM

    As a general rule, optimize for your bottlenecks.

    If you have a large amount of I/O, and you can see the latency being tracked and which parts of the code are problematic, optimize those parts for execution speed.

    If you have frequent code changes with an evolving product, and I/O that doesn't raise concerns, then optimize for code cleanliness.

    Never reach for a solution before you understand the problem. Once you understand the problem, you won't have to search for a solution; the solution will be right in front of you.

    Don't put too much stock in articles or arguments that stress solutions to imaginary problems. They aren't meant to help you. Appreciate any decent take-aways you can, make the most of them, but when it comes to your own implementations, start by understanding your own problems, and not any rules, blog titles, or dogmas you've previously come across.

  • by noobermin on 3/1/23, 8:06 AM

    This comment section just shows how many developers are victims of groupthink. Here is actual evidence that at least hints that the primary paradigm is wrong, and immediately a bunch of nerds jump in and attack, instead of taking the criticism in good faith.

    Compare this to discussions about FP, new languages like Rust, and so forth. This really demonstrates that the primary vogue mindset is increasing complexity and hierarchy to the detriment of all else, and is why the supposed new paradigms of "modern software development" are not really that new but just evolutions of the current ones. You can tell what a culture's sacred cows are by what attracts criticism without any real, sincere rebuttal.

  • by xupybd on 2/28/23, 9:32 PM

    >It simply cannot be the case that we're willing to give up a decade or more of hardware performance just to make programmers’ lives a little bit easier. Our job is to write programs that run well on the hardware that we are given. If this is how bad these rules cause software to perform, they simply aren't acceptable.

    That is not our job! Our job is to solve business problems within the constraints we are given. No one cares how well it runs on the hardware we're given. They care if it solves the business problem. Look at Bitcoin, it burns hardware time as a proof of work. That solves a business problem.

    Some programmers work in industries where performance is key but I'd bet not most.

    CPU cycles are much cheaper than developer wages.

  • by nimih on 2/28/23, 9:31 PM

    > Our job is to write programs that run well on the hardware that we are given.

    The author seems to be neglecting the fact that the whole point of “clean code” is to improve the likelihood of achieving that first goal (code that runs well, i.e. correctly) across months/years of changing requirements and new maintainers. No one (that I’ve ever spoken to or worked with, at least) denies that you can almost always trade off maintainability for performance.

    Admittedly, I think a lot of prescriptions that get made in service of “clean code” are silly or counterproductive, and people who obsess over it as an end unto itself can sometimes be tedious and annoying, but this article is written in such incredible bad faith that it's impossible to take seriously.

  • by boredumb on 2/28/23, 9:27 PM

    I've seen clean code lead to over-architected and unmaintainable nightmares that ended up with incorrect abstractions that became much more of a problem than performance.

    The more the years pile up, the more I agree with the sentiment in this post. By generally going for something that works and is about as optimal as the code would get if I came back to make it more performance-oriented in the future, I end up with something generally as simple as I can get. Languages and libs are abstract enough in most cases, and any extra design patterning and abstracting is generally going to bite you in the ass more than it's going to save you from business folks coming in with unknown features.

    I suppose: write code that is conscious of its memory and CPU footprint, avoid trying to guess which future features may or may not reuse parts of your existing routines, and try even harder to avoid writing abstractions that are based on your guesses about the future.

  • by pron on 2/28/23, 11:51 AM

    Much of this is very compiler dependent. For example, Java's compiler is generally able to perform more aggressive optimisations, and even virtual calls are often, and even usually, inlined (so if at a particular call-site only one shape is encountered, there won't even be a branch, just straight inline code, and if there are only two or three shapes, the call would compile to a branch; only if there are more, i.e. a "megamorphic" call site will a vtable indirection actually take place). There is no general way of concluding that a virtual call is more or less costly than a branch, but the best approximation is "about the same."

    Having said that, even Java now encourages programmers to use algebraic data types when "programming in the small", and OOP/encapsulation at module boundaries: https://www.infoq.com/articles/data-oriented-programming-jav... though not for performance reasons. My point being that the "best practice" recommendations for mainstream languages do change.

  • by quickthrower2 on 2/28/23, 11:59 PM

    Clean code to me is like good writing. It should be easy to read and comprehend later. The rules are not rules but guidelines.

    I think OO-centric rules are harmful in a world where languages support functional programming etc. Polymorphism isn’t always the best answer. A big nested if statement that reads like the business spec can be easier to follow and reason about.

    That aside, if easy-to-understand code makes your app a bit slower, you profile to work out why and fix up the bits that matter, making the tradeoff where it is needed.

    Writing code for what you think will be performant everywhere, and not caring about readability in the process, is a fool's errand, at least in most SaaS/Web/Business apps.

  • by suyjuris on 2/28/23, 9:43 AM

    It is also important to consider that better performance increases your productivity as a developer. For example, you can use simpler algorithms, skip caching, and have faster iteration times. (If your code takes 1min to hit a bug, there are many debugging strategies you cannot use, compared to when it takes 1s. The same is true comparing 1s and 10ms.)

    In the end, it is all tradeoffs. If you have a rough mental model of how code is going to perform, you can make better decisions. Of course, part of this is determining whether it matters for the specific piece of code under consideration. Often it does not.

  • by tomxor on 3/1/23, 2:52 PM

    The no. 1 piece of advice I give to junior programmers now, or any programmer trying to improve, is to care about your code; everything else can emerge naturally, and more safely, from that one principle.

    The problem with laying down a bunch of arbitrary rules is that they never apply to all scenarios. As the person coming up with the rules, you can easily re-evaluate where and when they don't work, but the novice receiving those rules won't necessarily have an intuition for the reasoning behind them yet, and so won't apply them as judiciously. For everyone else: understand that there is no silver bullet, no 10 commandments that will give you the best result. Life is messy, and you need to think, to develop your own intuitions by interrogating your own code in each new context. But it all starts with caring about your code: not being satisfied with a pile of spaghetti, or a pile of OOP just because OOP, or a pile of strictly pure functions just because FP. Every single rule or programming pattern is wrong given enough contexts; it's all subjective.

    Discussing patterns and rules is useful, but only if they are used as mental anchors to think with, not as axioms of programming correctness.

  • by Fell on 3/1/23, 9:43 AM

    > Our job is to write programs that run well on the hardware that we are given.

    I actually believe "the hardware that we are given" is the entire root of the problem.

    Most programmers work and test using whatever hardware is current at the time, but this makes them blind to possible performance issues.

    Take whatever you're working on, and run it on the hardware of 5-10 years ago. If you still have a good experience, you're doing it right. If not, you should probably stop upgrading developer machines for a while.

    Whatever your minimum hardware requirements are should determine your development machines. This way, you will naturally ensure your low-end customers have a good experience while your high-end customers will have an even better experience.

    My game studio has been doing this for years. It saves money on expensive hardware, it prevents performance issues before they arise, and it saves developer time by not having to overthink optimization.

  • by Arnavion on 2/28/23, 8:55 AM

    "Use subclasses over enums" must be some niche advice. I've never heard it. The youtuber seems to be referring to some specific example (he refers to specific advice from "them") so I guess there's some context in the other videos of the series.

    re: the speedup from moving from subclassing to enums - Compiler isn't pulling its weight if it can't devirtualize in such a simple program.

    re: the speedup from replacing the enum switch with a lookup table and common subexpression - Compiler isn't pulling its weight if it can't notice common subexpressions.

    So both the premise and the results seem unconvincing to me.

    Of course, he is the one with numbers and I just have an untested hypothesis, so don't believe me.

  • by kingcai on 2/28/23, 10:54 PM

    I like this post a lot, even if it's a somewhat contrived example. In particular I like his point about switch statements making it easier to pull out shared logic vs. polymorphic code.

    There's so much emphasis on writing "clean" code (rightly so) that it's nice to hear an opposing viewpoint. I think it's a good reminder to not be dogmatic and that there are many ways to solve a problem, each with their own pros/cons. It's our job to find the best way.

  • by flavius29663 on 2/28/23, 9:46 PM

    One of the points of "clean code" is to make it easy to find the hotspots and optimize those. Write the codebase at a very high level, plenty of abstractions etc., and then optimize the 1% that really needs it. Optimizing a small piece of the software is not going against clean code; on the contrary, it reinforces it: you can spend the time to optimize only what is necessary.

  • by softfalcon on 2/28/23, 9:33 PM

    In my personal opinion, this is less an argument of "clean code" vs "performant code" and more one of traditional "object-oriented programming" vs "data-driven design".

    Ultimately though, data-driven design can fit under OOP (object-oriented programming) as well, since it's pretty much lightweight, memory-friendly structs being consumed by service classes, instead of polymorphing everything into a massive cascade of inherited classes.

    The article makes a good argument against traditional 1980s-90s era object-oriented programming concepts, where everything is a bloated monolith class with endless inheritance, but that pattern isn't common in most systems I've used recently. Which, to me, makes this feel a lot like a straw man argument: it's arguing against an incredibly outdated "clean code" paradigm that isn't popular or common with experienced OOP developers.

    One only really has to look at Unity's data driven pipelines, Unreal's rendering services, and various other game engine examples that show clean code OOP can and does live alongside performant data-driven services in not only C++ but also C#.

    Hell, I'm even doing it in Typescript using consolidated, optimized services to cache expensive web requests across huge relational data. The only classes that exist are for data models and request contexts, the rest is services processing streams of data in and out of db/caches.

    If there is one take-away that this article validated for me though, it's that data-driven design trumps most other patterns when performance is key.

  • by vrnvu on 2/28/23, 9:01 AM

    The problem with the contemporary "clean code" concept is that the narrative that performance and efficiency don't matter has been pushed down the throat of all programmers.

    Re-usability, OOP concepts or pure functional style, design patterns, TDD or XP methodologies are the only things that matter... And if you use them you will write "clean code". Even worse, the more concepts and abstractions you apply to your code the better programmer you are!

    If you look at the history of programming and classic texts like "The Art of Computer Programming", "SICP", or "Elements of Programming", the concept of "beautiful code" appears a lot. This is an idea that has always existed in our culture. The main difference with the "clean code" cult is that "beautiful code" also used to mean fast and efficient code: efficient algorithms, low memory footprint... on top of the "clean code" concerns of easy-to-test and reusable code, modularity, etc.

  • by alfalfasprout on 2/28/23, 11:57 PM

    A lot of people here seem to be saying that it's a spectrum between "clean code" and "performant code". Even the author alludes to that. Or that most code doesn't need to be fast.

    I find that viewpoint concerning because the reality is this isn't really a dichotomy. Code can be both performant and clean (note clean is not the same as elegant).

    One thing I think is confusing people is that the dogmatism about what's "idiomatic" is especially bad in OOP-heavy languages. This is worst in Java, where a fetishization of design patterns has led to codebases which are both ugly and unperformant.

    The reality is software design needs to consider performance from the get-go. Sure, there is such a thing as "premature optimization", but if you've determined performance is a goal then you should follow best practices for high performance from the get-go. That includes not trying to perform math on iterables of objects (since the data isn't contiguous, which prevents vectorization), avoiding accumulations, not creating and destroying tons of objects, etc. This can all be done in a clean way! And low-level code can be cleanly encapsulated so that other interfaces remain idiomatic and simple.
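
    To illustrate with a minimal sketch (my own toy example, not from the thread; actual vectorization depends on compiler flags):

        #include <memory>
        #include <vector>

        // "Iterable of objects": every element sits behind its own heap
        // pointer, so the data isn't contiguous and the loop can't vectorize.
        struct Sample { double value; };

        double sum_scattered(const std::vector<std::unique_ptr<Sample>>& samples) {
            double total = 0.0;
            for (const auto& s : samples) total += s->value;  // pointer chase per element
            return total;
        }

        // Contiguous layout: plain doubles side by side, which the compiler
        // can vectorize (given suitable floating-point flags).
        double sum_contiguous(const std::vector<double>& values) {
            double total = 0.0;
            for (double v : values) total += v;
            return total;
        }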

    A lot of people fret that this approach leads to "leaky abstractions" because implementation details inform the interface design. That just means you need to iterate on the interface design so it makes sense.

  • by bandika on 3/1/23, 6:48 AM

    I find it amusing that many corporate dev teams pick C++ for its performance / low-levelness, but then reject the kind of code Casey advocates for. It is extremely hard to convince them to consider these things (i.e., in this case, cache misses and branch mispredictions).

    Now, if we consider only a conservative 2x speed-up, I might not care if my app starts up in 2s or 4s, but I do care whether my device's battery lasts for 20h vs 10h.

  • by cranium on 3/1/23, 8:45 AM

    You can optimize even further by creating a custom chip to compute the area of shapes in the order of billions per second. But what's the point? Where is the value?

    Can't say it better than Knuth:

        We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.
    
    Clean code has never been about performance; it's about the other people who will read your code, future you included. The performance of people is more valuable for any product than code performance[1]. Only when performance becomes a bottleneck, or you want to optimize energy efficiency, then sure, don't pass up that 3% opportunity.

    [1] I would argue that it still holds for products that need high(er) performance code like video games, embedded systems, particle physics, ... These products just happen to hit bottlenecks way faster and some have hard cutoffs (eg. 60 fps for a game). Still, not everything needs to be optimized to the extreme: the algorithm to sort an in-game inventory does not need to handle 4B+ items.

  • by hooby on 3/1/23, 6:03 AM

    Obviously, if you are doing performance-critical code in a performance-critical application, you will be doing stuff like inlining and other things that "break" clean-code "rules".

    I put that into quotes, because to me personally these aren't strict rules - but rather guidelines. And they aren't meant to be pushed to the absolute extreme - but rather be seen as methods/tools used to achieve the actual goal: easily readable, maintainable and modifiable code.

    And in my workplace, "performance" isn't measured in cpu-cycles, but rather in man-hours needed to create business value. Adding more compute power comes cheaper than needing more man-hours.

    For the most part, it still seems to be a good idea to train new developers to know and understand clean code. It will help them produce more stable, less buggy and more readable code - and that means the code they write will also be easier to optimize for performance, if necessary. But with my work, that sort of optimization seems only ever necessary for very small pieces of code - most definitely not the entire code base.

  • by Semaphor on 2/28/23, 10:49 AM

    Here is the link as an article for people like me who don’t like watching videos: https://www.computerenhance.com/p/clean-code-horrible-perfor...

  • by davnicwil on 2/28/23, 10:50 AM

    I don't think there is a contradiction or surprising point here.

    At least my understanding of the case for clean code is that developer time is a significantly more expensive resource than compute, therefore write code in a way which optimises for developers understanding and changing it, even at the expense of making it slower to run (within sensible limits etc etc).

  • by userbinator on 3/1/23, 3:26 AM

    Related recent discussion about the actual Clean Code book: https://news.ycombinator.com/item?id=34843128

    I prefer a much simpler rule: if it's easy for the CPU to execute, it's likely easy for you to read too. That means: no deep nesting, minimise branchiness (indirect calls are the worst), keep the code small and simple.

  • by fer on 2/28/23, 8:50 AM

    Unrelated to the content itself, am I the only one wondering if he has his t-shirt mirrored or if he's really skilled at writing right-to-left?

    Content wise: his examples show such increases because they're extremely tight and CPU-bound loops. Not exactly surprising.

    While there will be gains in larger/more complex software from throwing away some maintainability practices (I don't like the term "clean code"), they will be dwarfed by the time actually spent on the operations themselves.

    Just toss a 0.01ms I/O operation into those loops; it will throw the numbers off by a large margin, and then one would rather pick sanity over the speed gains without blinking.

    That said, if a code path is hot and won't change anytime soon, by all means optimize away.

    Edit: the upload seems to have been deleted.

  • by doty on 3/1/23, 1:58 PM

    > Look, most modern software is spending 99.9% of the time waiting for user input, and 0.1% of the time actually calculating something.

    I'm sorry to say that this argument is not even wrong.

    As programmers, it is not useful for us to think about that 99.9% of the time. The 0.1% is literally our entire job.

    "Most of the universe is not Earth, so why do we spend so much time thinking about things on Earth?"

  • by gammalost on 2/28/23, 9:18 AM

    The video was removed and reuploaded. Here is the new link https://www.youtube.com/watch?v=tD5NrevFtbU

  • by fulafel on 3/1/23, 6:05 AM

    There is a lot written about the technical and architectural reasons code in games performs better than code in general-purpose applications, but people often forget the top-level reason: games have clear performance targets right from the start, and performance is the most obvious thing (right after "not crashing") that you see about how well a game works.

    Everything follows from this. It's not that game devs are so much cleverer than other devs; they are just constantly faced with first-hand feedback on "does the game code hit the frame-time budget", and the whole dev org is committed to that.

  • by pkolaczk on 2/28/23, 11:52 AM

    Most of those Clean code rules are BS.

    1. Prefer polymorphism to “if/else” and “switch” - if anything, that makes code less readable, as it hides the dispatch targets. Switch/if is much more direct and explicit. And traditional OOP polymorphism like in C++ or Java makes the code extensible in one particular dimension (types) at the expense of making it non-extensible in another dimension (operations), so there is no net win or loss in that area either. It is just a different tool, not better/worse (see the sketch after this list).

    2. Code should not know about the internals of objects it’s working with – again, that depends. Hiding the internals behind an interface is good if the complexity of the interface is way lower than the complexity of the internals. In that case the abstraction reduces the cognitive load, because you don't have to learn the internal implementation. However, the total complexity of the system modelled like that is larger, and if you introduce too many indirection levels in too many places, or if the complexity of the interfaces/abstractions is not much smaller than the complexity they hide, then the project soon becomes an overengineered mess like FizzBuzz Enterprise.

    3. Functions should be small – that's quite subjective, and also depends on the complexity of the functions. A flat (not nested) function can be large without causing issues. Going to the other extreme is not good either – thousands of one-liners can also be extremely hard to read.

    4. Functions should do one thing – "one thing" is not well defined; and functions have a fractal nature – they appear to do more things the more closely you inspect them. This rule can be used to justify splitting any function.

    5. “DRY” - Don’t Repeat Yourself – this one is pretty good, as long as one doesn't do DRY by just matching accidentally similar code (e.g. in tests).
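
    To illustrate point 1 with a toy sketch (my own example, echoing the article's shapes, not code from any book): with a switch, a new operation is one new function but a new type touches every switch; with virtual dispatch it is the reverse.

        enum class ShapeType { Square, Circle };
        struct Shape { ShapeType type; float width; };  // side or radius

        // Switch style: all dispatch targets visible in one place.
        // New operation = one new function; new type = edit every switch.
        float area(const Shape& s) {
            switch (s.type) {
                case ShapeType::Square: return s.width * s.width;
                case ShapeType::Circle: return 3.14159265f * s.width * s.width;
            }
            return 0.0f;
        }

        // OOP style: each type owns its implementation.
        // New type = one new class; new operation = edit every class.
        struct ShapeBase {
            virtual ~ShapeBase() = default;
            virtual float Area() const = 0;
        };
        struct Square : ShapeBase {
            float side = 0.0f;
            float Area() const override { return side * side; }
        };
        struct Circle : ShapeBase {
            float radius = 0.0f;
            float Area() const override { return 3.14159265f * radius * radius; }
        };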

  • by 0xbadcafebee on 2/28/23, 10:37 PM

    "Clean" is a poor descriptor for source code.

    The word "clean" in reference to software is simply an indicator of "goodness" or "a pleasant aesthetic", as opposed to the opposite word "dirty", which we associate with undesireable features or poor health (that being another misnomer, that dirty things are unhealthy, or that clean things are healthy; neither are strictly true). "Clean" is not being used to describe a specific quality; instead it's merely "a feeling".

    Rather than call code "clean" or "dirty", we should use a more specific and measurable descriptor that can actually be met, like "quality", "best practice", "code as documentation", "high abstraction", "low complexity", etc. You can tell when something meets that criteria. But what counts as "clean" to one person may not to another, and it doesn't actually mean the end result will be better.

    "Clean" has already been abandoned when talking about other things, like STDs. "Clean" vs "Dirty" in that context implies a moral judgement on people who have STDs or don't, when in fact having an STD is often not a choice at all. By using more specific terms like "positive", or simply describing what specific STDs one has, the abstract moral judgement and unhelpful "feeling" is removed, and replaced with objective facts.

  • by jasode on 2/28/23, 9:20 AM

    The original submitted link was a youtube video that's been deleted for some reason.

    Probably a better link is the blog post because the author updated it with the new replacement video a few minutes ago as of this comment (around 09:12 UTC):

    https://www.computerenhance.com/p/clean-code-horrible-perfor...

  • by unconed on 2/28/23, 12:09 PM

    The example of using shape area seems like a poor choice.

    First off, the number of problems where having an analytical measure of shape area is important is pretty small by itself. Second, if you do need to calculate the area of arbitrary shapes, then limiting yourself to formulas of the type `width * height * constant` is just not going to cut it. And this is where the entire optimization exercise eventually leads: to building a table of precomputed areas for affinely transformed outlines.

    Throw in an arbitrary polygon, and now it has to be O(n). Throw in a bezier outline, and now you need to tessellate or integrate numerically.
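
    For instance (my sketch, not from the video), the standard shoelace formula for an arbitrary polygon is inherently a loop over vertices, so no `width * height * constant` table entry can cover it:

        #include <cmath>
        #include <cstddef>
        #include <vector>

        struct Point { float x, y; };

        // Shoelace formula: area of a simple polygon from its vertices.
        // Inherently O(n) in the vertex count.
        float polygon_area(const std::vector<Point>& pts) {
            float twice_area = 0.0f;
            const std::size_t n = pts.size();
            for (std::size_t i = 0; i < n; ++i) {
                const Point& a = pts[i];
                const Point& b = pts[(i + 1) % n];
                twice_area += a.x * b.y - b.x * a.y;
            }
            return std::fabs(twice_area) * 0.5f;
        }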

    What this article really shows is actually what I call the curse of computer graphics: if you limit your use cases to a very specific subset, you can get seemingly enormous performance gains from it. But just a single use case, not even that exotic, can wreck the entire effort and demand a much more complex solution which may perform 10x worse.

    Example: you want to draw lines? Easy, two triangles! Unless you need corners to look good, with bevels or rounded joins, and every pixel to only be painted once.

    Game devs like to pride themselves on their performance chops, but often this is a case of, if not the wrong abstraction, at least too bespoke an abstraction to allow future reuse irrespective of the use case.

    This leads to a lot of dickswinging over code that is, after sufficient encounters with the real world, and sufficient iterations, horrible to maintain and use as a foundation.

    So caveat emptor. Framing this as a case of clean vs messy code misses the reason people try to abstract in the first place. OO and classes have issues, but performance is not the most important one at all.

  • by tippytippytango on 2/28/23, 9:41 AM

    I wish software engineering cared a lot more about the fact that we have no way of measuring how clean code is, much less any study that measures the tradeoffs between clean code and other concerns, like a real engineering discipline would have.

  • by jupp0r on 2/28/23, 8:48 PM

    There's a tradeoff. Engineering time is expensive. Machine time can be expensive too. We need to optimize these costs by making most code that's not performance relevant easy to read and then optimize performance critical code paths while hiding optimization complexity behind abstractions. Either extreme is not helpful as a blanket method.

  • by hennell on 2/28/23, 9:08 AM

    I think he's really underplaying the main selling point of clean code: the objective of writing clear, maintainable, extendable code. His code was faster, but how the code compares for adding new features, or for fixing bugs by people new to a code base, is sometimes what you want to optimize for.

    Should performance be talked about more? Yes. Does this show valuable performance benefits? Also yes. Is performance where you want to start your focus? In my experience, often no.

    I've made things faster by simplifying them down once I've found a solution. I've also made things slower in order to make them more extendable. If you treat clean code like a bible of unbreakable laws you're causing problems, if you treat performance as the be-all-end-all you're also causing problems, just in a different way.

    It's given me something to think about, but I wish it were a more even-handed comparison showing the trade-offs of each approach.

  • by Octokiddie on 2/28/23, 11:16 PM

    > So by violating the first rule of clean code — which is one of its central tenets — we are able to drop from 35 cycles per shape to 24 cycles per shape, implying that code following that rule is 1.5x slower than code that doesn’t. To put that in hardware terms, it would be like taking an iPhone 14 Pro Max and reducing it to an iPhone 11 Pro Max. It's three or four years of hardware evolution erased because somebody said to use polymorphism instead of switch statements.

    The benchmark is a tight loop where the vtable lookup is a big chunk of the total computation. I don't think one can extrapolate this 1.5x improvement to real code. If anything, it represents an upper bound on the performance improvement you might expect to see.

    I also didn't see anything about how the code was compiled. Various optimizations could affect performance in meaningful ways.

  • by theknarf on 2/28/23, 10:27 AM

    This seems more like an argument against the object-oriented model of C++ than anything else. It would have been more interesting if the performance were compared to languages like Rust.

  • by readthenotes1 on 2/28/23, 3:32 PM

    "it is easier to make working code fast than to make fast code work"

    That was from either the 1960s or the 1970s and I don't know that anything has changed in the human ability to read a mangled mess of someone's premature optimizations.

    "Make it work. Make it work right. Make it work fast." how to apply the observation above...

  • by adam_arthur on 2/28/23, 9:23 PM

    Most performance optimized code can be abstracted such that it reads cleanly, regardless of how low level the internals get. This is the entire premise of the Rust compiler/"zero cost abstractions". Or how numpy is vectorized under the hood. No need for the user to be exposed to this.

    Writing "poor code" to make perf gains is largely unnecessary. Though there are certainly micro-optimizations that can be made by avoiding specific types of abstractions.

    The lower level the code, the more variable naming and function encapsulation (which gets inlined) is needed for the code to read cleanly. Most scientific computing/algorithmic code is written very unreadably, needlessly.

  • by devdude1337 on 2/28/23, 6:59 AM

    Most C++ projects I dealt with are neither clean nor performant. I rather follow clean code to improve maintainability and get things done than optimize for performance. It's also easier to find bottlenecks in a well-readable and testable code base than in a prematurely-optimized one. However, it is true that the more abstractions and indirections software uses, the slower it gets. Also, these examples are too basic to make a real-world suggestion: never assume something is slow in a large project because of indirection or the like. Always get a profiler involved and run tests with time requirements to identify and fix the slow-running parts of a program.

  • by rohith2506 on 3/1/23, 10:10 AM

    Damn, people are going bananas left and right about this article. I don't think Casey is targeting the general programmer audience, where sub-millisecond performance does not matter as long as the user experience / business needs are satisfied; this is highly relevant in the world of HFT, where you will try to optimise every instruction you execute.

    He never said that people should write horrible code for performance; he is more just pointing out the cost. Personally and professionally, we stay away from virtual functions as long as possible, due to the unnecessary vtable lookup every time you want to call a method.

  • by gwbas1c on 2/28/23, 8:52 PM

    Two years ago I was subjected to NDepend: A clean-code checker and enforcer.

    Their tool was so dog slow I could see it paint the screen on a modern computer.

    I rejoiced when we yanked it out of our toolchain. Most of the advice that it gave was unambiguously wrong.

  • by razzimatazz on 3/2/23, 9:19 PM

    I think the post invites a question for all the HN responders: what would it take for the computer [software] you are using to run at its full blazing potential?

    The answer is - if every single piece of software was written while already knowing the true requirements, the scope of its use and re-use, and knowing the future bugs and security flaws that would appear, then it could be written one time and be BLAZING FAST.

    Many parts would still be written in a 'Clean code' style for the necessary extensibility and testability, etc. But many others would be small and near optimal.

    THEN on top of that, if the author or an equivalent talent came along and rewrote or supervised the optimization of the regular software, similar to how the article does, your system would be HYPER INSANE BLAZING FAST.

    If we are proponents of OOP or clean code, we need to acknowledge that fact (i.e., my code may not be important, but it all contributes to slowing down the computing world). And if we think the author is preaching gospel here, we should also acknowledge that, because the future is so often unknown when we write code, we often have no choice but to fill it with clean code that can be easily changed later, and sometimes even 'shit code' that we thought would never be used by anyone.

  • by mgaunard on 2/28/23, 8:58 PM

    Most programming tends to decompose doing an operation for each element in a sequence into implementing the operation for a single element and then doing that repeatedly in a loop.

    This is obviously wrong for performance reasons, as operations tend to have high latency but multiple of them can run in parallel, so many optimizations are possible if you target bandwidth instead.

    There are many languages (and libraries) that are array-based though, and which translate somewhat better to how number crunching can be done fast, while still offering pleasant high-level interfaces.
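
    A minimal sketch of the difference (my own illustration, hypothetical names): element-at-a-time versus array-based interfaces.

        #include <cstddef>

        // Element at a time: each call stands alone, so work can't overlap.
        float scale_one(float x, float k) { return x * k; }

        // Array-based: the whole loop is visible in one place, so the
        // compiler can unroll it and keep several multiplies in flight.
        void scale_all(float* xs, std::size_t n, float k) {
            for (std::size_t i = 0; i < n; ++i)
                xs[i] *= k;
        }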

  • by veidelis on 3/1/23, 10:05 AM

    I agree with Casey.

    I think he understands far more than just how to optimize for 60fps, contrary to what many commenters here suggest.

  • by evalda on 3/1/23, 10:02 AM

    Apart from performance, I find that "non-clean" code (in terms of the article) is sometimes easier to understand, reason about, and maintain. Context is important, of course...

  • by kazinator on 3/1/23, 1:50 AM

    In plain C we can make virtual function calls faster by forwarding the pointers into the object instance.

    Say we have this:

       int obj_api(object *o, char *arg)
       {
         return o->ops->api(o, arg);
       }
    
    that's representative of how C++ virtual functions are commonly implemented. It gets more hairy under multiple inheritance and such.

    It requires several dependent pointer loads. We must access the object to retrieve its ops pointer (the vtable) and then access the vtable to get the pointer to the function, and finally branch there.

    To call that function a little faster we can go to this:

       int obj_api(object *o, char *arg)
       {
         return o->api(o, arg);
       }
    
    in other words, forward the api function pointer from the static table to the object instance. Ok, so now each time we construct a new object, we must initialize o->api. And the pointer takes up space in each instance. So there is a cost to it. But it blows away one dependent load. And the "clean" structure of the program has not changed; it has the same design with virtual functions and all.

    We could do this for some select functions that could benefit from being dispatched a little faster.

    I don't think there is a way in C++ to tell the compiler that we'd like a certain virtual function to be implemented faster, at the cost of taking up more space in the object instance and/or more time at object construction time.

  • by Falconerd_ on 3/1/23, 12:11 AM

    Reading the comments it seems like a lot of people missed this part.

    > We can still try to come up with rules of thumb that help keep code organized, easy to maintain, and easy to read. Those aren't bad goals! But these rules ain’t it. They need to stop being said unless they are accompanied by a big old asterisk that says, “and your code will get 15 times slower or more when you do them.”

    He isn't against organised and maintainable code, he just thinks the current definition isn't worth the trade-off.

  • by kazinator on 3/1/23, 1:41 AM

    For all the creeping featuritis that C++ is acquiring like a dirty snowball, doesn't it have a solution for this yet?

        virtual u32 CornerCount() = 0;
    
    you should be able to declare a virtual data member

        virtual u32 CornerCount;  // default value zero
    
    how this would be implemented is that it simply goes into the vtable. ptr->CornerCount retrieves the vtable from the object, and CornerCount is found at some offset in that table, just like a virtual function pointer would be.

    There is no need to pull out a function pointer and jump to it.

    In C I would do it like this

       // Every shape has a pointer to its own type's static instance of this:
    
       struct shape_ops {
          unsigned (*area)(struct shape *);
          unsigned corner_count;
   };
    
    
       // get_area looks like this:
    
       unsigned shape_area(struct shape *s)
       {
          return s->ops->area(s);
       }
    
       // the corner count isn't calculated so it's just
    
       unsigned shape_corner_count(struct shape *s)
       {
          return s->ops->corner_count;
       }
    
    Everyone can override corner_count with their value. What you can't do is implement a calculation which determines the corner count dynamically, but that can be a reasonable constraint.

  • by vborovikov on 2/28/23, 2:49 PM

    What is the author suggesting? To write software using infinite loops changing global state? Makes sense for video games but not for the custom enterprise software where clean code practices are usually applied.

    The enterprise code must be easy to change because it deals with the external data sources and devices, integration into human processes, and constantly changing end-user needs. Clean code practices allow that, it's not about CPU performance and memory optimizations at all.

  • by davidgrenier on 3/1/23, 12:25 PM

    This thread is surprisingly back today with millions of comments. I don't know if anyone has pointed out that the functions called... do nothing. Hence it is understandable that the performance profile is dominated by dynamic dispatch.

    Also, his code must have been compiled with an old compiler or less than -O3 as the switch/table version of the code performs exactly the same with Clang and g++ when compiled with -O3.

    disclaimer: not a fan of OO regardless.

  • by kgeist on 3/1/23, 6:19 AM

    In my experience of writing enterprise software, the main offender is the N+1 query problem at an API boundary, i.e. when a module/package exposes only a method to process items one by one. In case you suddenly want to process 1000 items instead of 5, you'll end up with 1000 separate DB calls, HTTP calls, etc. The same applies to gamedev, where the author is coming from: a naive renderer could switch shaders individually for every object, when you want to sort by shader and switch only a few times.

    When performance suffers, you have to change 2+ modules (the client and the server) to support batch operations, and a lot of programmers don't have time to do it, or simply can't because they use a third-party module/service they can't change.

    Inside a module you can default to clean code, or switch to an optimized version if the need arises. In a small, well-encapsulated module you can write very simple code without overengineered abstractions, because it covers a simple model which doesn't need much abstraction. So my take is: write small modules, design abstractions at the API boundary, and always expose batch operations.
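
    A tiny sketch of that interface difference (hypothetical names, in C++ just for illustration):

        #include <string>
        #include <vector>

        struct User { int id; std::string name; };

        class UserRepo {
        public:
            virtual ~UserRepo() = default;

            // One-by-one method: a caller with 1000 ids ends up issuing
            // 1000 separate round trips (the N+1 problem at the boundary).
            virtual User fetch(int id) = 0;

            // Batch method: one round trip (e.g. a single "WHERE id IN (...)"
            // query), exposed at the boundary so callers never loop fetch().
            virtual std::vector<User> fetchMany(const std::vector<int>& ids) = 0;
        };
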
  • by ummonk on 3/1/23, 5:09 AM

    Just in terms of readability and maintainability I find polymorphism to be significantly worse than switch-statements. It's hard to locate all the implementations of a particular function and read through them and edit them when they aren't in one place in a single switch statement. Higher performance is merely extra icing on the cake when using switch statements over polymorphism.

  • by civilized on 3/1/23, 1:14 AM

    My biggest confusion with the "clean code" concept is, what does clean mean? Such a vague concept seems to invite arbitrary bikeshedding over how many lines a function should have, whether comments are good, etc.

    In a kitchen, clean is a pretty objective concept: no dirt or grime, objects put away with similar objects. Not sure what it means in code, but it seems many people have strong, conflicting, subjective opinions about it. Doesn't seem like a good recipe for productivity or alignment.

    I feel like it would be wiser to limit the concept of clean to the eradication of obviously "dirty" or "cluttered" things, like inconsistent style, or naming a module in a way that is misleading about its contents or functionality. Just as all different kinds of buildings can be clean, a code of "cleanliness" should not be so comprehensively prescriptive about architecture and organization. Use more appropriate names for those dimensions of code quality, rather than "clean" as the single stand-in for every good thing.

  • by captainmuon on 2/28/23, 12:26 PM

    Already in his first example, where he says he doesn't use range-based for in order to help the compiler and get a charitable result, he doesn't get the point, I think. You write code in a certain way in order to be able to use abstractions like range-based for, or a functional style. If you are hand-unrolling the loop, or using a switch statement instead of polymorphism, you lose the ability to use that abstraction.

    Essentially the whole point of object orientation is to enable polymorphism without having big switch statements at each call site. (That, and encapsulation, and nice method call syntax.) When people dislike object orientation, it's often because they don't get, or at least don't like, polymorphism.

    Most people, most of the time, don't have to think about stuff like cache coherency. It is way more important to think about algorithmic complexity, and correctness. And then, if you find your code is too slow, and after profiling, you can think about inlining stuff or using structs-of-arrays instead of arrays-of-structs and so on.

  • by KyeRussell on 2/28/23, 2:22 PM

    David Farley’s new book is good. It advocates for the tenets of “clean code” (at least in all lowercase), but given his background I trust that he knows how to balance performance and code hygiene.

    There are people that are wrong on both extremes, obviously. I’ve worked with one too many people that quite clearly have a deficient understanding of software patterns and try to pass it off as being contrarian speed freaks. Just as I’ve worked with architecture astronauts.

    I’m particularly skeptical of YouTubers that fall so strongly on this side of the argument because there’s a glut of “educators” out there that haven’t done anything more than self-guided toy projects or work on startups whose codebase doesn’t need to last more than a few years. Not to say that this guy falls into those two buckets. I honestly don’t think I know him at all, and I’m bad with names. So I’m totally prepared for someone to come in and throw his credentials in my face. I can only have so much professional respect for someone that is this…dramatic about something though.

  • by strken on 3/1/23, 9:20 AM

    I don't think the tradeoff here is between clean-but-slow code and fast-but-dirty code. It looks more like extensible-but-slow vs fast-but-locked-in. This is pretty obvious - that's why you need the indirection! It's not to satisfy some arbitrary aesthetic principle of cleanliness, it's to make the concept of a shape extensible to anything the calling code wants.

    Within one codebase the two behave the same because you can just go rewrite your functions, but if those functions are locked away in someone else's library the "dirty" code flat-out prevents you from ever using more shapes than the library author implemented. Want to add a rhombus? An arbitrary polygon? An ellipsoid? Something defined by Bezier curves for some reason? Well, you just can't; sorry champ.

    It's an interesting tradeoff to consider, though. Perhaps we write code like library authors too often, or optimise for extensibility when it isn't needed.

  • by beeforpork on 3/1/23, 12:43 PM

    Generally, it is good to give advice to write clean code. Undoubtedly, clean code causes fewer problems than unclean code (e.g., overengineered, overmodularized, or prematurely optimised).

    Running speed tests on the much-cited mini-example with geometric shapes and their area is unfair and unrealistic, and it does not prove any point.

    I think I can see where this is coming from: 'overly clean' OO style will split concerns into virtual one-liner functions without context distributed throughout the universe. For a simple problem, I prefer 'switch'. But that's not a good rule either. For anything extensible, like a GUI, 'switch' would be the wrong choice and virtual much better.

    Programmers need to develop a feeling of appropriateness, and restructuring may be necessary at times.

    BTW, the manual loop unrolling in the article is broken and not advisable at all. I'd be angry in code review about such 'optimisations'.

  • by Dr-NULL on 3/1/23, 4:07 AM

    Not gonna lie, but the first example of using switch instead of polymorphism still looks clean and easy to understand to me.

  • by maerF0x0 on 2/28/23, 11:24 PM

    If, instead of benchmarking specific optimized code against non-optimized code, we measure the time until the user gets their answer, in many cases the non-optimized code will be several months faster. Why? Because it takes time to do optimizations, and I can ship the non-optimized version sooner.

    Similarly we can then look at an iterated design and realize the optimized code is frequently going to be harder to refactor or understand (a precondition of refactoring). So now the time to when a customer gets their answer is delayed again.

    Optimization step comes long after clean code. Clean code is most useful in the first 2 of the typical 3 steps[1]

    1. Make it work (iterations of what working even means)
    2. Make it right (iterations of what right even means)
    3. Make it fast.

    [1]:https://wiki.c2.com/?MakeItWorkMakeItRightMakeItFast

  • by zX41ZdbW on 2/28/23, 10:49 PM

    You don't have to give up clean code to achieve high performance. The complexity can be isolated and contained.

    For example, take a look at the ClickHouse codebase: https://github.com/ClickHouse/ClickHouse/

    There are all sorts of things: leaky abstractions, specializations for optimistic fast paths, dispatching to algorithms based on data distribution, runtime CPU dispatching, etc. Video: https://www.youtube.com/watch?v=ZOZQCQEtrz8

    But: it has clean interfaces, virtual calls, factories... And, most importantly - a lot of code comments. And when you have a horrible piece of complexity, it can be isolated into a single file and will annoy you only when you need to edit that part of the project.

    Disclaimer. I'm promoting ClickHouse because it deserves that.

  • by oakpond on 2/28/23, 11:16 PM

    I'm sympathetic to the rebellion against 'clean code'.

    I think this obsession with clean code is a natural reaction to the overwhelming number of gotchas that seem to just come with the job.

    It's a bit like when a parent watches their kid get hurt outside the house and in a complete overreaction locks the kid in the house for life.

    Thing is, if I'm programming an airplane control system, I very much want that kind of pedantry I think. I really don't want to make a single mistake writing that kind of program. If I'm programming a video game, just let me write the code that I want. Nobody's going to die if it blows up in my face.

    I'm not sure what should be the lesson from all this... Perhaps don't pick C++ unless you absolutely need to?

  • by winkelwagen on 2/28/23, 12:19 PM

    This is the first video I’ve seen by him. I’m by no means a fan of clean code. But I think he’s making a fool of himself here. Picking out one code example from the book doesn’t prove that much on its own. This stuff is so language-, OS-, hardware- and compiler-specific anyway.

    The iPhone comparisons are extremely cringe. Real applications do so much more than this contrived example. Something that feels fast isn’t the same thing as something that is fast.

    Would I advise beginner programmers to read this book? Sure, let them think about ways to structure code.

    If he had just concluded that it is important to optimize for the right thing, that would be fine. But he seems more interested in picking a fight with clean code.

    And yes, performance is a lost art in programming.

  • by lysecret on 3/1/23, 3:40 PM

    Well, in my experience it is generally true that when you start optimising, things get less clean. Let me explain: most of the optimisation situations I’ve had looked like this. Hey, this query is pretty slow and costs us quite a bit. Oh look, for this type of data it’s super easy, we can just return this; and for the rest of the data we can now assume that. So you have broken a single clean and nice case into two slightly less clean but faster cases. And this breaking apart then continues, becoming less and less clean, because you rely on some obscure characteristic of that specific type of data.
  • by axilmar on 2/28/23, 11:23 PM

    In C++, you can have clean code and performance, by utilizing templates.

    In the example given, all polymorphism can be removed, and the shapes can be stored in a std::tuple.

    And then the operations would be faster than C, since no switch statement would be needed.
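
    A rough sketch of that idea (C++17; the shape types are assumptions, not from the thread): plain structs live in a std::tuple, and a fold expression sums the areas with every call resolved at compile time.

        #include <cstdio>
        #include <tuple>

        // Plain structs: no base class, no vtable.
        struct Circle    { float r;    float area() const { return 3.14159265f * r * r; } };
        struct Square    { float side; float area() const { return side * side; } };
        struct Rectangle { float w, h; float area() const { return w * h; } };

        int main() {
            // Heterogeneous but statically typed: every element's type is known
            // at compile time, so every area() call can be inlined.
            std::tuple shapes{Circle{2.0f}, Square{3.0f}, Rectangle{4.0f, 5.0f}};

            float total = std::apply(
                [](const auto&... s) { return (s.area() + ...); }, shapes);

            std::printf("total area: %f\n", total);
        }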

  • by e-dant on 3/1/23, 1:30 PM

    For the record, runtime polymorphism is generally frowned upon in the most modern C++ practice.

    The only difference between “modern, clean” C++ and the author’s switch is probably a concept that requires some type attributes.

    The example is contrived, and the realization of “clean” code through runtime polymorphism is both dangerous and odd. The whole point of not using polymorphism is to catch runtime crashes at compile time, reduce overhead and improve readability. I know many people who wouldn’t use an object here anyway. Free functions would do nicely, and are infinitely compositional.
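
    A rough sketch of what such a concept-based version might look like (C++20; all names are assumptions):

        #include <concepts>
        #include <vector>

        // Hypothetical concept: any type with a float-convertible area() qualifies.
        template <typename T>
        concept Shape = requires(const T s) {
            { s.area() } -> std::convertible_to<float>;
        };

        struct Circle { float r;    float area() const { return 3.14159265f * r * r; } };
        struct Square { float side; float area() const { return side * side; } };

        // Free function, statically dispatched: no vtable, trivially inlinable,
        // and a non-Shape argument is rejected at compile time rather than at runtime.
        template <Shape S>
        float total_area(const std::vector<S>& shapes) {
            float total = 0.0f;
            for (const S& s : shapes) total += s.area();
            return total;
        }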

  • by athanagor2 on 2/28/23, 8:40 AM

    The performance difference is truly savage.

    But I think there is a reason for the existence of "clean code" practices: it makes devs easier to replace. Plus it may create a market to try to optimize intrinsically slow programs!

  • by scotty79 on 2/28/23, 10:07 PM

    Personally I think we'd be in a better spot today if, instead of class hierarchies and polymorphism, the thing that went mainstream was the Entity-Component-System approach, with composition instead of inheritance.
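
    For illustration, a minimal ECS-flavoured sketch (assumed layout): entities are just indices, each component lives in its own dense array, and a "system" is a plain loop over the data it cares about.

        #include <cstddef>
        #include <vector>

        // Components: plain data, composed per entity rather than inherited.
        struct Position { float x, y; };
        struct Velocity { float dx, dy; };

        struct World {
            std::vector<Position> positions;   // indexed by entity id
            std::vector<Velocity> velocities;  // same index, separate array
        };

        // A "system" is a plain loop over the component arrays it touches:
        // cache-friendly, no virtual dispatch.
        void integrate(World& w, float dt) {
            for (std::size_t e = 0; e < w.positions.size(); ++e) {
                w.positions[e].x += w.velocities[e].dx * dt;
                w.positions[e].y += w.velocities[e].dy * dt;
            }
        }
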
  • by DeathArrow on 3/1/23, 10:34 AM

  • by DeathArrow on 3/1/23, 7:25 AM

    If null was the billion-dollar mistake, clean code, SOLID and design patterns are ten-billion-dollar mistakes. Think of all the CPU cycles wasted across all the data centers and users' devices.
  • by rudolph9 on 2/28/23, 10:49 PM

    > We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%

    https://dl.acm.org/doi/10.1145/356635.356640

    The author of the post fails to articulate how we strike a healthy balance and instead comes up with contrived examples to prove points that only really apply to contrived examples.

  • by ctx_matters on 2/28/23, 2:59 PM

    As always in programming discussions - context matters.

    https://trolololo.xyz/programming-discussions

  • by Ultimatt on 3/1/23, 7:17 AM

    This feels like a no-runtime-type-information problem. Highly dynamic languages have tonnes of grim problems like this that they have to deal with, because there is no good type information anywhere, so gathering statistics at runtime is how you optimise. Raku, for example, has a lot of specialising runtime optimisations where all the virtual function calls get specialised and stay dynamic only by exception when the VM executes code.
  • by yarg on 2/28/23, 1:00 PM

    He's picking holes in example code. Example code will often tell you how to do a thing, and not why to do that thing.

    And he argues that it's not a straw man.

  • by flippinburgers on 2/28/23, 11:29 AM

    "Clean code", "Clean architecture", "Clean etc", they are all totally grotesque and the sign of incompetence.
  • by crabbone on 3/1/23, 3:17 PM

    I patiently waited until the end of this video, hoping there'd be a punchline... but it turns out it's one of those C++ selfawarewolves kinds of things.

    I mean, the dude discovered the C++ compiler sucks after over 40 years of people trying to make it not suck so much, but he ignores the fact that his tools are broken and proceeds to draw completely unwarranted conclusions from that.

    Needless to say, software needs to be first and foremost correct. "Clean code" is about reducing the chance of a programmer making certain kinds of mistakes. And even in the situation where the compiler sucks, it's still worth doing / paying the price in terms of speed if you can get more confidence in your code doing what it's supposed to. Just like structured programming, "clean code" is an attempt to reduce the complexity the author of the code has to deal with.

    ----

    The proper conclusion from his experiments should've been: maybe something went wrong with the language and tools I'm using, such that even after a massive effort over several generations of programmers, with mega-corporations backing that effort, the tools and the language still suck. So the desirable properties of my programs (i.e. simplicity and the ability to be extended) still come at a huge cost.

  • by Mizoguchi on 2/28/23, 11:14 PM

    The problem with this is that in most real-world scenarios it is much cheaper to add more hardware resources to slow-performing apps than to hire, train and retain programmers to learn, debug, enhance and maintain poorly written code, particularly when the useful life of many software solutions can last decades and hardware becomes cheaper every year.
  • by Myrmornis on 3/1/23, 10:13 PM

    Sometimes I see very good programmers writing functions that IMO are much, much too long and have far too much internal state for one function. I believe there might be a correlation with these people having C++ backgrounds. Other than that, I'm just mentioning it as an observation.
  • by puterich123 on 3/1/23, 6:24 PM

    He’s basically showing data-oriented design, where you try to limit CPU cache misses by operating on the data directly.

    This approach can be way faster, but is only relevant when you have a lot of entities you need to iterate over. If you have 3-100 objects it would of course still be faster, but by a negligible amount.
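
    A small sketch of the data-layout difference behind this (hypothetical field names): with an array of structs, the hot loop drags every field through the cache; with a struct of arrays, only the touched fields are loaded.

        #include <cstddef>
        #include <vector>

        // Array-of-structs: an area loop pulls every field through the cache.
        struct ShapeAoS { int type; float width, height; float other_fields[8]; };

        // Struct-of-arrays (data-oriented): only the fields the loop uses get loaded.
        struct ShapesSoA {
            std::vector<float> width;
            std::vector<float> height;
        };

        float total_area(const ShapesSoA& s) {
            float total = 0.0f;
            for (std::size_t i = 0; i < s.width.size(); ++i)
                total += s.width[i] * s.height[i];  // two dense, prefetch-friendly streams
            return total;
        }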

  • by sebstefan on 2/28/23, 11:56 AM

    I've seen "Composition over inheritance" more times than I've seen "Polymorphism is good"
  • by gabssnake on 3/1/23, 12:47 PM

    The author of the video is apparently referring to Uncle Bob’s first book (Clean code, 2009), which essentially says that “Clean” code is “understandable” code: created with care, thinking about the next reader.

    So yeah, the book then goes on for a painful 450-page ramble of opinions and admittedly arbitrary rules. But Martin was at least partially aware of this:

    > “Clean code is not written by following a set of rules” — quote from the book!

    So really, the person in the video failed to apply the Principle of Charity, which is fundamental in critical thinking. They end up not addressing the interesting claim, and openly attacking a Straw Man.

    As for the deeper points implied in the video, they seem –ironically– less fresh:

    - Software is slow these days
    - Performance matters
    - The way you write code impacts performance
    - Don't blindly follow rules and generic advice

    Groundbreaking!

    If anything, the video shows the failures of C++ as a language. Why aren't languages designed to promote maintainability without sacrificing performance? :Rust enters the room:

    The more interesting claim that the video's author missed:

    > “It is not enough for code to work.” ― quote from the book

  • by eithed on 2/28/23, 7:03 PM

    Given that code in the editor is for human consumption, I wonder why it can't be restructured by the compiler to make it fast (or why the compiler can't make it fast). After all, you could leave annotations, for example on structs, so the compiler knows what their size will be and can optimize for it.
  • by helpfulmandrill on 2/28/23, 12:00 PM

    I take issue with the idea that maintainable code is about "making programmers' lives easier", rather than "making code that is correct and is easy to keep correct as it evolves". Correctness matters for the user - indeed, sometimes it is a matter of life and death.
  • by panny on 2/28/23, 11:08 PM

    Maintainable code or performant code, yes. That's always a tradeoff. The highest-performance code will be manually tuned assembly, but I don't see the author writing in assembly, so he's already made some tradeoff against performance. It's all down to your priorities.
  • by rossjudson on 3/1/23, 2:55 AM

    Lots of very correct things said there...except for this:

    "The more you use the “clean” code methodology, the less a compiler is able to see what you're doing. Everything is in separate translation units, behind virtual function calls, etc. No matter how smart the compiler is, there’s very little it can do with that kind of code."

    I suppose even that is true, but JIT compilation regularly walks right around those problems. Yes, your code is written to say virtual this, or override that...but the JIT don't care. Is it looking at a monomorphic call site? Or even if it's not, is it ok to think of it as monomorphic right now? Great -- inline away.

    All that being said...I once got into a readability tiff over the use of a Java enum in a particularly performance sensitive chunk of code. I went with ints so I could be very, very explicit about exactly what I wanted, and the rather large performance gain...and lost. Yay!

    Your mileage may vary, and your measurements may vary.

  • by garganzol on 2/28/23, 10:19 PM

    The article gives advice from the past. Nowadays it is all about zero-cost abstractions and automatic optimization. These trends will only solidify in the future, defining the new norm. And until that future fully arrives, optimize for your bottlenecks.
  • by tonymet on 2/28/23, 11:42 PM

    I worked at a company where the client devs had a gaming background and the server devs had a web background.

    The gaming devs were obsessed with framerates and efficiency while the server devs wanted to decouple and modularize everything.

    There are no solutions, only tradeoffs.

  • by t43562 on 2/28/23, 9:19 PM

    One can be tempted to like any assault on "Uncle Bob"'s insulting videos in the light of working on a codebase where every 2nd line forces you to jump somewhere else to understand what it does. That sort of thing generates a rebellious feeling.

    OTOH the class design lets someone come and add their new shape without needing to change the original code - so it could be part of a library that can be extended and the individual is only concerned about the complexity of the piece they're adding rather than the whole thing.

    That lets lots of people work on adding shapes simultaneously without having to work on the same source files.

    If you don't need this, then what would be the point of doing it? Only fear that you might need it later. That's the whole problem with designing things - you don't always know what the future will require.

  • by MikeCampo on 2/28/23, 4:57 PM

    That was enjoyable and I'm happy to see it triggering the basement dwellers.
  • by larsonnn on 3/1/23, 1:05 AM

    When you think it’s not worth it, try to imagine your software running not once, but a few thousand times or more per second.

    By making some operations exceptionally faster, you not only save time, you also save energy.

  • by notShabu on 3/1/23, 12:54 AM

    "How to Produce Code" on a spectrum of efficiency -> abstraction

    Binary Assembly ... ... C++ ... ... Python ... ... Product Manager speaking with words: "Can you make it have more pizazz?"

  • by sebastianconcpt on 3/1/23, 1:35 PM

    This is promoting early optimization, precisely to the people that need to avoid doing that. Added to the pile of #HorrificAdvice and #TerribleGeneralization.
  • by juliangmp on 2/28/23, 10:41 AM

    This example exists in such a vacuum and is so distant from real software tasks that I just have to shake my head at the "clean code is undoing 12 years of hardware evolution"
  • by gumby on 2/28/23, 8:41 PM

    These days the cost of a programmer is probably a lot greater than the cost of execution, so some of these rules ("prefer polymorphism") are likely worth the tradeoff.
  • by Pesthuf on 3/2/23, 6:37 PM

    I wonder if a compiler that uses LTO could optimize these vtable calls to the same kind of code the switch statement can be optimized to.
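
    Sometimes it can, provided the optimizer can prove the dynamic type at the call site. A hedged sketch of a case where that proof is easy (shape names assumed, not from the comment):

        // Whether LTO manages it depends on the compiler seeing every derived
        // type; marking classes 'final' makes the proof trivial.
        struct Shape {
            virtual ~Shape() = default;
            virtual float area() const = 0;
        };

        struct Circle final : Shape {   // 'final': no further overrides can exist
            float r;
            explicit Circle(float r_) : r(r_) {}
            float area() const override { return 3.14159265f * r * r; }
        };

        float circle_area(const Circle& c) {
            // The static type is a 'final' class, so the compiler can
            // devirtualize this call and inline it -- no vtable lookup remains.
            return c.area();
        }

        // GCC/Clang attempt the same across translation units with -O2 -flto
        // (GCC additionally has -fdevirtualize, on by default at -O2).
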
  • by rullelito on 3/1/23, 6:56 AM

    I work at a FANG with products that have 10M-100M users, and I probably design code for performance < 1% of the time. I suspect this is the norm.
  • by kybernetyk on 3/1/23, 10:03 AM

    I live by a simple rule: is the code called repeatedly, many times a second? Make it performant. Is it called rarely? Make it clean.
  • by Pr0ject217 on 2/28/23, 10:49 PM

    Casey brings another perspective. It's great.
  • by davydm on 3/1/23, 9:14 AM

    Yay, yet another contrived example to support somebody's position against a process which works fantastically at least 90% of the time - and yes, I say 90% (as a low-ball) because there _is no perfect process, framework, ideology, <insert x here>_. Everything has compromise. Making all code chase performance at the cost of maintainability is rubbish as a blanket choice, but may be necessary in certain niche situations.
  • by formvoltron on 2/28/23, 9:43 AM

  • by tialaramex on 2/28/23, 1:18 PM

    Malicious compliance for C++ programmers. This is the person who thinks they're clever for breaking stuff because "You didn't say not to". Managing them out of your team is likely to be the biggest productivity boost you can achieve.

    In the process of "improving" the performance of their arbitrary benchmark they make the system into an unmaintainable mess. They can persuade themselves it's still fine because this is only a toy example, but notice how e.g. squares grow a distinct height and width early in this work which could get out of sync even though that's not what a "square" is? What's that for? It made it easier to write their messy "more performant" code.

    But they're not done, when they "imagine" that somehow the program now needs to add exactly a feature which they can implement easily with their spaghetti, they present it as "giving the benefit of the doubt" to call two virtual functions via multiple indirection but in fact they've made performance substantially worse compared to the single case that clean code would actually insist on here.

    There are two options here: one is that this person hasn't the faintest idea what they're doing - don't let them anywhere near anything performance-sensitive - or, perhaps worse, they know exactly what they're doing and intentionally made this worse, in which case that advice should be even stronger.

    Since we're talking about clean code here, a more useful example would be what happens if I add two more shapes, let's say "Lozenge w/ adjustable curve radius" and "Hollow Box" ? Alas, the tables are now completely useless, so the "performant" code needs to be substantially rewritten, but the original Clean style suits such a change just fine, demonstrating why this style exists.

    Most of us work in an environment where surprising - even astonishing - customer requirements are often discovered during development and maintenance. All those "Myths programmers believe about..." lists are going to hit you sooner or later. As a result it's very difficult to design software in a way that can accommodate new information rather than needing a rewrite, and yet since developing software is so expensive that's a necessary goal. Clean coding reduces the chance that when you say "Customer said this is exactly what they want, except they need a Lozenge" the engineers start weeping because they've never imagined the shape might be a lozenge and so they hard coded this "it's just a table" philosophy and now much of the software must be rewritten.

    Ultimately, rather than "Write code in this style I like, I promise it will go fast" which is what you see here, and from numerous other practitioners in this space, focus more on data structures and algorithms. You can throw away a lot more than a factor of twenty performance from having code that ends up N^3 when it only needed to be N log N or that ends up cache thrashing when it needn't.
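
    A tiny illustration of that last point (not from the video): style tweaks move constant factors, but the algorithm moves the complexity class.

        #include <cstddef>
        #include <unordered_set>
        #include <vector>

        // Quadratic duplicate check: ~N^2/2 comparisons.
        bool has_dup_quadratic(const std::vector<int>& v) {
            for (std::size_t i = 0; i < v.size(); ++i)
                for (std::size_t j = i + 1; j < v.size(); ++j)
                    if (v[i] == v[j]) return true;
            return false;
        }

        // Same answer with a hash set: expected O(N), in whatever style you like.
        bool has_dup_linear(const std::vector<int>& v) {
            std::unordered_set<int> seen;
            for (int x : v)
                if (!seen.insert(x).second) return true;  // insert failed -> duplicate
            return false;
        }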

    One good thing in this video: They do at least measure. Measure three times, mark twice, cut only once. The engineering effort to actually make the cut is considerable, don't waste that effort by guessing what needs changing, measure.

  • by theK on 3/1/23, 6:43 AM

    Just keep in mind that to actually keep that 1.5x or even 10x performance boost, you need to apply these techniques consistently across a laaarge code base (performance-critical apps tend to be large).

    This means that in 2-3 months you end up with a codebase that is very difficult to work with, team members tripping over each other due to bad deps and abstractions, and your iteration times start shooting up.

    Doesn't seem like a realistic avenue to choose except maybe when coding to a final spec?

  • by elvispt on 3/2/23, 12:09 PM

    Clean code is about developer performance - understanding stuff fast - not about hardware performance.
  • by gardenhedge on 2/28/23, 11:48 PM

    Fighting a lost battle unfortunately. Non-technical managers know what clean code is and are able to discuss it
  • by mulmboy on 2/28/23, 9:09 AM

    Really enjoyed watching some guy breathlessly discover data-oriented design: https://en.wikipedia.org/wiki/Data-oriented_design

    There's nothing novel in this video, and really nothing to do with clean code. This is the same sort of thing you see with pure Python versus numpy.

  • by tiffanyh on 3/1/23, 12:35 AM

    No OpenBSD reference?

    It has extremely clean & easy to discern code. But it’s also not the most performant.

  • by buster3000 on 3/2/23, 12:02 PM

    Ehh. I think the author is a little blinkered here based on these examples.
  • by hotBacteria on 2/28/23, 10:00 AM

    What is demonstrated here is that if you understand the different parts of some code well, you can recombine them in more efficient ways.

    This is something very good to have in mind, but it must be applied strategically. Avoiding "clean code" everywhere won't always provide huge performance wins and will surely hurt maintainability.

  • by baranoff on 2/28/23, 11:45 PM

    Such a misguided article. What I constantly fight at work is poorly written, unmaintainable code. The code that needs to be fast is 1% or less. Use the three-step rule when implementing something:
    1. Make it work
    2. Make it pretty
    3. Make it fast (measure and optimize where it matters)
  • by pshirshov on 2/28/23, 10:13 PM

    In 98% of the cases there is no difference between O(N) and O(N/4)
  • by gregjor on 2/28/23, 6:34 AM

    Brilliant. Thanks.

    tl;dr polymorphism, indirection, excessive function calls and branching create a worst-case for modern hardware.

  • by andix on 2/28/23, 10:27 PM

    Don’t optimize early. 99% of code doesn’t have to be fast, it has to be right. And as code needs to be maintained, it also needs to be easy to read/change, so it stays right.

    You shouldn’t do things that make your code utterly slow though.

  • by glintik on 2/28/23, 9:31 PM

    The most horrible mistake is not in the list: immutability.
  • by dym_sh on 3/1/23, 2:17 AM

    hey, what if we had some kind of optimization step which would take clean code and make it more about performance than maintainability
  • by dmtroyer on 3/1/23, 7:29 PM

    Glad I don’t work with this person.
  • by hiccuphippo on 3/1/23, 12:26 AM

    My one pushback against this is: we have a CDN, and no amount of optimization is going to beat having the result already in memory and returning it.
  • by brlebtag on 3/1/23, 11:13 AM

    Omg... I am pretty sure that jumps are more efficient than if statements... I didn't see him try this either...
  • by bonede on 3/1/23, 1:29 PM

    yes, he's looking at you, web and app developers
  • by xwdv on 3/1/23, 1:20 AM

    Wow, every word in this article was wrong.
  • by 29athrowaway on 2/28/23, 11:37 PM

    Premature optimization is often a bad idea.
  • by datadeft on 2/28/23, 10:51 AM

    TL;DR:

    "Game developer optimizes code for execution as opposed to readability that 'clean-code' people suggest".

    There are a few considerations:

    - most code is not CPU-bound, so his claim that you are eroding progress by not optimizing for CPU efficiency is baseless

    - writing readable code is more important than writing super-optimal code (with a few exceptions: gaming is one)

    - using enums vs OOP does not change the readability, at least for me

    I think we can have fast and readable code without following the 'clean-code' principles, and in the end it does not matter how much we gain CPU-cycle-wise.

  • by totorovirus on 2/28/23, 9:25 PM

    I think juniors should write clean code until they can negotiate the technical debt against the speed
  • by charles_f on 3/1/23, 2:35 AM

    > Prefer polymorphism to “if/else” and “switch”

    Wtf? First time I hear about this one, and it sounds like a dumb dogma.

    > It’s a base class for a shape with a few specific shapes derived from it: circle, triangle, rectangle, square. We then have a virtual function that computes the area.

    Quite literally the first and simplest example for why you should prefer composition over inheritance[^1] (that and ducks and chickens).

    Good strawman. I am unconvinced.

    1: https://en.m.wikipedia.org/wiki/Composition_over_inheritance
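
    For what it's worth, a minimal composition sketch in the spirit of the ducks-and-chickens example (all names hypothetical): behaviour is a component the object has, not a base class it is.

        #include <cstdio>

        // Behaviour as a swappable component (the classic strategy shape).
        struct FlyBehavior {
            virtual ~FlyBehavior() = default;
            virtual void fly() const = 0;
        };
        struct Gliding  : FlyBehavior { void fly() const override { std::puts("glide"); } };
        struct Grounded : FlyBehavior { void fly() const override { std::puts("can't fly"); } };

        struct Bird {
            const FlyBehavior& flight;   // injected at construction, not inherited
            void fly() const { flight.fly(); }
        };

        int main() {
            Gliding glide; Grounded walk;
            Bird duck{glide}, chicken{walk};  // same type, composed differently
            duck.fly(); chicken.fly();
        }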

  • by allmadhare on 2/28/23, 11:08 PM

    Maintainability and performance are often at odds, but that doesn't mean you should throw out one for the other in every case, and I don't think that's what people like Robert C. Martin were ever intending with Clean Code.

    It's like database denormalization: it may violate normalization principles, but when applied to a well-designed database it is a valid optimization technique, done with a proper understanding of the implications of said optimizations.

    More importantly though, we are willing to sacrifice raw performance for developer experience and higher maintainability because developer time is expensive, and most stakeholders would prefer that you can add feature xyz in a reasonable time, over feature xyz running marginally faster. If ease of development and maintenance weren't important, we'd just write everything in assembly and bypass all these abstractions altogether.