by brhsagain on 4/26/23, 7:42 PM with 125 comments
by tomas789 on 4/27/23, 6:00 AM
If the app were to scale a lot I would still need to go to all that trouble, but at this early stage the benefits of really simple infrastructure are immense.
by JanneVee on 4/27/23, 6:42 AM
by charlie0 on 4/27/23, 5:11 AM
by umanwizard on 4/27/23, 6:30 AM
If you cared about performance from the beginning, you would never even get to the point where you’re saying “oh crap, our messenger app can barely run, let’s rewrite it in C”.
by magicalhippo on 4/27/23, 3:59 AM
Need to loop over some order lines and find distinct article numbers? Use a hash-based set with O(1) access, not just a list, which has O(n) lookup. Otherwise you'll end up writing an O(n^2) routine for no good reason, which will work swimmingly on your 10-line test order and cause grief in production.
I don't think a lot about performance most of the time, just enough to try to avoid silly stuff.
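A minimal sketch of the distinct-article-number point above; the type and the numbers are hypothetical, not something from the comment:

```typescript
// Collect distinct article numbers from a list of order lines.
type OrderLine = { articleNo: string; qty: number };

// O(n): one pass, Set membership is effectively O(1).
function distinctArticlesFast(lines: OrderLine[]): string[] {
  return [...new Set(lines.map(l => l.articleNo))];
}

// O(n^2) in the worst case: `includes` rescans the array for every line.
// Works swimmingly on a 10-line test order, hurts on a 100,000-line one.
function distinctArticlesSlow(lines: OrderLine[]): string[] {
  const seen: string[] = [];
  for (const line of lines) {
    if (!seen.includes(line.articleNo)) {
      seen.push(line.articleNo);
    }
  }
  return seen;
}
```

The difference is invisible on a tiny test order and dominates the runtime once orders get large.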
by alphanumeric0 on 4/27/23, 1:18 AM
I believe the average developer should care somewhat about performance, and depending on their industry they might need to care a lot, but I'm not so convinced for the average case.
The average developer is not working on FAANG-sized codebases. Also, I'd imagine any large systems built up over the years that are refactored would likely see great performance gains. That's just the nature of long-term software.
by li4ick on 4/27/23, 2:45 PM
by theptip on 4/27/23, 3:12 PM
My general principle: don’t give uni-directional advice when optimizing a u-shaped loss function.
I usually find that those failing to advocate the nuanced position “you can spend too much AND too little time on perf, here is how to prioritize” are not adding useful information to the conversation.
The truth is, most startups don’t need to worry much about perf. It’s a feature that your customers don’t usually ask for at first. At the other end of the scale, giant companies invest huge sums in taming performance. And your own situation will have more parameters than just that one simplified spectrum.
Measure ROI honestly, and prioritize accordingly!
by klodolph on 4/27/23, 5:20 AM
by tmtvl on 4/26/23, 10:19 PM
by ftxbro on 4/27/23, 6:09 AM
by cratermoon on 4/27/23, 2:22 PM
Here are the author's five points, and how at least one of the examples he gives actually supports each reason.
No need. These companies operate on the leading edge of hardware performance, on purpose. They can't just go out and buy faster hardware; it doesn't exist. Google even builds its own, just to optimize for its uses.
Too small. Again, at the scale of Facebook or Netflix, a 5% performance gain translates to an enormous advantage, which leads directly to the next point.
Not worth it. Here again, we're talking about saving millions of dollars but only because the systems are so enormous.
Niche. Facebook, Twitter, Netflix, and Uber's performance needs are a niche of their own.
Hotspot. Here we can get to a specific example the author quotes. "Cutting back on cookies required a few engineering tricks but was pretty straightforward; over six months we reduced the average cookie bytes per request by 42% (before gzip). To reduce HTML and CSS, our engineers developed a new library of reusable components (built on top of XHP) that would form the building blocks of all our pages."
So Facebook does have a hotspot; it just happens to be a very large spot on a colossally sized system.
Finally, the author says, "If you look at readily-available, easy to interpret evidence, you can see that they are completely invalid excuses, and cannot possibly be good reasons to shut down an argument about performance."
I'm still looking for the evidence.
by devjab on 4/27/23, 5:57 AM
I recently “inherited” a couple of back-end services when a developer left our company. It turned out that the code was terrible and that they hadn't used any of our helper tools. Since we use Typescript everywhere, ignoring our quite opinionated and slightly fascist linter rules is almost impossible, but the developer in question had the authority to turn them off, which they did, and in doing so shot themselves completely in the foot. The back-end services were developed in JavaScript more than Typescript, and since both our linters and usual test pipelines were disabled, and since it's software that had been developed over almost a year's worth of changes, it was just horrible. We're talking loops comparing values that were probably there once but are now just sorting things as undefined === undefined kinds of terrible.
The performance was also atrocious. Basically what the service did was gather info on a couple of thousand projects and link them with tens of thousands of documents in Sharepoint, but because it was built wrong, it wasn't pulling the correct documents and it was taking 5-10 minutes each run. It's now running at around 10 seconds for its complete run time, which is a massive performance improvement, and it'll be even better once I finish building the caching. So you might think I'm inclined to agree with the article, but I didn't rewrite it because of its poor performance; I rewrote it because it didn't work correctly, and the performance gains were simply a happy "coincidence".
This is because the performance didn't really matter. Yes, it was costing us at most $77 over our 3-year Azure contract, but the time I spent rebuilding it cost the company almost exactly $1500. Those $1500 were well spent because it wasn't working, but would they have been well spent purely for performance? Not really. That being said, it wouldn't take a lot of those services to become expensive, so it's not like the author is really wrong either. It's just that I'm confused about whom he is arguing with.
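To illustrate the undefined === undefined failure mode described above, a minimal sketch; the type and field names are hypothetical, not from the actual services:

```typescript
// With strict TypeScript and the linter turned off, a field that was renamed or
// never populated just reads back as `undefined` instead of failing loudly.
type Project = { id: string; legacyKey?: string };
type SharepointDoc = { name: string; projectKey?: string };

function linkDocuments(projects: Project[], docs: SharepointDoc[]) {
  const links: Array<{ project: string; doc: string }> = [];
  for (const project of projects) {
    for (const doc of docs) {
      // If neither field is ever set, this is `undefined === undefined`,
      // which is true, so every document gets linked to every project.
      if (project.legacyKey === doc.projectKey) {
        links.push({ project: project.id, doc: doc.name });
      }
    }
  }
  return links;
}
```

Nothing here throws; the loop quietly produces a cross product of projects and documents, which is both wrong and O(n·m).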
by db48x on 4/26/23, 9:05 PM
Wow. You know you have made bad choices when you need a desperation play like that.
by typon on 4/26/23, 9:45 PM
by phendrenad2 on 4/27/23, 5:31 AM
I feel like pointing this out has become table stakes in any conversation anywhere. Thinking of printing it out on a card and carrying it around.
by globalreset on 4/27/23, 4:47 AM
Yes, it would be great if all developers could write performant code, but let's face it - there are only so many hours in a day and days in a week. Developers already struggle to keep up with all the required knowledge. It's not that people don't want to be competent. We're building more and more complex things while expanding the number of people employed building software, which means the average skill level is probably slightly decreasing.
by Animats on 4/27/23, 6:38 AM
by denieus on 4/27/23, 10:44 AM
Two examples:
1. I've seen people mention that following good programming practices makes the code slower, and that by removing them you can get improvements of around 40%. That sounds like a great number, until you realize the real bottlenecks are other things (e.g. database queries, network latency, etc.). When you calculate the overall improvement for the request, the gains are negligible (see the back-of-the-envelope arithmetic below).
2. There are some frameworks that market themselves as crazy fast: "If you use us your app will boot almost instantaneously!" Looks cool, until you realize that a good pipeline will gradually roll out a new version, and this takes time. Usually it comes with monitoring the new version for a while, and only after it's deemed healthy do we switch over completely. Now instead of waiting a few minutes + 10 seconds, you wait only a few minutes, which doesn't make much difference.
Performance gains will come with tradeoffs and, before committing to that, it's a good idea to evaluate what are the real benefits of doing the changes we're planning to do.
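As a back-of-the-envelope illustration of example 1, a sketch with made-up numbers (a 40% speedup on the application code, behind a fixed database/network cost):

```typescript
// Hypothetical request breakdown: most of the time is spent outside our code.
const dbAndNetworkMs = 180;  // the part we are NOT optimizing
const appCodeMs = 20;        // the part the 40% trick speeds up
const appSpeedup = 0.4;      // the advertised 40% improvement

const before = dbAndNetworkMs + appCodeMs;                    // 200 ms
const after = dbAndNetworkMs + appCodeMs * (1 - appSpeedup);  // 192 ms
console.log(`Overall gain: ${((1 - after / before) * 100).toFixed(1)}%`); // ~4%
```

The headline 40% only applies to the 10% of the request we actually touched, so the request-level gain is about 4%.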
by yen223 on 4/27/23, 6:12 AM
by DeathArrow on 4/27/23, 10:18 AM
by hoseja on 4/27/23, 7:35 AM
by tomgp on 4/27/23, 6:43 AM
by paul_grisham on 4/27/23, 1:12 AM
by naught00 on 4/26/23, 8:28 PM
by leoncaet on 4/26/23, 8:09 PM
by joseph_grobbles on 4/27/23, 1:26 PM
In reality their project was death by a thousand...nay, millions or billions...of cuts. Poor technology choices. Poor algorithm choices. Incompetent usage (e.g. terrible LINQ usage everywhere, constantly). This was the sort of project where profiling was almost impossible because every profiling tool barfed and gave up at every tier.
Profiling the database was an exercise in futility. Profiling the middle tier was a flame graph that was endless peaks. Profiling the front-end literally crashed the browser. I ended up having to modify Chromium source to be able to accurately get a bead on how disastrously the Angular app was built.
This is common. If performance doesn't matter to a team, it will never be something that can be easily fixed. Maybe you can throw a huge amount of money at the problem and scale up and out to a ridiculous degree for a tiny user base, but making an inefficient platform efficient is seldom easy.
by draw_down on 4/27/23, 2:42 PM
by xupybd on 4/27/23, 6:17 AM
Yes, at Facebook, with end-user-facing software, it is crucial to get performance right. If you're running payroll at a 3-person company it doesn't matter if the software is inefficient. Most of the time it's Excel, and that is not the most efficient way to do those calculations. But it's not worth investing in a better solution until processing payroll becomes a problem.