by SuperV1234 on 9/26/22, 5:01 AM with 108 comments
by _gabe_ on 9/26/22, 4:30 PM
I wondered what code I had written that caused such a massive performance hit in debug mode. So I went to profile my code in a debug build to find out. Lo and behold, something like 90% of the CPU time was wasted doing lookups from std::unordered_map because of debug iterators. I tried everything I could to just turn debug iterators off, and eventually gave up and switched to robin_hood::unordered_map which runs great in debug and release. Now I have an unwarranted aversion to the std lib, even though it really is great so long as you're running it in release mode.
by Deganta on 9/26/22, 2:25 PM
1. Compile time
2. Non-optimized builds
Stuffing everything into the Standard Library using lots of template magic does have its upsides, but debug performance is one of the big downsides.
Another one is really clunky interfaces, like std::variant. This should really have been a language feature, not a library feature.
by PaulDavisThe1st on 9/26/22, 6:40 PM
Presumably my code has been less than optimally efficient or something, because it seems that most people talking about "modern" C++ view it as absolutely central to the language.
by glandium on 9/26/22, 9:21 AM
IMO, the problem is not so much the inlining, but the sad state of the optimization passes losing track of many things and producing useless debug info.
by BenFrantzDale on 9/26/22, 10:22 AM
by humanrebar on 9/26/22, 1:30 PM
by alternatetwo on 9/26/22, 9:46 AM
[1]: https://learn.microsoft.com/en-us/cpp/build/reference/ob-inl...
by staticassertion on 9/26/22, 5:44 PM
by andrewmcwatters on 9/26/22, 9:27 PM
And BOTH pale in comparison to newer language standard libraries like Go's.
by phaedrus on 9/29/22, 4:21 PM
In my day job we sometimes need to reproduce the build of a firmware ROM or executable, sometimes decades after the engineer who last built it left. Getting a match is easier (or even possible) for older tech only because of the relative unsophistication of the compilers used - even then it's only reproducible "by accident," and not because the compiler vendor made any guarantees that X language construct reliably produces Y machine code and data layout.
But we need that! For getting accurate baselines. For security verification. And there's no reason in principle we should have to forgo updating compilers, IDEs, and OS environments in the indirect hope of not disturbing anything. Those are two separate things: if we had a through-line of higher-level language construct -> semantically defined transformation (regardless of optimization settings) -> machine code, vendors could continue to update their IDEs and compilers while just making sure they still respect the invariants.
C++'s so-called zero-cost abstractions are a poor substitute for this: header (library) writers and C++ gurus write code as if it worked like the "guaranteed output transform" I describe, but no compiler actually has to respect that (never mind that the fine details of what the transformation actually is aren't nailed down), and it differs between Debug and Release builds, which is particularly bad for game development, as TFA makes clear.
by 323 on 9/26/22, 1:52 PM
by halayli on 9/26/22, 12:52 PM
by olliej on 9/26/22, 7:41 PM
Wanting to keep things in the standard library rather than the language itself means that you have to compile large amounts of template hell for a wide array of basic things which hurts build time even in debug modes, and then as this article says you end up with no-op operations that become calls. You also can’t easily debug any of this because you end up with absurd layers of template nonsense for what is again basic functionality.
You can compile with -O1, but that then inlines things that aren't part of the standard library, which makes debugging of the actually relevant code annoying (this is what debug builds of LLVM and Clang do), as it inlines your own code and also means that you lose variables all over the place.
by bluGill on 9/26/22, 2:13 PM
by chubot on 9/26/22, 6:42 PM
I prefer to avoid all the operator overloading and smart pointers, etc.
by StellarScience on 9/26/22, 3:19 PM
These days I rarely build a whole MSVC project as "Debug" without optimization. Instead I enable both optimization and debug information, so I always get call stacks and line-by-line stepping ability. When I end up debugging some file where the optimizations inhibit debuggability, I'll recompile just that compilation unit or library without optimization or inlining.
That said, I agree with the gist of the article that both "zero cost abstraction" and debuggability need to be constantly improved.
by not2b on 9/26/22, 7:40 PM