by U1F984 on 1/21/22, 9:10 AM with 73 comments
by pron on 1/22/22, 3:02 PM
Reevaluating a low-level programming language is something that's done in a large organisation or project once every 15-25 years or so. Switching such a programming language incurs a high cost and a high risk, and is a long-term commitment. To make such a switch, the new language obviously has to be better, but that's not enough. It has to be a hell of a lot better (and, if not, at least return the investment with a profit quickly).
For some, Rust is better enough. For me, it is not nearly so. Even though it offers a fascinating and, I think, ingenious path to better safety, it shares what is, for me, C++'s greatest downside: both are extremely complex languages. Maybe Rust is simpler, but not by enough. Rust also shares what I think is C++'s original, misguided sin: the attempt to create a low-level language whose code appears high-level on the page by means of a lot of implicitness. I've become convinced that that's a very, very bad idea.
If there were no other ideas on the horizon, or if Rust seemed like a surefire success, such a switch might have been justified, but that's not the case. Rust's low adoption rate in professional settings is not reassuring to "PL-cautious" people like me, and a language like Zig shows that there are other approaches that appeal to me more; while it is more revolutionary and ambitious than Rust in its departure from C++'s philosophy, I think it also has the potential to be better enough. Maybe it will make it, maybe it will inspire some other language that will, or maybe other ideas will turn up. Given the risk and commitment, it makes sense to me to wait. I don't like C++; I believe Rust is better. But that's not enough.
by lordnacho on 1/22/22, 2:15 PM
The problem in C++ is that the surface where you might cause a memory problem is huge. Once a bug is in, it's a lot of work to test hypotheses about where it is hiding. On top of that, these kinds of issues can escape your instrumentation in a way that other bugs tend not to: add some debug lines and things get accessed differently -> Heisenbug. A mega pain in the ass to figure out, with lots of time taking everything apart, sprinkling debug lines, running long tests to catch the one time in a million it goes wrong, and so on.
He's also right that the array access thing is not a huge deal, that it can't possibly be what your decision turns on, and that most of the code has no performance tradeoff anyway, because it runs in the config stage rather than the hot path.
Personally I've had a great time with Rust, it's far more productive than other typed languages I've used. On a business level, the issue with the type of bug mentioned above is it destroys your schedule. I've spent entire weeks looking at that kind of thing, when I was expecting to be moving on with other parts of my project. With my current Rust stuff, I'm doing what I expect to be doing: addressing some issue that will soon be fixed like adjusting some component to fit a new spec.
by saagarjha on 1/22/22, 3:23 PM
But it really doesn't take a very long post to talk about this. The remainder goes off the rails, talking about "C++ apologists" (hint: if you're being "fair", pick words that are unlikely to make people preemptively upset. This is not one of those words) and their supposedly stupid opinions. The author just trashes them as complete idiots, but it's obvious that the arguments he attacks come from inexperience or are strawmen, which makes the whole thing not particularly convincing. Saying that the various UB-finding tools are useless because you tried them once and didn't get good results is stupid. Being smug about "people who use modern C++ clearly can't do HFT, which is the thing you said you were using C++ to do" is also insipid when it rests on spotting a shared_ptr somewhere and reading that it's not zero-cost. Modern C++ has other things in it, you know, many of which are zero-cost and significantly (but not entirely) safer; picking one thing and misrepresenting it does not make for a good refutation.
Anyways, coming from someone who writes a lot of C++ and would also like a lot of code migrated to Rust for good reasons: it's a good idea to approach the tradeoffs honestly and without disdain for those who aren't convinced yet. The core argument I mentioned above and the closing part of the article do this, but there's a lot in the middle that doesn't, and it drags down the usefulness of the post.
by sundarurfriend on 1/22/22, 3:06 PM
> But wait! The C++ apologists are still talking! What are they saying? How have they not been completely flummoxed?
is just one small sample. I came out of the article liking Rust a little bit less than when I went in (irrational, I know, but true).
The quote from The Big Lebowski comes to mind: you're not wrong, author, ...
by pyjarrett on 1/22/22, 3:25 PM
One issue this doesn't address is that in Rust the same thing could easily happen inside a crate you depend on, and diagnosing it could be hard, especially given the implicit behavior introduced by procedural and attribute macros. Also, even if your own code is safe, there can still be an unsafe block at the end of any long safe call chain. I haven't been able to reconcile for myself how this isn't just an illusion of overall safety.
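For readers less familiar with the pattern being described, here is a minimal sketch (the function name is invented for illustration) of a safe signature that hides an unsafe block at the end of the call chain:

```rust
// A safe-looking API whose implementation dips into `unsafe`.
// The safety argument lives here, not at the call site.
fn first_or_zero(data: &[u32]) -> u32 {
    if data.is_empty() {
        0
    } else {
        // SAFETY: we just checked that index 0 is in bounds.
        unsafe { *data.get_unchecked(0) }
    }
}

fn main() {
    // Callers see only safe code; the unsafe block is invisible from here.
    assert_eq!(first_or_zero(&[7, 8]), 7);
    assert_eq!(first_or_zero(&[]), 0);
}
```

The counterargument, of course, is that this localizes the safety reasoning to one audited spot instead of spreading it over every call site.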
by the8472 on 1/22/22, 2:00 PM
by snicker7 on 1/22/22, 2:02 PM
by maxwell86 on 1/22/22, 3:42 PM
The author spent 1 page before this statement, and the whole article after it, explaining that this is not true, so the article is a big contradiction.
Rust and C++ are not "in the exact same place".
With Rust, you get bounds checking by default. If, after profiling, you find that a check is a performance problem somewhere, the language lets you elide it explicitly. In the programs I work on, 99% of the execution time is spent in 1% of the code, and Rust optimizes for this situation. Instead of debugging segmentation faults caused by performance optimizations that buy you nothing in 99% of the code, you can spend your time optimizing the 1% that actually makes a difference.
This is why Rust libraries and programs are "so fast". It's not because of multithreading, or because Rust programmers are geniuses, but because Rust buys those programmers time to actually optimize the code that matters, and in particular to do so without introducing new bugs.
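The checked-by-default, opt-out-after-profiling workflow described above can be sketched like this (a hypothetical hot-loop example; `get_unchecked` is the standard library's unchecked accessor):

```rust
fn sum_checked(data: &[u64], indices: &[usize]) -> u64 {
    // Default indexing: each `data[i]` is bounds-checked and panics
    // on an out-of-range index instead of reading out of bounds.
    indices.iter().map(|&i| data[i]).sum()
}

fn sum_unchecked(data: &[u64], indices: &[usize]) -> u64 {
    // If profiling shows the check matters in this hot 1% of the code,
    // elide it explicitly.
    // SAFETY: the caller guarantees every index is in range.
    indices.iter().map(|&i| unsafe { *data.get_unchecked(i) }).sum()
}

fn main() {
    let data = [10u64, 20, 30];
    assert_eq!(sum_checked(&data, &[0, 2]), 40);
    assert_eq!(sum_unchecked(&data, &[0, 2]), 40);
}
```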
by mlindner on 1/21/22, 7:40 PM
by fulafel on 1/22/22, 2:24 PM
by pizlonator on 1/22/22, 4:32 PM
I think this article ignores some arguments for array bounds checks and it ignores the importance of what the default is:
- It doesn’t matter how fast or slow bounds checking is in theory. It only matters how fast it is in practice. In practice, the results are quite surprising. For example, years ago WebKit switched its Vector<> to checking bounds by default with no perf regression, though this did mean having to opt out a handful of the thousands of Vector<> instantiations. Maybe this isn’t true for everyone’s code, but the point is, you should try out bounds checking and see if it really costs you anything rather than worrying about hypothetical nanoseconds.
- If you spend X hours optimizing a program, it will on average get Y% faster. If you don't have bounds checks and your program has any kind of security story, you will spend Z hours per year fixing security-critical OOBs. I believe that if you switch to checking bounds, you will get those Z hours/year of your life back. If you then spend them optimizing, then for most code it'll take less than a year to win back whatever perf you lost to bounds checks through other kinds of optimizations. Hence, bounds checking is a kind of meta performance optimization: it lets you shift resources away from security toward optimization. Since the time you gain for optimization is a recurring win and the bounds checks are a one-time cost, bounds checks become perf-profitable over time.
- It really matters what the language does by default. C++ doesn’t check bounds by default. The most fundamental way of indexing arrays in C++ is via pointers and those don’t do any checks today. The most canonical way of accessing arrays in Rust is with a bounds check. So, I think Rust does encourage programmers to use bounds checking in a way that C++ doesn’t, and that was the right choice.
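As a minimal illustration of the default the parent points at: the canonical `v[i]` is checked, while `v.get(i)` makes the check visible in the type instead:

```rust
fn main() {
    let v = vec![1, 2, 3];
    // The canonical form is checked: `v[10]` panics at runtime
    // rather than reading out of bounds.
    assert!(std::panic::catch_unwind(|| v[10]).is_err());
    // The non-panicking form surfaces the check in the return type.
    assert_eq!(v.get(10), None);
    assert_eq!(v.get(1), Some(&2));
}
```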
As a C++ apologist my main beef is: if bounds checks are so great then please give them to me in the language that a crapton of code is already written in rather than giving me a goofy new language with a different syntax and other shit I don’t want (like ownership and an anemic concurrency story).
by robalni on 1/22/22, 4:24 PM
I don't think "safe" or "unsafe" can be a property of code; it can only be a property of something you do, like changing code. To me, something being "unsafe" means that there is a risk in doing it, and programming is always a risk, even if you write Rust code without the "unsafe" keyword. You can even have arbitrary-code-execution bugs in Rust programs that never use "unsafe"; think of bugs like SQL injections.
None of this means that I think the checks the Rust compiler does don't help. They probably help many people write less buggy code. I just think it makes no sense to call code itself "safe" or "unsafe".
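As a sketch of the SQL-injection point above (a deliberately naive, hypothetical query builder; real code should use parameterized queries):

```rust
// No `unsafe` keyword anywhere, yet this is a textbook injection:
// Rust's memory-safety guarantees say nothing about the query's meaning.
fn build_query(user_input: &str) -> String {
    format!("SELECT * FROM users WHERE name = '{}'", user_input)
}

fn main() {
    let q = build_query("x' OR '1'='1");
    // The attacker-controlled input has changed the query's structure.
    assert!(q.contains("OR '1'='1"));
}
```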
by RedPanda250 on 1/22/22, 11:29 PM
This is interesting. Where can I read more about this?
by SiebenHeaven on 1/22/22, 2:24 PM
by habibur on 1/22/22, 2:33 PM
Would like to add that, at least in plain C, var[index] doesn't invoke any checked() or unchecked() access function. It compiles straight into a line or two of assembly that calculates the address where the data is expected and loads it from memory.
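For comparison, the same unchecked address computation can be written explicitly in Rust with raw pointers (a sketch, not a recommendation):

```rust
fn main() {
    let data = [5u32, 6, 7];
    let index = 2;
    // The moral equivalent of C's `var[index]`: base address plus
    // index * size_of::<u32>(), then a single load. No function call,
    // no check; an out-of-range `index` here would be undefined behavior.
    let value = unsafe { *data.as_ptr().add(index) };
    assert_eq!(value, 7);
}
```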