by sneeeeeed on 10/7/21, 5:09 AM with 58 comments
by lkey on 10/7/21, 5:33 AM
by rackjack on 10/7/21, 5:35 AM
by habibur on 10/7/21, 5:47 AM
These are the three things that matter most.
by shmerl on 10/7/21, 5:51 AM
by uncomputation on 10/7/21, 5:42 AM
by einpoklum on 10/7/21, 7:27 AM
To the point, though: while some of the points in the article are well-taken - especially the observation that one can pick and choose scenarios where one language or the other has a noticeable implementation issue or design misfeature - the article is still problematic for at least two reasons:
1. Mixing up a comparison of the languages and of the results of specific compilations by a specific compiler.
If C++ semantics allow something that Rust semantics do not, e.g. because Rust avoids undefined behavior, then regardless of whether a specific compiler makes use of this fact, it is still a benefit, speed-wise. Specifically, if I multiply two non-negative numbers using the square() function discussed in the article, then check the result for being negative - with C++, the check is redundant and can be dropped in favor of a constant true value during compilation; with Rust, it cannot. Now, will some version of LLVM actually do that? I don't know - but it could (and it should).
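As a rough C++ sketch of that point (the square() name follows the article's example; the wrapper function and values here are hypothetical):

#include <cstdint>

// Assumed to mirror the article's square() example.
inline int32_t square(int32_t x) { return x * x; }

bool never_negative(int32_t x) {
    // Signed overflow is undefined behavior in C++, so the compiler may assume
    // square(x) did not overflow; mathematically x*x >= 0, so it is allowed to
    // fold this check to a constant true. In a Rust release build the
    // multiplication wraps instead (46341 * 46341 wraps to a negative i32),
    // so the equivalent check cannot be removed.
    return square(x) >= 0;
}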
2. Misrepresenting how the language affects the behavior of the compiler back-end.
The article says:
> Rust is based on LLVM, which is the same back end that Clang is based on. Therefore, Rust has inherited "for free" and shares with C++ most of the language-independent code transformations and optimizations.
That's simply not true. That is, LLVM is _capable_ of most of the same transformations and optimizations; but the question of which of them it is _allowed_ to use depends on various kinds of semantic information specific to the programming language.
This is apparent even in trivial examples. Here's one:
https://godbolt.org/z/rMW3vsEvo
where the same function is compiled in Rust:
pub fn check(num: i32) -> bool {
    let y = if num < 0 { -num } else { num };
    return y >= 0;
}
and in C++:
auto check(int32_t num) {
    auto y = (num < 0) ? -num : num;
    return y >= 0;
}
The compilation results are:
example::check:
        mov eax, edi
        neg eax
        cmovl eax, edi
        test eax, eax
        setns al
        ret
and:
check(int): # @check(int)
        mov al, 1
        ret
respectively. How come? Isn't it all LLVM under the hood? It's even the same LLVM version, 12.0.1, for both languages... Now, I'm not sure why exactly this happens, since I'm no Rust expert. But LLVM is not _allowed_ to apply the same optimizations to the Rust function as to the C++ function, so the results are different.
by AnimaLibera on 10/7/21, 6:11 AM
by zekrioca on 10/7/21, 5:33 AM
> Spoiler: C++ is not faster or slower – that's not the point, actually. This article continues our good tradition of busting myths about the Rust language shared by some big-name Russian companies.
by zgs on 10/7/21, 5:27 AM
by adrian_b on 10/7/21, 5:56 AM
The information provided in the article shows beyond any reasonable doubt that both C++ and Rust are exactly equally safe regarding integer overflow.
Rust checks for overflow in debug builds, but it does not check for overflow in release builds.
The same happens in C++. All decent C++ compilers have options for overflow checking, but most developers do not use these options in release builds, for fear that they would hurt performance.
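For illustration, a hypothetical snippet (the flags shown are existing GCC/Clang overflow-checking options) of how such a check can be kept even in a release build:

#include <cstdint>
#include <cstdio>

int32_t square(int32_t x) { return x * x; }

int main() {
    // 46341 * 46341 exceeds INT32_MAX, so this multiplication overflows.
    // A plain release build (e.g. g++ -O2) has undefined behavior here and
    // produces garbage with no diagnostic. Keeping an overflow check:
    //   g++ -O2 -ftrapv example.cpp                                  (trap on overflow)
    //   clang++ -O2 -fsanitize=signed-integer-overflow example.cpp   (UBSan report)
    std::printf("%d\n", square(46341));
    return 0;
}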
Rust is a little better because it enforces the overflow checking option in debug builds, but C++ is better because the developer may choose to keep the option in release builds as well.
So this myth is not busted, it is confirmed.