by cyber1 on 6/6/20, 6:12 AM with 165 comments
by nickm12 on 6/6/20, 9:23 AM
"It's hard but I love it. Dealing with the compiler felt like being the novice in an old kung fu movie who spends day after day being tortured by his new master (rustc) for no apparent reason until one day it clicks and he realizes that he knows kung fu."
by atoav on 6/6/20, 8:10 AM
Even if Rust were wiped off the face of the earth tomorrow, the things I learned from it have definitely made me a much better programmer.
by ncmncm on 6/6/20, 8:34 AM
Probably more people pick up C++ for the first time, in any given week, than the total who use Rust in production today.
Rust also benefits from the limited historical baggage that comes with being new and incompatible. Unlike Java, which was once in the same position, Rust adopted very few old mistakes and, especially unlike Java, made few new ones. But as the language approaches industrial maturity (possibly within 10 years), early mistakes will become evident, and cruft will be seen to accumulate.
Rust's designers have consciously chosen to keep the language's abstraction capacity limited, which makes it more approachable but reduces what can be expressed in libraries. Libraries that are possible, even easy, to write in C++ cannot be coded in Rust. The language will adopt more new, powerful features as it matures, losing some of its approachability and coherence. But Rust has already passed a key milestone: there is little risk, anymore, that it could "jump the shark".
The language is gunning for C++'s seat. Whether it becomes a viable alternative, industrially, is purely a numbers game: can it pick up users and uses fast enough? The libraries being coded in C++ today will never be callable from Rust.
Go proved that the world will make room for a less capable language (in Go's case, than Java) if it is simpler. Rust is much more capable than C, Go, or Java, and the world would certainly be a better place if everybody coding those switched to Rust. So, my prediction is that Rust and C++ will coexist for decades. The most ambitious work will continue to be done in C++, but a growing number will have their first industrial coding experience in Rust instead of C, and many will find no reason to graduate to C++.
by FlyingSnake on 6/6/20, 8:32 AM
by Skunkleton on 6/6/20, 8:18 AM
by smabie on 6/6/20, 8:24 AM
What does this mean?
by cassepipe on 6/6/20, 12:24 PM
by turbinerneiter on 6/6/20, 9:39 AM
by devit on 6/6/20, 10:47 AM
Also, as long as you accept not having dependent types (at least in the short to medium term) and can live without several currently unimplemented features, Rust is close to the optimal design for a programming language, assorted minor warts aside.
by orthoxerox on 6/6/20, 10:51 AM
I wouldn't write a line-of-business (LoB) application in Rust, for example. But if I wrote programs with really tight speed and memory requirements for a living, I would pick Rust for the task.
If people were forced to write their website backends in Rust (or even their frontends in Rust targeting WASM) they would hate it. Its performance is overkill for 99.9% of backends, but the means of getting this performance kill your productivity.
by robotmay on 6/6/20, 10:54 AM
My favourite metaphor for Rust is that it's like a friendly bare-knuckle fist-fight with the compiler. It's not as user-friendly as, say, Elm, but it's streets ahead of Haskell's errors.
by kumarvvr on 6/6/20, 1:35 PM
As a seasoned C#, Python, and JS programmer, what conceptual foundations in CS will make me use Rust more effectively?
Say I want to create a new database service on top of PostgreSQL, using Rust. Would the design of Rust help me in a specific way?
I want to learn and use Rust for systems programming, the kind where I build a high-performance underlying system called by other languages, but it always feels like I need to learn quite a bit of theory to use Rust effectively.
I never felt the same with C# or Python. A bit of OO stuff was usually enough to be productive with them.
by lbj on 6/6/20, 11:28 AM
by andi999 on 6/6/20, 10:40 AM
by moonchild on 6/6/20, 8:55 AM
For starters, ATS[1] and F*[2] both provide much stronger safety guarantees, so if you want the strongest possible guarantees that your low-level code is correct, you can't stop at Rust.
_____________________________________________
Beyond that, it's helpful to look at the bigger picture of what characteristics a program needs to have, and what characteristics a language can have to help facilitate that. I propose that there are broadly three program characteristics that are affected by a language's ownership/lifetime system: throughput, resource use, and ease of use/correctness. That is: how long does the code take to run, how much memory does it use, and how likely is it to do the right thing / how much work does it take to massage your code to be accepted by the compiler. This last is admittedly rather nebulous. It depends quite a lot on an individual's experience with a given language, as well as overall experience and attention to detail. Even leaving aside specific language experience, different individuals may rank different languages differently, simply due to different approaches and thinking styles. So I hope you will forgive my speaking a little bit generally and loosely about the topic of ease-of-use/correctness.

The primary resource that programs need to manage is memory[3]. We have several strategies for managing memory (a short Rust sketch after the list illustrates a few of them):
(Note: implicit/explicit below refers to whether something is an explicit part of the type system, not an explicit part of user code.)
- implicitly managed global heap, as with malloc/free in C
- implicit stack-based RAII with automatically freed memory, as in C++, or C with alloca (note: though this is not usually a general-purpose solution, it can be[4]. But more interestingly, it can be composed with other strategies.)
- explicitly managed single-owner abstraction over the global heap and possibly the stack, as in Rust
- explicit automatic reference counting as an abstraction over the global heap and possibly the stack, as in Swift
- implicit memory pools/regions
- explicit automatic tracing garbage collector as an abstraction over the global heap, possibly the stack, possibly memory regions (as in a nursery GC), possibly a compactor (as in a compacting GC), as in Java
- custom allocators, which may have arbitrarily complicated designs, be arbitrarily composed, arbitrarily explicit, etc. Not possible to enumerate them all here.
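To make a few of these concrete, here is a minimal Rust sketch (illustrative only; the names are mine) showing stack allocation, single-owner heap allocation, and reference counting side by side:

    use std::rc::Rc;

    fn main() {
        // Stack allocation: freed automatically when `stack_buf` leaves scope.
        let stack_buf = [0u8; 64];

        // Single-owner heap allocation: moving the Box transfers ownership,
        // and the memory is freed exactly once, when the final owner drops.
        let owned = Box::new([0u8; 64]);
        let new_owner = owned; // move: `owned` can no longer be used

        // Reference counting: the allocation is shared between handles and
        // freed when the last handle is dropped.
        let shared = Rc::new([0u8; 64]);
        let second_handle = Rc::clone(&shared);

        println!("{} {} {}", stack_buf.len(), new_owner.len(), second_handle.len());
    }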
I mentioned before there are three attributes relevant to a memory management scheme. But there is a separate axis along which we have to consider each one: worst case vs average case. A tracing GC will usually have higher throughput than an automatic reference counter, but the automatic reference counter will usually have very consistent performance. On the other hand, an automatic reference counter is usually implemented on top of something like malloc. Garbage collectors generally need a bigger heap than malloc, but malloc has a pathological fragmentation problem which a compacting garbage collector is able to avoid.
This comment is getting very long already, and comparing all of the above systems would be out of scope. But I'll make a few specific observations and field further arguments as they come:
- Because of the fragmentation problem mentioned above, memory pools and special-purpose allocators will always outperform a malloc-based system in both resource usage and throughput (memory management is constant-time + better cache coherency); see the arena sketch after this list.
- Additionally, implicitly managed memory pools are usually easier to use than an implicitly managed global heap, because you don't have to think about the lifetime of each individual object.
- Implicit malloc/free in C should generally perform similarly to an explicit single-owner system like Rust's, because most of the allocation time is spent in malloc, and they have little (or no) runtime performance hit on top of that. The implicit system may have a slight edge because it has more flexible data structures; then again, the explicit single-owner system may have a slight edge because it has more opportunity to allocate locally defined objects directly on the stack if their ownership is not given away. But these are marginal gains either way.
- Naïve reference counting will involve a significant performance hit compared to any of the above systems. However, there is a heavy caveat. Consider what happens if you take your single-owner verified code, remove all the lifetime annotations, and give it to a reference-counting compiler. Assuming it has access to all your source code (which is a reasonable assumption; the single-owner compiler has that), then if it performs even basic optimizations—this isn't a sufficiently smart compiler[5]-type case—it will elide all the reference counting overhead. Granted, most reference-counted code isn't written like this, but it means that reference counting isn't a performance dead end, and it's not difficult to squeeze your RC code to remove some of the RC overhead if you have to (see the refcount-elision sketch after this list).
- It's possible to have shared mutable references, but forbid sharing them across threads (see the Rc<RefCell> sketch after the summary).
- The flexibility gains from having shared mutable references are not trivial, and can significantly improve ease of use.
- Correctness improvements from strictly defined lifetimes are a myth. Lifetimes aren't an inherent part of any algorithm, they're an artifact of the fact that computers have limited memory and need to reuse it.
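On the memory-pool point above: the win is that allocation degenerates to a bump of a pointer (or index) into one pre-allocated block, and deallocation happens once, for the whole pool. A toy Rust arena to illustrate (my own sketch, not a real library; real arenas such as the typed-arena crate hand out references rather than indices, but the cost model is the same):

    // Toy typed arena: one pre-allocated block, index-based handles,
    // everything freed at once when the arena is dropped.
    struct Arena {
        buf: Vec<u64>,
    }

    impl Arena {
        fn with_capacity(n: usize) -> Self {
            Arena { buf: Vec::with_capacity(n) }
        }

        // Constant-time allocation; there is no per-object free.
        fn alloc(&mut self, v: u64) -> usize {
            self.buf.push(v);
            self.buf.len() - 1
        }
    }

    fn main() {
        let mut arena = Arena::with_capacity(1024);
        let a = arena.alloc(1);
        let b = arena.alloc(2);
        println!("{}", arena.buf[a] + arena.buf[b]);
        // `arena` drops here: one deallocation frees every object at once,
        // with no fragmentation and no per-object bookkeeping.
    }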
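And on the refcount-elision point: Rust's own compiler does not optimize across Rc operations, so take this as a sketch, in Rust syntax, of what a hypothetical whole-program RC compiler could prove:

    use std::rc::Rc;

    // `v` never escapes this function, so the increment performed by the
    // caller's clone and the decrement performed by the drop at the end
    // of this function always cancel out.
    fn total(v: Rc<Vec<i64>>) -> i64 {
        v.iter().sum()
    }

    fn main() {
        let data = Rc::new(vec![1, 2, 3]);
        // A whole-program RC optimizer could see that this clone/drop pair
        // is redundant and elide both refcount updates.
        println!("{}", total(Rc::clone(&data)));
    }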
To summarize:
- When maximum performance is needed, pools or special-purpose allocators will always beat single-owner systems.
- For all other cases, the performance cap on reference counting is identical with single-owner systems, while the flexibility cap is much higher.
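For the shared-mutable-references point, Rust itself shows one safe design: Rc<RefCell<T>> gives shared mutability with runtime aliasing checks, and Rc is deliberately not Send, so such references can never cross threads. A minimal sketch:

    use std::cell::RefCell;
    use std::rc::Rc;

    fn main() {
        // Shared, mutable, single-threaded: aliasing is checked at runtime.
        let counter = Rc::new(RefCell::new(0));
        let alias = Rc::clone(&counter);

        *counter.borrow_mut() += 1;
        *alias.borrow_mut() += 1;
        println!("{}", *counter.borrow()); // prints 2

        // Rc is !Send, so the compiler rejects sharing it across threads;
        // uncommenting the next line is a compile error:
        // std::thread::spawn(move || { *alias.borrow_mut() += 1; });
    }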
_____________________________________________
1. http://www.ats-lang.org/
2. https://www.fstar-lang.org/
3. File handles and mutex locks also come up, but those require different strategies. Happy to talk about those too, but tl;dr file handles should be avoided where possible and refcounted where not; mutexes should also be avoided where possible, and be scoped where not.
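On the "mutexes should be scoped" advice in footnote 3: Rust's standard Mutex already works this way, because the critical section is exactly the lifetime of the lock guard. A minimal sketch:

    use std::sync::Mutex;

    fn main() {
        let shared = Mutex::new(vec![1, 2, 3]);
        {
            // lock() returns a guard; the lock is held only while it lives.
            let mut guard = shared.lock().unwrap();
            guard.push(4);
        } // guard dropped here: the mutex unlocks automatically

        println!("{:?}", *shared.lock().unwrap());
    }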
by dirtydroog on 6/6/20, 9:24 AM