by mellosouls on 6/10/25, 8:24 AM with 259 comments
by norir on 6/12/25, 5:13 PM
Rust can likely never be rearchitected without causing a disastrous schism in the community, so it seems probable that compilation will always be slow.
by jplusequalt on 6/12/25, 4:53 PM
by kristoff_it on 6/12/25, 6:06 PM
This is an unfortunate hyperbole from the author. There's a lot of distance between DoD (data-oriented design) and "hand-rolled assembly", and thinking it's fair to put them in the same bucket to justify the maintainability argument is just going to hurt the Rust project's ability to make a better compiler for its users.
You know what helps a lot with making software maintainable? A faster development loop. Zig has invested years into this, and both users and the core team itself have started enjoying the fruits of that labor.
https://ziglang.org/devlog/2025/#2025-06-08
Of course everybody is free to choose their own priorities, but I find the reasoning flawed and I think that it would ultimately be in the Rust project's best interest to prioritize compiler performance more.
by juliangmp on 6/12/25, 11:59 PM
Yeah, pretty much. C++ is a lot worse when you consider the time spent in practice rather than just compilation benchmarks. In most C++ projects I've seen or worked on, there were one or more code generators in the toolchain, which slowed things down a lot.
And it looks even more dire when you want to add clang-tidy to the mix. It can take like 5 solid minutes to lint even small projects.
When I work in Rust, the overall speed of the toolchain (and the language server) is an absolute blessing!
by adrian17 on 6/12/25, 5:45 PM
I think the cause of the public perception issue could be a variant of Wirth's law: the size of the average codebase (and its dependencies) might be growing faster than the compiler's improvements in compiling it?
by jadbox on 6/12/25, 5:33 PM
by littlestymaar on 6/12/25, 5:53 PM
What's painful is compiling from scratch, and particularly the fact that every other week I need to run cargo clean and do a full rebuild to get things working. IMHO this is a much bigger annoyance than raw compiler speed.
by ruuda on 6/13/25, 6:07 AM
I started writing a post about this many years ago, but never finished it. I took a few slow-changing projects of mine that had a pinned Rust compiler, and then updated both the compiler and dependencies to the latest versions. Invariably, everything got slower to compile, even though the compiler update in isolation made things faster!
by kjuulh on 6/13/25, 9:27 AM
Incremental builds don't disrupt my feedback loop much, only when paired with building for multiple targets at once, e.g. Leptos, where both a wasm and a native build are run. Incremental builds do, however, eat up a lot of space, a comical amount even. I had a 28GB target/ folder yesterday after working a few hours on a Leptos app.
One recommendation is to upgrade your CI workers; Rust definitely benefits from larger workers than the default GitHub Actions runners, for example.
Compiling a fairly simple app, though one including DuckDB, which needs to be compiled from source, took 28 minutes on the default runners, but on a 32x machine we're down to around 3 minutes. That's fast enough that it doesn't disrupt our feedback loop.
by kunley on 6/12/25, 11:22 PM
The slowness comes mainly from LLVM.
by Animats on 6/12/25, 6:18 PM
by jmyeet on 6/12/25, 5:13 PM
It is a weird hill to die on for C/C++ devs though, given that header files and templates create massive compile-time issues that really can't be solved.
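To make the header/template point concrete, here is a minimal C++ sketch (file and function names are hypothetical, not from the article or the thread): a template defined in a header is re-parsed and re-instantiated in every translation unit that uses it, and the duplicates are only thrown away later at link time, so the same work is repeated once per .cpp file instead of being shared.

    // math_utils.hpp -- a typical header-only template (hypothetical example)
    #pragma once
    #include <algorithm>

    template <typename T>
    T clamp_to(T value, T lo, T hi) {
        // Every .cpp file that #includes this header and calls clamp_to<double>
        // re-parses the header and re-instantiates clamp_to<double> on its own;
        // the linker only discards the duplicate copies afterwards.
        return std::min(std::max(value, lo), hi);
    }

    // a.cpp and b.cpp (hypothetical) each contain:
    //   #include "math_utils.hpp"
    //   double d = clamp_to(2.5, 0.0, 1.0);  // instantiated again per file

Precompiled headers and C++20 modules reduce the re-parsing cost, but implicit template instantiation still happens per translation unit unless you add explicit instantiations.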
Google is known for having infrastructure for compiling large projects. They use Blaze (open-sourced as Bazel) to define hermetic builds, then use large systems to cache object graphs (for compilation units) and to cache compiled objects, because Google has some significant monoliths that would take a very long time to compile from scratch.
I wonder what this kind of infrastructure can do for a large Rust project.
[1]: https://www.pingcap.com/blog/rust-compilation-model-calamity...
by vlovich123 on 6/12/25, 4:27 PM
I wonder if the JVM as an initial target might be interesting given how mature and robust their JIT is.
by dboreham on 6/12/25, 3:59 PM
by lrvick on 6/13/25, 10:52 AM
by superkuh on 6/13/25, 12:05 PM
by daxfohl on 6/12/25, 5:17 PM
by bnolsen on 6/13/25, 12:22 AM
by johnfn on 6/12/25, 8:26 PM
I'm probably being ungrateful here, but here goes anyway. Yes, Rust cares about performance of the compiler, but it would likely be more accurate to say that compiler performance is, like, 15th on the list of things they care about, and they'll happily trade off slower compile times for one of the other things.
I find posts about Rust like this one, where they say "ah, of course we care about perf, look, we got the compile times on a somewhat nontrivial project to go from 1m15s to 1m09s", somewhat underwhelming - I think they miss the point. For me, I basically only care if compile times are virtually instantaneous. E.g. Vite scales to a million lines and can hot-swap my code changes instantaneously. This is where the productivity benefits come in.
Don't just trust me on it. Remember this post[1]?
> "I feels like some people realize how much more polish could their games have if their compile times were 0.5s instead of 30s. Things like GUI are inherently tweak-y, and anyone but users of godot-rust are going to be at the mercy of restarting their game multiple times in order to make things look good. "
[1]: https://loglog.games/blog/leaving-rust-gamedev/#compile-time...
by synthos on 6/12/25, 8:17 PM
by panstromek on 6/12/25, 5:09 PM
Why doesn't Rust care more about compiler performance?
by baalimago on 6/13/25, 9:25 AM
by baalimago on 6/13/25, 9:11 AM
If there's no tangible solution to this design flaw today, what will happen to it in 20 years? My expectation is that the amount of dependencies will increase, as will the complexity of the Rust ecosystem at large, which will make the compilation times even worse.
by ModernMech on 6/13/25, 2:12 PM
I'm not sure why, but the way I would explain it is: when you're debugging in an interactive REPL, you always get fast incremental results, but you may be going down an unproductive rabbit hole and spinning your tires. When I hit that compile button, I'm able to take a step back and maybe see the problem from another angle. Still, I prefer a short development loop, but I do think you lose something from it.
by jtrueb on 6/12/25, 5:30 PM
> when I started contributing to Rust back in 2021, my primary interest was compiler performance. So I started doing some optimization work. Then I noticed that the compiler benchmark suite could use some maintenance, so I started working on that. Then I noticed that we don’t compile the compiler itself with as many optimizations as we could, so I started working on adding support for LTO/PGO/BOLT, which further led to improving our CI infrastructure. Then I noticed that we wait quite a long time for our CI workflows, and started optimizing them. Then I started running the Rust Annual Survey, then our GSoC program, then improving our bots, then…