Compiler performance must be considered up front in language design. It is nearly impossible to fix once the language reaches a certain size without it being a priority. I recently saw here the observation that one can often get a 2x performance improvement through optimization, but 10x requires redesigning the architecture.
Rust can likely never be rearchitected without causing a disastrous schism in the community, so it seems probable that compilation will always be slow.
pjmlp1 day ago
Not only the language.
Many of the complaints towards Rust, or C++, are in reality tooling complaints.
As shown in other ecosystems, the availability of interpreters or image-based tooling is a great way to overcome slow optimizing compilers.
C++ already had a go at this back in the early 90's with Energize C++ and Visual Age for C++ v4, both based on Common Lisp and Smalltalk from their respective owners.
They failed on the market due to the hardware requirements for 90's budgets.
Now slowly coming back with tooling like Visual C++ hot reload improvements, debugging optimised builds, Live++, Jupyter notebooks.
Rational Software started their business selling Ada Machines, with the same development experience as Lisp Machines but with Ada, lovingly inspired by Xerox PARC's experience with Mesa and Mesa/Cedar.
Haskell and OCaml, besides the slow compilers, have bytecode interpreters and REPLs.
D has the super fast dmd, with ldc and gdc for the optimised builds suffering from longer compile times.
So while Rust cannot be architected in a different way, there is certainly plenty of room for interpreters, REPLs, not always compiling from source and many other tooling improvements, within the same language.
hinkley18 hours ago
I had a coworker who was using Rational back then, and found out one of its killer features was caching of precompiled headers. Whoever changed them had to pay the piper of compilation, but everyone else got a copy shipped to them over the local network.
kibwen1 day ago
It's certainly possible to think of language features that would preclude trivially-achievable high-performance compilation. None of those language features that are present in Rust (specifically, monomorphized generics) would have ever been considered for omission, regardless of their compile-time cost, because that would have compromised Rust's other goals.
panstromek1 day ago
There are many more mundane examples of language design choices in Rust that are problematic for compile time. Polymorphization (which has big potential to speed up compile time) has been blocked on pretty obscure problems with TypeId. Procedural macros require double parsing. The ability to define items in function bodies prevents skipping the parsing of bodies. Those things are not essential; they could pretty easily be tweaked to be less problematic for compile time without compromising anything.
Someone1 day ago
> would have ever been considered for omission, regardless of their compile-time cost, because that would have compromised Rust's other goals.
That basically says compiler speed isn’t a goal at all for Rust. I think that’s not completely true, but yes, speed of generated code definitely ranks very high for Rust.
In contrast, Wirth definitely had the speed at which the Oberon compiler compiled code as a goal (often quoted as that he only added compiler optimizations if they made the compiler itself so much faster that it didn’t become slower because of the added complexity, but I’m not sure he was that strict)
http://www.projectoberon.net/wirth/CompilerConstruction/Comp..., section 16.1:
“It is hardly surprising that certain measures for code improvement may yield considerable gains with modest effort, whereas others may require large increases in compiler complexity and size while yielding only moderate code improvements, simply because they apply in rare cases only.
Indeed, there are tremendous differences in the ratio of effort to gain. Before the compiler designer decides to incorporate sophisticated optimization facilities, or before deciding to purchase a highly optimizing, slow and expensive compiler, it is worth while clarifying this ratio, and whether the promised improvements are truly needed.
Furthermore, we must distinguish between optimizations whose effects could also be obtained by a more appropriate formulation of the source program, and those where this is impossible.
The first kind of optimization mainly serves the untalented or sloppy programmer, but merely burdens all the other users through the increased size and decreased speed of the compiler.
As an extreme example, consider the case of a compiler which eliminates a multiplication if one factor has the value 1. The situation is completely different for the computation of the address of an array element, where the index must be multiplied by the size of the elements. Here, the case of a size equal to 1 is frequent, and the multiplication cannot be eliminated by a clever trick in the source program.”
swsieber1 day ago
What about crates as the unit of compilation? I am genuinely curious because it's not clear to me what trade-offs there are around that decision.
fngjdflmdflg1 day ago
This was a big reason for Dart canceling its previous macros attempt (as I understand it). Fast compilation is integral for Flutter development - which accounts for a large percentage of Dart usage - so after IIRC more than two years of developing it, they still ended up not going through with that iteration of macros because it would make hot reload too slow. That degree of level-headedness and consideration is worthy of respect IMO.
krzat1 day ago
Dart is a meh language but their focus on hot reload single-handedly made it worth its existence.
WhyNotHugo1 day ago
One of the issues making compile times so awful is that all dependencies must be compiled for each project.
20 different projects use the same dependency? They each need to recompile it.
This is an effect of the language not having a proper ABI for compiling libraries as dynamically loadable modules, which in itself presents many other issues, including making distribution of software a complete nightmare.
kibwen1 day ago
> This is an effect of the language not having a proper ABI for compiling libraries as dynamically loadable modules
No, this is a design decision of Cargo to default to using project-local cached artifacts rather than caching them at the user or system level. You can configure Cargo to do so if you'd like. The reason it doesn't do this by default is because Cargo gives crates great latitude to configure themselves via compile-time flags, and any difference in flags means you get a different compiled artifact anyway. On top of that, there's the question of what `cargo clean` should do when you have a global cache rather than a local one.
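For reference, a minimal sketch of opting into a shared cache today (the directory path is hypothetical; artifacts still differ whenever features or compile-time flags differ):

```toml
# ~/.cargo/config.toml
# Point every project at one shared target directory so compiled
# dependencies can be reused across projects on the same machine.
[build]
target-dir = "/home/user/.cache/cargo-target"
```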
eddd-ddde22 hours ago
Dependencies must compile with the right features enabled. You can't possibly share the 2^n versions of every binary. ABI stability doesn't fix this.
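To illustrate the combinatorics with a hypothetical manifest: every subset of optional features that is actually requested yields a distinct compiled artifact, which is why naive sharing doesn't work:

```toml
# Three independent optional features => up to 2^3 = 8 distinct artifacts.
[features]
default = ["std"]
std = []
simd = []
serde = ["dep:serde"]

[dependencies]
serde = { version = "1", optional = true }
```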
surajrmal21 hours ago
If you use Bazel to compile Rust, it doesn't suffer from this problem. In fact you can get distributed caching as well.
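A sketch of what the caching side of that setup looks like (the cache endpoint is hypothetical; the Rust rules themselves come from the rules_rust project):

```
# .bazelrc
build --remote_cache=grpcs://cache.example.com
build --remote_download_minimal   # fetch only what the local build needs
```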
goodpoint23 hours ago
That's solved with sccache but even with that compilation time is still garbage
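For anyone who hasn't tried it, the usual sccache setup is just a wrapper around rustc (shown here via an environment variable; it can also be set in Cargo's config):

```sh
cargo install sccache
export RUSTC_WRAPPER=sccache
cargo build
sccache --show-stats   # inspect cache hit rates afterwards
```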
Fiahil1 day ago
At some point, the community is also responsible for the demanding expectation of a "not slow" compiler.
What's "slow"? What's "fast"? It depends. It depends on the program, the programmer, his or her hardware, the day of the week, the hour of the day, the season, what he or she had for lunch, ...
It's a never ending quest.
I, for example, am perfectly happy with the current benchmark of the Rust compiler. I find a 2x improvement absolutely excellent.
muth0244620 hours ago
The key to unlocking a 10x improvement to compilation speeds will likely be multithreading. I vaguely remember that LLVM struggled with this and I am not sure where it stands today. On the frontend side, language (not compiler) design will affect how well things can be parallelized, e.g. forward declarations probably help, mandatory interprocedural analyses probably hurt.
Having said that, we are in bad shape when golang compiling 40kLOC in 2s is a celebrated achievement. Assuming this is single-threaded on a 2GHz machine, we get:
2s * 2GHz / 40kLOC = 100k cycles / LOC
That seems like a lot of compute and I do not see how this cannot be improved substantially.
Shameless plug: the Cwerg language (http://cwerg.org) is very focused on compilation speeds.
felipeccastro17 hours ago
It is ironic how “rewrite it in Rust” is the solution to make any program fast, except the Rust compiler.
feelamee12 hours ago
Maybe rustc will never be re-architected (although it has already been rewritten once), but with the developing Rust standard there will come new Rust implementations. And there is a chance that they will prioritize performance when architecting.
hinkley18 hours ago
If the application works poorly for the developers it will eventually work poorly for everyone.
Being surrounded by suck slowly creeps into the quality of your work.
Computer programming is the only skilled labor I know of where people eschew quality tools and think they won’t output slop by doing so.
littlestymaar1 day ago
You're conflating language design and compiler architecture. It's hard to iterate on a compiler to get massive performance improvements, and rearchitecting can help, but you don't necessarily need to change anything in the language itself in that regard.
Roslyn (C#) is the best example of that.
It's a massive endeavor and would need significant funding to happen though.
dist1ll1 day ago
Language design can have massive impact on compiler architecture. A language with strict define-before-use and DAG modules has the potential to blow every major compiler out of the water in terms of compile times. ASTs, type checking, code generation, optimization passes, IR design, linking can all be significantly impacted by this language design choice.
formerly_proven1 day ago
No, language design decisions absolutely have a massive impact on the performance envelope of compilers. Think about things like tokenization rules (Zig is designed such that every line can be tokenized independently, for example), ambiguous grammars (most vexing parse, lexer hack, etc.), symbol resolution (e.g. explicit imports as in Python, Java or Rust versus "just dump eeet" imports as in C#, and also things like whether symbols can be defined after being referenced) and that's before we get to the really big one: type solving.
awestroke1 day ago
[flagged]
MeetingsBrowser1 day ago
The original comment is mostly in line with the article.
All the easy local optimizations have been done. Even mostly straightforward compiler-wide changes take a team of people multiple years to land.
Re-architecting the Rust compiler to be faster is probably not going to happen.
jplusequalt1 day ago
Having worked on large scale C++ code-bases and thus used to long compilation times, it surprises me that this is the hill many C++ devs would die on in regards to their dislike of Rust.
maccard1 day ago
I work on large C++ code bases day in, day out - think 30 minute compiles on an i9 with 128GB RAM and NVMe drives.
Rust's compile times are still ungodly slow. I contributed to a “small to medium” open source project [0] a while back, fixing a few issues that we came across when using it. Given that the project is approximately 3 orders of magnitude smaller than my day-to-day project, a clean build of a few thousand lines of Rust took close to 10 minutes. Incremental changes to the project were still closer to a minute at the time. I’ve never worked on a 5m+ LOC project in Rust, but I can only imagine how long it would take.
On the flip side, I also submitted some patches to a golang program of a similar size [1] and it was faster to clone, install dependencies and clean build that project than a single file change to the rust project was.
[0] https://github.com/getsentry/symbolicator
[1] https://github.com/buildkite/agent
Thanks for actually including the slow repo in your comment. My results on a Ryzen 5900X:
* Clean debug build: 1m 22s
* Incremental debug build: 13s
* Clean release build: 1m 51s
* Incremental release build: 24s
Incremental builds were done by changing one line in crates/symbolicator/src/cli.rs.
It's not great, but it sounds like your experience was much worse for some reason.
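For anyone wanting to reproduce numbers like these, a rough sketch of the measurement (the file path comes from the parent comment):

```sh
cargo build --release                   # clean build, timed after `cargo clean`
touch crates/symbolicator/src/cli.rs    # invalidate a single crate
time cargo build --release              # measure the incremental rebuild
```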
Fluorescence1 day ago
> clean build of a few thousand lines of rust took close to 10 minutes
That doesn't sound likely. I would expect seconds unless something very odd is happening.
Is the example symbolicator?
I can't build the optional "symbolicator-crash" crate because it's not rust but 300k of C++/C pulled from a git submodule that requires dependencies I am not going to install. Your complaint might literally be about C++!
For the rest of the workspace, 60k of rust builds in 60 seconds
- clean debug build on a 6 year old 3900X (which is under load because I am working)
- time includes fetching 650 deps over a poor network and building them (the real line count of the build is likely 100s of thousands or millions of lines of code)
- subsequent release build took 100s
- I use the mold linker, which is advised for faster builds (a config sketch follows after this list)
- modern cpus are so much faster than my machine they might not even take 10s
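For context, wiring up mold is a small config change; a sketch following mold's documented setup on Linux (the target triple and linker driver may differ per system):

```toml
# .cargo/config.toml
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```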
9d1 day ago
Just curious, are you still able to get instant feedback and development conveniences on that 30 minute compile time project, like up to date autocomplete and type hints and real-time errors/warnings while developing before compiling?
jason-johnson1 day ago
Can you say what your development environment was like? I was having 15 minute build times for a pretty small system. Everyone talks about how slow Rust compile times are so I thought that's just how it is. Then, by chance, I ended up building from a clean install on my work laptop and it took about 3 minutes from scratch.
My development environment is VS Code running in a Dev container in docker desktop. So after my work laptop was so fast, I made some changes to my Mac docker desktop and suddenly the mac could build the project from scratch in about 2 minutes. Incremental compile was several minutes before, instant now.
jplusequalt1 day ago
Yes, but Go is a higher level language than Rust. It feels unfair to compare the two. That's why I brought up C++ (as did the article).
hinkley18 hours ago
30 minutes versus 60 is really an hour versus two.
Some coworkers and I noticed a long time ago that once you try to task switch while doing build/test automation steps, it always seems like you remember to come back and check about twice as long as the compile was supposed to take. 7+ turned into 15, 15 into a half hour.
And then one day it hit me that this is just Hofstadter’s Law. You think you have ten minutes so you start a ten minute task and it takes you twenty, or you get in a flow and forget to look until your senses tell you you’re forgetting something.
Cutting 10 minutes off a build really averages 20 minutes in saved time per cycle. Which matters a hell of a lot when you go from 4 to 5 cycles per 8 hour day.
afdbcreid1 day ago
What are incremental compile times with the C++ codebase?
Also, does the line-of-code count include dependencies (admittedly, dependencies in Rust are a problem, but it's not related to compiler performance)?
phkahler1 day ago
>> it surprises me that this is the hill many C++ devs would die on in regards to their dislike of Rust
I believe people will exaggerate their current issue so it sounds like the only thing that matters to them. On another project I've had people say "This is the only thing that keeps me using commercial alternatives" or the only thing holding back wider adoption, or the only thing needed for blah blah blah. Meanwhile I've got my own list of high priority things needed to bring it to what I'd consider a basic level of completeness.
When it comes to performance it will never be good enough for everyone. There is always a bigger project to consume whatever resources are available. There are always people who insist on doing things in odd ways (maybe valid, but very atypical). These requests to improve are often indistinguishable from the regular ones.
pton_xd1 day ago
Makes sense to me! Everyone with enough C++ experience has dealt with that nightmare at one point. Never again, if you can help it.
0cf8612b2e1e1 day ago
It is a quantifiable negative to which you can always point. Of course it will be used for justifications.
logicchains1 day ago
There's a lot of things you can do in C++ to reduce compilation time if you care about it, that aren't possible with Rust.
kibwen1 day ago
You can absolutely do the same things in Rust, it's just that the culture and tooling of Rust encourages much larger compilation units than in C or C++, so you don't get the same sort of best-case nontrivial embarrassing-parallelism, forcing the compiler to do more work to parallelize.
To address the tooling pressure, I would like to see Cargo support first-class internal-only crates, thereby deconflating the crate, which is today both the unit of compilation and the unit of distribution.
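Today the closest approximation is a workspace of path-dependency crates; a hypothetical sketch (crate names invented, file boundaries marked in comments):

```toml
# Cargo.toml at the repository root
[workspace]
members = ["app", "app-core", "app-net"]

# app-core/Cargo.toml: an internal-only member
[package]
name = "app-core"
version = "0.1.0"
edition = "2021"
publish = false        # never allowed onto a registry

# app/Cargo.toml pulls it in by path
[dependencies]
app-core = { path = "../app-core" }
```

Splitting this way lets Cargo compile members in parallel and recompile only the crates whose sources changed, but the members remain externally visible crates rather than an implementation detail, which is the deconflation being asked for.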
MeetingsBrowser1 day ago
There are things you can do for Rust if it really is a deal breaker.
Dioxus has a hot reload system. Some rust game engines have done similar things.
bnolsen1 day ago
The answer there was to always write small standalone executable unit test sets and simulation for day to day coding. Avoiding template heavy pigs like QT or boost helps too.
akazantsev22 hours ago
> Avoiding template heavy pigs like QT
Well, you definitely have no experience with Qt.
throwaway66478620 hours ago
C++ is one of the fastest languages to compile*, assuming you aren't doing silly stuff like abusing templates. It just gets a bad rep because actual, real-world, massive projects are written in C++. Like, yeah, no wonder Chromium build times aren't spectacular, but I assure you that they'd be much, much worse if it was written in Rust. Pointing and scoffing at it when there's nothing written in Rust that we can even compare it to is just intellectually dishonest.
* It's not beating interpreted languages any time soon, but that's not really a fair comparison.
kristoff_it1 day ago
> Speaking of DoD, an additional thing to consider is the maintainability of the compiler codebase. Imagine that we swung our magic wand again, and rewrote everything over the night using DoD, SIMD vectorization, hand-rolled assembly, etc. It would (possibly) be way faster, yay! However, we do not only care about immediate performance, but also about our ability to make long-term improvements to it.
This is an unfortunate hyperbole from the author. There's a lot of distance between DoD and "hand-rolled assembly" and thinking that it's fair to put them in the same bucket to justify the argument of maintainability is just going to hurt the Rust project's ability to make a better compiler for its users.
You know what helps a lot making software maintainable? A Faster development loop. Zig has invested years into this and both users and the core team itself have started enjoying the fruits of that labor.
Of course everybody is free to choose their own priorities, but I find the reasoning flawed and I think that it would ultimately be in the Rust project's best interest to prioritize compiler performance more.
Rusky1 day ago
"Hand-rolled assembly" was one item in a list that also included DoD. You're reading way more into that sentence than they wrote- the claim is that DoD itself also impacts the maintainability of the codebase.
deadfa111 day ago
I was working on a zig project recently that uses some complex comptime type construction. I had bumped to the latest dev version from 0.13, and I couldn't believe how much improvement there has been in this area. I am very appreciative of really fast iteration cycles.
90s_dev1 day ago
Yeah but it's Zig. Rust is for when you want to write C but have it be easier. Zig is when you want it to be harder than C, but with more control over execution and allocation as a trade off.
AndyKelley1 day ago
For anyone who wants to form their own opinion about whether this style of programming is easier or harder than it would be in other languages:
>[...] this will depend on who you ask, e.g. some C++ developers don’t mind Rust’s compilation times at all, as they are used to the same (or worse) build times
Yeah pretty much. C++ is a lot worse when you consider the practical time spent vs compilation benchmarks.
In most C++ projects I've seen/worked on, there were one or sometimes more code generators in the toolchain which slowed things down a lot.
And it looks even more dire when you want to add clang-tidy in the mix. It can take like 5 solid minutes to lint even small projects.
When I work in Rust, the overall speed of the toolchain (and the language server) is an absolute blessing!
carlmr1 day ago
>And it looks even more dire when you want to add clang-tidy in the mix. It can take like 5 solid minutes to lint even small projects.
And running all tests with sanitizers, just to get some runtime checks of what Rust excludes at compile time.
I love Rust for the fast compile times.
feelamee12 hours ago
Why do you run clang-tidy with the compiler? Just use it interactively, with clangd. That is much more useful to me.
adrian171 day ago
> On this benchmark, the compiler is almost twice as fast than it was three years ago.
I think the cause of the public perception issue could be the variant of Wirth's law: the size of an average codebase (and its dependencies) might be growing faster than the compiler's improvements in compiling it?
IshKebab1 day ago
Yeah definitely when you include dependencies. Also I've noticed that when your dependency tree gets above a certain size you end up pulling in every alternative crate for a certain task, because e.g. one of your dependencies uses miniz_oxide and another uses zlib-rs (or whatever).
On the other hand, the compile time for most dependencies doesn't matter hugely because they are easy to build in parallel. It's always the last few crates and linking that take half the time.
jadbox1 day ago
Not related to the article, but after years of using Rust, it still is a pain in the ass. While it may be a good choice for OS development, high frequency trading, medical devices, vehicle firmware, finance software, or working on device drivers, it feels way overkill for most other general domains. On the other hand, I learned Zig and Go both over a weekend and find they run almost as fast and don't suffer from memory issues (as much as say Java or C++).
sfvisser1 day ago
This comment would have been more useful with some qualification of why that’s the case. The language, tooling, library ecosystem? Something else?
skrtskrt1 day ago
For me the hangup is that async is Still Hard. Just a ridiculous amount of internal implementation details exposed in order to just write, like, an http middleware.
We looked at proposing Rust as the second blessed language in addition to Go where I work, and the conclusion was basically... why?
We have skilled Go engineers that can drop down to manual memory management and squeeze lots of extra performance out of it. And it's still dead simple when you don't need to do that or the task is suitable for a junior engineer. And channels are simply one of the best concurrency primitives out there, and baked into the language unlike Rust where everything is library making independent decisions. (to be fair I haven't tried Elixir/Erlang message passing, I understand people like that too).
akazantsev22 hours ago
For Go, it's a design decision. From the start, they strived to make compilation as fast as possible.
Not to be that guy who comes to Rust’s defense whenever Go is mentioned, but... Rust protects from a much larger class of errors than just memory safety. For instance, it is impossible to invalidate an iterator while iterating over it, refer to an unset or invalid value, inadvertently merely shallow copy a variable, or forget to lock/unlock a mutex.
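A minimal sketch of the iterator-invalidation point (the push is rejected at compile time):

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    for x in &v {
        // Uncommenting the next line fails to compile with
        // error[E0502]: cannot borrow `v` as mutable because it
        // is also borrowed as immutable.
        // v.push(*x);
        println!("{x}");
    }
}
```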
codr71 day ago
If only these were common problems that were difficult to otherwise avoid.
90s_dev1 day ago
Could you elaborate on the memory issues in all four languages that you ran into?
kjuulh1 day ago
Could Rust be faster? Yes. But honestly, for our use case (shipping tools, services, libraries and what have you in production) it is plenty fast. That said, Rust definitely falls off a cliff once you get to a very large workspace (I'd say past 100k lines of code it begins to snowball), but you can design yourself out of that, unless you build truly massive apps.
Incremental builds don't disrupt my feedback loop much, only when paired with building for multiple targets at once, i.e. Leptos, where a wasm and a native build are run. Incremental builds do, however, eat up a lot of space, a comical amount even. I had a 28GB target/ folder yesterday from working a few hours on a Leptos app.
One recommendation is to definitely upgrade your CI workers; Rust definitely benefits from larger workers than the default GitHub Actions runners, as an example.
Compiling a fairly simple app, though including DuckDB which needs to be compiled, took 28 minutes on default runners, but on a 32-core machine we're down to around 3 minutes. Which is fast enough that it doesn't disrupt our feedback loop.
crohr1 day ago
What kind of CI runners do you use then? Do you self-host?
kjuulh23 hours ago
You can rent bigger runners from github. They're still not as fast as third party ones, but it takes 5 minutes to set up and is still pay as you go. I just see a lot of people use the default ones, which are very small.
ruuda1 day ago
The Rust ecosystem is getting slower faster than the compiler is getting faster. Libraries grow to add features, they add dependencies. Individually the growth is not so bad, and justified by features or wider platform support. But they add up, and especially dependencies adding dependencies act as a multiplier.
I started writing a post about this many years ago, but never finished it. I took a few slow-changing projects of mine that had a pinned Rust compiler, and then updated both the compiler and dependencies to the latest versions. Invariably, everything got slower to compile, even though the compiler update in isolation made things faster!
eddd-ddde22 hours ago
When a platform has good support and is easy to onboard this is the inevitable result. It's just like the JavaScript ecosystem.
But this is not a downside. Just like I can start a new website project and not use a single dependency, I can start a new rust project and not install a single dependency.
To me the real value is in the tools and core language feature. I could probably implement my own minimal ad-hoc async IO framework if I wanted to, and shape it to my needs. No dependencies.
simonask1 day ago
I don't know, these things ebb and flow.
There's a bit of pushback against high-dependency project structures and compile times recently, and even niche crates like `unsynn` have garnered some attention as an alternative to the relatively heavy `syn` crate.
kunley1 day ago
The article is fine and has a lot of good points, but tries to avoid the main issue like the plague. So I will state it here:
The slowness comes mainly from LLVM.
kobzol1 day ago
For many use-cases yes, but there are crates bottlenecked on different things than the codegen backend.
But I don't think that's the point. We could get rid of LLVM and use other backends, same as we could do other improvements. The point is that there are also other priorities and we don't have enough manpower to make progress faster.
panstromek1 day ago
This is somewhat true but also a bit misleading. A lot of the problems come from how Rust interacts with it, and how Rust projects are structured. This ultimately shows up as time in LLVM, but LLVM is not entirely responsible for it.
Animats1 day ago
This isn't a huge problem. My big Rust project compiles in about a minute in release mode. Failed compiles with errors only take a few seconds. That's where most of the debugging takes place. Once it compiles, it usually works the first time.
johnfn1 day ago
A minute is pretty bad. I understand it may work for your use case, but there are plenty of use cases out there where errors typically don't fail the compile and a minute iteration time is a deal killer. For instance: UI work - good luck catching an incorrect color with a compile error. Vite can compile 40,000 loc and display it on your screen in probably a couple of milliseconds.
afdbcreid1 day ago
But how big are your big projects?
vlovich1231 day ago
I wonder how much value there is in skipping LLVM in favor of having an optimizing JIT linked in instead. For release builds it would get you a reasonable proxy if it optimized decently, while still retaining better debuggability.
I wonder if the JVM as an initial target might be interesting given how mature and robust their JIT is.
bbatha1 day ago
> I wonder how much value there is in skipping LLVM in favor of having an optimizing JIT linked in instead. For release builds it would get you a reasonable proxy if it optimized decently, while still retaining better debuggability.
Rust is in the process of building out the cranelift backend. Cranelift was originally built to be a JIT compiler. The hope is that this can become the debug build compiler.
The JVM is not a very meaningful target for Rust since it does not use C-like flat memory addressing and pointer arithmetic. It's as if every single Java object and field is sitting in its own tiny memory segment/address space. On the one hand, this makes it essentially transparent to GC, which is a key property for Java; OTOH, it means that compiling C-like languages to the JVM is usually done by reimplementing "memory" as a JVM array of byte values.
We almost definitely need to build a JIT in the future to avoid this problem.
dhruvrajvanshi1 day ago
I would love this in modern languages.
For dev builds, I see JIT compilation as a better deal than debug builds because it's capable of eventually reaching peak performance. For performance sensitive stuff like games, it really matters to keep a nice feedback loop without making the game unusable by turning off all optimizations.
AOT static binaries are valuable for deployments.
No idea how expensive it would be to develop for an existing language like Rust though.
lrvick1 day ago
If anyone wants to feel better about compile times for their rust programs, try full source bootstrapping the rust compiler itself. Took about 2 days on 64 cores until very recently (thanks to mrustc 0.74). Now only 7 hours!
bnolsen1 day ago
Compilation speed makes Go nice. Zig should end up being king here, depending on comptime use (i.e. the lack of operators can be overcome by using comptime to parse formula strings for things like geometric algebra).
johnfn1 day ago
> First, let me assure you - yes, we (as in, the Rust Project) absolutely do care about the performance of our beloved compiler, and we put in a lot of effort to improve it.
I'm probably being ungrateful here, but here goes anyway. Yes, Rust cares about performance of the compiler, but it would likely be more accurate to say that compiler performance is, like, 15th on the list of things they care about, and they'll happily trade off slower compile times for one of the other things.
I find posts about Rust like this one, where they say "ah, of course we care about perf, look, we got the compile times on a somewhat nontrivial project to go from 1m15s to 1m09s" somewhat underwhelming - I think they miss the point. For me, I basically only care if compile times are virtually instantaneous. e.g. Vite scales to a million lines and can hot-swap my code changes in instantaneously. This is where the productivity benefits come in.
Don't just trust me on it. Remember this post[1]?
> "I feels like some people realize how much more polish could their games have if their compile times were 0.5s instead of 30s. Things like GUI are inherently tweak-y, and anyone but users of godot-rust are going to be at the mercy of restarting their game multiple times in order to make things look good. "
You have a fair point. I agree that while compiler performance is a priority, it is one of many priorities, and not currently super high on the list for many Rust Project developers. I wish it was different, but the only thing we can do is just do the work to make it faster :) Or support the people that work on it.
kalaksi1 day ago
Isn't Vite for javascript though, which is, of course, a scripting language?
Btw, I've used QML and Dioxus with rust (not for games). Both make hot reloading the GUI parts possible without recompiling since that part is basically not rust (Dioxus in a bit more limited manner).
daxfohl1 day ago
Maybe these features already exist, but I'd like a way to: 1) Type check without necessarily building the whole thing. 2) Run a unit test, only building the dependencies of that test. Do these exist or are they remotely feasible?
panstromek1 day ago
cargo check exists for (1). For (2), it depends on the project structure. Either way, they don't help as much as you would hope for.
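Concretely, a sketch of both (the package and test names are hypothetical):

```sh
cargo check                  # (1) type-check the project without codegen
cargo test -p some-crate     # (2) build and run tests for one workspace member only
cargo test parser_roundtrip  # or filter to tests whose names match a string
```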
lpapez1 day ago
You don't even need to ask AI to get an answer to the first question, the first hit on both Google and Bing will tell you how to do it - it takes 2 seconds!
baalimago1 day ago
Why hasn't Rust been forked by some bigger company, one which has the time and resources to specialize it into something that fits better into a professional market? Yes, I'm saying low compilation time -> fast development round-trips is a requirement for the professional market.
WJW1 day ago
Maybe it has been, but said bigger company hasn't published their work?
I disagree fast compile times are "required" for the professional market btw. They are nice, sure, but there's plenty of professional development out there in languages that are slow to compile.
bob10291 day ago
> bigger company, who have the time and resources to specialize it into something which fits better into a professional market
Welcome to the central thesis for using Microsoft's stack.
If I'm getting paid money based upon the direct outcome of my work (i.e., freelance / consulting / 1099), I am taking zero chances with the tooling. $500 for a perpetual license of VS is the cheapest option by miles if you value your time and sanity.
Iteration time is nice, but the debugger experience is the most important thing once you are working on problems people are actually willing to pay money to solve. Just because it's "possible" doesn't mean it is ergonomic or accessible. I don't have to exit my IDE if I want to attach to prod and run a cheeky snippet of LINQ on a collection in break mode to investigate a runtime curiosity.
panstromek1 day ago
The original title is:
Why doesn't Rust care more about compiler performance?
mellosouls1 day ago
(OP) I submitted this some time ago, and am pretty sure I would have submitted the title as is, so I'm guessing some manual or automatic editing since by the mods before the second chance here.
synthos1 day ago
Regarding AVX: could Rust be compiled with different symbols that target different x64 instruction sets, then at runtime choose the symbol set that is the most performant for that architecture?
kobzol1 day ago
I'm not sure how that works. You either let the compiler compile your whole program with AVX (which duplicates the binary) or you manually use AVX with runtime detection in selected places (which requires writing manual vectorization).
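A minimal sketch of that second option, using the standard library's runtime feature detection (illustrative only; a real hot loop would likely use std::arch intrinsics rather than relying on auto-vectorization):

```rust
// One binary, with a hand-picked function compiled for AVX2 and
// selected at runtime if the CPU supports it.
#[target_feature(enable = "avx2")]
unsafe fn sum_avx2(data: &[f32]) -> f32 {
    // With AVX2 enabled for this function, the compiler may
    // auto-vectorize the loop using 256-bit registers.
    data.iter().sum()
}

fn sum(data: &[f32]) -> f32 {
    if is_x86_feature_detected!("avx2") {
        // SAFETY: we just verified at runtime that AVX2 is available.
        unsafe { sum_avx2(data) }
    } else {
        data.iter().sum()
    }
}

fn main() {
    println!("{}", sum(&[1.0, 2.0, 3.0]));
}
```

(This only compiles on x86/x86_64 targets, which is exactly the per-architecture duplication being discussed.)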
littlestymaar1 day ago
In my experience working on medium-sized Rust projects (hundreds of thousands of LoCs, not millions), incremental compilation and mold pretty much solved the problem in practice. I still occasionally code on my 13 years old laptop when traveling and compilation time is fine even there (for cargo check and debug build, that is, I barely ever compile in release mode locally).
What's painful is compiling from scratch, and particularly the fact that every other week I need to run cargo clean and do a full rebuild to get things working. IMHO this is a much bigger annoyance than raw compiler speed.
sureglymop1 day ago
Yes but are these the default? I want this to work pleasantly out of the box so we don't scare new users as quickly.
baalimago1 day ago
Seems to me that Rust has hit bedrock.
If there's no tangible solution to this design flaw today, what will happen to it in 20 years? My expectation is that the amount of dependencies will increase, as will the complexity of the Rust ecosystem at large, which will make the compilation times even worse.
kobzol1 day ago
I don't think we've hit bedrock. As I wrote, we have a lot of ideas for massive improvements. But we need more people to work on them.
zozbot2341 day ago
It's OK; 20 years ought to be enough time to rewrite LLVM in Rust.
dboreham1 day ago
I'd vote for filesystem space utilization to be worked on before performance.
The problems are largely related. Cut down the amount of intermediate compilation artifacts by half and you'll have sped up the compiler substantially. Monomorphization and iterator expansion and such is a significant contributor to both issues.
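A small illustration of why monomorphization multiplies artifacts: generic functions are duplicated per concrete type, while dynamic dispatch keeps one copy:

```rust
use std::fmt::Display;

// Monomorphized: the compiler emits a separate copy of this function
// for every concrete T it is instantiated with, growing both compile
// time and the intermediate artifacts on disk.
fn print_static<T: Display>(value: T) {
    println!("{value}");
}

// Dynamic dispatch: one compiled copy, at the cost of a vtable call.
fn print_dyn(value: &dyn Display) {
    println!("{value}");
}

fn main() {
    print_static(1);     // instantiates print_static::<i32>
    print_static("two"); // instantiates print_static::<&str>
    print_dyn(&3.0);     // no new instantiation
}
```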
scripturial1 day ago
One of the reasons I quit Rust is literally because having 4-5 projects checked out that use serde would fill my laptop drive with junk in a few weeks.
MeetingsBrowser1 day ago
Why not both?
If I had to choose though, I would choose compilation speed. Buying an SSD to double my storage is much more cost effective than buying a bulkier processor to halve my compilation times.
stevedonovan1 day ago
Is this not more a Cargo thing? Cargo is obsessed with correct builds and eventually the file system fills up with old artifacts.
(I know, I have to declare Cargo bankruptcy every few weeks and do a full clean & rebuild)
jakkos1 day ago
Not that this isn't a problem (it is; target folders currently take up ~100GB on my machine), but...
I'd still, by far, prefer a tiny incremental compile speed increase over a substantial storage reduction. I can get a bigger SSD, I can't get my time back :'(
ivanjermakov1 day ago
This is one of the only reasons I disliked Haskell. GHC and lib files can easily take over 2GB of storage.
preciousoo1 day ago
What’s stopping cargo from storing libraries in one global directory(via hash or whatever), to be re-used whenever needed?
kzrdude1 day ago
Performance has been worked on since Rust 1.0 or so, for all this time, there has been lots of work on compiler performance. There's no "before performance" :)
jmyeet1 day ago
I'm a big fan of Rust but there are definitely warts that are going to be difficult to cure [1]. This is 5 years old now but I believe it's still largely relevant.
It is a weird hill to die on for C/C++ devs though, given header files and templates creating massive compile-time issues that really can't be solved.
Google is known for having infrastructure for compiling large projects. They use Blaze (open-sourced as Bazel) to define hermetic builds, then use large systems to cache object graphs (for compilation units) and cache compiled objects, because Google uses some significant monoliths that would take a significant amount of time to compile from scratch.
I wonder what this kind of infrastructure can do for a large Rust project.
I think there is a massive difference in compile times between idiomatic C and C++, so it's problematic to be lumping them together. But there is also some selection bias, since large projects tend to migrate from C to C++.
superkuh23 hours ago
The biggest problem with the Rust compiler is not its speed in compiling. It's that rustc from 3 months ago can't compile most Rust code written today. And don't tell me that cargo versioning fixes this, it doesn't. The very improvements we are celebrating here, which are very real and appreciated, are part of this problem. Rust is young and Rust changes very, very fast. I think it'll be a great language in a decade when it's no longer just used by bleeding edge types and has a target that stands still for more than a few months.
jtrueb1 day ago
A true champion
> when I started contributing to Rust back in 2021, my primary interest was compiler performance. So I started doing some optimization work. Then I noticed that the compiler benchmark suite could use some maintenance, so I started working on that. Then I noticed that we don’t compile the compiler itself with as many optimizations as we could, so I started working on adding support for LTO/PGO/BOLT, which further led to improving our CI infrastructure. Then I noticed that we wait quite a long time for our CI workflows, and started optimizing them. Then I started running the Rust Annual Survey, then our GSoC program, then improving our bots, then…
c-cube1 day ago
I'm worried this person is going to experience a Yak overflow, honestly.
jamesmunns1 day ago
Kobzol is an absolutely wonderful person to work with. I also work in the Rust project, and any time I've interacted with him, he's been great.
agumonkey1 day ago
Talk about proper `continuous improvement`
ModernMech21 hours ago
The biggest thing that's happened in recent times to improve Rust compiler performance was the introduction of the Apple M-series chips. On my x86 machine, it'll take maybe 10 minutes for a fresh build of my project, but on my Apple machine that's down to less than a minute, even on the lower-end Mac Mini. For incremental builds it only takes a few seconds. I'm fine with this amount of compilation time for what it buys me, and I don't feel it slows me down any, because (and I know this sounds like cope) it gives me a minute to breathe and collect my thoughts. Sometimes I find that debugging a problem while actively coding in an interactive REPL is different from debugging offline.
I'm not sure why, but the way I would explain it is: when you're debugging in an interactive REPL you always get fast incremental results, but you may be going down an unproductive rabbit hole and spinning your tires. When I hit that compile button, I'm able to take a step back and maybe see the problem from another angle. Still, I prefer a short development loop, but I do think you lose something from it.
Compiler performance must be considered up front in language design. It is nearly impossible to fix once the language reaches a certain size without it being a priority. I recently saw here the observation that one can often get a 2x performance improvement through optimization, but 10x requires redesigning the architecture.
Rust can likely never be rearchitected without causing a disastrous schism in the community, so it seems probable that compilation will always be slow.
Not only language.
Many of complaints towards Rust, or C++, are in reality tooling complaints.
As shown on other ecosystems, the availability of interpreters or image based tooling are great ways to overcome slow optimizating compilers.
C++ already had a go at this back in the early 90's with Energize C++ and Visual Age for C++ v4, both based on Common Lisp and Smalltalk from their respective owners.
They failed on the market due to the hardware requirements for 90's budgets.
Now slowly coming back with tooling like Visual C++ hot reload improvements, debugging optimised builds, Live++, Jupiter notebooks.
Rational Software started their business selling Ada Machines, the same development experience as Lisp Machines, but with Ada, lovely inspired on Xerox PARC experience with Mesa and Mesa/Cedar.
Haskell and OCaml, besides the slow compilers, have bytecode interpreters and REPLs.
D has the super fast dms, with ldc and gdc, for the optimised builds suffering from longer compile times.
So while Rust cannot be archited in a different way, there is certainly plenty of room for interpreters, REPLs, not compiling always from source and many other tooling improvements, within the same language.
I had a coworker who was using Rational back then, and found out one of its killer features was caching of pre compiled headers. Whoever changed them had to pay the piper of compilation, but everyone else got a copy shipped to them over the local network.
undefined
It's certainly possible to think of language features that would preclude trivially-achievable high-performance compilation. None of those language features that are present in Rust (specifically, monomorphized generics) would have ever been considered for omission, regardless of their compile-time cost, because that would have compromised Rust's other goals.
There are many more mundane examples of language design choices in rust that are problematic for compile time. Polymorphization (which has big potential to speed up compile time) has been blocked on pretty obscure problems with TypeId. Procedural macros require double parsing. Ability to define items in function bodies prevents skipping parsing bodies. Those things are not essential, they could pretty easily be tweaked to be less problematic for compile time without compromising anything.
undefined
undefined
> would have ever been considered for omission, regardless of their compile-time cost, because that would have compromised Rust's other goals.
That basically says compiler speed isn’t a goal at all for Rust. I think that’s not completely true, but yes, speed of generated code definitely ranks very high for rust.
In contrast, Wirth definitely had the speed at which the Oberon compiler compiled code as a goal (often quoted as that he only added compiler optimizations if they made the compiler itself so much faster that it didn’t become slower because of the added complexity, but I’m not sure he was that strict)
http://www.projectoberon.net/wirth/CompilerConstruction/Comp..., section 16.1:
“It is hardly surprising that certain measures for code improvement may yield considerable gains with modest effort, whereas others may require large increases in compiler complexity and size while yielding only moderate code improvements, simply because they apply in rare cases only.
Indeed, there are tremendous differences in the ratio of effort to gain. Before the compiler designer decides to incorporate sophisticated optimization facilities, or before deciding to purchase a highly optimizing, slow and expensive compiler, it is worth while clarifying this ratio, and whether the promised improvements are truly needed.
Furthermore, we must distinguish between optimizations whose effects could also be obtained by a more appropriate formulation of the source program, and those where this is impossible.
The first kind of optimization mainly serves the untalented or sloppy programmer, but merely burdens all the other users through the increased size and decreased speed of the compiler.
As an extreme example, consider the case of a compiler which eliminates a multiplication if one factor has the value 1. The situation is completely different for the computation of the address of an array element, where the index must be multiplied by the size of the elements. Here, the case of a size equal to 1 is frequent, and the multiplication cannot be eliminated by a clever trick in the source program.”
undefined
What about crates as the unit of compilation? I am genuinely curious because it's not clear to me what trade-offs there are around that decision.
undefined
undefined
This was a big reason for dart canceling its previous macros attempt (as I understand it). Fast compilation is integral for Flutter development - which accounts for a late percentage of dart usage - so after IIRC more than two years of developing it they still ended up not going through with that iteration of macros because it would make hot reload too slow. That degree of level-headedness and consideration is worthy of respect IMO.
Dart is a meh language but their focus on hot reload single handedly made it worth it's existence.
One of the issue why compile times are so awful is that all dependencies must be compiled for each project.
20 different projects use the same dependency? They each need to recompile it.
This is an effect of the language not having a proper ABI for compiling libraries as dynamically loadable modules, which in itself presents many other issues, including making distribution of software a complete nightmare.
> This is an effect of the language not having a proper ABI for compiling libraries as dynamically loadable modules
No, this is a design decision of Cargo to default to using project-local cached artifacts rather than caching them at the user or system level. You can configure Cargo to do so if you'd like. The reason it doesn't do this by default is because Cargo gives crates great latitude to configure themselves via compile-time flags, and any difference in flags means you get a different compiled artifact anyway. On top of that, there's the question of what `cargo clean` should do when you have a global cache rather than a local one.
undefined
undefined
Dependencies must compile with the right features enabled. You can't possibly share the 2^n versions of every binary. ABI stability doesn't fix this.
If you use bazel to compile rust, it doesn't suffer from this problem. In fact you can get distributed caching as well.
That's solved with sccache but even with that compilation time is still garbage
At some point, the community is also responsible for the demanding expectation of a "not slow" compiler.
What's "slow"? What's "fast"? It depends. It depends on the program, the programmer, his or her hardware, the day of the week, the hour of the day, the season, what he or she had for lunch, ...
It's a never ending quest.
I, for exemple, am perfectly happy with the current benchmark of the rust compiler. I find a x2 improvement absolutly excellent.
The key to unlocking a 10x improvement to compilation speeds will like be multithreading. I vaguely remember that LLVM struggled with this and I am not sure where it stands today. On the frontend side language (not compiler) design will affect how well things can be parallelized, e.g. forward declatations probably help, mandatory interprocedural anaylyses probably hurt.
Having said that, we are in a bad shape when golang compiling 40kLOC in 2s is a celebrated achievement. Assuming this is single threaded on a 2GHz machine, we 2s * 2GHz / 40kLOC = 100k [cycles] / LOC
That seems like a lot of compute and I do not see how this cannot be improved substantially.
Shameless plug: the Cwerg language (http://cwerg.org) is very focussed on compilation speeds.
It is ironic how “rewrite it in Rust” is the solution to make any program fast, except the Rust compiler.
maybe rustc will never be re-architectured (although it has already been rewritten once), but with developing rust standard there will come new Rust implementations. And there is a chance that they will prioritize performance when architecting.
If the application works poorly for the developers it will eventually work poorly for everyone.
Being surrounded by suck slowly creeps into the quality of your work.
Computer programming is the only skilled labor I know of where people eschew quality tools and think they won’t output slop by doing so.
You're conflating language design and compiler architecture. It's hard to increment on a compiler to get massive performance improvement, and rearchitecture can help, but you don't necessarily need to change anything to the language itself in that regard.
Roslyn (C#) is the best example of that.
It's a massive endeavor and would need significant fundings to happen though.
Language design can have massive impact on compiler architecture. A language with strict define-before-use and DAG modules has the potential to blow every major compiler out of the water in terms of compile times. ASTs, type checking, code generation, optimization passes, IR design, linking can all be significantly impacted by this language design choice.
No, language design decisions absolutely have a massive impact the performance envelope of compilers. Think about things like tokenization rules (Zig is designed such that every line can be tokenized independently, for example), ambiguous grammars (most vexing parse, lexer hack etc.), symbol resolution (e.g. explicit imports as in Python, Java or Rust versus "just dump eeet" imports as in C#, and also things whether symbols can be defined after being referenced) and that's before we get to the really big one: type solving.
undefined
undefined
[flagged]
The original comment is mostly inline with the article.
All the easy local optimizations have been done. Even mostly straightforward compiler wide changes take a team of people multiple years to land.
Re-architecting the rust compiler to be faster is probably not going to happen.
undefined
Having worked on large scale C++ code-bases and thus used to long compilation times, it surprises me that this is the hill many C++ devs would die on in regards to their dislike of Rust.
I work on large c++ code bases day in day out - think 30 minute compiles on an i9 with 128GB ram and NVMe drives.
Rusts compile times are still ungodly slow. I contributed to a “small to medium” open source project [0] a while back, fixing a few issues that we came across when using it. Given that the project is approximately 3 orders of magnitude smaller than my day to day project, a clean build of a few thousand lines of rust took close to 10 minutes. Incremental changes to the project were still closer to a minute at the time. I’ve never worked on a 5m+ LOC project in rust, but I can only imagine how long it would take.
On the flip side, I also submitted some patches to a golang program of a similar size [1] and it was faster to clone, install dependencies and clean build that project than a single file change to the rust project was.
[0] https://github.com/getsentry/symbolicator
[1] https://github.com/buildkite/agent
Thanks for actually including the slow repo in your comment. My results on a Ryzen 5900X:
* Clean debug build: 1m 22s
* Incremental debug build: 13s
* Clean release build: 1m 51s
* Incremental release build: 24s
Incremental builds were done by changing one line in creates/symbolicator/src/cli.rs.
It's not great, but it sounds like your experience was much worse for some reason.
undefined
undefined
> clean build of a few thousand lines of rust took close to 10 minutes
That doesn't sound likely. I would expect seconds unless something very odd is happing.
Is the example symbolicator?
I can't build the optional "symbolicator-crash" crate because it's not rust but 300k of C++/C pulled from a git submodule that requires dependencies I am not going to install. Your complaint might literally be about C++!
For the rest of the workspace, 60k of rust builds in 60 seconds
- clean debug build on a 6 year old 3900X (which is under load because I am working)
- time includes fetching 650 deps over a poor network and building them (the real line count of the build is likely 100s of thousands or millions of lines of code)
- subsequent release build took 100s
- I use the mold linker which is advised for faster builds
- modern cpus are so much faster than my machine they might not even take 10s
undefined
Just curious, are you still able to get instant feedback and development conveniences on that 30 minute compile time project, like up to date autocomplete and type hints and real-time errors/warnings while developing before compiling?
undefined
Can you say what your development environment was like? I was having 15 minute build times for a pretty small system. Everyone talks about how slow Rust compile times are so I thought that's just how it is. Then, by chance, I ended up building from a clean install on my work laptop and it took about 3 minutes from scratch.
My development environment is VS Code running in a Dev container in docker desktop. So after my work laptop was so fast, I made some changes to my Mac docker desktop and suddenly the mac could build the project from scratch in about 2 minutes. Incremental compile was several minutes before, instant now.
Yes, but Go is a higher level language than Rust. It feels unfair to compare the two. That's why I brought up C++ (as did the article).
30 minutes versus 60 is really an hour versus two.
Some coworkers and I noticed a long time ago that once you try to task-switch during build/test automation steps, it always seems like you remember to come back and check after about twice as long as the compile was supposed to take. 7+ minutes turned into 15, 15 into half an hour.
And then one day it hit me that this is just Hofstadter's Law. You think you have ten minutes, so you start a ten-minute task and it takes you twenty, or you get into a flow and forget to look until your senses tell you you're forgetting something.
Cutting 10 minutes off a build really saves an average of 20 minutes per cycle. Which matters a hell of a lot when it takes you from 4 to 5 cycles per 8-hour day.
What are the incremental compile times in the C++ codebase?
Also, does the line count you quote include dependencies? (Admittedly, dependencies in Rust are a problem, but that's not related to compiler performance.)
>> it surprises me that this is the hill many C++ devs would die on in regards to their dislike of Rust
I believe people will exaggerate their current issue so it sounds like the only thing that matters to them. On another project I've had people say "This is the only thing that keeps me using commercial alternatives" or the only thing holding back wider adoption, or the only thing needed for blah blah blah. Meanwhile I've got my own list of high priority things needed to bring it to what I'd consider a basic level of completeness.
When it comes to performance it will never be good enough for everyone. There is always a bigger project to consume whatever resources are available. There are always people who insist on doing things in odd ways (maybe valid, but very atypical). These requests to improve are often indistinguishable from the regular ones.
Makes sense to me! Everyone with enough C++ experience has dealt with that nightmare at one point. Never again, if you can help it.
It is a quantifiable negative to which you can always point. Of course it will be used for justifications.
There's a lot of things you can do in C++ to reduce compilation time if you care about it, that aren't possible with Rust.
You can absolutely do the same things in Rust; it's just that the culture and tooling of Rust encourage much larger compilation units than in C or C++, so you don't get the same kind of best-case embarrassing parallelism, and the compiler has to do more work to parallelize things itself.
To address the tooling pressure, I would like to see Cargo support first-class internal-only crates, thereby deconflating the crate as what is today both the unit of compilation and the unit of distribution.
There are things you can do for Rust if it really is a deal breaker.
Dioxus has a hot-reload system. Some Rust game engines have done similar things.
The answer there was to always write small standalone executables for unit tests and simulations for day-to-day coding. Avoiding template-heavy pigs like Qt or Boost helps too.
> Avoiding template-heavy pigs like Qt
Well, you definitely have no experience with Qt.
C++ is one of the fastest languages to compile*, assuming you aren't doing silly stuff like abusing templates. It just gets a bad rap because actual, real-world, massive projects are written in C++. Like, yeah, no wonder Chromium's build times aren't spectacular, but I assure you that they'd be much, much worse if it were written in Rust. Pointing and scoffing at it when there's nothing written in Rust that we can even compare it to is just intellectually dishonest.
* It's not beating interpreted languages any time soon, but that's not really a fair comparison.
> Speaking of DoD, an additional thing to consider is the maintainability of the compiler codebase. Imagine that we swung our magic wand again, and rewrote everything over the night using DoD, SIMD vectorization, hand-rolled assembly, etc. It would (possibly) be way faster, yay! However, we do not only care about immediate performance, but also about our ability to make long-term improvements to it.
This is unfortunate hyperbole from the author. There's a lot of distance between DoD and "hand-rolled assembly", and thinking it's fair to put them in the same bucket to justify the maintainability argument is just going to hurt the Rust project's ability to make a better compiler for its users.
You know what helps a lot in making software maintainable? A faster development loop. Zig has invested years into this, and both users and the core team itself have started enjoying the fruits of that labor.
https://ziglang.org/devlog/2025/#2025-06-08
Of course everybody is free to choose their own priorities, but I find the reasoning flawed and I think that it would ultimately be in the Rust project's best interest to prioritize compiler performance more.
"Hand-rolled assembly" was one item in a list that also included DoD. You're reading way more into that sentence than they wrote- the claim is that DoD itself also impacts the maintainability of the codebase.
I was working on a zig project recently that uses some complex comptime type construction. I had bumped to the latest dev version from 0.13, and I couldn't believe how much improvement there has been in this area. I am very appreciative of really fast iteration cycles.
Yeah, but it's Zig. Rust is for when you want to write C but have it be easier. Zig is for when you want it to be harder than C, but with more control over execution and allocation as a trade-off.
For anyone who wants to form their own opinion about whether this style of programming is easier or harder than it would be in other languages:
https://github.com/ziglang/zig/blob/0.14.1/lib/std/zig/token...
I had almost the exact opposite experience.
>[...] this will depend on who you ask, e.g. some C++ developers don’t mind Rust’s compilation times at all, as they are used to the same (or worse) build times
Yeah, pretty much. C++ is a lot worse when you consider practical time spent rather than compilation benchmarks. In most C++ projects I've seen or worked on, there were one or sometimes more code generators in the toolchain, which slowed things down a lot.
And it looks even more dire when you want to add clang-tidy in the mix. It can take like 5 solid minutes to lint even small projects.
When I work in Rust, the overall speed of the toolchain (and the language server) is an absolute blessing!
>And it looks even more dire when you want to add clang-tidy in the mix. It can take like 5 solid minutes to lint even small projects.
And running all tests with sanitizers, just to get some runtime checks of what Rust excludes at compile time.
I love Rust for the fast compile times.
Why do you run clang-tidy alongside the compiler? Just use it interactively - with clangd. That is much more useful to me.
> On this benchmark, the compiler is almost twice as fast than it was three years ago.
I think the cause of the public perception issue could be a variant of Wirth's law: the size of the average codebase (and its dependencies) might be growing faster than the compiler's improvements in compiling it?
Yeah definitely when you include dependencies. Also I've noticed that when your dependency tree gets above a certain size you end up pulling in every alternative crate for a certain task, because e.g. one of your dependencies uses miniz_oxide and another uses zlib-rs (or whatever).
On the other hand, the compile time for most dependencies doesn't matter hugely, because they are easy to build in parallel. It's always the last few crates and linking that take half the time.
Not related to the article, but after years of using Rust, it is still a pain in the ass. While it may be a good choice for OS development, high-frequency trading, medical devices, vehicle firmware, finance software, or device drivers, it feels way overkill for most other general domains. On the other hand, I learned Zig and Go each over a weekend, and find they run almost as fast and don't suffer from memory issues (as much as, say, Java or C++).
This comment would have been more useful with some qualification of why that’s the case. The language, tooling, library ecosystem? Something else?
For me the hangup is that async is Still Hard. Just a ridiculous amount of internal implementation details exposed in order to just write, like, an http middleware.
We looked at proposing Rust as the second blessed language in addition to Go where I work, and the conclusion was basically... why?
We have skilled Go engineers who can drop down to manual memory management and squeeze lots of extra performance out of it. And it's still dead simple when you don't need to do that, or when the task is suitable for a junior engineer. And channels are simply one of the best concurrency primitives out there, baked into the language, unlike Rust, where everything is a library making independent decisions. (To be fair, I haven't tried Elixir/Erlang message passing; I understand people like that too.)
For Go, it's a design decision. From the start, they strived to make compilation as fast as possible.
https://en.wikipedia.org/wiki/Go_(programming_language)#Desi...
Not to be that guy who comes to Rust's defense whenever Go is mentioned, but... Rust protects against a much larger class of errors than just memory safety. For instance, it is impossible to invalidate an iterator while iterating over it, refer to an unset or invalid value, inadvertently make a mere shallow copy of a variable, or forget to lock/unlock a mutex.
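To make the first and last of those concrete, here's a minimal sketch (the names and values are illustrative only):

    fn main() {
        let mut v = vec![1, 2, 3];
        for x in &v {
            // v.push(*x); // error[E0502]: cannot borrow `v` as mutable
            //             // because it is also borrowed as immutable
            println!("{x}");
        }

        // Mutex<T> owns its data: the only way to reach the value is
        // through the guard returned by lock(), and unlocking happens
        // automatically when the guard is dropped, so "forgot to
        // lock/unlock" is simply not expressible.
        use std::sync::Mutex;
        let counter = Mutex::new(0);
        *counter.lock().unwrap() += 1;
        assert_eq!(*counter.lock().unwrap(), 1);
    }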
If only these were common problems that were difficult to otherwise avoid.
Could you elaborate on the memory issues in all four languages that you ran into?
Could Rust be faster? Yes. But honestly, for our use case - shipping tools, services, libraries, and what have you to production - it is plenty fast. That said, Rust definitely falls off a cliff once you get to a very large workspace (I'd say past 100k lines of code it begins to snowball), but you can design your way out of that, unless you build truly massive apps.
Incremental builds don't disrupt my feedback loop much, except when paired with building for multiple targets at once, i.e. Leptos, where a wasm and a native build are run. Incremental builds do, however, eat up a lot of space - a comical amount, even. I had a 28GB target/ folder yesterday after working a few hours on a Leptos app.
One recommendation: definitely upgrade your CI workers. Rust benefits from larger workers than, for example, the default GitHub Actions runners.
Compiling a fairly simple app, though one including DuckDB, which needs to be compiled, took 28 minutes on default runners, but on a 32-core machine we're down to around 3 minutes. That's fast enough that it doesn't disrupt our feedback loop.
What kind of CI runners do you use then? Do you self-host?
You can rent bigger runners from GitHub. They're still not as fast as third-party ones, but setup takes 5 minutes and it's still pay-as-you-go. I just see a lot of people using the default ones, which are very small.
The Rust ecosystem is getting slower faster than the compiler is getting faster. Libraries grow to add features; they add dependencies. Individually the growth is not so bad, and it's justified by features or wider platform support. But it adds up, and especially dependencies adding dependencies act as a multiplier.
I started writing a post about this many years ago, but never finished it. I took a few slow-changing projects of mine that had a pinned Rust compiler, and then updated both the compiler and dependencies to the latest versions. Invariably, everything got slower to compile, even though the compiler update in isolation made things faster!
When a platform has good support and is easy to onboard this is the inevitable result. It's just like the JavaScript ecosystem.
But this is not a downside. Just like I can start a new website project without using a single dependency, I can start a new Rust project without installing a single dependency.
To me the real value is in the tooling and the core language features. I could probably implement my own minimal ad-hoc async IO framework if I wanted to, and shape it to my needs. No dependencies.
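As a rough illustration of how far the core language and std alone get you, here's a toy single-future executor - nowhere near a real async IO framework (no reactor, no timers), but every piece of Future plumbing it touches ships in the standard library:

    use std::future::Future;
    use std::sync::{Arc, Condvar, Mutex};
    use std::task::{Context, Poll, Wake, Waker};

    // Parked-thread signal: wake() flips the flag and notifies.
    struct Signal {
        woken: Mutex<bool>,
        cond: Condvar,
    }

    impl Wake for Signal {
        fn wake(self: Arc<Self>) {
            *self.woken.lock().unwrap() = true;
            self.cond.notify_one();
        }
    }

    fn block_on<F: Future>(fut: F) -> F::Output {
        // Box::pin keeps the sketch entirely in safe code.
        let mut fut = Box::pin(fut);
        let signal = Arc::new(Signal { woken: Mutex::new(false), cond: Condvar::new() });
        let waker = Waker::from(signal.clone());
        let mut cx = Context::from_waker(&waker);
        loop {
            match fut.as_mut().poll(&mut cx) {
                Poll::Ready(out) => return out,
                Poll::Pending => {
                    // Sleep until wake() is called, then poll again.
                    let mut woken = signal.woken.lock().unwrap();
                    while !*woken {
                        woken = signal.cond.wait(woken).unwrap();
                    }
                    *woken = false;
                }
            }
        }
    }

    fn main() {
        println!("{}", block_on(async { 40 + 2 }));
    }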
I don't know, these things ebb and flow.
There's a bit of pushback against high-dependency project structures and compile times recently, and even niche crates like `unsynn` have garnered some attention as an alternative to the relatively heavy `syn` crate.
The article is fine and has a lot of good points, but it avoids the main issue like the plague. So I will state it here:
The slowness comes mainly from LLVM.
For many use-cases yes, but there are crates bottlenecked on different things than the codegen backend.
But I don't think that's the point. We could get rid of LLVM and use other backends, just as we could make other improvements. The point is that there are also other priorities, and we don't have enough manpower to make progress faster.
This is somewhat true, but also a bit misleading. A lot of the problem comes from how Rust interacts with LLVM and how Rust projects are structured. It ultimately shows up as time spent in LLVM, but LLVM is not entirely responsible for it.
This isn't a huge problem. My big Rust project compiles in about a minute in release mode. Failed compiles with errors only take a few seconds. That's where most of the debugging takes place. Once it compiles, it usually works the first time.
A minute is pretty bad. I understand it may work for your use case, but there are plenty of use cases out there where errors typically don't fail the compile and a one-minute iteration time is a deal killer. For instance, UI work - good luck catching an incorrect color with a compile error. Vite can compile 40,000 LOC and display it on your screen in probably a couple of milliseconds.
But how big are your big projects?
I wonder how much value there is in skipping LLVM in favor of having a JIT linked in instead. For release builds it would get you a reasonable proxy if it optimized decently, while still retaining better debuggability.
I wonder if the JVM as an initial target might be interesting given how mature and robust their JIT is.
> I wonder how much value there is in skipping LLVM in favor of having a JIT linked in instead. For release builds it would get you a reasonable proxy if it optimized decently, while still retaining better debuggability.
Rust is in the process of building out the Cranelift backend. Cranelift was originally built to be a JIT compiler. The hope is that it can become the debug-build backend.
https://github.com/rust-lang/rustc_codegen_cranelift
The JVM is not a very meaningful target for Rust, since it does not use C-like flat memory addressing and pointer arithmetic. It's as if every single Java object and field is sitting in its own tiny memory segment/address space. On the one hand, this makes it essentially transparent to GC, which is a key property for Java; on the other, it means that compiling C-like languages to the JVM is usually done by reimplementing "memory" as a JVM array of byte values.
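Sketched in Rust for familiarity, the trick described above looks roughly like this (a C-to-JVM compiler would emit the equivalent against a Java byte array):

    // "Pointers" become plain indices into one big byte buffer.
    struct FlatMemory {
        bytes: Vec<u8>,
    }

    impl FlatMemory {
        fn store_u32(&mut self, addr: usize, val: u32) {
            self.bytes[addr..addr + 4].copy_from_slice(&val.to_le_bytes());
        }

        fn load_u32(&self, addr: usize) -> u32 {
            let mut buf = [0u8; 4];
            buf.copy_from_slice(&self.bytes[addr..addr + 4]);
            u32::from_le_bytes(buf)
        }
    }

    fn main() {
        let mut mem = FlatMemory { bytes: vec![0; 1024] };
        // What "*(uint32_t *)16 = 42;" lowers to against emulated memory:
        mem.store_u32(16, 42);
        assert_eq!(mem.load_u32(16), 42);
    }

Every load and store pays for the bounds check and the indirection through the array, which is part of why this approach is rarely attractive.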
LLVM optimizations are the overwhelming majority of the compilation bottleneck for us over at Feldera. We blogged about some of the challenges we faced here: https://www.feldera.com/blog/cutting-down-rust-compile-times...
We almost definitely need to build a JIT in the future to avoid this problem.
I would love this in modern languages.
For dev builds, I see JIT compilation as a better deal than debug builds because it's capable of eventually reaching peak performance. For performance sensitive stuff like games, it really matters to keep a nice feedback loop without making the game unusable by turning off all optimizations.
AOT static binaries are valuable for deployments.
No idea how expensive it would be to develop for an existing language like Rust though.
If anyone wants to feel better about the compile times of their Rust programs, try full-source bootstrapping the Rust compiler itself. Until very recently it took about 2 days on 64 cores; now, thanks to mrustc 0.74, it's only 7 hours!
Compilation speed is what makes Go nice. Zig should end up being king here, depending on comptime use (e.g., the lack of operator overloading can be overcome by using comptime to parse formula strings for things like geometric algebra).
> First, let me assure you - yes, we (as in, the Rust Project) absolutely do care about the performance of our beloved compiler, and we put in a lot of effort to improve it.
I'm probably being ungrateful here, but here goes anyway. Yes, Rust cares about the performance of the compiler, but it would likely be more accurate to say that compiler performance is something like 15th on the list of things they care about, and they'll happily trade slower compile times for any of the others.
I find posts about Rust like this one, where they say "ah, of course we care about perf - look, we got the compile times on a somewhat nontrivial project to go from 1m15s to 1m09s", somewhat underwhelming - I think they miss the point. For me, compile times basically only matter if they are virtually instantaneous. E.g. Vite scales to a million lines and can hot-swap my code changes instantaneously. This is where the productivity benefits come in.
Don't just trust me on it. Remember this post[1]?
> "I feels like some people realize how much more polish could their games have if their compile times were 0.5s instead of 30s. Things like GUI are inherently tweak-y, and anyone but users of godot-rust are going to be at the mercy of restarting their game multiple times in order to make things look good. "
[1]: https://loglog.games/blog/leaving-rust-gamedev/#compile-time...
You have a fair point. I agree that while compiler performance is a priority, it is one of many priorities, and not currently super high on the list for many Rust Project developers. I wish it were different, but the only thing we can do is the work to make it faster :) Or support the people who work on it.
Isn't Vite for javascript though, which is, of course, a scripting language?
Btw, I've used QML and Dioxus with Rust (not for games). Both make hot-reloading the GUI parts possible without recompiling, since that part is basically not Rust (Dioxus in a somewhat more limited manner).
Maybe these features already exist, but I'd like a way to: 1) type-check without necessarily building the whole thing; 2) run a unit test, building only the dependencies of that test. Do these exist, or are they remotely feasible?
cargo check exists for (1). For (2) it depends on the project structure; cargo test -p some-crate builds only that crate and its dependencies, so splitting the workspace helps. Either way, they don't help as much as you would hope.
You don't even need to ask an AI to get an answer to the first question; the first hit on both Google and Bing will tell you how to do it - it takes 2 seconds!
Why hasn't Rust been forked by some bigger company, who have the time and resources to specialize it into something which fits better into a professional market? Yes, I'm saying low compilation time -> fast development round-trips are a requirement for the professional market.
Maybe it has been, but said bigger company hasn't published their work?
I disagree that fast compile times are "required" for the professional market, btw. They are nice, sure, but there's plenty of professional development out there in languages that are slow to compile.
> bigger company, who have the time and resources to specialize it into something which fits better into a professional market
Welcome to the central thesis for using Microsoft's stack.
If I'm getting paid based on the direct outcome of my work (i.e., freelance / consulting / 1099), I am taking zero chances with the tooling. $500 for a perpetual VS license is the cheapest option by miles if you value your time and sanity.
Iteration time is nice, but the debugger experience is the most important thing once you are working on problems people are actually willing to pay money to solve. Just because it's "possible" doesn't mean it is ergonomic or accessible. I don't have to exit my IDE if I want to attach to prod and run a cheeky snippet of LINQ on a collection in break mode to investigate a runtime curiosity.
The original title is:
Why doesn't Rust care more about compiler performance?
(OP) I submitted this some time ago, and am pretty sure I submitted the title as-is, so I'm guessing there was some manual or automatic editing since then by the mods, before the second chance here.
Regarding AVX: could Rust be compiled with different symbols that target different x64 instruction sets, then at runtime choose the symbol set that is most performant for that architecture?
I'm not sure how that would work. You either let the compiler compile your whole program with AVX (which duplicates the binary), or you manually use AVX with runtime detection in selected places (which requires writing manual vectorization).
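For the second option, std does at least provide the runtime detection. A hedged sketch (the function names are made up, and the vectorization here is left to LLVM via #[target_feature] rather than written by hand with intrinsics):

    #[cfg(target_arch = "x86_64")]
    fn sum(data: &[i32]) -> i32 {
        if is_x86_feature_detected!("avx2") {
            // SAFETY: the runtime check above guarantees the CPU
            // supports the instructions this function may use.
            unsafe { sum_avx2(data) }
        } else {
            data.iter().sum()
        }
    }

    #[cfg(target_arch = "x86_64")]
    #[target_feature(enable = "avx2")]
    unsafe fn sum_avx2(data: &[i32]) -> i32 {
        // Identical body; with AVX2 enabled, LLVM is free to
        // auto-vectorize this reduction using 256-bit integer ops.
        data.iter().sum()
    }

    fn main() {
        #[cfg(target_arch = "x86_64")]
        println!("{}", sum(&[1, 2, 3, 4]));
    }

Only the annotated function is duplicated, so the binary stays one portable artifact, at the cost of a branch at each dispatch point.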
In my experience working on medium-sized Rust projects (hundreds of thousands of LOC, not millions), incremental compilation and mold pretty much solve the problem in practice. I still occasionally code on my 13-year-old laptop when traveling, and compilation time is fine even there (for cargo check and debug builds, that is; I barely ever compile in release mode locally).
What's painful is compiling from scratch, and particularly the fact that every other week I need to run cargo clean and do a full rebuild to get things working. IMHO this is a much bigger annoyance than raw compiler speed.
Yes, but are these the defaults? I want this to work pleasantly out of the box so we don't scare off new users as quickly.
Seems to me that Rust has hit bedrock.
If there's no tangible solution to this design flaw today, what will happen in 20 years? My expectation is that the number of dependencies will increase, as will the complexity of the Rust ecosystem at large, which will make compilation times even worse.
I don't think we've hit bedrock. As I wrote, we have a lot of ideas for massive improvements. But we need more people to work on them.
It's OK; 20 years ought to be enough time to rewrite LLVM in Rust.
I'd vote for filesystem space utilization to be worked on before performance.
The previous post on the OP's blog is about exactly this: https://kobzol.github.io/rust/rustc/2025/06/02/reduce-cargo-...
The problems are largely related. Cut the amount of intermediate compilation artifacts in half and you'll have sped up the compiler substantially. Monomorphization, iterator expansion, and the like are significant contributors to both issues.
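Monomorphization's contribution is easy to see in miniature - one generic function used at three types produces three separate copies of code to optimize and store:

    // rustc stamps out one copy of `largest` per concrete T it is
    // used with - largest::<i32>, largest::<f64>, largest::<u8> -
    // so both compile time and artifact size scale with the number
    // of instantiations, not just the number of source lines.
    fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
        let mut max = items[0];
        for &x in &items[1..] {
            if x > max {
                max = x;
            }
        }
        max
    }

    fn main() {
        println!("{}", largest(&[1_i32, 5, 3]));
        println!("{}", largest(&[1.5_f64, 0.2]));
        println!("{}", largest(&[1_u8, 9, 4]));
    }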
One of the reasons I quit Rust is literally that having 4-5 projects checked out that use serde would fill my laptop drive with junk in a few weeks.
Why not both?
If I had to choose though, I would choose compilation speed. Buying an SSD to double my storage is much more cost effective than buying a bulkier processor to halve my compilation times.
Is this not more of a Cargo thing? Cargo is obsessed with correct builds, and eventually the filesystem fills up with old artifacts.
(I know, I have to declare Cargo bankruptcy every few weeks and do a full clean and rebuild.)
Not that this isn't a problem - it is; target folders currently take up ~100GB on my machine - but...
I'd still, by far, prefer a tiny incremental compile-speed increase over a substantial storage reduction. I can get a bigger SSD; I can't get my time back :'(
This is one of the only reasons I disliked Haskell. GHC and lib files can easily take over 2GB of storage.
What’s stopping cargo from storing libraries in one global directory(via hash or whatever), to be re-used whenever needed?
Performance has been worked on since Rust 1.0 or so; for all this time, there has been lots of work on compiler performance. There's no "before performance" :)
I'm a big fan of Rust, but there are definitely warts that are going to be difficult to cure [1]. This is 5 years old now, but I believe it's still largely relevant.
It is a weird hill for C/C++ devs to die on, though, given that header files and templates create massive compile-time issues that really can't be solved.
Google is known for having infrastructure for compiling large projects. They use Blaze (open-sourced as Bazel) to define hermetic builds, then use large systems to cache object graphs (for compilation units) and compiled objects, because Google has some significant monoliths that would take a significant amount of time to compile from scratch.
I wonder what this kind of infrastructure can do for a large Rust project.
[1]:https://www.pingcap.com/blog/rust-compilation-model-calamity...
I think there is a massive difference in compile times between idiomatic C and C++, so it's problematic to lump them together. But there is also some selection bias, since large projects tend to migrate from C to C++.
The biggest problem with the Rust compiler is not its speed at compiling. It's that rustc from 3 months ago can't compile most Rust code written today. And don't tell me that cargo versioning fixes this; it doesn't. The very improvements we are celebrating here, which are very real and appreciated, are part of this problem. Rust is young, and Rust changes very, very fast. I think it'll be a great language in a decade, when it's no longer just used by bleeding-edge types and has a target that stands still for more than a few months.
A true champion
> when I started contributing to Rust back in 2021, my primary interest was compiler performance. So I started doing some optimization work. Then I noticed that the compiler benchmark suite could use some maintenance, so I started working on that. Then I noticed that we don’t compile the compiler itself with as many optimizations as we could, so I started working on adding support for LTO/PGO/BOLT, which further led to improving our CI infrastructure. Then I noticed that we wait quite a long time for our CI workflows, and started optimizing them. Then I started running the Rust Annual Survey, then our GSoC program, then improving our bots, then…
I'm worried this person is going to experience a Yak overflow, honestly.
Kobzol is an absolutely wonderful person to work with. I also work in the Rust project, and any time I've interacted with him, he's been great.
Talk about proper `continuous improvement`
The biggest thing that's happened in recent times to improve Rust compiler performance was the introduction of the Apple M-series chips. On my x86 machine, it'll take maybe 10 minutes for a fresh build of my project, but on my Apple machine that's down to less than a minute, even on the lower-end Mac Mini. Incremental builds take only a few seconds. I'm fine with this amount of compilation time for what it buys me, and I don't feel it slows me down any, because (and I know this sounds like cope) it gives me a minute to breathe and collect my thoughts. Sometimes I find that debugging a problem while actively coding in an interactive REPL is different from debugging offline.
I'm not sure why, but the way I would explain it is: when you're debugging in an interactive REPL you always get fast incremental results, but you may be going down an unproductive rabbit hole and spinning your tires. When I hit that compile button, I'm able to take a step back and maybe see the problem from another angle. Still, I prefer a short development loop, but I do think you lose something without one.