
> Aren't they slower or about as slow as C++, which is notorious for being frustratingly slow, especially for local, non-distributed builds?

Yes. Significantly slower. The last Rust crate I pulled [0] took as long to build as the Unreal Engine project I work on.

[0] https://github.com/getsentry/symbolicator/



Ehhhhhhhh. Are you talking full build or incremental? How long did it take?

Clean and rebuild of Unreal Engine on my 32-core Threadripper takes about 15 minutes. An incremental change to a .cpp file takes… it varies, but probably on the order of 30 seconds. Their live coding feature is super slick.

I just cloned, downloaded dependencies, and fully built Symbolicator in 3 minutes 15 seconds. A quick incremental change and build took 45 seconds.

My impression is the Rust time was all spent linking. Some big company desperately needs to spend the time to port the Mold linker to Windows. Supposedly Microsoft is working on a faster linker. But I think it’d be better to just port Mold.
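For anyone who wants to check whether link time really dominates: cargo can break the build time down, and on Linux mold can be tried without waiting for a Windows port. A sketch, assuming a reasonably recent stable cargo and an installed mold:

```shell
# Sketch: measuring where the build time goes, and trying mold (Linux only).
cargo build --timings        # writes an HTML report with per-crate compile + final link times
mold -run cargo build        # re-run the build with the link step routed through mold
```

The timings report makes it obvious whether you're compiler-bound or linker-bound before blaming either one.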


My 32 core threadripper builds ue5 in 12 minutes on Windows. Single file changes on our game are usually under 10 seconds due to unity builds, and a good precompiled header.

My first clone-and-build of symbolicator took about the same length of time on my Windows machine. Even with your numbers, 4 minutes to build what is not a particularly large project is bonkers.


Sounds like we’re on basically the same page.

My experience across a wide range of C++ projects and wide range of Rust projects is that they’re roughly comparable in terms of compilation speed. Rust macros can do bad things very quickly. Same as C++ templates.

Meanwhile I have some CUDA targets that take over 5 minutes to compile a single file.

I feel like if Rust got a super fast incremental linker it’d be in a pretty decent place.


Is using a 32-core CPU as a baseline for speed comparisons some kind of satire in this thread?


No, threadrippers[0] are commonly used by people working in large compiled codebases. They're expensive, sure, but not incomparable to a top-of-the-range MacBook Pro.

[0] https://www.cpubenchmark.net/cpu.php?cpu=AMD+Ryzen+Threadrip...


> No, threadrippers[0] are commonly used by people working in large compiled codebases.

I don't think that's true at all, unless you're using a very personal definition of "common".

In the real world, teams use compiler cache systems like ccache and distributed compilers like distcc to share the load through cheap clusters of COTS hardware or even vCPUs. But even that isn't "common".

Once you add CI/CD pipelines, you realize that your claim doesn't hold water.


I know, I have one and it cost the company about 3x as much as a 16" MacBook Pro. An expense that's very unaffordable for most companies, not to mention most developers.

(Even most MBPs are unaffordable for a large set of developers.)

I don't think it's as accessible to average C++ or Rust developers as you expect.


My 3970x workstation, all in, cost about £3200.

I've also got a 14" MacBook Pro (personal machine) that was a _little_ cheaper - it was £2700.

> An expense that's very unaffordable for most companies

I think it's unaffordable for some companies, but not most. If your company is paying you $60k, they can afford $3500 once every 5 years on hardware.

> I don't think it's as accessible to average C++ or Rust developers as you expect.

I never said they were accessible, just that they are widespread (as is clear from the people in this thread who have the same hardware as I do).

FWIW, I was involved in choosing the hardware for our team. We initially went with Threadrippers for engineers, but we found that in practice, a 5950x (we now use 7950x's) is _slightly_ slower for full rebuilds but _much_ faster for incremental builds which we do most of.


It’s definitely not a baseline. It’s simply what I have in front of me.

Lenovo P620 is a somewhat common machine for large studios doing Unreal development. And it just so happens that, apparently, lots of people in this thread all work somewhere that provides one.

I don’t think the story changes much for more affordable hardware.


It kind of does, given that C and C++ culture depends heavily on binary libs (hence the usual ABI drama). On more affordable hardware, building everything from source versus using binary libraries makes a huge difference, so C++ builds end up being quite fast unless they abuse templates (without extern template on libs).


The project itself may not be large, but if it's pulling in 517 dependencies, that's a lot to compile.

Rust also does significantly more for you at compile time than C++ does, so I don't mind the compiler taking some time to do it.


> The project itself may not be large, but if it's pulling in 517 dependencies, that's a lot to compile.

Why do you need to build hundreds of dependencies if you're not touching them?


Why do you assume they're not touching them?


Only if not taking metaprogramming and constexpr into account.


FYI Unity builds are often slower than incremental builds with proper caching.


What can I use to cache with MSVC that isn't Incredibuild? (a per-core license model isn't suitable - we'd spend more on incredibuild licenses every year than we do on hardware)

Also, I've spent a _lot_ of time with Unreal and the build system. Unreal uses an "adaptive unity" that pulls changed files out of what's compiled every time. Our incremental single file builds are sub-10-seconds most of the time.


> What can I use to cache with MSVC that isn't Incredibuild?

Ccache works, but if you use the Visual Studio C++ compiler you need to configure your build to be cacheable.

https://github.com/ccache/ccache/wiki/MS-Visual-Studio
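For a CMake-driven MSVC build, the wiki's requirements roughly boil down to the following sketch (assuming CMake ≥ 3.25; the key constraint is that /Zi-style external PDBs aren't cacheable, so debug info has to be embedded /Z7-style):

```shell
# Sketch: wiring ccache into an MSVC + CMake + Ninja build.
# ccache can't cache compiles that write a shared external PDB (/Zi),
# so force embedded debug info (/Z7) via CMake's MSVC debug-info abstraction.
cmake -G Ninja ^
  -DCMAKE_C_COMPILER_LAUNCHER=ccache ^
  -DCMAKE_CXX_COMPILER_LAUNCHER=ccache ^
  -DCMAKE_POLICY_DEFAULT_CMP0141=NEW ^
  -DCMAKE_MSVC_DEBUG_INFORMATION_FORMAT=Embedded ^
  ..
```

Whether this is practical for an Unreal project is another matter, since UnrealBuildTool drives the compiler itself rather than going through CMake.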


Lack of Precompiled Header support kills this for us immediately. (We also currently use the unsupported method of debug info generation, which we could change.) A local build cache is no better than UnrealBuildTool's detection, though.


> Lack of Precompiled Header support kills this for us immediately.

Out of curiosity, why do you use precompiled headers? I mean, the standard use case is to improve build times, and a compiler cache already does that and leads to greater gains. Are you using precompiled headers for some other use case?


Improved build times is _the_ reason.

> and a compiler cache already does that and leads to greater gains

Can you back that claim up? I've not benchmarked it (and I'm not making a claim either way, you are), but a build cache isn't going to be faster than an incremental build with ninja (for example), and I can use precompiled headers for our common headers to further speed up my incrementals.

You did encourage me to go back and look at sccache though; they've fixed the issues I reported with MSVC, and I'm going to give it a try this week.
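For anyone else trying the same thing, sccache hooks in the same way for both toolchains. A sketch in cmd.exe syntax, assuming sccache is on PATH:

```shell
:: Sketch: pointing both cargo and CMake/MSVC builds at sccache.
set RUSTC_WRAPPER=sccache
cargo build

cmake -G Ninja -DCMAKE_CXX_COMPILER_LAUNCHER=sccache ..
ninja

:: Check the hit rate after a warm rebuild.
sccache --show-stats
</imports>
```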


Mold is ELF-only. They'd have to rewrite it anyway. I don't see much point in a Windows port.


It’s tentatively planned for 3.0. https://github.com/bluewhalesystems/sold/issues/8

The file formats are indeed totally different. But the operation of linking is the same at a high-level.


Huh, interesting


If they ever plan on supporting UNIXes like AIX, mainframes, or micros, ELF isn't there either.


Additionally, on Windows, when Rust compiles with Microsoft's linker, it always launches the MS telemetry process vctip.exe, which establishes an internet connection. :(

If anyone knows a method to avoid that launch (besides blocking the connection with a firewall after it starts), please share it.


I was curious about a couple numbers. Looks like symbolicator has about 20k lines of rust code and 517 crates it depends on (directly and indirectly).
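For anyone who wants to reproduce those numbers, a sketch (tokei is a third-party line counter, and the crate count includes transitive dependencies):

```shell
# Sketch: counting crates and Rust lines in symbolicator.
git clone https://github.com/getsentry/symbolicator
cd symbolicator
cargo tree --prefix none | sort -u | wc -l   # unique crates in the dependency graph
tokei --type Rust                            # Rust source line count
```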



