- Working in Rust is not fun but frustrating, and contrary to C, you are limited by the language/compiler in what you can do.
- Building/compiling the kernel is not trivial, and you will add a new huge dependency that you have to deal with to build the kernel for whatever target.
Suppose you want to build for MIPS; then you need Rust to support MIPS.
As an example, there is a common Python package that decided to start implementing their module in Rust instead of C. Now a lot of users of the module are pissed off, with good reason, because you can't build/install the module anymore on slightly older or unconventional distributions.
Deeply subjective. Rust has been the most loved language on Stack Overflow for 5 years in a row now.
> add a new huge dependency
Sure, but setting up Rust is much much easier than GCC with all the trimmings.
> As an example, there is a common Python package that decided to start implementing their module in Rust instead of C. Now a lot of users of the module are pissed off, with good reason, because you can't build/install the module anymore on slightly older or unconventional distributions.
Assuming you are referring to pyca, you are mistaken and there has been a lot of misinformation about the change. Rust support is needed to build the module, but not install it. Pyca works just fine for users without rust and works everywhere rust does. Users on niche CPU architectures which haven't been sold commercially for 15+ years were the only ones impacted.
The problem with pyca cryptography was that Python users are not in the habit of using lockfiles, which meant reinstalling venvs picked up more recent versions of transitive dependencies. That, and they made the change in a minor update, so non-wheel users got caught out.
That people weren’t version-pinning critical dependencies was the most eye-opening thing about that whole affair. The tools to make this easy have been available and well-used for years, so I don’t have a lot of sympathy for them.
Well, people think they are pinning their critical dependencies by using a requirements.txt file. But normally the transitive dependencies are not listed there, so anytime you rebuild a venv they can change underneath you.
You probably know this but for people reading along who think using requirements.txt is the same thing: it is not.
How lockfiles work is that you define your dependencies in a file like pyproject.toml or Pipfile (similar to a Cargo.toml). You then use pipenv or poetry or pants to compute all the dependent versions of your dependencies and transitive dependencies. Then that's saved in a lockfile. Any time you need to remake a venv for local dev, rebuild a docker container, or install deps for CI, it uses the same locked versions from the lockfile. Only when you decide to recompute the dependencies do the transitive dependencies change in the lockfile.
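To make the Cargo analogy concrete, here's a minimal sketch (with an illustrative crate and version, not a real resolution): Cargo.toml records the loose ranges you asked for, while the generated Cargo.lock pins the exact versions, transitive dependencies included, until you deliberately re-resolve.

    # Cargo.toml -- the loose requirement you wrote
    [dependencies]
    serde = "1.0"

    # Cargo.lock (generated) -- the exact version everyone gets
    [[package]]
    name = "serde"
    version = "1.0.126"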
Sadly, a standard lockfile (PEP-650) was rejected, held back by pip being woeful:
> Additionally, pip would not be able to guarantee recreating the same environment (install the exact same dependencies) as it is outside the scope of its functionality.
Well then, maybe fix it? Because clearly it’s an issue? A good chunk of that explanation really reads like “ehhhh, can’t really be bothered fixing this”, which makes sense given the Python devs’ approach to the last couple of Python versions: no fixes for anything important, just more half-baked features nobody asked for.
>Python versions: no fixes for anything important, just more half-baked features nobody asked for.
Oh god, tell me about it! 'Hey guise, I heard pattern matching in Rust and Scala and Haskell is popular! Let's add it to Python, but with no compile-time checks to make sure matches are exhaustive!'
Some excellent and smart devs who I really do respect worked really hard to deliver a complete dog shit feature while pip languished for almost a year with a broken version resolver [1]. It's so frustrating. :( :( :(
> Users on niche CPU architectures which haven't been sold commercially for 15+ years were the only ones impacted.
This is kind of the root of the problem: when you are a hobbyist dev who wants to work with the latest shiny version of everything, then everything is fine.
But when you try to do things that are not mainstream, or you are doing embedded, then you understand why sometimes you have to keep old software or hardware.
Rust, like Go, is made for a connected world, where you are always upstream, always connected to get the latest versions, and it's OK to make breaking changes to the language every few years.
Did you ever try to build a Linux from scratch, trying to fit some build or runtime constraints? Then you learn the real cost of each added dependency and its complexity.
Imagine how many dependencies you need just to build a kernel, and how much memory, CPU and storage they use. Now you need to add Rust, its dependencies, and the other tools related to it. Each dependency may not support your system or configuration, or may require its own library dependencies in a new version that conflicts with older versions already used by the system, which you can't change without breaking other existing programs. Maybe you could fix those other applications to support the new version of the library, but you just wanted to "update" your kernel, not spend 10 days unexpectedly reworking your system because a stupid update required it.
It’s worth pointing out that a lot of those “weird” old chips weren’t formally supported anyway; the fact that the software worked on them was a convenient fluke, not something done by design or intention.
This has been the magic of Linux so far: being versatile enough to support so much hardware.
That being said, most of the time issues don't come from really exotic chips but from little variations in configuration or system library versions.
Let's say you want to cross-compile from x to y and want code to use a specific memory space. This is usually when things start to get messy.
Here's an example of how you can lose a lot of time and go crazy when you just wanted to compile something for your use case:
That particular metric is derived from "people who use the language outside of work and wish they could use it more at work." The survey doesn't explicitly say "which language do you love?".
Maybe the Python number is smaller because those people are working, right? All the Rust programmers are just working on a fun program at home, while Python is a serious language now and you're working the 9-to-5 with Joe Coder?
Haskell is 51.7% though. So, I guess when I wasn't looking Haskell really exploded in boring office environments and I shouldn't expect to see any more hobbyist Haskell projects everywhere... or your hypothesis was just wrong.
Maybe it's just that Rust is new and exciting. Let's look at what people want to use that they don't use now; surely that'll be Rust too, and we'll know it's just hype.
Huh, that chart is dominated by Python. 30% of programmers not doing Python wanted to start.
> why does Rust report 86.1% love while Python only got 66.7% ?
I mean, because python is in practice a boring office language where the immense majority of devs have to maintain Joe Coder's 2009 Django set of custom attributes.
> Huh, that chart is dominated by Python. 30% of programmers not doing Python wanted to start.
Because people believe that if they learn Python they'll land a cool ML job which pays 100k more than what they have? Like, my girlfriend, who literally does not know anything about programming, asked me if I could teach her Python because she saw an ad about it (a "land a CS job in 3 months" type of thing). That necessarily causes some inertia for Python, not enough to be more hyped than Rust, but enough to influence results.
Anecdotes about what non-programmers believe aren't very applicable to Stack Overflow's survey of programmers. Still though, you end up with a weird conclusion where you believe "hype" drives the Loved statistic (one based on people's real experience) but not so much the Wanted statistic (based on what they heard) when by definition that isn't how hype works.
Think about the 2016 movie "Deadpool". Deadpool is a one joke character. There are no major Marvel characters in the movie, the stakes are low, there is no connection to the larger Marvel Cinematic Universe storyline. Reynolds has played this exact character before, in a movie which nobody liked. There was inevitable fan hype before it came out, "the merc with a mouth" sells comics and those fans are going to see the movie whatever, but fans don't know anything right? But, both critics and audiences seem to have liked this movie, a lot more than its studio expected. It made a lot of money and got pretty great reviews from most quarters. Neither of those things is hype, that's called success.
The Loved result looks exactly like what I see when I talk to people about Rust. Those who haven't heard of it of course aren't looking to write Rust; for those who've only heard of it, it's on their "things to check out" list with Go and maybe Swift, but it doesn't jump out at them. But among those who've written Rust you see a spike of enthusiasm: "Hey, this is really good".
When I learned Go, I filed the acquired skills away. "This may be useful in some future scenario, but I have meanwhile ceased to be employed writing TLS bit-banging code for which I thought Go would be the best option, so, never mind now"
But when I learned Rust the first thought was "I should write more Rust". I immediately rewrote the smallest interesting C project I ever published and pushed that to GitHub. Then I wrote a bunch of code to check some of my intuitions about Rust's safety when used by people who are unreasonable (misfortunate, a collection of perverse implementations of safe Rust traits). Then I started writing a triplestore, which I'd done twice before in C -- a friend and colleague left programming and went into management after his third one, so cross fingers that doesn't happen to me.
Could also be referring to the cryptography library which added a Rust dependency, which caused pain for people running ansible, and other downstream users.
> Could also be referring to the cryptography library which added a Rust dependency, which caused pain for people running ansible, and other downstream users.
This isn't true for the overwhelming majority of deployments, since pyca/cryptography was/is distributed as a pre-built wheel. There is no runtime dependency on Rust in pyca/cryptography; the only downstream change is that packagers are required to have a Rust toolchain.
Just because there are wheels doesn't mean you'll never need to install Rust to install Cryptography. Just this week I got a "you need Rust" build error inside of a Docker container due to Pip not being able to find a wheel for the specific Python version used by the container. Fortunately, the version of Cryptography that was being pulled in supported that environment variable to use the C version instead, so I was spared from having to do a ton of work.
For now. One day I'll wake up and a future version of this container will refuse to build because whatever library pulled pyca/cryptography in got upgraded and now needs the Rust-only version.
The latter point is an important one. Rust as a language for libraries cannot work the same way as Rust as a language for applications. For the latter it is OK to depend on the cargo toolchain and be opinionated when it comes to things like dynamic linking. For the former you ideally want support in any common compiler (clang, GCC) and as few dependencies and constraints as possible.
It's also a problem for applications themselves IMO. Cargo combined with dependency pinning brings most of the disadvantages of similar environments with centralized package handling: the ease of adding package dependencies increases the number of dependencies very rapidly. Overly narrow version pinning forces per-package lockstep updates of the various dependencies, which in turn means multiple versions of the same package get pulled in two or three levels deep. This ensures each single Rust package or update you build is almost guaranteed to rebuild the world, even with a shared cache dir. And we're not touching the problem of building projects where Rust is only one part: annoying if you want to link other stuff into Rust, even more so if you want to do the opposite.
I'm following a couple of projects that transitioned to Rust, and my experience as a contributor is not stellar. A minimal Rust project can take hundreds of MB of disk space just in dependencies, and double that for build space. The solution for some has been providing build bots, but again that doesn't help me as a contributor, where I need to be able to rebuild from source.
This has almost the same effect on me as very large projects: I only contribute if I have a large vested interest in the package; otherwise I just stay away because it's time consuming.
True in principle. But once you divorce yourself from Cargo, almost all resources and advice when it comes to building Rust programs go out the window. I love the language, and I love the community, but the attitude of "rustup nightly and Cargo, or bust" is a bit terrifying.
As a noob, I had to wade through endless "but don't do that, just get the latest from Cargo!!!!" when I asked for advice on how to use my system-provided Rust packages for my project.
For what it's worth, system-provided can be arbitrary and vary widely between systems.
Moreover, in the Python world a distinction is made between "software that runs your system" and "software that you use for development"; maybe Rust people think similarly.
> For what it's worth, system-provided can be arbitrary and vary widely between systems.
Sure. That's inherent with software.
> Moreover, in the Python world a distinction is made between "software that runs your system" and "software that you use for development"; maybe Rust people think similarly.
Is there any sort of detailed documentation on how to use rustc directly in more complex ways? I imagine that any beginner's text will mostly cover Cargo and not special usage scenarios such as this one.
Oh, it's actually easy to find on https://www.rust-lang.org/learn, in the second section of the page. The most likely reason I hadn't found it is that the last time I looked at that page I wasn't looking for the rustc book specifically; and now that I wondered where rustc's user-level documentation was, the book doesn't appear when I google simply "rustc documentation". You have to google specifically "rustc book", and that didn't occur to me, as I was expecting some kind of manpage instead.
Thanks for clearing that up. Say, you seem to know a lot about the Rust ecosystem. Do you have any insight into how hypothetical Rust driver code would integrate with the rest of the kernel build process? I imagine it would have to use llvm-rustc, or is gccrs ready for the job?
Would it be emitting objects that gcc/ld would link against?
Well, this is a working patch set, so, yes, though I have no direct involvement and haven't read all of it yet.
> I imagine it would have to use llvm-rustc,
Yes, it uses upstream rustc.
> is gccrs ready for the job?
Not yet. They're making great progress, but major chunks of the language are still missing. They'll get there.
Using upstream rustc isn't a blocker for new code implementing drivers, but it is a blocker for getting into older drivers, or the kernel itself. The blocker is platform support, or at least that's the current largest blocker; either rustc will gain it, or gccrs will become good enough to compile this codebase. We'll see :)
> Would it be emitting objects that gcc/ld would link against?
Yep, it emits output that looks like any C compiler's output, you link 'em together and you're good.
If you manage to compile the kernel with clang, in theory you can even get cross-language LTO; this is working in Firefox, but I'm not sure if anyone's tried it with the kernel yet or not.
> If you manage to compile the kernel with clang, in theory you can even get cross-language LTO.
Note that there is still a bunch of unsolved issues [1] in LLVM before all of the Linux kernel can be built with it. The efforts had stopped for some years but recently gained steam again, though a lot of time was wasted in between.
Even if you're compiling dynamic libraries/.so's with a C ABI to be consumed through a C FFI from another language? That seems to be fairly common use case these days, and I don't see those issues there (unless I've missed something, which is of course very possible).
Rust itself largely supports this (via the `cdylib` crate type); however, for many Rust crates it either does not make sense to export a pure C API/ABI (i.e. the "crates" are purely generic code that's going to be instantiated with build-specific types, so there's no predefined API beyond that single build) or they just don't bother to enable that use case.
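For reference, a minimal sketch of what enabling that use case looks like (hypothetical function, assuming `crate-type = ["cdylib"]` in Cargo.toml): the crate exports an unmangled C-ABI symbol that any C FFI can call.

    // lib.rs in a crate built with crate-type = ["cdylib"]
    #[no_mangle]
    pub extern "C" fn add(a: i32, b: i32) -> i32 {
        // Plain C ABI: callable from C, Python ctypes, etc.
        a + b
    }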
Well, sure, language-level API (with a type system identical to Rust's) will be a problem, but as long as there's only one compiler, there doesn't seem to be a problem -- yet. I'm mildly wondering if this isn't a chicken and egg problem of sorts.
Exactly, it's subjective fun. Some people like languages where compiler errors point you in the right direction, expressive features like ergonomic strings, closures, syntactic macros, etc.
Other folks are masochists and get their jollies from cryptic errors, segfault debugging, poring over valgrind traces, and of course manually managing memory. If you aren't suffering, are you really programming?
I tease, but not entirely. Sometimes I do in fact enjoy the challenge of C programming. But it's squarely type 2 fun.
The amount of segfault debugging and memory management is roughly linearly dependent on how much you pretend to be programming not in C but in an object-oriented language.
Those languages give you syntax and/or runtime tools to get away with badly structured programs (allocating/deallocating stuff like mad, lots of implicit behaviour). C is not like that. It wants you to think and learn how to structure programs (this is transferable knowledge, i.e. it gets much better with time).
Based on Debian's popcon numbers (https://popcon.debian.org/), Rust has support for 99.99% of users. The reality is that if you're using architectures at that level of rarity, no one actually supports your architecture, so you're reliant on locally patching software and hoping things work.
(Note that MIPS is a Tier 2 target for Rust, which is a commitment to keep it building but does not obligate running tests for every checkin.)
Indeed. The vast majority of computing environments are covered by existing rustc support. However, people in weird retrocomputing environments are more or less existentially threatened by Rust.
In my personal experience (since I wanted to see how big of a problem this is), I looked into bringing up Rust as a cross-compiler for Mac OS 9. This requires a compiler that can emit PowerPC machine code, as well as a toolchain that can handle XCOFF objects and classic Mac OS's strange resource formats (if you ever wondered why Win32 has resources, that's why). Retro68k provides such a toolchain (albeit GCC based), and I wrote a rustc target file to make it spit out XCOFF objects in PowerPC format.
Then I got hit with a bunch of LLVM assertions about unimplemented functionality in its XCOFF generator and gave up.
Less anecdotally, the ArcaOS people (responsible for trying to keep IBM's freakshow fork of Windows and DOS alive) and TenFourFox have both abandoned attempts to maintain Firefox forks for OS/2 and old Mac OS X (respectively), specifically because the Quantum update made Rust a requirement to build Firefox.
I heard Rust merged a GCC backend, which might help some of these retrocomputing projects... but there are platforms out there where the primary (or only) development environment is a proprietary C compiler (e.g. Classilla uses Metrowerks to provide old Mozilla on Mac OS 9). I'm starting to wonder if some kind of "Rust to C" backend might be useful for these cases...
Linux also can't abandon hardware support for some of these weird environments, either. So until and unless Rust-with-GCC can compile on every environment Linux does, we aren't going to see anything more than Rust drivers.
> Linux also can't abandon hardware support for some of these weird environments, either.
Can't because why though? I agree it shouldn't abandon them just to get more Rust, but there are other reasons some of the crustier less used platforms go away.
> Linux also can't abandon hardware support for some of these weird environments, either.
But why not? Are we obligated to support everything forever? If it’s hardly being used, and is starting to get in the way of safety and correctness improvements, why can’t we drop something old, arcane and unused?
Counterpoint: not all of this is actually going unused. Yes, Debian popcon is going to show the vast majority of users on x86 and ARM; but that is primarily consumer use cases.
When you get into embedded, you will start to see all sorts of weird arcane setups that actual businesses rely upon. Case in point: this commercial kitchen appliance that is actually a DOS PC built with modern parts. [0]
Not to mention the startlingly high number of large businesses running off of IBM server hardware. Much of that is actually legacy stuff that's been rebranded and massively upgraded over the years. A company that bought into System/360 in the 80s or AS/400 in the early 90s will almost certainly have backwards-compatible zSeries or System i hardware running literally 30-40 year old programs.
Point is, there's lots of business-critical crap running on things other than x86 or ARM. I only used retrocomputing as an example because I had a good anecdote for it. Businesses treat old computers as if retrocomputing were somehow mission-critical, and they pay handsomely for the privilege.
> will almost certainly have backwards-compatible zSeries or System i hardware running literally 30-40 year old programs.
But like, isn’t that a them problem? If you want to calcify your compute layer, don’t be surprised when the rest of the industry moves on, and possibly does things that aren’t compatible anymore? If they want to keep running that software, I think it’s their responsibility to either evolve their software to keep up, or deal with the fact that they’ll have to run their own old version/fork when the time comes.
If they contribute to the kernel, I would have thought their perspective would have been represented on one of these threads by now, as they seem to get posted at pretty much every event hahaha. Maybe they do and I totally missed it as well.
But it is used by retrocomputing enthusiasts. Linux has been supported by them for the platforms they care about. They gladly clear the bar of “now you have to support Linux yourself”, since C compilers already exist and are supported by someone else. With Rust becoming a build-time dependency, things suddenly turn into “you're not getting any Linux until you port rustc to your platform and then make sure it's working there at all times”.
So everyone else is not getting a more secure/safe system because a small minority wants to run Linux on old computers?
That does not sound fair to me. Why can't they just use old versions of Linux as well?
In what way are you more limited by the compiler in Rust than with C? Just write "unsafe" and you're off to the races.
Writing correct C is very hard and most definitely not fun. It's like juggling 7 balls, and if you drop one you'll be shot. C is defined for a weirdo abstract machine that doesn't match what computers really do, and when people apply their intuition and knowledge about computers to their C programs "because C is low-level", it's a crapshoot whether they will trigger undefined behavior and the compiler goes off the rails with wild optimizations.
If I designed a low-level language I would enable such optimizations by making it easy to communicate your precise intent to the compiler. Not by making the standard a minefield of undefined behaviors.
> Just write "unsafe" and you're off to the races.
I'm pretty sure I mention this in the LWN comments, but, since it gets repeated so often the contradiction might as well be repeated as well:
No. Unsafe Rust only gets to do three things that aren't related to the "unsafe" keyword itself. It can dereference raw pointers, it can access C-style unions, it can mutate global static variables.
That's everything. Your C program is free to define x as an array with four elements and then access x[6] anyway - but Rust deliberately cannot do that. Not in Safe Rust, but also not in Unsafe Rust either. Writing "unsafe" doesn't mean "Do this anyway" it only unlocks those three specific things I mentioned, and so sure enough x[6] is still not allowed because that's a buffer overflow.
In fact by default the Rust compiler would warn you, if you write unsafe { foo[z] = 0; } that unsafe isn't doing anything useful here and you should remove it. That array dereference either is or, if z is small enough, is not, an overflow, and either way unsafe makes no difference.
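To make that concrete, a minimal sketch (hypothetical example): even inside an unsafe block, indexing stays bounds-checked, so this panics at runtime instead of reading past the array, and rustc's unused_unsafe lint warns that the block buys you nothing.

    fn main() {
        let x = [0u8; 4];
        // A runtime value (at least 6) so the out-of-bounds index
        // can't be rejected at compile time.
        let i = std::env::args().count() + 5;
        // Still bounds-checked: panics with "index out of bounds".
        let v = unsafe { x[i] };
        println!("{}", v);
    }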
> Your C program is free to define x as an array with four elements and then access x[6] anyway - but Rust deliberately cannot do that. Not in Safe Rust, but also not in Unsafe Rust either.
Woot?
fn main() {
    let a = [0, 1, 2];
    let _b = [42, 42, 42];
    // Undefined behavior: reads past the end of `a`, yet it compiles.
    println!("{}", unsafe { a.get_unchecked(6) });
}
Calling get_unchecked on this array ends up as get_unchecked on the slice containing that whole array, which ends up as get_unchecked on the slice index, and in the end it is...
Dereferencing a raw pointer. One of the three specific things I said unsafe Rust can in fact do.
This is not a "funny syntax" thing, a[6] is the idiomatic and obvious way to express this in Rust, and, it isn't allowed because it's a buffer overflow. Whereas a[6] is also the idiomatic and obvious way to express this in C and the result is Undefined Behaviour.
The claim was about `x[6]`, which does not appear in your program. The point is that `[]` is always bounds-checked, and the bounds-checking cannot be opted out of even with `unsafe`.
By explicitly doing so, in an operation that is easy to grep for, or, in the case of a binary library, to search for as a symbol during the linking phase.
Something that is impossible to validate in C, unless one is using a custom compiler, like Apple is doing for iBoot firmware.
> By explicitly doing so, in an operation that is easy to grep for, or, in the case of a binary library, to search for as a symbol during the linking phase.
Most of the time, these operations will be inlined, so they will already be gone by the time it gets to the linker. The compiler phase is the latest point where they are still visible.
>No. Unsafe Rust only gets to do three things that aren't related to the "unsafe" keyword itself.
>[...]
>Your C program is free to define x as an array with four elements and then access x[6] anyway - but Rust deliberately cannot do that. Not in Safe Rust, but also not in Unsafe Rust either.
>[...]
>In fact by default the Rust compiler would warn you, if you write unsafe { foo[z] = 0; } that unsafe isn't doing anything useful here and you should remove it. That array dereference either is or, if z is small enough, is not, an overflow, and either way unsafe makes no difference.
Since the conversation is claiming that you can do everything in Rust that you can do in C, I want to provide some counter-nuance. :) I am guessing what people actually mean is that all the operations you want to do can be done via unsafe Rust somehow, and yes, you can do that. But also yes, it is not literally "just write 'unsafe'". You do need to use raw pointers.
For instance, if you want to overflow a buffer intentionally,
fn main() {
    let mut a = [1, 2, 3, 4];
    let b = [5, 6, 7, 8];
    // Raw pointer to the first element of `a`.
    let ptr: *mut i32 = &mut a[0];
    // Write past the end of `a`: undefined behavior, as in C.
    unsafe { ptr.add(5).write(999); }
    // If `b` happens to sit right after `a`, the 999 may show up here.
    println!("{:?}", b);
}
(Note that this is not just extremely platform-specific and compiler-specific about whether a is in front of b or vice-versa, it is straight-up Undefined Behavior because you write past the end of an object... but the equivalent C code is also Undefined Behavior, and subject to the same LLVM optimizations. So if you were happy with the corresponding C code, this is the equivalent Rust.)
If you really, really want, you can write your own UnsafeSlice type that does the unsafe stuff internally and exposes the standard indexing operator, which would make foo[z] actually accept arbitrary indices just like in C. But you shouldn't. https://play.rust-lang.org/?version=stable&mode=debug&editio...
(Among other things, a code reviewer should be suspicious of your use of "unsafe" in the internals of a thing without stating why the higher-level abstraction is safe, and in fact the abstraction is wildly unsafe here, so it's bad style to write code that launders the unsafety, so to speak. In the Rust for Linux patches, there are "SAFETY" comments above each use of "unsafe" defending their logical safety.)
UnsafeSlice is a terrible idea, but let us at least give it the normal ergonomics of a wrapper type so we can say UnsafeSlice(&mut a) rather than needing curly braces to make one :)
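For the curious, roughly what such a wrapper might look like (a sketch of the idea, not the code from the playground link above):

    use std::ops::Index;

    // A deliberately bad idea: an indexing operator with no bounds checks.
    struct UnsafeSlice<'a>(&'a [i32]);

    impl<'a> Index<usize> for UnsafeSlice<'a> {
        type Output = i32;
        fn index(&self, i: usize) -> &i32 {
            // No SAFETY comment can justify this: any caller may pass
            // any index, so this launders UB behind a safe-looking API.
            unsafe { self.0.get_unchecked(i) }
        }
    }

    fn main() {
        let a = [1, 2, 3, 4];
        let s = UnsafeSlice(&a);
        println!("{}", s[1]); // fine; s[6] would be C-style UB
    }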
You don't need any more than that to match what C offers (C as in standard C, not the folklore of C as a portable assembler). In fact you're freer in Rust than in C, because there is a simple, defined way to type-pun memory in Rust.
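For example, a minimal sketch: Rust gives you defined ways to reinterpret bytes, such as f32::to_bits or mem::transmute between same-size types, where the naive C pointer-cast equivalent runs into strict-aliasing undefined behavior.

    fn main() {
        let f = 1.0f32;
        // Defined type punning: the float's bit pattern as an integer.
        let bits = f.to_bits();
        // The same via transmute: unsafe, but well-defined between
        // same-size types where every bit pattern is valid.
        let bits2: u32 = unsafe { std::mem::transmute(f) };
        assert_eq!(bits, bits2);
        println!("{:#010x}", bits); // prints 0x3f800000
    }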
> Your C program is free to define x as an array with four elements and then access x[6] anyway
It's free to do anything, but you can't be sure that it will do that, because of the utterly weak specification.
Note that the C version of this might compile but the result is undefined, so it could be what you intuitively thought what would happen, or anything else.
> - Working in Rust is not fun but frustrating, and contrary to C, you are limited by the language/compiler in what you can do.
That depends on the beholder.
Developers that grew up with Algol-derived languages like Modula, Pascal and Ada feel Rust brings fresh wind into systems programming, with safety considerations we thought were lost forever and only partially covered by C++.
Then there are the others, like Kernighan, who feel that languages like Pascal, sorry, Rust, are programming with a straitjacket, and better not change anything.
I suppose that's why they're writing drivers, not crypto libraries.
The sample in the article is Binder, which is only needed for platforms Android supports, which are all Tier 1 and 2 Rust targets (as is MIPS, for that matter).
If you wrote a driver for HW that only appears on 1 or 2 CPU architectures then targeting is less of an issue. I would not be surprised if lots of drivers in Linux only work on x86 anyway.
> you are limited by the language/compiler in what you can do
You can translate most C to Rust automatically (https://c2rust.com/) and there's nothing that I'm aware of that can't be done in Rust via unsafe and transmute. (technically some things like specific label jumps can't be translated, but all of those can be rewritten to other constructs) Do you have some specific cases in mind?
That's "translate c to rust" in the same way as translating English to Japanese by looking up the kanji for an English word, and replacing it word by word. Why not just generate bindings at that point?
I'm neither recommending to use it, nor saying it's a good quality result. I'm addressing the "you are limited by the language/compiler on what you can do" part, which for real code is not the case in my experience.
Ah, yeah, I see your point. I suppose that's a useful shim toward having full Rust interop with a pre-existing C codebase as you convert, or if you have a mature lib you just want to include wholesale in Rust.
But yeah bottom line, nowhere does Rustc "stop" you from doing things. Just strongly discourage :)
Here’s a blog series about rewriting some classic C in Rust, first unsafely and then safely, and getting some performance wins along the way: http://cliffle.com/p/dangerust/
> Working in Rust is not fun but frustrating, and contrary to C, you are limited by the language/compiler in what you can do.
People who have had prior exposure to C tend to find Rust frustrating.
People who have not had prior exposure to C tend to find it fun, and an average programmer of this sort can fearlessly write bare-metal code that beats the best C programmers' C code in safety and rivals it in performance.
Systems programming has been fundamentally changed by Rust. It's become as accessible and democratized as web programming, no longer the sole province of a cadre of elite C programmers.
Indeed it was (/s); that's why not even Servo has shipped. Stop preaching programming languages without results. Because of their benevolent dictators, Linux (Linus Torvalds), Clojure (Rich Hickey), Zig (Andrew Kelley) and Python (Guido van Rossum) have had less democratic development processes, and this is precisely why the results are so good. A good design is not a democratic consensus. Look how Rust and C++ ended up: a big pile of complexity. Even Scala 3 had to be saved by an intervention from Martin Odersky to clean up the language, with huge backlash from the community.
Servo was not meant to ship (at least by Mozilla, when there was paid staff working on Servo), it was a research vessel and Rust components now shipping in Firefox (Stylo, WebRender) started life in Servo.