That is so good to hear. I feel Rust support has come a long way in the past two years, and you can write a functional Rust kernel module now with almost no boilerplate.
Removing the "experimental" tag is certainly a milestone to celebrate.
I'm looking forward to distros shipping a default kernel with Rust support enabled. That, to me, will be the real point of no return, where Rust is so prevalent that there will be no going back to a C only Linux.
A few distros already do that. Off the top of my head, both NixOS and Arch enable the QR code kernel panic screen, which is written in Rust. Granted, those are rather bleeding edge, but I know a few more traditional distros have that enabled (I _think_ Fedora has it? But not sure).
For now there aren't many popular drivers that use Rust, but there are currently 3 in-development GPU drivers that use it, and I suspect that when those get merged that'll be the real point of no return:
I suspect the first one of those to be actually used in production will be the Tyr driver, especially since Google's part of it and they'll probably want to deploy it on Android, but for the desktop (and server!) Linux use-case, the Nova driver is likely to be the major one.
I believe Google has developed a Rust implementation of the binder driver which was recently merged, and they are apparently planning to remove the original C implementation. Considering binder is essential for Android that would be a major step too.
Arch has never failed me. The only time I remember it panicking was when I naively stopped an upgrade in the middle, which left the initramfs ungenerated, but I quickly fixed it by chroot'ing in and running mkinitcpio. Back up in no time.
wasn't there like a drive-by maintainer rejection of something rust related that kind of disrupted the asahi project? i can't say i followed the developments much but i do recall it being some of that classic linux-kernel-on-broadway theater. i also wonder if that was a first domino falling of sorts for asahi, i legitimately can't tell if that project lives on anymore
Yes, that's the entry to the rabbit hole, and the reader is advised to dig their own tunnel.
I remember following the tension for a bit. Yes, there are other subjects about how things are done, but after reading it, I remember framing "code quality" as the base issue.
In high-stakes software development environments, egos generally run high, and when people clash and don't back down, sparks fly. If those warning signs are ignored, something has to give.
If I'm mistaken, I'd enjoy a good explanation and will gladly stand corrected.
This is what happened here. This is probably the second or third time I've witnessed this over 20+ years. The most famous one was over CPU schedulers, namely BFS, again IIRC.
Some context: I'm a Rust skeptic and tired of the Rust Evangelism Task Force and Rewrite-in-Rust movements.
---
Yes. I remember that message.
Also let's not forget what marcan said [0] [1].
In short, a developer didn't want their C codebase littered with Rust code, which I can understand; then the Rust team said that they could maintain that part themselves, not complicating his life further (kudos to them); and then the developer lashed out at them to GTFO of "his" lawn (which, again, I understand but don't condone; I'd have acted differently).
This again boils down to the fact that code quality matters. Rust is a small child compared to the whole codebase, and weariness from old-timers is normal. We can discuss behaviors until eternity, but humans are humans. You can't just standardize everything.
Coming to marcan, how he behaved is a big no in my book too, because it's not healthy. Yes, the LKML is not healthy, but this is one of those things that makes you wrong even when you're right.
I'm also following a similar discussion list, which has a similar level of friction in some matters, and the correct thing is to take some time off and touch grass when you're feeling tired and close to burnout, not to run around like a lit torch among flammable people.
One needs to try to be the better example esp. when the environment is not in an ideal shape. It's the hardest thing to do, but it's the most correct path at the same time.
One perspective is that Rust appears to be forced into the Linux kernel through harassment and pressure, instead of being pulled in carefully, organically and in a friendly way, while taking good care of any significant objections. Objections like getting the relevant features from unstable Rust into stable Rust, getting a second compiler like gccrs fully up and running (the Linux kernel uses gcc for C), ensuring that there is a specification (the specification donated by Ferrous Systems might have significant issues), or prioritizing the kernel higher than Rust.
If I had been enthusiastic about Rust, and wanted to see if it could maybe make sense for Rust to be part of the Linux kernel[0], I would probably have turned my attention to gccrs.
What is then extra strange is that there has been some public hostility against gccrs (the WIP Rust compiler for GCC) from the rustc camp (the sole main Rust compiler, primarily based on LLVM).
It feels somewhat like a corporate takeover, not something where good and benign technology is most important.
And money is at stake as well, the Rust Foundation has a large focus on fundraising, like how their progenitors at Mozilla/Firefox have or had a large focus on fundraising. And then there are major Rust proponents who openly claim, also here on Hacker News, that software and politics are inherently entwined.
[0]: And also not have it as a strict goal to get Rust into the kernel, for there might be the possibility that Rust was discovered not to be a good fit; and then one could work on that lack of fit after discovery and maybe later make Rust a good fit.
>One perspective is that Rust appears to be forced into the Linux kernel through harassment and pressure, instead of being pulled in carefully, organically and in a friendly way, while taking good care of any significant objections. Objections like getting the relevant features from unstable Rust into stable Rust, getting a second compiler like gccrs fully up and running (the Linux kernel uses gcc for C), ensuring that there is a specification (the specification donated by Ferrous Systems might have significant issues), or prioritizing the kernel higher than Rust.
On the flip side, there have been many downright C-only sycophants in the Linux kernel who have done everything possible to throttle and sideline the Rust for Linux movement.
There have been multiple very public statements made by other maintainers that they actively reject Rust in the kernel rather than coming in with open hearts and minds.
Why are active maintainers rejecting parts of code that are under their remit of responsibility?
I think the main issue is that it never was about how rust programmers should write more rust. Just like a religion it is about what other people should do. That's why you see so many abandoned, very ambitious Rust projects that set out to redo x, y or z, currently written in C, all over again (and throw away many years of hardening and bug fixes). The idea is that the original authors can somehow be manipulated into taking these grand gifts and then throwing away their old code. I'm waiting for the SQLite rewrite in Rust any day now. And it is not just the language itself; you also get a package manager that you will have to incorporate into your workflow (with risks that are entirely its own), new control flow models, terminology and so on.
Rust should have done exactly one thing and done it as well as possible: be a C replacement while sticking as close as possible to the C syntax. Now we have something that is a halfway house between C, C++, JavaScript (Node.js, actually), Java and possibly even Ruby, with a syntax that makes Perl look good and a bunch of instability thrown in for good measure.
It's as if the Jehovah's Witnesses decided to get into tech and to convince the world of the error of its ways.
>I think the main issue is that it never was about how rust programmers should write more rust. Just like a religion it is about what other people should do.
>Rust should have done exactly one thing and done it as well as possible: be a C replacement while sticking as close as possible to the C syntax.
Personally, I observe that Rust is being forced in everywhere in the Linux ecosystem. One of my biggest concerns is uutils, mostly because of the permissive license it bears. The Linux kernel and its immediate userspace should be GPL-licensed to protect the OS, in my opinion.
I have a personal principle of not using LLVM-based languages (equal parts because I don't like how the LLVM people behave toward GCC and because I support free software first and foremost), so I personally watch gccrs closely, and my personal ban on Rust will be lifted the day gccrs becomes an alternative compiler.
This brings me to the second biggest reservation about Rust. The language is moving too fast, without any apparent plan or maturing process. Things are labeled unstable, there's no spec, and apparently nobody is working on these very seriously, which you also noted.
I don't understand why people are hostile toward gccrs. Can you point me to some discussions?
> It feels somewhat like a corporate takeover, not something where good and benign technology is most important.
As I noted above, the whole Rust ecosystem feels like it's on a crusade, esp. against C++. I write C++, I play with pointers a lot, and I understand the gotchas, as well as the team dynamics and how it's becoming harder to write good software with larger teams regardless of programming language. But the way Rust propels itself forward leaves a very bad taste in the mouth, and I personally don't like being forced into something. So, while gccrs will remove my personal ban, I'm not sure I'll take to the language enthusiastically. On the other hand, another language Rust people apparently hate, Go, ticks all the right boxes as a programming language. Yes, they made some mistakes and walked some of them back at the last moment, but the whole ordeal looks tidier and better than Rust.
In short, being able to borrow-check things is not a license to push people around like this, and they are building themselves a good countering force with all this enthusiasm they're pumping around.
Oh, I'll only thank them for making other programming languages improve much faster. Like how LLVM has stirred GCC devs into action and made GCC a much better compiler in no time.
> In short, a developer didn't want their C codebase littered with Rust code
from reading those threads at the time, the developer in question was deliberately misunderstanding things. "their" codebase wasn't to be "littered" with Rust code - it wasn't touched at all.
<< Alex Gaynor recently announced he is formally stepping down as one of the maintainers of the Rust for Linux kernel code with the removal patch now queued for merging in Linux 6.19. Alex Gaynor was one of the original developers to experiment with Rust code for Linux kernel modules. He's drifted away from Rust Linux kernel development for a while due to lack of time and is now formally stepping down as a listed co-maintainer of the Rust code. After Wedson Almeida Filho stepped down last year as a Rust co-maintainer, this now leaves Rust For Linux project leader Miguel Ojeda as the sole official maintainer of the code while there are several Rust code reviewers. >>
I never used Linux because it was built using C. Never have I cared what language a product was built in. The Rust community, however, wants you to use a product just because it's implemented in Rust, or at least treats that as one of the selling points. For that reason I decided to avoid, as much as I can, any product that advertises being written in Rust as a selling point. I was planning to switch from a Mac to a Linux machine; not anymore, I'm happily upgrading my Mac.
I can't find the video, but there was an angry guy who stated that he should not have to learn Rust, that nobody should, and that Rust should not be in the kernel.
What they mean is that the Linux kernel has a long-standing policy to keep the whole kernel compilable on every commit, so any commit that changes an internal API must also fix up _all_ the places where that internal API is used.
While Rust in the kernel was experimental, this rule was relaxed somewhat to avoid introducing a barrier for programmers who didn't know Rust, so their work could proceed unimpeded while the experiment ran. In other words, the Rust code was allowed to be temporarily broken while the Rust maintainers fixed up uses of APIs that were changed in C code.
I guess in practice you'd want to have Rust installed as part of your local build and test environment. But I don't think you have to learn Rust any more (or any less) than you have to learn Perl or how the config script works.
As long as you can detect if/when you break it, you can then either quickly pick up enough to get by (if it's trivial), or you ask around.
The proof of the pudding will be in the eating, the rust community better step up in terms of long term commitment to the code they produce because that is the thing that will keep this code in the kernel. This is just first base.
Learn rust to a level where all cross language implications are understood, which includes all `unsafe` behaviour (...because you're interfacing with C).
Rust's borrowing rules might force you to make different architecture choices than you would with C. But that's not what I was thinking about.
For a given rust function, where you might expect a C programmer to need to interact due to a change in the C code, most of the lifetime rules will have already been hammered out before the needed updates to the rust code. It's possible, but unlikely, that the C programmer is going to need to significantly change what is being allocated and how.
There is, I understand, an expectation that if you do make breaking changes to kernel APIs, you fix the callers of such APIs. Which has been a point of contention, that if a maintainer doesn't know Rust, how would they fix Rust users of an API?
The Rust for Linux folks have offered that they would fix up such changes, at least during the experimental period. I guess what this arrangement looks like long term will be discussed ~now.
Without a very hard commitment that is going to be a huge hurdle to continued adoption, and kernel work is really the one place where rust has an actual place. Everywhere else you are most likely better off using either Go or Java.
I'd say no, access to a larger pool of programmers is an important ingredient in the decision of what you want to write something in. Netscape pre-dated Java which is why it was written in C/C++ and that is why we have rust in the first place. But today we do have Java which has all of the rust safety guarantees and then some, is insanely performant for network code and has a massive amount of mindshare and available programmers. For Go the situation is a bit less good but if you really need that extra bit of performance (and you almost never really do) then it might be a good choice.
> But today we do have Java which has all of the rust safety guarantees and then some, is insanely performant for network code and has a massive amount of mindshare and available programmers.
I'm not entirely convinced that Java has much mindshare among system programmers. During my 15-year career in the field I haven't heard "Boy, I wish we wrote this <network driver | user-space offload networking application | NIC firmware> in Java" once.
I've seen plenty of networking code in Java, including very large scale video delivery platforms, near real time packet analysis and all kinds of stuff that I would have bet were not even possible in Java. If there is one thing that I'm really impressed by then it is how far Java has come performance wise, from absolutely dog slow to being an insignificant fraction away from low level languages. And I'm not a fan (to put it mildly) so you can take that as 'grudging respect'.
The 'factory factory' era of Java spoiled the language thoroughly for me.
Absolutely. Again, not a fan of the language. But you can transcode tens of video streams on a single machine into a whole pile of different output resolutions and saturate two NICs while you're at it. Java has been 'fast enough' for almost all purposes for the last 10 years at least, if not longer.
The weak point will always be the startup time of the JVM which is why you won't see it be used for short lived processes. But for a long lived process like a browser I see no obstacles.
No, Rust is in the kernel for driver subsystems. Core linux parts can't be written in Rust yet for the problem you mention. But new drivers *can* be written in Rust
Linux, but not Rust: {'nios2', 'microblaze', 'arc', 'openrisc', 'parisc', 's390', 'alpha', 'sh'}
Rust, but not Linux: {'avr', 'bpfel', 'amdgcn', 'wasm32', 'msp430', 'bpfeb', 'nvptx', 'wasm64'}
Personally, I've never used a computer from the "Linux, but not Rust" list, although I have gotten close to a DEC Alpha that was on display somewhere, and I know somebody who had a Sega Dreamcast (`sh`) at some point.
Well, GCC 15 already ended support for the nios2 soft-core. Its successor is Nios V, which runs RISC-V. If users want to update the kernel, they'll also need to update their FPGA.
Microblaze is also a soft-core based on RISC-V; presumably it could support actual RISC-V if anyone cared.
None of the others have received new hardware within the last 10 years; everybody using them will already be running an LTS kernel on them.
It looks like there really are no reasons not to require rust for new versions of the kernel from now on then!
Is this the same NIOS that runs on FPGA? We wrote some code for it during digital design in university, and even an a.out was terribly slow, can't imagine running a full kernel. Though that could have been the fault of the hardware or IP we were using.
It’s an interesting list from the perspective of what kind of project Linux is. Things like PA-RISC and Alpha were dead even in the 90s (thanks to the successful Itanium marketing push convincing executives not to invest in their own architectures), and SuperH was only relevant in the 90s due to the Dreamcast pushing volume. That creates an interesting dynamic where Linux as a hobbyist OS has people who want to support those architectures, but Linux as the dominant server and mobile OS doesn’t want to hold back 99.999999+% of the running kernels in the world.
There was a time when, for 64-bit support, Alpha really was the only game in town if you wanted to buy a server without adding a sixth zero to the bill. It was AMD, not Itanium, that killed Alpha.
> Rust, which has been cited as a cause for concern around ensuring continuing support for old architectures, supports 14 of the kernel's 20-ish architectures, the exceptions being Alpha, Nios II, OpenRISC, PARISC, and SuperH.
> supports 14 of the kernel's 20-ish architectures
That's a lot better than I expected to be honest, I was thinking maybe Rust supported 6-7 architectures in total, but seems Rust already has pretty wide support. If you start considering all tiers, the scope of support seems enormous: https://doc.rust-lang.org/nightly/rustc/platform-support.htm...
Not having a recent GCC and not having GCC are different things. There may be architectures that have older GCC versions, but are no longer supported for more current C specs like C11, C23, etc.
I don't believe Rust for Linux uses std. I'm not sure how much of Rust for Linux the GCC/Rust effort(s) are able to compile, but if it was "all of it" I'm sure we'd have heard about it.
Hmm, seeing as this looks to be the path forward as far as in-kernel Linux drivers are concerned, and for BSDs like FreeBSD that port said drivers to their own kernel:
are we going to see the same oxidation of the BSDs, or resistance and bifurcation?
I don't understand why. Working with hardware you're going to have to do various things with `unsafe`. Interfacing to C (the rest of the kernel) you'll have to be using `unsafe`.
In my mind, the reasoning for rust in this situation seems flawed.
I’ve done a decent amount of low level code (never written a driver, but I’m still young). The vast majority of it can be safe, and call into unsafe wrappers when needed. My experience is that a very, very small amount of stuff actually needs unsafe, and the only reason the unsafe C code is used is because it’s possible, not because it’s necessary.
It is interesting how many very experienced programmers have not yet learned your lesson, so you may be young but you are doing very well indeed. Kudos.
The amount of unsafe is pretty small and limited to where the interfacing with io or relevant stuff is actually happening. For example (random selection) https://lkml.org/lkml/2023/3/7/764 has a few unsafes, but either: a) trivial structure access, or b) documented regarding why it's valid. The rest of the code still benefits.
I need to check the rust parts of the kernel; I presume there are significant amounts of unsafe. Is unsafe Rust a bit better nowadays? I remember a couple of years ago people complained that unsafe is really hard to write and very "un-ergonomic".
    unsafe {
        // this is normal Rust code, you can write as much of it as you like!
        // (and you can bend some crucial safety rules)
        // but try to limit and document what you put inside an unsafe block.
        // then wrap it in a safe interface so no one has to look at it again
    }
The idea is that most of the unsafe code to interact with the C API of the kernel is abstracted away in the kernel crate, and the drivers themselves should use very little unsafe code, if any.
I don't think unsafe Rust has gotten any easier to write, but I'd also be surprised if there was much unsafe except in the low-level stuff (hard to write Vec without unsafe), and to interface with C which is actually not hard to write.
Mostly Rust has been used for drivers so far. Here's the first Rust driver I found:
Drivers are interesting from a safety perspective, because on systems without an IOMMU sending the wrong command to devices can potentially overwrite most of RAM. For example, if the safe wrappers let you write arbitrary data to a PCIe network card’s registers you could retarget a receive queue to the middle of a kernel memory page.
> if the safe wrappers let you write arbitrary data to a PCIe network card’s registers
Functions like that can and should be marked unsafe in rust. The unsafe keyword in rust is used both to say “I want this block to have access to unsafe rust’s power” and to mark a function as being only callable from an unsafe context. This sounds like a perfect use for the latter.
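A minimal sketch of that second use, with hypothetical names and register offsets (not any real driver's API): the raw register write is an `unsafe fn` because a bad value could retarget DMA, while a vetted operation that only touches a known-harmless register is exposed safely.

    pub struct Nic {
        mmio_base: *mut u32,
    }

    impl Nic {
        /// # Safety
        /// `offset` must be a valid register offset for this device, and
        /// `val` must not program a DMA address outside buffers we own.
        pub unsafe fn write_reg(&mut self, offset: usize, val: u32) {
            // SAFETY: upheld by the caller per the contract above.
            unsafe { core::ptr::write_volatile(self.mmio_base.add(offset / 4), val) }
        }

        /// Safe wrapper: only sets a (made-up) interrupt-enable bit.
        pub fn enable_interrupts(&mut self) {
            // SAFETY: 0x10 is assumed to be a harmless IRQ-enable register;
            // no DMA address is involved.
            unsafe { self.write_reg(0x10, 1) }
        }
    }

    fn main() {
        // Stand-in for a real MMIO mapping, just so the sketch runs.
        let mut fake_regs = [0u32; 8];
        let mut nic = Nic { mmio_base: fake_regs.as_mut_ptr() };
        nic.enable_interrupts();
        assert_eq!(fake_regs[4], 1); // 0x10 / 4 == index 4
    }

Safe code can call enable_interrupts() freely, but has to write (and justify) `unsafe` to reach write_reg() directly.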
> Functions like that can and should be marked unsafe in rust.
That's not how it works. You don't mark them unsafe unless it's actually required for some reason. And even then, you can limit that scope to a line or two in majority of cases. You mark blocks unsafe if you have to access raw memory and there's no way around it.
All logic, all state management, all per-device state machines, all command parsing and translation, all data queues, etc. Look at the examples people posted in other comments.
And yet, the Linux kernel's Rust code uses unstable features only available on a nightly compiler.
Not optimal for ease of compilation and building old versions of the Kernel. (You need a specific version of the nightly compiler to build a specific version of the Kernel)
> Not optimal for ease of compilation and building old versions of the Kernel. (You need a specific version of the nightly compiler to build a specific version of the Kernel)
It's trivial to install a specific version of the toolchain though.
The difference probably is that GCC extensions have been stable for decades. Meanwhile Rust experimental features have breaking changes between versions. So a Rust version 6 months from now likely won't be able to compile the kernel we have today, but a GCC version in a decade will still work.
you're joking because of the other frontpage story with Gemini 3 hallucinating hacker news 10 years in the future, but still lets keep the hallucinations to that page.
Other platforms don't have a leader that hates C++ and then accepts a language that is also quite complex, one that even has two macro systems of Lisp-like wizardry, see Serde.
OSes have been written with C++ in the kernel since the late 1990s, and AI is being powered by hardware (CUDA) that was designed specifically to accommodate the C++ memory model.
Also Rust compiler depends on a compiler framework written in C++, without it there is no Rust compiler, and apparently they are in no hurry to bootstrap it.
Yeah they’re all just tools. From a technical perspective, rust, C and C++ work together pretty well. Swift too. LLVM LTO can even do cross language inlining.
I don’t think C++ is going anywhere any time soon. Not with it powering all the large game engines, llvm, 30 million lines of google chrome and so on. It’s an absolute workhorse.
Not a system programmer -- at this point, does C hold any significant advantage over Rust? Is it inevitable that everything written in C is going to be gradually converted to safer languages?
C currently remains the language of system ABIs, and there remains functionality that C can express that Rust cannot (principally bitfields).
Furthermore, in terms of extensions to the language to support more obtuse architectures, Rust has made a couple of decisions that make it hard for some of those architectures to be supported well. For example, Rust has decided that the array index type, the object size type, and the pointer size type are all the same type, which is not the case for a couple of architectures; it's also the case that things like segmented pointers don't really work in Rust (of course, they barely work in C, but barely is more than nothing).
Can you expand on bitfields? There’s crates that implement bitfield structs via macros so while not being baked into the language I’m not sure what in practice Rust isn’t able to do on that front.
I'm not a rust or systems programmer, but I think it meant that as an ABI or foreign function interface bitfields are not stable or not intuitive to use, as they can't be declared granularly enough.
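For what it's worth, here's what the non-macro fallback looks like in Rust: manual shifts and masks for a made-up 3-bit MODE field, roughly what C's `unsigned mode : 3;` gives you in the language itself.

    // Hypothetical 32-bit register with a 3-bit MODE field at bits 4..=6.
    const MODE_SHIFT: u32 = 4;
    const MODE_MASK: u32 = 0b111 << MODE_SHIFT;

    fn get_mode(reg: u32) -> u32 {
        (reg & MODE_MASK) >> MODE_SHIFT
    }

    fn set_mode(reg: u32, mode: u32) -> u32 {
        (reg & !MODE_MASK) | ((mode << MODE_SHIFT) & MODE_MASK)
    }

    fn main() {
        let reg = set_mode(0, 0b101);
        assert_eq!(get_mode(reg), 0b101);
    }

The macro crates mostly generate this kind of code for you; the ABI/layout question above is about how such fields get packed when they cross a C boundary.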
C's bit-fields ABI isn't great either. In particular, the order of allocation of bit-fields within a unit and alignment of non-bit-field structure members are implementation defined (6.7.2.1). And bit-fields of types other than `_Bool`, `signed int` and `unsigned int` are extensions to the standard, so that somewhat limits what types can have bitfields.
That first sentence though. Bitfields and ABI alongside each other.
Bitfield packing rules get pretty wild. Sure the user facing API in the language is convenient, but the ABI it produces is terrible (particularly in evolution).
I would like a revision to bitfields and structs to make them behave the way a programmer thinks, with the compiler free to suggest changes which optimize the layout, as well as some flag that indicates the compiler should not, because it's a finalized structure.
I'm genuinely surprised that usize <=> pointer convertibility exists. Even Go has different types for pointer-width integers (uintptr) and sizes of things (int/uint). I can only guess that Rust's choice was seen as a harmless simplification at the time. Is it something that can be fixed with editions? My guess is no, or at least not easily.
There is a cost to having multiple language-level types that represent the exact same set of values, as C has (and is really noticeable in C++). Rust made an early, fairly explicit decision that a) usize is a distinct fundamental type from the other types, and not merely a target-specific typedef, and b) not to introduce more types for things like uindex or uaddr or uptr, which are the same as usize on nearly every platform.
Rust worded its initial guarantee such that usize is sufficient to round-trip a pointer (making it effectively uptr), and there remains concern among several of the maintainers about breaking that guarantee, despite the fact that people on the only target that would be affected basically saying they'd rather see that guarantee broken. Sort of the more fundamental problem is that many crates are perfectly happy opting out of compiling for weirder platforms--I've designed some stuff that relies on 64-bit system properties, and I'd rather like to have the ability to say "no compile for you on platforms where usize-is-not-u64" and get impl From<usize> for u64 and impl From<u64> for usize. If you've got something like that, it also provides a neat way to say "I don't want to opt out of [or into] compiling for usize≠uptr" while keeping backwards compatibility.
> ...not to introduce more types for things like uindex or uaddr or uptr, which are the same as usize on nearly every platform. ... there remains concern among several of the maintainers about breaking that guarantee, despite the fact that people on the only target that would be affected basically saying they'd rather see that guarantee broken.
The proper approach to resolving this in an elegant way is to make the guarantee target-dependent. Require all depended-upon crates to acknowledge that usize might differ from uptr in order to unlock building for "exotic" architectures, much like how no-std works today. That way "nearly every platform" can still rely on the guarantee with no rise in complexity.
> Is it something that can be fixed with editions? My guess is no, or at least not easily.
Assuming I'm reading these blog posts [0, 1] correctly, it seems that the size_of::<usize>() == size_of::<*mut u8>() assumption is changeable across editions.
Or at the very least, if that change (or a similarly workable one) isn't possible, both blog posts do a pretty good job of pointedly not saying so.
Personally, I like 3.1.2 from your link [0] best, which involves getting rid of pointer <=> integer casts entirely, and just adding methods to pointers, like addr and with_addr. This needs no new types and no new syntax, though it does make pointer arithmetic a little more cumbersome. However, it also makes it much clearer that pointers have provenance.
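For reference, methods along those lines have since shown up on raw pointers as the strict-provenance-style APIs (`addr`, `with_addr`, `map_addr`), if I'm remembering the names right. A tiny sketch of the tagging pattern with no `as usize` casts:

    fn main() {
        let x = Box::new(42u64);
        let p: *const u64 = &*x;

        // Inspect the address without an integer cast.
        let addr = p.addr();
        assert_eq!(addr % std::mem::align_of::<u64>(), 0);

        // Stash a tag in the (known-zero) low bit, then strip it again.
        // `map_addr` keeps the original pointer's provenance intact.
        let tagged = p.map_addr(|a| a | 1);
        let untagged = tagged.map_addr(|a| a & !1);

        // SAFETY: `untagged` has the same address and provenance as `p`.
        assert_eq!(unsafe { *untagged }, 42);
    }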
I think the answer to "can this be solved with editions" is more "kinda" rather than "no"; you can make hard breaks with a new edition, but since the old editions must still be supported and interoperable, the best you can do with those is issue warnings. Those warnings can then be upgraded to errors on a per-project basis with compiler flags and/or Cargo.toml options.
In what architecture are those types different? Is there a good reason for it there architecturally, or is it just a toolchain idiosyncrasy in terms of how it's exposed (like LP64 vs. LLP64 etc.)?
CHERI has 64-bit object size but 128-bit pointers (because the pointer values also carry pointer provenance metadata in addition to an address). I know some of the pointer types on GPUs (e.g., texture pointers) also have wildly different sizes for the address size versus the pointer size. Far pointers on segmented i386 would be 16-bit object and index size but 32-bit address and pointer size.
There was one accelerator architecture we were working on that considered making the entire datapath 32-bit (taking less space), with a 32-bit index type and a 64-bit pointer size, but this was eventually rejected as too hard to get working.
I guess today, instead of 128bit pointers we have 64bit pointers and secret provenance data inside the cpu, at least on the most recent shipped iPhones and Macs.
In the end, I’m not sure that’s better; maybe we should have had extra-large pointers again (in the way that, back when 32-bit was so large, we stuffed other stuff in there), like CHERI proposes (though I think it still has a secret sidecar of data about the pointers).
Would love to see Apple get closer to CHERI. They could make a big change as they are vertically integrated, though I think their Apple Silicon for Mac moment would have been the time.
You mean in safe rust? You can definitely do self-referential structs with unsafe and Pin to make a safe API. Heck every future generated by the compiler relies on this.
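A hand-rolled sketch of that pattern (made-up type, nothing from any real crate), just to show where the unsafe goes and what the safe surface ends up looking like:

    use std::marker::PhantomPinned;
    use std::pin::Pin;
    use std::ptr::NonNull;

    // `slice` points into `data`, so the value must never move once set up.
    struct SelfRef {
        data: String,
        slice: NonNull<str>,
        _pin: PhantomPinned, // opts out of Unpin
    }

    impl SelfRef {
        fn new(data: String) -> Pin<Box<Self>> {
            let mut boxed = Box::pin(SelfRef {
                data,
                slice: NonNull::from(""),
                _pin: PhantomPinned,
            });
            let slice = NonNull::from(boxed.data.as_str());
            // SAFETY: we only set the pointer; the value stays pinned, so
            // `data` cannot move and the pointer stays valid.
            unsafe { Pin::get_unchecked_mut(boxed.as_mut()).slice = slice };
            boxed
        }

        // The safe API: callers cannot invalidate the internal pointer.
        fn slice(self: Pin<&Self>) -> &str {
            // SAFETY: `slice` always points into the pinned `data`.
            unsafe { self.get_ref().slice.as_ref() }
        }
    }

    fn main() {
        let s = SelfRef::new("hello pinned world".to_string());
        println!("{}", s.as_ref().slice());
    }

Compiler-generated futures do essentially this, just with the unsafe hidden inside the generated state machine.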
There's going to be ~zero benefit from Rust's safety mechanisms in the kernel, which I anticipate will be littered with unsafe sections. I personally see this move not so much as a technical one but as political lobbying (US gov). IMHO for Linux kernel development this is not good news, since it will inevitably introduce friction in development and consequently reduce velocity, and most likely steer away a certain number of long-time kernel developers too.
Sorry but what have I said wrong? The nature of code written in kernel development is such that using unsafe is inevitable. Low-level code with memory juggling and patterns that you usually don't find in application code.
And yes, I have had a look at the examples: maybe one or two years ago there was a significant patch submitted to the kernel, and the number of unsafe sections made me realize at that moment that Rust, in terms of kernel development, might not be what it is advertised to be.
Right? Thank you for the example. Let's first start by saying the obvious: this is not an upstream driver but a fork, and it is also considered by its author to be a PoC at best. You can see this acknowledged on its very web page, https://rust-for-linux.com/nvme-driver, which says "The driver is not currently suitable for general use.". So, I am not sure what point you were trying to make by giving something that is not even production-quality code?
Now let's move to the analysis of the code. The whole code, without crates, counts only 1500 LoC (?). Quite small but ok. Let's see the unsafe sections:
rnvme.rs - 8x unsafe sections, 1x SyncUnsafeCell used for NvmeRequest::cmd (why?)
nvme_mq/nvme_prp.rs - 1x unsafe section
nvme_queue.rs - 6x unsafe, not sections but complete traits
nvme_mq.rs - 5x unsafe sections, 2x SyncUnsafeCell used, one for IoQueueOperations::cmd, the second for AdminQueueOperations::cmd
In total, this is 23x unsafe sections/traits over 1500 LoC, for a driver that is not even a production-quality driver. I don't have time, but I wonder how large this number would become if all the crates this driver is using were pulled into the analysis too.
The idea behind the safe/unsafe split is to provide safe abstractions over code that has to be unsafe.
The unsafe parts have to be written and verified manually very carefully, but once that's done, the compiler can ensure that all further uses of these abstractions are correct and won't cause UB.
Everything in Rust becomes "unsafe" at some lower level (every string has unsafe in its implementation, the compiler itself uses unsafe code), but as long as the lower-level unsafe is correct, the higher-level code gets safety guarantees.
This allows kernel maintainers to (carefully) create safe public APIs, which will be much safer to use by others.
C doesn't have such an explicit split, and its abstraction powers are weaker, so it doesn't let maintainers create APIs that can't cause UB even if misused.
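A tiny standard-library illustration of that layering (nothing kernel-specific): the checked constructor is effectively the safe API wrapped around the unchecked, `unsafe` one.

    fn main() {
        let bytes = vec![0xF0, 0x9F, 0xA6, 0x80]; // valid UTF-8 (a crab emoji)

        // Safe API: validity is checked; bad input becomes an Err, never UB.
        let s = String::from_utf8(bytes.clone()).unwrap();

        // Unsafe API: the check is skipped and the caller takes on the proof
        // obligation. SAFETY: `bytes` was just validated as UTF-8 above.
        let t = unsafe { String::from_utf8_unchecked(bytes) };

        assert_eq!(s, t);
    }

The kernel crate aims to play the same role for the C APIs: the dangerous constructors and calls live behind documented `unsafe` boundaries, and drivers mostly see the checked versions.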
> I am not sure what point you were trying to make by giving something that is not even production-quality code?
let's start by prefacing that 'production quality' C is 100% unsafe in Rust terms.
> Sorry, I am not buying that argument.
here's where we fundamentally disagree: you listed a couple dozen unsafe places in 1.5kLOC of code; let's be generous and say that's 10% - and you're trying to sell it as a bad thing, whereas I'm seeing the same numbers and think it's a great improvement over status quo ante.
> let's start by prefacing that 'production quality' C is 100% unsafe in Rust terms.
I don't know what one should even make from that statement.
> here's where we fundamentally disagree: you listed a couple dozen unsafe places in 1.5kLOC of code; let's be generous and say that's 10%
It's more than 10%, and you didn't even bother to look at the code but still presented it, what is in reality a toy driver example, as something credible (?) to support your argument that I'm spreading FUD. Kinda silly.
Even if it were only that much (10%), the fact that it is in the most crucial part of the code makes the argument around Rust safety moot. I am sure you have heard of the 90/10 rule.
The time will tell but I am not holding my breath. I think this is a bad thing for Linux kernel development.
> I don't know what one should even make from that statement.
it's just a fact. by definition of the Rust language unsafe Rust is approximately as safe as C (technically Rust is still safer than C in its unsafe blocks, but we can ignore that.)
> you didn't even bother to look at the code but still presented
of course I did, what I've seen were one-liner trait impls (the 'whole traits' from your own post) and sub-line expressions of unsafe access to bindings.
> technically Rust is still safer than C in its unsafe blocks
This is quite dubious in a practical sense, since Rust unsafe blocks must manually uphold the safety invariants that idiomatic Safe Rust relies on at all times, which includes, e.g. references pointing to valid and properly aligned data, as well as requirements on mutable references comparable to what the `restrict` qualifier (which is rarely used) involves in C. In practice, this is hard to do consistently, and may trigger unexpected UB.
Some of these safety invariants can be relaxed in simple ways (e.g. &Cell<T> being aliasable where &mut T isn't) but this isn't always idiomatic or free of boilerplate in Safe Rust.
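The `&Cell<T>` trick in miniature, for anyone who hasn't seen it: two aliasing shared references may both mutate through a Cell, which two `&mut` references could never do.

    use std::cell::Cell;

    // Both parameters may alias; Cell makes that fine without any `unsafe`.
    fn bump_both(a: &Cell<u32>, b: &Cell<u32>) {
        a.set(a.get() + 1);
        b.set(b.get() + 1);
    }

    fn main() {
        let x = Cell::new(0);
        bump_both(&x, &x); // the same cell passed twice
        assert_eq!(x.get(), 2);
    }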
It's great that the Google Android team has been tracking data to answer that question for years now and their conclusion is:
-------
The primary security concern regarding Rust generally centers on the approximately 4% of code written within unsafe{} blocks. This subset of Rust has fueled significant speculation, misconceptions, and even theories that unsafe Rust might be more buggy than C. Empirical evidence shows this to be quite wrong.
Our data indicates that even a more conservative assumption, that a line of unsafe Rust is as likely to have a bug as a line of C or C++, significantly overestimates the risk of unsafe Rust. We don’t know for sure why this is the case, but there are likely several contributing factors:
- unsafe{} doesn't actually disable all or even most of Rust’s safety checks (a common misconception).
- The practice of encapsulation enables local reasoning about safety invariants.
- The additional scrutiny that unsafe{} blocks receive.
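To illustrate the first point with a sketch (mine, not from the Android post): even inside an `unsafe` block, the borrow checker and the type system still apply, so this refuses to compile exactly as it would in safe code.

    fn main() {
        let mut v = vec![1, 2, 3];
        unsafe {
            let first = &v[0];
            v.push(4); // error[E0502]: cannot borrow `v` as mutable
            println!("{first}");
        }
    }

What `unsafe` actually unlocks is a short list of extra powers (dereferencing raw pointers, calling unsafe functions, and so on), not a general opt-out.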
Yes it does. Like C++, Rust isn't available everywhere.
There are industry standards based in C, or C++, and I doubt they will be adopting Rust any time soon.
POSIX (which does require a C compiler for certification), OpenGL, OpenCL, SYCL, Aparavi, Vulkan, DirectX, LibGNM(X), NVN, CUDA, LLVM, GCC, Unreal, Godot, Unity,...
Then there are plenty of OSes like the Apple ecosystem, among many other RTOSes and commercial endeavours for embedded.
Also, the official Rust compiler isn't fully bootstrapped yet, thus it depends on C++ tooling for its very existence.
Which is why efforts to make C and C++ safer are also much welcomed, plenty of code out there is never going to be RIR, and there are domains where Rust will be a runner up for several decades.
I agree with you for the most part, but it's worth noting that those standards can expose a C API/ABI while being primarily implemented in and/or used by non-C languages. I think we're a long way away from that, but there's no inherent reason why C would be a going concern for these in the Glorious RIIR Future:tm:.
Every system under the Sun has a C compiler. This isn't remotely true for Rust. Rust is more modern than C, but has its own issues, among others very slow compilation times. My guess is that C will be around long after people have moved on from Rust to another newfangled alternative.
There is a set of languages which are essentially required to be available on any viable system. At present, these are probably C, C++, Perl, Python, Java, and Bash (with a degree of asterisks on the last two). Rust I don't think has made it through that door yet, but on current trends, it's at the threshold and will almost certainly step through. Leaving this set of mandatory languages is difficult (I think Fortran, and BASIC-with-an-asterisk, are the only languages to really have done so), and Perl is the only one I would risk money on departing in my lifetime.
I do firmly expect that we're less than a decade out from seeing some reference algorithm be implemented in Rust rather than C, probably a cryptographic algorithm or a media codec. Although you might argue that the egg library for e-graphs already qualifies.
We're already at the point where, in order to have a "decent" desktop software experience, you _need_ Rust too. For instance, Rust doesn't support some niche architectures because LLVM doesn't support them (those architectures are now exceedingly rare), and this means no Firefox.
> There is a set of languages which are essentially required to be available on any viable system. At present, these are probably C, C++, Perl, Python, Java, and Bash
Java, really? I don’t think Java has been essential for a long time.
Even Perl... It's not in POSIX (I'm fairly sure) and I can't imagine there is some critical utility written in Perl that can't be rewritten in Python or something else (and probably already has been).
As much as I like Java, Java is also not critical for OS utilities. Bash shouldn't be, per se, but a lot of scripts are actually Bash scripts, not POSIX shell scripts and there usually isn't much appetite for rewriting them.
I am yet to find one without it, unless you include Windows, consoles, tablet/phone OSes, or embedded RTOSes, which aren't proper UNIX derivatives anyway.
I personally feel that Zig is a much better fit for the kernel. Its C interoperability is far better than Rust's, it has a lower barrier to entry for existing C devs, and it doesn't have the constraints that Rust does. All whilst still bringing a lot of the advantages.
...to the extent I could see it pushing Rust out of the kernel in the long run. Rust feels like a sledgehammer to me where the kernel is concerned.
Its problem right now is that it's not stable enough. Language changes still happen, so it's the wrong time to try.
From a safety perspective there isn't a huge benefit to choosing Zig over C with the caveat, as others have pointed out, that you need to enable more tooling in C to get to a comparable level. You should be using -Wall and -fsanitize=address among others in your debug builds.
You do get some creature comforts like slices (fat pointers) and defer (goto replacement). But you also get forced to write a lot of explicit conversions (I personally think this is a good thing).
The C interop is good, but the compiler is doing a lot of work under the hood for you to make it happen. And if you export Zig code to C... well, you're restricted by the ABI, so you end up writing C-in-Zig, at which point you may as well be writing C.
It might be an easier fit than Rust in terms of ergonomics for C developers, no doubt there.
But I think long-term things like the borrow checker could still prove useful for kernel code. Currently you have to specify invariants like that in a separate language from C, if at all, and it's difficult to verify. Bringing that into a language whose compiler can check it for you is very powerful. I wouldn't discount it.
IMHO Zig doesn't bring enough value of its own to be worth bearing the cost of another language in the kernel.
Rust is different because it both:
- significantly improves the security of the kernel by removing the nastiest class of security vulnerabilities, and
- reduces the cognitive burden for contributors by allowing the invariants that must be upheld to be encoded in the type system.
That doesn't mean Zig is a bad language for a particular project, just that it's not worth adding to an already massive project like the Linux kernel (especially a project that already has two languages, C and now Rust).
Pardon my ignorance, but I find the claim "removing the nastiest class of security vulnerabilities" to be a bold one. Is there ZERO use of "unsafe" Rust in kernel code??
Aside from the minimal use of unsafe being heavily audited and the only entry point for those vulnerabilities, it allows for expressing kernel rules explicitly and structurally whereas at best there was a code comment somewhere on how to use the API correctly. This was true because there was discussion precisely about how to implement Rust wrappers for certain APIs because it was ambiguous how those APIs were intended to work.
So aside from being like 1-5% unsafe code vs 100% unsafe for C, it’s also more difficult to misuse existing abstractions than it was in the kernel (not to mention that in addition to memory safety you also get all sorts of thread safety protections).
In essence it’s about an order of magnitude fewer defects of the kind that are particularly exploitable (based on research in other projects like Android)
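A small illustration of "the rule lives in the type, not in a comment" (a generic std sketch, not the actual kernel bindings): the data sits inside the lock, so there is simply no way to reach it without holding the lock.

    use std::sync::Mutex;

    struct Device {
        // "Only touch stats while holding the lock" is structural here:
        // the only path to the u64 goes through Mutex::lock().
        stats: Mutex<u64>,
    }

    impl Device {
        fn record(&self, n: u64) {
            let mut guard = self.stats.lock().unwrap();
            *guard += n;
        } // guard dropped here, lock released automatically
    }

    fn main() {
        let dev = Device { stats: Mutex::new(0) };
        dev.record(3);
        assert_eq!(*dev.stats.lock().unwrap(), 3);
    }

In C the equivalent rule usually lives in a comment next to the struct, and nothing stops a caller from reading the field without the lock.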
"Unsafe" rust still upholds more guarantees than C code. The rust compiler still enforces the borrow checker (including aliasing rules) and type system.
Not zero, but Rust-based kernels (see Redox, Hubris, Asterinas, or blog_os) have demonstrated that you only need a small fraction of unsafe code to make a kernel (3-10%), and those are also the places where you're least likely to make a memory-related error in a C-based kernel in the first place (you're more likely to make a memory-related error when working on the implementation of an otherwise challenging algorithm that has nothing to do with memory management itself than when you are explicitly focused on the memory-management part).
So while there could definitely be an exploitable memory bug in the unsafe part of the kernel, expect those to be at least two orders of magnitude less frequent than with C (as anecdotal evidence, the Android team found memory defects to be 3 to 4 orders of magnitude less frequent in practice over the past few years).
It removes a class of security vulnerabilities, modulo any unsound unsafe (in the compiler, std/core, and added dependencies).
In practice you see several orders of magnitude fewer segfaults (like in Google Android CVE). You can compare Deno and Bun issue trackers for segfaults to see it in action.
As mentioned a billion times, seatbelts don't prevent death, but they do reduce the likelihood of dying in a traffic accident. Unsafe isn't a magic bullet, but it's a decent caliber round.
Zig as a language is not worth it, but as a build system it's amazing. I wouldn't be surprised if Zig gets in just because of the much better build system than C ever had (you can cross-compile not only across OSes, but also across architectures and C stdlib versions, including musl). And with that comes the testing system and seamless interop with C, which makes it really easy to start writing some auxiliary code in Zig... and eventually it may just be accepted for any development.
I agree with you that it's much more interesting than the language, but I don't think it matters for a project like the kernel that already has its build system sorted out. (Especially since, no matter how nice and convenient Zig makes cross-compilation when you start a project from scratch, even in Rust thanks to cargo-zigbuild, it would require a lot of effort to migrate the Linux build system to Zig, only to realize it doesn't support all the needs of the kernel at the start.)
GNU/Linux isn't the only system out there, POSIX last update was in 2024 (when C requirement got upgraded to C17), and yes some parts of it are outdated in modern platforms, especially Apple, Google, Microsoft OSes and Cloud infrastructure.
Still, whoever supports POSIX is expected to have a C compiler available if applying for the Open Group stamp.
I can’t think of many real world production systems which don’t have a rust target. Also I’m hopeful the GCC backend for rustc makes some progress and can become an option for the more esoteric ones
There aren't really any "systems programming" platforms anywhere near production that don't have a workable rust target.
It's "embedded programming" where you often start to run into weird platforms (or sub-platforms) that only have a c compiler, or the rust compiler that does exist is somewhat borderline. We are sometimes talking about devices which don't even have a gcc port (or the port is based on a very old version of gcc).
Which is a shame, because IMO, rust actually excels as an embedded programming language.
Linux is a bit marginal, as it crosses the boundary and is often used as a kernel for embedded devices (especially ones that need to do networking). The 68k people have been hit quite hard by this, linux on 68k is still a semi-common usecase, and while there is a prototype rust back end, it's still not production ready.
There’s also, I believe, an effort to target C as an intermediate backend, although I don’t know its state, or how well it’ll work in an embedded space anyway, where performance really matters and these compilers have super old optimizers that haven’t been updated in three decades.
It's mostly embedded / microcontroller stuff. Things that you would use something like SDCC or a vendor toolchain for. Things like the 8051, stm8, PIC or oddball things like the 4 cent Padauk micros everyone was raving about a few years ago. 8051 especially still seems to come up from time to time in things like the ch554 usb controller, or some NRF 2.4ghz wireless chips.
Those don’t really support C by any real stretch; talking about general experience with microcontrollers and closed vendor toolchains, it’s a frozen dialect of C from decades ago, which isn’t what people think of when they say C (usually people mean at least the 26-year-old C99 standard, but these often at best support C89, or even come with their own limitations).
It’s still C though, and rust is not an option. What else would you call it? Lots of c libraries for embedded target C89 syntax for exactly these reasons. Also for what it’s worth, SDCC seems to support very modern versions of C (up to C23), so I also don’t think that critique is very valid for the 8051 or stm8. I would argue that c was built with targets like this in mind and it is where many of its quirks that seem so anachronistic today come from (for example int being different sizes on different targets)
I’m sure if Rust becomes more popular in the game dev community, game consoles support will be a solved problem since these consoles are generally just running stock PC architectures with a normal OS and there’s probably almost nothing wrong with the stock toolchain in terms of generating working binaries: PlayStation and Switch are FreeBSD (x86 vs ARM respectively) and Xbox is x86 Windows, all of which are supported platforms.
Being supported on a games console means that you can produce binaries that are able to go through the whole approval process, and there is day 1 support for anything that is provided by the console vendor.
Otherwise it is yak shaving instead of working on the actual game code.
Some people like to do that; that is how new languages like Rust get adoption. However, they are usually not the majority, hence why mainstream adoption is so hard without backing from killer projects or company support, and very few languages make it.
Also, you will seldom see anyone throw away the comfort of Unreal, Unity or even Godot tooling to use Rust instead, unless they are more focused on proving the point that the game can be made in Rust than on the actual game design experience.
Good luck shipping arbitrary binaries to this target. The most productive way for an indie to ship to Nintendo in 2025 is to create the game in Unity and build via the special Nintendo version of the toolchain.
How long do we think it would take to fully penetrate all of these pipeline stages with rust? Particularly Nintendo, who famously adopts the latest technology trends on day 1. Do we think it's even worthwhile to create, locate, awaken and then fight this dragon? C# with incremental GC seems to be more than sufficient for a vast majority of titles today.
One problem of Rust vs C that I rarely see mentioned is that Rust code is hard to work with from other languages. If someone writes a popular C library, you can relatively easily use it in your Java, D, Nim, Rust, Go, Python, Julia, C#, Ruby, ... program.
If someone writes a popular Rust library, it's only going to be useful to Rust projects.
With Rust I don’t even know if it’s possible to make borrow checking work across a language boundary. And Rust doesn't have a stable ABI, so even if you make a DLL, it’ll only work if compiled with the exact same compiler version. *sigh*
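The usual escape hatch is to expose a C ABI from the Rust side, which makes the library consumable from all those languages again, at the cost of losing the Rust-specific guarantees (borrow checking, generics, rich types) at the boundary. A hypothetical one-function example:

    // Built with `crate-type = ["cdylib"]` in Cargo.toml, this symbol is
    // callable from C, Python (ctypes), Java, C#, etc., like any C library.
    #[no_mangle]
    pub extern "C" fn add_saturating(a: u32, b: u32) -> u32 {
        a.saturating_add(b)
    }

It works, but as the parent says, everything interesting about Rust's type system stops at that boundary.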
That isn't always the case. Slow compilations are usually because of procedural macros and/or heavy use of generics. And even then compile times are often comparable to languages like typescript and scala.
Compared to C, rust compiles much slower. This might not matter on performant systems, but when resources are constrained you definitely notice it.
And if the whole world is rewritten in Rust, this will have a not-insignificant impact on the total build time of a bunch of projects.
rustc can use a few different backends. By my understanding, the LLVM backend is fully supported, the Cranelift backend is either fully supported or nearly so, and there's a GCC backend in the works. In addition, there's a separate project to create an independent Rust frontend as part of GCC.
Even then, there are still some systems that will support C but won't support Rust any time soon: systems with old compilers/compiler forks, or systems with unusual data types that violate Rust's assumptions (like non-8-bit bytes, IIRC).
Many organizations and environments will not switch themselves to LLVM to hamfist compiled Rust code in. Nor does the fact that LLVM supports something in principle mean that it's installed on the relevant OS distribution.
Using LLVM somewhere in the build doesn't require that you compile everything with LLVM. It generates object files, just like GCC, and you can link together object files compiled with each compiler, as long as they don't use compiler-specific runtime libraries (like the C++ standard library, or a polyfill compiler-rt library).
If you're developing, you generally have control over the development environment (+/-) and you can install things. Plus, that already reduces the audience: the set of people with oddball hardware (as someone here put it) intersected with the set of people with locked-down development environments.
Let alone the fact that, conceptually, people with locked-down environments are precisely those who would really want the extra safety offered by Rust.
I know that real life is messy but if we don't keep pressing, nothing improves.
> If you're developing, you generally have control over the development environment
If you're developing something individually, then sure, you have a lot of control. When you're developing as part of an organization or a company, you typically don't. And if there's non-COTS hardware involved, you are even more likely not to have control.
> Every system under the Sun has a C compiler... My guess is that C will be around long after people will have moved on from Rust to another newfangled alternative.
This is still the worst possible argument for C. If C persists in places no one uses, then who cares?
I'm curious as to which other metric you'd use to define successful? If it's actual usage, C still wins. Number of new lines pushed into production each year, or new projects started, C is still high up the list, would be my guess.
Languages like Rust are probably more successful in terms of age vs. adoption speed. There's just a good number of platforms which aren't even supported, and where you have no other choice than C. Rust can't target most platforms, and it compiles on even fewer. Unless Linux wants to drop support for a good number of platforms, Rust adoption can only go so far.
> it always has been and always will be available everywhere
"Always has been" is pushing it. Half of C's history is written with outdated, proprietary compilers that died alongside their architecture. It's easy to take modern tech like LLVM for granted.
This might actually be a solved problem soonish, LLMs are unreasonably effective at writing compilers and C is designed to be easy to write a compiler for, which also helps. I don’t know if anyone tried, but there’s been related work posted here on HN recently: a revived Java compiler and the N64 decompilation project. Mashed together you can almost expect to be able to generate C compilers for obscure architectures on demand given just some docs and binary firmware dumps.
You almost certainly have a bunch of devices containing a microcontroller that runs an architecture not targeted by LLVM. The embedded space is still incredibly fragmented.
That said, only a handful of those architectures are actually so weird that they would be hard to write a LLVM backend for. I understand why the project hasn’t established a stable backend plugin API, but it would help support these ancillary architectures that nobody wants to have to actively maintain as part of the LLVM project. Right now, you usually need to use a fork of the whole LLVM project when using experimental backends.
> You almost certainly have a bunch of devices containing a microcontroller that runs an architecture not targeted by LLVM.
This is exactly what I'm saying. Do you think HW drives SW or the other way around? When Rust is in the Linux kernel, my guess is it will be very hard to find new HW worth using, which doesn't have some Rust support.
In embedded, HW drives SW much more than the other way around. Most microcontrollers are not capable of running a Linux kernel as it is, even with NoMMU. The ones that are capable of this are overwhelmingly ARM or RISC-V these days. There’s not a long list of architectures supported by the modern Linux kernel but not LLVM/Rust.
There are certain styles of programming and data structure implementations that end up requiring you to fight Rust at almost every step. Things like intrusive data structures, pointer manipulation and so on. Famously there is an entire book online on how to write a performant linked list in idiomatic Rust - something that is considered straightforward in C.
For these cases you could always use Zig instead of C
Given Zig's approach to safety, you can get the same in C with static and runtime analysis tools that many devs keep ignoring.
Already setting the proper defaults in a Makefile would get many people halfway there, without switching to a language that has yet to hit 1.0 and has no use-after-free story.
It is not straightforward in Rust because a linked list is inherently tricky to implement correctly. Rust makes that very apparent (and, yeah, a bit too apparent).
I know, a linked list is not exactly super complex, and Rust makes it a bit tough. But the realisation one must have is this: building a linked list will break some key assumptions about memory safety, so trying to force that into Rust is just not going to work.
The problem, I guess, is that several of us have forgotten about memory safety, and it's a bit painful to be reminded of it by a compiler :-)
Can you elaborate on which key assumptions about memory safety linked lists break? Sure, doubly linked lists may have non-trivial ownership, but that doesn't compromise safety.
An intrusive container is a container class that needs cooperation from the contained items, usually in the form of special data fields. For example, a doubly linked list where the forward and back pointers are regular member variables of the contained items. Intrusive containers can avoid memory allocations (which can be a correctness issue in a kernel) and go well with C's lack of built-in container classes. They are somewhat common in C and very rare in C++ and Rust.
At least for a double linked list you can probably get pretty far in terms of performance in the non-intrusive case, if your compiler unboxes the contained item into your nodes? Or are there benefits left in intrusive data structures that this doesn't capture?
The main thing is that the object can be a member of various structures. It can be in a big general queue and in a priority queue, for example. Once you find it and deal with it, you can remove it from both without needing to search for it.
Same story for games where it can be in the list of all objects and in the list of objects that might be attacked. Once it's killed you can remove it from all lists without searching for it in every single one.
Storing the data in nodes doesn't work if the given structure may need to be in multiple linked lists, which iirc was a concern for the kernel?
And generally I'd imagine it's quite a weird form for data structures for which being in a linked list isn't a core aspect (no clue what specifically the kernel uses, but I could imagine situations where objects aren't in any linked list for 99% of the time, but must be able to be chained in even if there are 0 bytes of free RAM ("Error: cannot free memory because memory is full" is probably not a thing you'd ever want to see)).
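For readers who haven't seen the pattern, here is a rough Rust-flavoured sketch of the intrusive idea being described (simplified and with invented names; it is not the kernel's list_head API and it omits list heads and iteration): the link fields live inside the object itself, so one object can sit on several lists at once and be unlinked from all of them in O(1), with no allocation and no search.

    #![allow(dead_code)] // illustration only
    use std::ptr;

    // One set of link pointers per list the object can be on. Null means
    // "not linked"; the kernel's list_head uses a circular scheme instead.
    struct ListNode {
        prev: *mut ListNode,
        next: *mut ListNode,
    }

    impl ListNode {
        fn new() -> Self {
            ListNode { prev: ptr::null_mut(), next: ptr::null_mut() }
        }

        /// O(1) removal from whichever list this node is currently on,
        /// with no search and no allocation.
        ///
        /// Safety: `prev` and `next` must be null or point to live, linked nodes.
        unsafe fn unlink(&mut self) {
            unsafe {
                if !self.prev.is_null() { (*self.prev).next = self.next; }
                if !self.next.is_null() { (*self.next).prev = self.prev; }
            }
            self.prev = ptr::null_mut();
            self.next = ptr::null_mut();
        }
    }

    // The object embeds its own link fields, so the same object can be on
    // the "all objects" list and the "attackable" list at the same time,
    // and be pulled off both in O(1) when it dies.
    struct GameObject {
        all_link: ListNode,
        attackable_link: ListNode,
        hp: u32,
    }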
Yeah, if you need a linked list (you probably don't) use that. If however you are one of the very small number of people who need fine-grained control over a tailored data-structure with internal cross-references or whatnot then you may find yourself in a world where Rust really does not believe that you know what you are doing and fights you every step of the way. If you actually do know what you are doing, then Zig is probably the best modern choice. The TigerBeetle people chose Zig for these reasons, various resources on the net explain their motivations.
The point with the linked list is that it is perfectly valid to use unsafe to design said "tailored data structure with internal cross-references or whatnot" library and then expose a safe interface.
If you’re having trouble designing a safe interface for your collection then that should be a signal that maybe what you are doing will result in UB when looked at the wrong way.
That is how all standard library collections in Rust work. They’ve just gone to the length of formally verifying parts of the code to ensure performance and safety.
>>That is how all standard library collections in Rust work
Yeah, and that's what's not going to work for high performance data structures, because you need to embed hooks into the objects - not just put objects into a bigger collection object. Once you think in terms of a collection that contains things, you have already lost that specific battle.
Another thing that doesn't work very well in Rust (from my understanding, I tried it very briefly) is using multiple memory allocators which is also needed in high performance code. Zig takes care to make it easy and explicit.
> If you’re having trouble designing a safe interface for your collection then that should be a signal that maybe what you are doing will result in UB when looked at the wrong way.
Rust is great, but there are some things that are safe (and you could prove them safe in the abstract), but that you can't easily express in Rust's type system.
More specifically, there are some things, and usage patterns of those things, that are safe when taken together. But the library can't force the safe usage pattern on the client with the tools that Rust provides.
If you can't create a safe interface and must have the function, then create an unsafe function, clearly document the invariants, and rely on the user to uphold them?
Take a look at the unsafe functions for the standard library Vec type to see examples of this:
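As a rough illustration of that pattern (a toy sketch in the spirit of Vec::set_len, not the actual standard library source; FixedBuf and its methods are invented), the invariant is documented on the one unsafe method while the rest of the API stays safe:

    use core::mem::MaybeUninit;

    /// Toy fixed-capacity buffer: the invariant lives in one place, and the
    /// only way to break it from outside is the clearly marked unsafe method.
    pub struct FixedBuf<const N: usize> {
        data: [MaybeUninit<u8>; N],
        len: usize, // invariant: data[..len] is initialized and len <= N
    }

    impl<const N: usize> FixedBuf<N> {
        pub fn new() -> Self {
            FixedBuf { data: [MaybeUninit::uninit(); N], len: 0 }
        }

        /// Safe API: checks capacity and maintains the invariant itself.
        pub fn push(&mut self, byte: u8) -> Result<(), u8> {
            if self.len == N {
                return Err(byte);
            }
            self.data[self.len].write(byte);
            self.len += 1;
            Ok(())
        }

        /// Safe because of the struct invariant: data[..len] is initialized.
        pub fn as_slice(&self) -> &[u8] {
            unsafe { core::slice::from_raw_parts(self.data.as_ptr().cast(), self.len) }
        }

        /// # Safety
        ///
        /// `new_len` must be <= N and the first `new_len` bytes must already
        /// have been initialized (e.g. via `push`); otherwise `as_slice` will
        /// hand out uninitialized memory, which is undefined behaviour.
        pub unsafe fn set_len(&mut self, new_len: usize) {
            self.len = new_len;
        }
    }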
Before we ask if almost all things old will be rewritten in Rust, we should ask if almost all things new are being written in Rust or other memory-safe languages?
Obviously not. When will that happen? 15 years? Maybe it's generational: How long before developers 'born' into memory-safe languages as serious choices will be substantially in charge of software development?
I don't know; I tend to come across new tools written in Rust, JavaScript, or Python, but relatively few in C. The number of times I see some "cargo install xyz" in the git repo of some new tool is definitely noticeable.
I'm a bit wary of whether this is hiding an ageist sentiment, though. I doubt most Rust developers were 'born into' the language, but instead adopted it on top of existing experience in other languages.
People can learn Rust at any age. The reality is that experienced people often are more hesitant to learn new things.
I can think of possible reasons: Early in life, in school and early career, much of what you work on is inevitably new to you, and also authorities (professor, boss) compel you to learn whatever they choose. You become accustomed to and skilled at adapting new things. Later, when you have power to make the choice, you are less likely to make yourself change (and more likely to make the junior people change, when there's a trade-off). Power corrupts, even on that small scale.
There's also a good argument for being stubborn and jaded: You have 30 years perfecting the skills, tools, efficiencies, etc. of C++. For the new project, even if C++ isn't as good a fit as Rust, are you going to be more efficient using Rust? How about in a year? Two years? ... It might not be worth learning Rust at all; ROI might be higher continuing to invest in additional elite C++ skills. Certainly that has more appeal to someone who knows C++ intimately - continue to refine this beautiful machine, or bang your head against the wall?
For someone without that investment, Rust might have higher ROI; that's fine, let them learn it. We still need C++ developers. Morbid but true, to a degree: 'Progress happens one funeral at a time.'
> experienced people often are more hesitant to learn new things
I believe the opposite. There's some kind of weird mentality in beginner/wannabe programmers (and HR, but that's unrelated) that when you pick language X then you're an X programmer for life.
Experienced people know that if you need a new language or library, you pick up a new language or library. Once you've learned a few, most of them aren't going to be very different and programming is programming. Of course it will look like work and maybe "experienced" people will be more work averse and less enthusiastic than "inexperienced" (meaning younger) people.
>Experienced people know that if you need a new language or library, you pick up a new language or library.
That heavily depends. If you land on a greenfield project, yes, or if you have free rein over a complete rewrite of an existing project. But those situations are more the exception than the regular case.
Even on a greenfield project, the ecosystem and the available talent per framework will be a consideration most of the time.
There are also other things, like being a parent and wanting to take care of your kids, that can come into consideration later in life. So it's more that added responsibilities constrain perspectives and choices than that power corrupts in a purely egoistic fashion.
I still think you're off the mark. Again, most existing Rust developers are not "blank slate Rust developers". That they do not rush out to rewrite all of their past C++ projects in Rust may be more about sunk costs, and wanting to solve new problems with from-scratch development.
> most existing Rust developers are not "blank slate Rust developers"
Not most, but the pool of software devs has been doubling every five years, and Rust matches C# on "Learning to Code" voters at Stack Overflow's last survey, which is crazy considering how many people learn C# just to use Unity. I think you underestimate how many developers are Rust blank slates.
Anecdotally, I've recently come across comments from people who've taught themselves Rust but not C or C++.
> Which programming, scripting, and markup languages have you done extensive development work in over the past year, and which do you want to work in over the next year? (If you both worked with the language and want to continue to do so, please check both boxes in that row.)
It describes two check marks, yet only has a single statistic for each language. Did StackOverflow mess up the question or data?
The data also looks suspicious. Is it really the case that 44.6% of developers learning to code have worked with or want to work with C++?
Oh I agree the survey has issues, I was just thinking about how each year the stats get more questionable! I just think it shows that interest in Rust doesn't come only from people with a C++ codebase to rewrite. Half of all devs have got less than five years of experience with any toolchain at all, let alone C++, yet many want to give Rust a try. I do think there will be a generational split there.
> Steve Klabnik?
Thankfully no. I've actually argued with him a couple times. Usually in partial agreement, but his fans will downvote anyone who mildly disagrees with him.
Also, I'm not even big on Rust: every single time I've tried to use it I instinctively reached for features that turned out to be unstable, and I don't want to deal with their churn, so I consider the language still immature.
Okay I had upvoted you but now you're just being an asshole. Predictable from someone on multiple fucking throwaways created just to answer on a single post and crap on a piece of tech I suppose; I don't even care much about Rust. And I'm sorry to inform you I'm not Klabnik, but delusions are free: Maybe you think everyone using -nik is actually the same person and you've uncovered a conspiracy. Congrats on that.
I'd shove you a better data point but people aren't taking enough surveys for our sake, that's the one we've got. Unless you want to go with Jetbrains', which, spoilers, skews towards technologies supported by Jetbrains; I'm not aware of other major ones.
This behavior is weird. Your parent writes nothing like me.
The only alt I’ve made on hacker news is steveklabnik1, or whatever I called it, because I had locked myself out of this account. pg let me back in and so I stopped using it.
According to the strange data at https://survey.stackoverflow.co/2025/technology#most-popular... , 44.6% have responded positively to that question regarding C++. But there may be some issues, for the question involves two check boxes, yet there is only one statistic.
Sure, there are plenty of them, hence why you see remarks like wanting to use Rust but with a GC, or crediting Rust with features that most ML-derived languages already have.
New high-scale data infrastructure projects I am aware of mostly seem to be C++ (often C++20). A bit of Rust, which I’ve used, and Zig but most of the hardcore stuff is still done in C++ and will be for the foreseeable future.
It is easy to forget that the state-of-the-art implementations of a lot of systems software are not open source. They don’t struggle to attract contributors because of language choices; being on the bleeding edge of computer science is selling point enough.
There's a "point of no return" when you start to struggle to hire anyone on your teams because no one knows the language and no one is willing to learn. But C++ is very far from it.
There's always someone willing to write COBOL for the right premium.
I'm working on Rust projects, so I may have an incomplete picture, but from what I see, when devs have a choice they prefer working with Rust over C++ (if not due to the language, at least due to the build tooling).
Writing C++ is easier than writing Rust. But writing safe multithreaded code in C++?
I don't want to write multithreaded C++ at all unless I explicitly want a new hole in my foot. Rust I barely have any experience with, but it might be less frustrating than that.
Game development, graphics and VFX industry, AI tooling infrastructure, embedded development, Maker tools like Arduino and ESP32, compiler development.
> ... has a debug allocator that maintains memory safety in the face of use-after-free and double-free
which is probably true (in that it's not possible to violate memory safety on the debug allocator, although it's still a strong claim). But beyond that there isn't really any current marketing for Zig claiming safety, beyond a heading in an overview of "Performance and Safety: Choose Two".
It is intended for release builds. The ReleaseSafe target will keep the checks. ReleaseFast and ReleaseSmall will remove the checks, but those aren't the recommended release modes for general software. Only for when performance or size are critical.
It’s okay to enjoy driving an outdated and dangerous car for the thrill because it makes pleasing noise, as long as you don’t annoy too much other people with it.
If it wasn't clear, I have to debug Rust code waaaay less than C, for two reasons:
1. Memory safety - these can be some of the worst bugs to debug in C because they often break sane invariants that you use for debugging. Often they break the debugger entirely! A classic example is forgetting to return a value from a non-void function. That can trash your stack and end up causing all sorts of impossible behaviours in totally different parts of the code. Not fun to debug!
2. Stronger type system - you get an "if it compiles it works" kind of experience (as in Haskell). Obviously that isn't always the case, but I can sometimes write several hundred lines of Rust and once it's compiling it works first time. I've had to suppress my natural "oh I must have forgotten to save everything or maybe incremental builds are broken or something" instinct when this happens.
Net result is that I spend at least 10x less time in a debugger with Rust than I do with C.
Didn't save Cloudflare. Memory safety is necessary, and memory safety guard rails can be helpful depending on what the guard rails cost in different ways, but memory safety is in no way, shape or form sufficient.
It’s available on more obscure platforms than Rust, and more people are familiar with it.
I wouldn’t say it’s inevitable that everything will be rewritten in Rust; at the very least this will take decades. C has been with us for more than half a century and is the foundation of pretty much everything, so it will take a long time to migrate all that.
More likely is that they will live next to each other for a very, very long time.
Apple handled this problem by adding memory safety to C (Firebloom). It seems unlikely they would throw away that investment and move to Rust. I’m sure lots of other companies don’t want to throw away their existing code, and when they write new code there will always be a desire to draw on prior art.
That's a rather pessimistic take compared to what's actually happening. What you say should apply to the big players like Amazon, Google, Microsoft, etc the most, because they arguably have massive C codebases. Yet, they're also some of the most enthusiastic adopters and promoters of Rust. A lot of other adopters also have legacy C codebases.
I'm not trying to hype up Rust or disparage C. I learned C first and then Rust, even before Rust 1.0 was released. And I have an idea why Rust finds acceptance, which is also what some of these companies have officially mentioned.
C is a nice little language that's easy to learn and understand. But the price you pay for it is in large applications where you have to handle resources like heap allocations. C doesn't offer any help there when you make such mistakes, though some linters might catch them. The reason for this, I think, is that C was developed in an era when they didn't have so much computing power to do such complicated analysis in the compiler.
People have been writing C for ages, but let me tell you - writing correct C is a whole different skill that's hard and takes ages to learn. If you think I'm saying this because I'm a bad programmer, then you would be wrong. I'm not a programmer at all (by qualification), but rather a hardware engineer who is more comfortable with assembly, registers, buses, DRAM, DMA, etc. I still used to get widespread memory errors, because all it takes is a lapse in attention while coding. That strain is what Rust alleviates.
So you're trying to say C is for good programmers only, and Rust lets the idiots program too? I think that’s the wrong way to argue for Rust. Rust catches one kind of common problem, but it does not magically make logic errors go away.
> It seems unlikely [Apple] would throw away that investment and move to Rust.
Apple has invested in Swift, another high level language with safety guarantees, which happens to have been created under Chris Lattner, otherwise known for creating LLVM. Swift's huge advantage over Rust, for application and system programming, is that it supports a stable ABI [1], which Rust, famously, does not (other than falling back to a C ABI, which degrades its promises).
[1] for more on that topic, I recommend this excellent article: https://faultlore.com/blah/swift-abi/ Side note, the author of that article wrote Rust's std::collections API.
Swift does not seem suitable for OS development, at least not as much as C or C++.[0] Swift handles by default a lot of memory by using reference counting, as I understand it, which is not always suitable for OS development.
[0]: Rust, while no longer officially experimental in the Linux kernel, does not yet have major OSs written purely in it.
But at least a lot of tasks in a kernel would require something other than reference counting, unless it can be guaranteed that the reference counting is optimized away or something, right?
The practical reality is arguably more important than beliefs. Apple has, as it turns out, invested in trying to make Swift more suitable for kernel and similar development, like trying to automate away reference counting when possible, and also offering Embedded Swift[0], an experimental subset of Swift with significant restrictions on what is allowed in the language. Maybe Embedded Swift will be great in the future, and it is true that Apple investing in that is significant, but it doesn't seem like it's there yet.
> Embedded Swift support is available in the Swift development snapshots.
And considering Apple made Embedded Swift, even Apple does not believe that regular Swift is suitable. Meaning that you're undeniably completely wrong.
You show a lack of awareness that ISO C and C++ are also not applicable, because in those domains the full ISO language standard isn't available, which is why freestanding is a thing.
Is this even a fair question? A common response to pointing out that Oberon and the other Wirth languages were used to write several OSs (using full GC in some cases) is that they don’t count, just like Minix doesn’t count as proof of microkernels. The prime objection being that they are not large commercial OSs. So, if the only two examples allowed are Linux and Windows (and maybe macOS), then ‘no’, there are no widespread, production-sized OSs that use GC or reference counting.
The big sticking point for me is that for desktop and server style computing, the hardware capabilities have increased so much that a good GC would for most users be acceptable at the kernel level. The other side to that coin is that then OS’s would need to be made on different kernels for large embedded/tablet/low-power/smart phone use cases. I think tech development has benefitted from Linux being used at so many levels.
A push to develop a new breed of OS, with a microkernel and using some sort of ‘safe’ language, should be on the table for developers. But outside of proprietary military/finance/industrial areas (and a lot of the work in these fields is just using Linux) there doesn’t seem to be any movement toward a less monolithic OS situation.
Rust still compiles into bigger binary sizes than C, by a small amount. Although it’s such a complex thing depending on your code that it really depends case-by-case, and you can get pretty close. On embedded systems with small amounts of ram (think on the order of 64kbytes), a few extra kb still hurts a lot.
It's a bit like asking if there is any significant advantage to ICE motors over electric motors. They both have advantages and disadvantages. Every person who uses one or the other, will tell you about their own use case, and why nobody could possibly need to use the alternative.
There's already applications out there for the "old thing" that need to be maintained, and they're way too old for anyone to bother with re-creating it with the "new thing". And the "old thing" has some advantages the "new thing" doesn't have. So some very specific applications will keep using the "old thing". Other applications will use the "new thing" as it is convenient.
To answer your second question, nothing is inevitable, except death, taxes, and the obsolescence of machines. Rust is the new kid on the block now, but in 20 years, everybody will be rewriting all the Rust software in something else (if we even have source code in the future; anyone you know read machine code or punch cards?). C'est la vie.
I think it'll be less like telegram lines- which were replaced fully for a major upgrade in functionality, and more like rail lines- which were standardized and ubiquitous, still hold some benefit but mainly only exist in areas people don't venture nearly as much.
Shitloads of already existing libraries. For example I'm not going to start using it for Arduino-y things until all the peripherals I want have drivers written in Rust.
Rust has a nice compiler-provided static analyzer that uses annotations to do lifetime analysis and borrow checking. I believe borrow checking to be a local-optimum trap when it comes to static analysis, and I find it more annoying to use than I would like, but I won't argue with the fact that it's miles better than nothing.
C has better static analysers available. They can use annotations too. Still, all of that is optional in C, whereas it's part of Rust's core language. You know that Rust code has been successfully analysed. Most C code won't give you this. But if it's your code base, you can reach this point in C too.
C also has a fully proven and certified compiler. That might be a requirement if you work in some niche safety critical applications. Rust has no equivalent there.
The discussion is more interesting when you look at other languages. Ada/Spark is for me ahead of Rust both in term of features and user experience regarding the core language.
Rust currently still has what I consider significant flaws: a very slow compiler, no standard, support for a limited number of architectures (it's growing), it's less stable than I consider it should be given its age, and most Rust code tends to use more small libraries than I would personally like.
Rust is very trendy however and I don't mean that in a bad way. That gives it a lot of leverage. I doubt we will ever see Ada in the kernel but here we are with Rust.
You can look at the Rust sources of real system programs, like framekernel or such things, uefi-rs, etc.
There you can likely explore the boundaries where Rust does and does not work.
People have all kinds of opinions. Mine is this:
If you need unsafe peppered around, the only thing Rust offers is being very unergonomic. It's hard to write and hard to debug for no reason. Writing memory-safe C code is easy. The problems Rust solves aren't bad ones to solve, they're just solved in a way that's way more complicated than writing the same (safe) C code.
A language is not unsafe. You can write perfectly shit code in Rust, and you can write perfectly safe code in C.
People need to stop calling a language safe and then reimplementing other people's hard work in a bad way, creating whole new vulnerabilities.
I disagree. Rust shines when you need to perform "unsafe" operations. It forces programmers to be explicit and to isolate their use of unsafe memory operations. This makes it significantly more feasible to keep track of invariants.
It is completely beside the point that you can also write "shit code" in Rust. Just because you are fed up with the "reimplement the world in Rust" culture does not mean that the tool itself is bad.
For my hobby code, I'm not going to start writing Rust anytime soon. My code is safe enough and I like C as it is. I don't write software for martian rovers, and for ordinary tasks, C is more ergonomic than Rust, especially for embedded tasks.
For my work code, it all comes down to SDKs and stuff. For example, I'm going to write firmware for a Nordic ARM chip. The Nordic SDK uses C, so I'm not going to jump through an infinite number of hoops and incomplete Rust ports; I'll just use the official SDK and C. If it were the opposite, I would be using Rust, but I don't think that will happen in the next 10 years.
Just like C++ never killed C, despite being a perfect replacement for it, I don't believe that Rust will kill C, or C++, because it's an even less convenient replacement. It'll dilute the market, for sure.
A lot of C's popularity comes from how standard and simple it is. I doubt Rust will be the safe language of the future, simply because of its complexity. The true future of "safe" software is already here: JavaScript.
There will be small niches leftover:
* Embedded - This will always be C. No memory allocation means no Rust benefits. Rust is also too complex for smaller systems to write compilers.
* OS / Kernel - Nearly all of the relevant code is unsafe. There aren't many real benefits. It will happen anyways due to grant funding requirements. This will take decades, maybe a century. A better alternative would be a verified kernel with formal methods and a Linux compatibility layer, but that is pie in the sky.
* Game Engines - Rust screwed up its standard library by not putting custom allocation at the center of it. Until we get a Rust version of the EASTL, adoption will be slow at best.
* High Frequency Traders - They would care about the standard library except they are moving on from C++ to VHDL for their time-sensitive stuff. I would bet they move to a garbage-collected language for everything else, either Java or Go.
* Browsers - Despite being born in a browser, Rust is unlikely to make any inroads. Mozilla lost their ability to make effective change and already killed their Rust project once. Google has probably the largest C++ codebase in the world. Migrating to Rust would be so expensive that the board would squash it.
* High-Throughput Services - This is where I see the bulk of Rust adoption. I would be surprised if major rewrites aren't already underway.
This isn't really true; otherwise, there would be no reason for no_std to exist. Data race safety is independent of whether you allocate or not, lifetimes can be handy even for fixed-size arenas, you still get bounds checks, you still get other niceties like sum types/an expressive type system, etc.
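For instance, a heap-free sketch like the following (hypothetical types; it compiles as a no_std library crate) still gets exhaustive sum types, bounds-checked indexing, and explicit overflow handling, with no allocator anywhere:

    #![no_std] // no heap allocator required; everything below lives in `core`

    /// A sum type for a driver-ish state machine: no allocation involved,
    /// but the compiler still forces every state to be handled.
    pub enum LinkState {
        Down,
        Negotiating { attempts: u8 },
        Up { speed_mbps: u32 },
    }

    /// Fixed-size ring buffer in static memory: lifetimes and bounds checks
    /// apply just as much here as they would to heap-allocated data.
    pub struct Ring<const N: usize> {
        buf: [u8; N],
        head: usize,
        len: usize,
    }

    impl<const N: usize> Ring<N> {
        pub const fn new() -> Self {
            Ring { buf: [0; N], head: 0, len: 0 }
        }

        pub fn push(&mut self, b: u8) -> Result<(), u8> {
            if self.len == N {
                return Err(b); // explicit overflow handling, no silent wrap
            }
            self.buf[(self.head + self.len) % N] = b; // bounds-checked index
            self.len += 1;
            Ok(())
        }

        pub fn pop(&mut self) -> Option<u8> {
            if self.len == 0 {
                return None;
            }
            let b = self.buf[self.head];
            self.head = (self.head + 1) % N;
            self.len -= 1;
            Some(b)
        }
    }

    pub fn describe(state: &LinkState) -> &'static str {
        // exhaustive match: forgetting a state is a compile error
        match state {
            LinkState::Down => "down",
            LinkState::Negotiating { .. } => "negotiating",
            LinkState::Up { .. } => "up",
        }
    }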
> OS / Kernel - Nearly all of the relevant code is unsafe.
I think that characterization is rather exaggerated. IIRC the proportion of unsafe code in Redox OS is somewhere around 10%, and Steve Klabnik said that Oxide's Hubris has a similarly small proportion of unsafe code (~3% as of a year or two ago) [0]
> Browsers - Despite being born in a browser, Rust is unlikely to make any inroads.
Technically speaking, Rust already has. There has been Rust in Firefox for quite a while now, and Chromium has started allowing Rust for third-party components.
> Google has probably the largest C++ codebase in the world. Migrating to Rust would be so expensive that the board would squash it.
Google is transitioning large parts of Android to Rust and there is now first-party code in Chromium and V8 in Rust. I’m sure they’ll continue to write new C++ code for a good while, but they’ve made substantial investments to enable using Rust in these projects going forward.
Also, if you’re imagining the board of a multi-trillion dollar market cap company is making direct decisions about what languages get used, you may want to check what else in this list you are imagining.
Unless the rules have changed, Rust is only allowed a minor role in Chrome.
> Based on our research, we landed on two outcomes for Chromium.
> We will support interop in only a single direction, from C++ to Rust, for now. Chromium is written in C++, and the majority of stack frames are in C++ code, right from main() until exit(), which is why we chose this direction. By limiting interop to a single direction, we control the shape of the dependency tree. Rust can not depend on C++ so it cannot know about C++ types and functions, except through dependency injection. In this way, Rust can not land in arbitrary C++ code, only in functions passed through the API from C++.
> We will only support third-party libraries for now. Third-party libraries are written as standalone components, they don’t hold implicit knowledge about the implementation of Chromium. This means they have APIs that are simpler and focused on their single task. Or, put another way, they typically have a narrow interface, without complex pointer graphs and shared ownership. We will be reviewing libraries that we bring in for C++ use to ensure they fit this expectation.
Also, even though Rust would be a safer alternative to using C and C++ on the Android NDK, that isn't part of the roadmap, nor does the Android team provide any support to those who go down that route. They only see Rust for Android internals, not app developers.
If anything, they seem more likely to eventually support Kotlin Native for such cases than Rust.
The V8 developers seem more excited to use it going forward, and I’m interested to see how it turns out. A big open question about the Rust safety model for me is how useful it is for reducing the kind of bugs you see in a SOTA JIT runtime.
> They only see Rust for Android internals, not app developers
I’m sure, if only because ~nobody actually wants to write Android apps in Rust. Which I think is a rational choice for the vast majority of apps - Android NDK was already an ugly duckling, and a Rust version would be taken even less seriously by the platform.
That looks only to be the guidelines on how to integrate Rust projects, not that the policy has changed.
Officially, the NDK isn't for writing apps anyway. People try to do that because some don't want to touch Java or Kotlin, but that has never been the official point of view from the Android team since it was introduced in version 2; rather:
> The Native Development Kit (NDK) is a set of tools that allows you to use C and C++ code with Android, and provides platform libraries you can use to manage native activities and access physical device components, such as sensors and touch input. The NDK may not be appropriate for most novice Android programmers who need to use only Java code and framework APIs to develop their apps. However, the NDK can be useful for cases in which you need to do one or more of the following:
> Squeeze extra performance out of a device to achieve low latency or run computationally intensive applications, such as games or physics simulations.
> Reuse your own or other developers' C or C++ libraries.
So it would be expected that at least for the first scenario, Rust would be a welcomed addition, however as mentioned they seem to be more keen into supporting Kotlin Native for it.
There are memory safety issues that literally only apply to memory on the stack, like returning dangling pointers to local variables. Not touching the heap doesn't magically avoid all of the potential issues in C.
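As a small illustration of that stack-only bug class: in C, returning the address of a local compiles fine and hands the caller a dangling pointer, while the literal Rust translation is rejected at compile time, so you end up writing one of the valid forms instead (toy example):

    // In C, `int *f(void) { int x = 42; return &x; }` compiles and returns a
    // dangling pointer. The equivalent Rust is rejected at compile time
    // ("cannot return reference to local variable"), which pushes you toward
    // one of the valid forms:

    fn by_value() -> i32 {
        let x = 42;
        x // hand back the value itself, not a pointer into a dead stack frame
    }

    fn by_static() -> &'static i32 {
        static X: i32 = 42;
        &X // a reference to something that actually lives long enough
    }

    fn main() {
        assert_eq!(by_value(), *by_static());
    }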
> Embedded - This will always be C. No memory allocation means no Rust benefits. Rust is also too complex for smaller systems to write compilers.
Embedded Dev here and I can report that Rust has been more and more of a topic for me. I'm actually using it over C or C++ in a bare metal application. And I don't get where the no allocation -> no benefit thing comes from, Rust gives you much more to work with on bare metal than C or C++ do.
Rust is already making substantial inroads in browsers, especially for things like codecs. Chrome also recently replaced FreeType with Skrifa (Rust), and the JS Temporal API in V8 is implemented in Rust.
> Rust is also too complex for smaller systems to write compilers.
I am not a compiler engineer, but I want to tease apart this statement. As I understand it, the main Rust compiler uses the LLVM framework, which uses an intermediate language that is somewhat like platform-independent assembly code. As long as you can write a lexer/parser to generate the intermediate language, there is a separate backend to generate machine code from the intermediate language. In my (non-compiler-engineer) mind, this separates the concern of the front-end language (Rust) from the target platform (embedded). Do you agree? Or do I misunderstand?
Memory safety applies to all memory. Not just heap allocated memory.
This is a strange claim because it's so obviously false. Was this comment supposed to be satire and I just missed it?
Anyway, Rust has benefits beyond memory safety.
> Rust is also too complex for smaller systems to write compilers.
Rust uses LLVM as the compiler backend.
There are already a lot of embedded targets for Rust and a growing number of libraries. Some vendors have started adopting it with first-class support. Again, it's weird to make this claim.
> Nearly all of the relevant code is unsafe. There aren't many real benefits.
Unsafe sections do not make the entire usage of Rust unsafe. That's a common misconception from people who don't know much about Rust, but it's not like the unsafe keyword instantly obliterates any Rust advantages, or even all of its safety guarantees.
It's also strange to see this claim under an article about the kernel developers choosing to move forward with Rust.
> High Frequency Traders - They would care about the standard library except they are moving on from C++ to VHDL for their time-sensitive stuff. I would bet they move to a garbage-collected language for everything else, either Java or Go.
C++ and VHDL aren't interchangeable. They serve different purposes for different layers of the system. They aren't moving everything to FPGAs.
Betting on a garbage collected language is strange. Tail latencies matter a lot.
This entire comment is so weird and misinformed that I had to re-read it to make sure it wasn't satire or something.
> Memory safety applies to all memory. Not just heap allocated memory.
> Anyway, Rust has benefits beyond memory safety.
I want to elaborate on this a little bit. Rust uses some basic patterns to ensure memory safety. They are (1) RAII, (2) move semantics, and (3) borrow checking.
This combination, however, is useful for compile-time-verified management of any 'resource', not just heap memory. Think of 'resources' as something unique and useful that you acquire when you need it and release/free when you're done.
For regular applications, it can be heap memory allocations, file handles, sockets, resource locks, remote session objects, TCP connections, etc. For OS and embedded systems, that could be a device buffer, bus ownership, config objects, etc.
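Here is a small sketch of that idea for a non-memory resource (the SpiBus and BusGuard names are invented, not any real HAL API): acquiring returns a guard, the borrow checker enforces exclusive use of the bus while the guard lives, and Drop releases it exactly once, even on early returns.

    /// Invented names (SpiBus, BusGuard), not any real HAL API.
    struct SpiBus {
        id: u8, // stand-in for registers / a real peripheral handle
    }

    /// Exclusive, scoped ownership of the bus.
    struct BusGuard<'a> {
        bus: &'a mut SpiBus,
    }

    impl SpiBus {
        /// Acquire the bus. The returned guard mutably borrows the bus, so
        /// the borrow checker rejects any other use of it while the guard lives.
        fn acquire(&mut self) -> BusGuard<'_> {
            println!("bus {}: acquired", self.id);
            BusGuard { bus: self }
        }
    }

    impl BusGuard<'_> {
        fn transfer(&mut self, byte: u8) {
            println!("bus {}: transfer {byte:#04x}", self.bus.id);
        }
    }

    /// RAII: release happens automatically, exactly once, even on early returns.
    impl Drop for BusGuard<'_> {
        fn drop(&mut self) {
            println!("bus {}: released", self.bus.id);
        }
    }

    fn main() {
        let mut bus = SpiBus { id: 0 };
        {
            let mut guard = bus.acquire();
            guard.transfer(0xA5);
            // calling `bus.acquire()` again here would be a compile error:
            // `bus` is still mutably borrowed by `guard`
        } // guard dropped here -> "released" printed
        let _second = bus.acquire(); // fine now, the first borrow has ended
    }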
> > Nearly all of the relevant code is unsafe. There aren't many real benefits.
Yes. This part is a bit weird. The fundamental idea of 'unsafe' is to limit the scope of unsafe operations to as few lines as possible (The same concept can be expressed in different ways. So don't get offended if it doesn't match what you've heard earlier exactly.) Parts that go inside these unsafe blocks are surprisingly small in practice. An entire kernel isn't all unsafe by any measure.
> Embedded - This will always be C. No memory allocation means no Rust benefits. Rust is also too complex for smaller systems to write compilers.
Modern embedded isn't your grandpa's embedded anymore. Modern embedded chips have multiple KiB of RAM, some even above 1 MiB, and have been like that for almost a decade (look at the ESP32, for instance). I once worked on embedded projects based on the ESP32 that used full C++, with allocators, exceptions, ... using SPI RAM, and it worked great. There's a fantastic Rust port of ESP-IDF that Espressif themselves maintain nowadays, too.
> same Mike in an org wide email: thank you all for your support. Starting next week I will no longer be a developer here. I thank my manager blah blah... I will be starting my dream role as Architect and I hope to achieve success.
> Mike's colleagues: Aww.. We'll miss you.
> Mike's manager: Is this your one week's notice? Did you take up an architect job elsewhere immediately after I promoted you to architect ?!
Kind of tells you something about how timidly and/or begrudgingly it’s been accepted.
IMO the attitude is warranted. There is no good that comes from having higher-level code than necessary at the kernel level. The dispute is whether the kernel needs to be more modern, but it should be about what is the best tool for the job. Forget the bells-and-whistles and answer this: does the use of Rust generate a result that is more performant and more efficient than the best result using C?
This isn’t about what people want to use because it’s a nice language for writing applications. The kernel is about making things work with minimum overhead.
By analogy, the Linux kernel historically has been a small shop mentored by a fine woodworker. Microsoft historically has been a corporation with a get-it-done attitude. Now we’re saying Linux should be a “let’s see what the group thinks- no they don’t like your old ways, and you don’t have the energy anymore to manage this” shop, which is sad, but that is the story everywhere now. This isn’t some 20th century revolution where hippies eating apples and doing drugs are creating video games and graphical operating systems, it’s just abandoning old ways because they don’t like them and think the new ways are good enough and are easier to manage and invite more people in than the old ways. That’s Microsoft creep.
This title is moderately clickbait-y and comes with a subtle implication that Rust might be getting removed from the kernel. IMO it should be changed to "Rust in the kernel is no longer experimental"
I absolutely understand the sentiment, but LWN is a second-to-none publication that on this rare occasion couldn't resist the joke, and also largely plays to an audience who will immediately understand that it's tongue-in-cheek.
Speaking as a subscriber of about two decades who perhaps wouldn't have a career without the enormous amount of high-quality education provided by LWN content, or at least a far lesser one: Let's forgive.
> Ouch. That is what I get for pushing something out during a meeting, I guess. That was not my point; the experiment is done, and it was a success. I meant no more than that.
Nah I used to read Phoronix and the articles are a bit clickbaity sometimes but mostly it's fine. The real issue is the reader comments. They're absolute trash.
I think on HN, people waste too much time arguing about the phrasing of the headline, whether it is clickbait, etc. and not enough discussing the actual substance of the article.
This one is apparently a genuine mistake from the author. But I think we should leave it as it is. The confusion and the argument about it is interesting in itself.
It’s a bit clickbait-y, but the article is short, to the point, and frankly satisfying. If there is such a thing as good clickbait, then this might be it. Impressive work!
The topic of the Rust experiment was just discussed at the annual Maintainers Summit. The consensus among the assembled developers is that Rust in the kernel is no longer experimental — it is now a core part of the kernel and is here to stay. So the "experimental" tag will be coming off. Congratulations are in order for all of the Rust-for-Linux team.
Perhaps, except it can have the reverse effect. I was surprised, disappointed, and then almost moved on without clicking the link or the discussion. I'm glad I clicked. But good titles don't mislead! (To be fair, this one didn't mislead, but it was confusing at best.)
I'm not on the Rust bandwagon, but statements like this make absolutely no sense.
A lot of software was written in C and C++ because they were the only option for decades. If you couldn't afford garbage collection and needed direct control of the hardware, there wasn't much of a choice. Had there been "safer" alternatives, it's possible those would have been used instead.
It's only been in the last few years we've seen languages emerge that could actually replace C/C++ with projects like Rust, Zig and Odin. I'm not saying they will, or they should, but just that we actually have alternatives now.
If you’re trying to demonstrate something about Rust by pointing out that someone chose C over Perl, I have to wonder how much you know about the positive characteristics of C. Let alone Rust.
Your comment comes across as disingenuous to me.
Writing it in, for example, Java would have limited it to situations where you have the JVM available, which is a minuscule subset of the situations that curl is used in today, especially if we're not talking "curl, the CLI tool" but libcurl.
I have a feeling you know that already and mostly want to troll people.
And Golang is only 16 years old according to Wikipedia, by the way.
OS kernels? Everything from numpy to CUDA to NCCL is using C/C++ (doing all the behind the scene heavy lifting), never mind the classic systems software like web browsers, web servers, networking control plane (the list goes on).
Newer web servers have already moved away from C/C++.
Web browsers have been written in restricted subsets of C/C++ with significant additional tooling for decades at this point, and are already beginning to move to Rust.
For Chrome, I don't know if anyone has compiled the stats, but navigating from https://chromium.googlesource.com/chromium/src/+/refs/heads/... I see at least a bunch of vendored crates, so there's some use, which makes sense since in 2023 they announced that they would support it.
Not in the sense that people who are advocating writing new code in C/C++ generally mean. If someone is advocating following the same development process as Chrome does, then that's a defensible position. But if someone is advocating developing in C/C++ without any feature restrictions or additional tooling and arguing "it's fine because Chrome uses C/C++", no, it isn't.
I have. I personally really enjoy the recent crop of UI frameworks built for the web. Tools like Solidjs or Svelte.
Whatever your thoughts are about React, the JavaScript community has been the tip of the spear for experimenting with new ways to program user interfaces. They’ve been at it for decades - ever since jQuery. The rest of the programming world is playing catchup. For example, SwiftUI.
VSCode is also a wonderful little IDE. Somehow much faster and more stable than XCode, despite being built on electron.
This made me quite curious, is there a list somewhere of what bad APIs have been removed/improved and/or technical debt that's been addressed? Or if not a list, some notable examples?
The Linux kernel is already very complex, and I expect that replacing much or all of it with Rust code will be good for making it more tractable to understand. Because you can represent complex states with more sophisticated types than in C, if nothing else.
I've been working on Rust bindings for a C SDK recently, and the Rust wrapper code was far more complex than the C code it wrapped. I ended up conceding and getting reasonable wrappers by limiting how it can be used, instead of modeling the C API's full capabilities. There are certainly sound, reasonable models of memory ownership that are difficult or impossible to express with Rust's ownership model.
Sure, a different model that was easy to express would have been picked if we had been using Rust from the start, but we were not, and the existing model in C is what we need to wrap. Also, a more Rust-friendly model would have incurred higher memory costs.
I totally hear you about C friendly APIs not making sense in rust. GTK struggles with this - I tried to make a native UI in rust a few months ago using gtk and it was horrible.
> Also, a more Rust-friendly model would have incurred higher memory costs.
Can you give some details? This hasn’t been my experience.
I find in general rust’s ownership model pushes me toward designs where all my structs are in a strict tree, which is very efficient in memory since everything is packed in memory. In comparison, most C++ APIs I’ve used make a nest of objects with pointers going everywhere. And this style is less memory efficient, and less performant because of page faults.
C has access to a couple tricks safe rust is missing. But on the flip side, C’s lack of generics means lots of programs roll their own hash tables, array lists and various other tools. And they’re often either dynamically typed (and horribly inefficient as a result) or they do macro tricks with type parameters - and that’s ugly as sin. A lack of generics and monomorphization means C programs usually have slightly smaller binaries. But they often don’t run as fast. That’s often a bad trade on modern computers.
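To illustrate the generics point (a toy example, not taken from any particular codebase): a single generic definition, and the compiler monomorphizes a specialized copy for each concrete type it is used with, with no void*-style dynamic typing and no macro trickery.

    // One generic, monomorphized helper: a single source definition, and the
    // compiler emits a specialized copy for each concrete type it is used with.
    fn max_by_key<'a, T, K: Ord>(items: &'a [T], key: impl Fn(&T) -> K) -> Option<&'a T> {
        let mut best: Option<&T> = None;
        for item in items {
            match best {
                // keep the current best on ties
                Some(b) if key(b) >= key(item) => {}
                _ => best = Some(item),
            }
        }
        best
    }

    fn main() {
        let words = ["rust", "kernel", "maintainer"];
        let nums = [3_i64, 17, 5];
        // Two separate monomorphized instances come from the one definition above.
        assert_eq!(max_by_key(&words, |w| w.len()), Some(&"maintainer"));
        assert_eq!(max_by_key(&nums, |&n| n), Some(&17));
    }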
BSD development mostly works through the classic approach of pretending that if you write C code and then stare at it really really hard it'll be correct.
I'm not actually sure if they've gotten as far as having unit tests.
(OpenBSD adds the additional step of adding a zillion security mitigations that were revealed to them in a dream and don't actually do anything.)
They don't do anything, yet they segfault some C turd from 'The Computational Beauty of Nature'.
You have no clue about any BSD, it seems. Every BSD has nothing to do with the others.
They had me in the first half of the article, not gonna lie. I thought they resigned because Rust was so complicated and unsuitable, but it's the opposite
Why do Rust developers create so much drama everywhere? "Rewrite everything to Rust", "Rust is better than X" in all threads related to other languages, "how do you know it's safe if you don't use Holy Glorious Rust".
I'm genuinely not trolling, and Rust is okay, but only Rust acolytes exhibit this behaviour.
I think they read the title but not the article and assumed Rust was being removed, and (rightfully, if that were true) opined that that was a shortsighted decision.
Rust in the kernel feels like a red herring. For fault tolerance and security, wouldn’t it be a superior solution to migrate Linux to a microkernel architecture? That way, drivers and various other components could be isolated in sandboxes.
I am not a system programmer but, from my understanding, Torvalds has expressed strong opinions about microkernels over a long period of time. The concept looks cleaner on paper but the complexity simply outweighs all the potential benefits. The debate, from what I have followed, expressed similar themes as monolithic vs microservices in the wider software development arena.
I'm not a kernel developer myself, but I’m aware of the Tanenbaum/Torvalds debates in the early 90’s. My understanding is the primary reason Linus gave Tanenbaum for the monolithic design was performance, but I would think in 2025 this isn’t so relevant anymore.
And thanks for attempting to answer my question without snark or down voting. Usually HN is much better for discussion than this.
>"My understanding is the primary reason Linus gave Tanenbaum for the monolithic design was performance, but I would think in 2025 this isn’t so relevant anymore."
I think that unlike user level software performance of a kernel is of utmost importance.
Microkernel architecture doesn't magically eliminate bugs, it just replaces a subset of kernel panics with app crashes. Bugs will be there, they will keep impacting users, they will need to be fixed.
I agree it doesn’t magically eliminate bugs, and I don’t think rearchitecting the existing Linux kernel would be a fruitful direction regardless. That said, OS services split out into apps with more limited access can still provide a meaningful security barrier in the event of a crash. As it stands, a full kernel-space RCE is game over for a Linux system.
This is great because it means someday (possibly soon) Linux development will slowly grind to a halt and become unmaintainable, so we can start from scratch and write a new kernel.
That is so good to hear. I feel Rust support came a long way in the past two years and you can do a functional Rust kernel module now with almost no boilerplate.
Removing the "experimental" tag is certainly a milestone to celebrate.
I'm looking forward to distros shipping a default kernel with Rust support enabled. That, to me, will be the real point of no return, where Rust is so prevalent that there will be no going back to a C only Linux.
A few distros already do that. Of the top of my head, both NixOS and Arch enable the QR code kernel panic screen, which is written in Rust. Granted, those are rather bleeding edge, but I know a few more traditional distros have that enabled (I _think_ fedora has it? But not sure).
> both NixOS and Arch enable the QR code kernel panic screen
Interesting, I use both (NixOS on servers, Arch for desktop) and have never seen that. Seems you're referring to this: https://rust-for-linux.com/drm-panic-qr-code-generator (which looks like this: https://github.com/kdj0c/panic_report/issues/1)
Too bad NixOS and Arch are so stable I can't remember the last time any full system straight up panicked.
Yep, that's what I'm referring to.
For now there aren't many popular drivers that use Rust, but there are currently 3 in-development GPU drivers that use it, and I suspect that when those get merged that'll be the real point of no return:
- Asahi Linux's driver for Apple GPUs - https://rust-for-linux.com/apple-agx-gpu-driver
- The Nova GPU driver for NVIDIA GPUs - https://rust-for-linux.com/nova-gpu-driver
- The Tyr GPU driver for Arm Mali GPUs - https://rust-for-linux.com/tyr-gpu-driver
I suspect the first one of those to be actually used in production will be the Tyr driver, especially since Google's part of it and they'll probably want to deploy it on Android, but for the desktop (and server!) Linux use-case, the Nova driver is likely to be the major one.
I believe Google has developed a Rust implementation of the binder driver which was recently merged, and they are apparently planning to remove the original C implementation. Considering binder is essential for Android that would be a major step too.
Arch never failed me. The only time I remember it panicked was when I naively stopped an upgrade in the middle which failed to generate initramfs but quickly fixed it by chroot'ing and running mkinitcpio. Back up in no time
Will that be possible before Rust gets full GCC support, with all of its platforms?
After all the resistance to Rust in the Linux Kernel, it's finally official. Kudos to the Linux Rust team!
wasn't there like a drive by maintainer rejection of something rust related that kind of disrupted the asahi project ? i can't say i followed the developments much but i do recall it being some of that classic linux kernel on broadway theater. i also wonder if that was a first domino falling of sorts for asahi, i legitimately can't tell if that project lives on anymore
IIRC, it was not about Rust vs. C, but a commotion rooted in patch quality and in not pushing people around.
The Linux kernel team has this habit of forceful pushback that breaks souls and hearts when prodded too much.
Looks like Hector has deleted his Mastodon account, so I can't look back at what he said exactly.
Oh, I still have the relevant tab open. It's about code quality: https://news.ycombinator.com/item?id=43043312
That was a single message in a very large thread. It absolutely was not just about code quality.
Yes, that's the entry of the rabbit hole, and reader is advised to dig their own tunnel.
I remember following the tension for a bit. Yes, there are other subjects about how things are done, but after reading it, I remember framing "code quality" as the base issue.
In high-stakes software development environments, egos run high generally, and when people clash and doesn't back up, sparks happen. If this warning is ignored, then something has to give.
If I'm mistaken, I can enjoy a good explanation and be gladly stand corrected.
This is what happened here. This is probably the second or third time I witness this over 20+ years. Most famous one was over CPU schedulers, namely BFS, again IIRC.
> Looks like Hector has deleted his Mastodon account, so I can't look back what he said exactly.
https://archive.md/uLiWX
https://archive.md/rESxe
Thanks! Let me edit this in, too.
Edit: Nah, I can't, too late.
https://lkml.org/lkml/2025/2/6/1292
Some pretext: I'm a Rust skeptic and tired of Rust Evangelism Task Force and Rewrite in Rust movements.
---
Yes. I remember that message.
Also let's not forget what marcan said [0] [1].
In short, a developer didn't want their C codebase littered with Rust code, which I can understand, then the Rust team said that they can maintain that part, not complicating his life further (Kudos to them), and the developer lashing out to them to GTFO of "his" lawn (which I understand again, not condone. I'd have acted differently).
This again boils down to code quality matters. Rust is a small child when compared to the whole codebase, and weariness from old timers is normal. We can discuss behaviors till the eternity, but humans are humans. You can't just standardize everything.
Coming to marcan, how he behaved is a big no in my book, too. Because it's not healthy. Yes, the LKML is not healthy, but this is one of the things which makes you wrong even when you're right.
I'm also following a similar discussion list, which has a similar level of friction on some matters, and the correct thing is to take some time off and touch grass when you're feeling tired and close to burnout. Not to run around like a lit torch between flammable people.
One needs to try to be the better example, especially when the environment is not in an ideal shape. It's the hardest thing to do, but it's also the most correct path.
[0]: https://web.archive.org/web/20250205004552mp_/https://lwn.ne...
[1]: https://lkml.org/lkml/2025/2/6/404
One perspective is that Rust appears to be forced into the Linux kernel through harassment and pressure, instead of Rust being pulled in carefully, organically and in a friendly way, while taking good care of any significant objections. Objections like getting the relevant features from unstable Rust into stable Rust, getting a second compiler like gccrs fully up and running (the Linux kernel uses gcc for C), ensuring that there is a specification (the specification donated by Ferrous Systems might have significant issues), or prioritizing the kernel higher than Rust.
If I had been enthusiastic about Rust, and wanted to see whether it could make sense for Rust to be part of the Linux kernel[0], I would probably have turned my attention to gccrs.
What is then extra strange is that there has been some public hostility against gccrs (a WIP Rust compiler for gcc) from the rustc camp (the sole main Rust compiler, primarily based on LLVM).
It feels somewhat like a corporate takeover, not something where good and benign technology is most important.
And money is at stake as well, the Rust Foundation has a large focus on fundraising, like how their progenitors at Mozilla/Firefox have or had a large focus on fundraising. And then there are major Rust proponents who openly claim, also here on Hacker News, that software and politics are inherently entwined.
[0]: And also not have it as a strict goal to get Rust into the kernel, for there might be the possibility that Rust was discovered not to be a good fit; and then one could work on that lack of fit after discovery and maybe later make Rust a good fit.
>One perspective is that Rust appears to be forced into the Linux kernel through harassment and pressure, instead of Rust being pulled in carefully, organically and in a friendly way, while taking good care of any significant objections. Objections like getting the relevant features from unstable Rust into stable Rust, getting a second compiler like gccrs fully up and running (the Linux kernel uses gcc for C), ensuring that there is a specification (the specification donated by Ferrous Systems might have significant issues), or prioritizing the kernel higher than Rust.
On the flip side, there have been many downright sycophants of C-only in the Linux kernel, who have done everything possible to throttle and sideline the Rust for Linux movement.
There have been multiple very public statements by other maintainers that they actively reject Rust in the kernel rather than coming at it with open hearts and minds.
Why are active maintainers rejecting code that falls under their remit of responsibility?
I think the main issue is that it never was about how Rust programmers should write more Rust. Just like a religion, it is about what other people should do. That's why you see so many abandoned, very ambitious Rust projects to redo x, y or z that is already written in C, all over again (throwing away many years of hardening and bug fixes). The idea is that the original authors can somehow be manipulated into taking these grand gifts and then throwing away their old code. I'm waiting for the SQLite rewrite in Rust any day now. And it is not just the language itself; you also get a package manager that you will have to incorporate into your workflow (with risks that are entirely its own), new control flow models, new terminology, and so on.
Rust should have done exactly one thing and done it as well as possible: be a C replacement, while sticking as close as possible to C syntax. Now we have something that is a halfway house between C, C++, JavaScript (Node.js, actually), Java and possibly even Ruby, with a syntax that makes Perl look good and a bunch of instability thrown in for good measure.
It's as if the Jehovah's Witnesses decided to get into tech and to convince the world of the error of its ways.
>I think the main issue is that it never was about how Rust programmers should write more Rust. Just like a religion, it is about what other people should do.
>Rust should have done exactly one thing and done it as well as possible: be a C replacement, while sticking as close as possible to C syntax.
Irony.
No, just Rust's original stated goal.
Personally, I observe that Rust is being forced everywhere in the Linux ecosystem. One of my biggest concerns is uutils, mostly because of the permissive license it bears. The Linux kernel and its immediate userspace should be GPL licensed to protect the OS, in my opinion.
I have a personal principle of not using LLVM-based languages (in equal parts because I don't like how LLVM people behave toward GCC and because I support free software first and foremost), so I watch gccrs closely, and my personal ban on Rust will be lifted the day gccrs becomes a viable alternative compiler.
This brings me to my second biggest reservation about Rust. The language is moving too fast, without any apparent plan or maturing process. Things are labeled unstable, there's no spec, and apparently nobody is working on these very seriously, which you also noted.
I don't understand why people are hostile against gccrs? Can you point me to some discussions?
> It feels somewhat like a corporate takeover, not something where good and benign technology is most important.
As I noted above, the whole Rust ecosystem feels like it's on a crusade, especially against C++. I write C++, I play with pointers a lot, and I understand the gotchas, as well as the team dynamics and how it's becoming harder to write good software with larger teams regardless of programming language. But the way Rust propels itself forward leaves a very bad taste in the mouth, and I personally don't like being forced into something. So, while gccrs will lift my personal ban, I'm not sure I'll take up the language enthusiastically. On the other hand, another language Rust people apparently hate, Go, ticks all the right boxes as a programming language. Yes, they made some mistakes and walked some of them back at the last moment, but the whole ordeal looks tidier and better handled than Rust's.
In short, being able to borrow-check things is not a license to push people around like this, and they are building themselves a good countering force with all this enthusiasm they're pumping around.
Oh, I'll only thank them for making other programming languages improve much faster. Like how LLVM has stirred GCC devs into action and made GCC a much better compiler in no time.
The language is moving too fast? The language is moving extremely slowly imo, too slowly for a lot of features. C++ is moving faster at this point.
> In short, a developer didn't want their C codebase littered with Rust code
from reading those threads at the time, the developer in question was deliberately misunderstanding things. "their" codebase wasn't to be "littered" with Rust code - it wasn't touched at all.
https://www.phoronix.com/news/Alex-Gaynor-Rust-Maintainer
What are you trying to imply?
we shall C the resolve of the RUST maintainers; they are working on borrowed time...
"Yay, we got Rust in the Kernel! ^_^ Ok, now it's your code! Bye!"
Then another maintainer will take care of it? This is how kernel development works....
high quality educational material
I never used Linux because of the language it was built in. I have never cared what language a product was built in. The Rust community, however, wants you to use a product just because it's implemented in Rust, or at least uses that as one of the selling points. For that reason I decided to avoid, as much as I can, any product that advertises Rust as a selling point. I was planning to switch from Mac to a Linux machine; not anymore, I'm upgrading my Mac happily.
Does the removal of “experimental” now mean that all maintainers are now obligated to not break Rust code?
Yeah I was wondering about that one angry bearded guy in that movie. He will be very upset.
I am missing some context, did not follow the Linux/Rust drama. which guy?
linus torvalds
I can't find the movie, but there was an angry guy who stated that he should not have to learn Rust, that nobody should, and that Rust should not be in the kernel.
Is rust code part of user space?
This is the Linux kernel. User space can do whatever it wants.
That's a woosh moment
It might be if the original answer wasn't that ignorant of the question's origin.
Projects like GNOME are increasingly using Rust.
No maintainer is obligated to not break any part of Linux other than the user space API, there are no stable in-kernel APIs
What they mean is that the Linux kernel has a long-standing policy to keep the whole kernel compilable on every commit, so any commit that changes an internal API must also fix up _all_ the places where that internal API is used.
While Rust in the kernel was experimental, this rule was relaxed somewhat to avoid introducing a barrier for programmers who didn't know Rust, so their work could proceed unimpeded while the experiment ran. In other words, the Rust code was allowed to be temporarily broken while the Rust maintainers fixed up uses of APIs that were changed in C code.
So does this mean that the C developers might need to learn Rust or cooperate more with the rust developer team basically?
I guess in practice you'd want to have Rust installed as part of your local build and test environment. But I don't think you have to learn Rust any more (or any less) than you have to learn Perl or how the config script works.
As long as you can detect if/when you break it, you can then either quickly pick up enough to get by (if it's trivial), or you ask around.
The proof of the pudding will be in the eating: the Rust community had better step up in terms of long-term commitment to the code they produce, because that is what will keep this code in the kernel. This is just first base.
Learn rust to a level where all cross language implications are understood, which includes all `unsafe` behaviour (...because you're interfacing with C).
Yes it does.
Depends on the change being made.
If they completely replace an API then sure, probably.
But for most changes, like adding a param to a function or a struct, they basically have to learn nothing.
Rust isn't unlike C either. You can write a lot of it in a pretty C like fashion.
>"Rust isn't unlike C either. You can write a lot of it in a pretty C like fashion."
I think that with all of Rust's borrowing rules, the statement is very iffy.
Rust's borrowing rules might force you to make different architecture choices than you would with C. But that's not what I was thinking about.
For a given Rust function, where you might expect a C programmer to need to interact due to a change in the C code, most of the lifetime rules will have already been hammered out before the needed updates to the Rust code. It's possible, but unlikely, that the C programmer will need to significantly change what is being allocated and how.
C has a lot of the same borrowing rules, though, with some differences around strict aliasing versus the shared/exclusive difference in Rust.
Most of the things people stub their toe on in Rust coming from C are already UB in C.
That's exactly the original question.
There is, I understand, an expectation that if you do make breaking changes to kernel APIs, you fix the callers of such APIs. Which has been a point of contention, that if a maintainer doesn't know Rust, how would they fix Rust users of an API?
The Rust for Linux folks have offered that they would fix up such changes, at least during the experimental period. I guess what this arrangement looks like long term will be discussed ~now.
Without a very hard commitment that is going to be a huge hurdle to continued adoption, and kernel work is really the one place where rust has an actual place. Everywhere else you are most likely better off using either Go or Java.
Aren't large parts of a web browser and a runtime for a programming language also better written in Rust than in Go or Java?
I'd say no: access to a larger pool of programmers is an important ingredient in the decision of what you want to write something in. Netscape pre-dated Java, which is why it was written in C/C++, and that is why we have Rust in the first place. But today we do have Java which has all of the rust safety guarantees and then some, is insanely performant for network code and has a massive amount of mindshare and available programmers. For Go the situation is a bit less good, but if you really need that extra bit of performance (and you almost never really do) then it might be a good choice.
> But today we do have Java which has all of the rust safety guarantees and then some, is insanely performant for network code and has a massive amount of mindshare and available programmers.
I'm not entirely convinced that Java has much mindshare among systems programmers. During my 15-year career in the field I haven't heard "Boy, I wish we wrote this <network driver | user-space offload networking application | NIC firmware> in Java" once.
I've seen plenty of networking code in Java, including very large scale video delivery platforms, near real time packet analysis and all kinds of stuff that I would have bet were not even possible in Java. If there is one thing that I'm really impressed by then it is how far Java has come performance wise, from absolutely dog slow to being an insignificant fraction away from low level languages. And I'm not a fan (to put it mildly) so you can take that as 'grudging respect'.
The 'factory factory' era of Java spoiled the language thoroughly for me.
All the mainstream browsers do their own low-level graphics rendering (e.g., of text and icons and such). Is Java performant enough to do that?
Absolutely. Again, not a fan of the language. But you can transcode 10's of video streams on a single machine into a whole pile of different output resolutions and saturate two NICs while you're at it. Java has been 'fast enough' for almost all purposes for the last 10 years at least if not longer.
The weak point will always be the startup time of the JVM which is why you won't see it be used for short lived processes. But for a long lived process like a browser I see no obstacles.
Does this mean that all architectures that Linux supports but Rust doesn't are straight in the bin?
No, Rust is in the kernel for driver subsystems. Core linux parts can't be written in Rust yet for the problem you mention. But new drivers *can* be written in Rust
Which ones would that be?
In order to figure this out I took the list of platforms supported by Rust from https://doc.rust-lang.org/nightly/rustc/platform-support.htm... and those supported by Linux from https://docs.kernel.org/arch/index.html, cleaned them up so they can be compared like for like, and then ran them through a short Python script that computes the intersection and the two differences.
Which yields:
Both: {'aarch64', 'xtensa', 'sparc', 'm68k', 'mips64', 'sparc64', 'csky', 'riscv', 'powerpc64', 's390x', 'x86', 'powerpc', 'loongarch', 'mips', 'hexagon', 'arm', 'x86_64'}
Linux, but not Rust: {'nios2', 'microblaze', 'arc', 'openrisc', 'parisc', 's390', 'alpha', 'sh'}
Rust, but not Linux: {'avr', 'bpfel', 'amdcgn', 'wasm32', 'msp430', 'bpfeb', 'nvptx', 'wasm64'}
Personally, I've never used a computer from the "Linux, but not Rust" list, although I have gotten close to a DEC Alpha that was on display somewhere, and I know somebody who had a Sega Dreamcast (`sh`) at some point.
Well, GCC 15 already ended support for the nios2 soft core. Its successor is Nios V, which runs RISC-V. If users want an updated kernel, they'll also need to update their FPGA.
Microblaze also is a soft-core, based on RISC-V, presumably it could support actual RISC-V if anyone cared.
All others haven't received new hardware within the last 10 years, everybody using these will already be running an LTS kernel on there.
It looks like there really are no reasons not to require rust for new versions of the kernel from now on then!
Is this the same NIOS that runs on FPGA? We wrote some code for it during digital design in university, and even an a.out was terribly slow, can't imagine running a full kernel. Though that could have been the fault of the hardware or IP we were using.
It’s an interesting list from the perspective of what kind of project Linux is. Things like PA-RISC and Alpha were dead even in the 90s (thanks to the successful Itanium marketing push convincing executives not to invest in their own architectures), and SuperH was only relevant in the 90s due to the Dreamcast pushing volume. That creates an interesting dynamic where Linux as a hobbyist OS has people who want to support those architectures, but Linux as the dominant server and mobile OS doesn’t want to hold back 99.999999+% of the running kernels in the world.
There was a time when it came to 64 bit support Alpha really was the only game in town where you could buy a server without adding a sixth zero to the bill. It was AMD, not Itanium that killed Alpha.
some of the platforms on that list are not supported well enough to say they actually have usable Rust, e.g. m68k
I've certainly used parisc and alpha, though not for some years now.
https://lwn.net/Articles/1045363/
> Rust, which has been cited as a cause for concern around ensuring continuing support for old architectures, supports 14 of the kernel's 20-ish architectures, the exceptions being Alpha, Nios II, OpenRISC, PARISC, and SuperH.
It's strange to me that Linux dropped Itanium two years ago but decided to keep supporting Alpha and PA-RISC.
I do wonder how many human beings are running the latest Linux kernel on Alpha.
> supports 14 of the kernel's 20-ish architectures
That's a lot better than I expected to be honest, I was thinking maybe Rust supported 6-7 architectures in total, but seems Rust already has pretty wide support. If you start considering all tiers, the scope of support seems enormous: https://doc.rust-lang.org/nightly/rustc/platform-support.htm...
And more specifically which ones that anyone would use a new kernel on?
Some people bring up 68k not being supported as a problem
m68k Linux is supported by Rust, even in the LLVM backend.
Rust also has an experimental GCC-based codegen backend (based on libgccjit (which isn't used as a JIT)).
So platforms that have neither LLVM nor a recent GCC are screwed.
how on earth is linux being compiled for platforms without a GCC?
additionally, I believe the GCC backend is incomplete. the `core` library is able to compile, but rust's `std` cannot be.
>nor a recent GCC are screwed.
Not having a recent GCC and not having GCC are different things. There may be architectures that have older GCC versions, but are no longer supported for more current C specs like C11, C23, etc.
I don't believe Rust for Linux uses std. I'm not sure how much of Rust for Linux the GCC/Rust effort(s) are able to compile, but if it were "all of it" I'm sure we'd have heard about it.
Yes. Cooked.
Hmm, seeing as this looks to be the path forward as far as in-kernel Linux drivers are concerned, and for BSDs like FreeBSD that port said drivers to their own kernels: are we going to see the same oxidation of the BSDs, or resistance and bifurcation?
This seems big. Is this big?
It's a big dependency added alright.
Is that all it is?
Well - what is actually written in Rust so far that is in the kernel?
https://rust-for-linux.com/ shows a list.
Yup.
So what is written in Rust so far therein?
> Linux's DRM Panic "Screen of Death"
> "There is no particular reason to do it in rust, I just wanted to learn rust, and see if it can work in the kernel."
https://www.phoronix.com/news/Linux-DRM-Panic-QR-Codes
We certainly don’t want the BSOD to crash. So that’s a reason.
Mostly GPU drivers
Rightfully so, it's probably the place where you'll get the most ROI
I don't understand why. Working with hardware you're going to have to do various things with `unsafe`. Interfacing to C (the rest of the kernel) you'll have to be using `unsafe`.
In my mind, the reasoning for rust in this situation seems flawed.
I’ve done a decent amount of low-level code (never written a driver, but I’m still young). The vast majority of it can be safe and call into wrapped unsafe code when needed. My experience is that a very, very small amount of stuff actually needs unsafe, and the only reason the unsafe C code is used is because it’s possible, not because it’s necessary.
It is interesting how many very experienced programmers have not yet learned your lesson, so you may be young but you are doing very well indeed. Kudos.
The amount of unsafe is pretty small and limited to where the interfacing with io or relevant stuff is actually happening. For example (random selection) https://lkml.org/lkml/2023/3/7/764 has a few unsafes, but either: a) trivial structure access, or b) documented regarding why it's valid. The rest of the code still benefits.
But that's not the only reason Rust is useful - see the end of the write-up from the module author https://asahilinux.org/2022/11/tales-of-the-m1-gpu/ ("Rust is magical!" section)
The point is that lots of the code won't be `unsafe`, and therefore you benefit from memory safety in those parts.
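As a rough, hypothetical sketch of that idea (none of this is actual kernel code; the type and register are invented), the single unsafe operation gets wrapped once, and the rest of the driver calls a safe API:

    // Hypothetical sketch, not actual kernel code: one unsafe operation wrapped
    // behind a safe API, so the rest of the driver never writes `unsafe` itself.
    use core::ptr;

    pub struct StatusReg {
        addr: *mut u32, // address of a device register; validity is the documented invariant
    }

    impl StatusReg {
        /// # Safety
        /// `addr` must point to a valid, mapped MMIO register for the lifetime of the value.
        pub unsafe fn new(addr: *mut u32) -> Self {
            Self { addr }
        }

        /// Safe to call: the only invariant was established (and reviewed) at construction.
        pub fn read(&self) -> u32 {
            // SAFETY: `addr` is valid per the constructor's contract.
            unsafe { ptr::read_volatile(self.addr) }
        }
    }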
And we're cooked.
Happy to hear. We should embrace new things.
Great. Python in the kernel!
I'm mostly interested in being able to interact with the vfs and build a 9p rust client (in addition to the current v9fs) to support my 9p server [0].
Does anyone know what's the current state / what is the current timeline for something like this to be feasible?
[0] https://github.com/Barre/ZeroFS
I need to check the Rust parts of the kernel; I presume there are significant amounts of unsafe. Is unsafe Rust a bit better nowadays? I remember a couple of years ago people complained that unsafe is really hard to write and very "un-ergonomic".
The idea is that most of the unsafe code to interact with the C API of the kernel is abstracted in the kernel crate, and the drivers themselves should use very little amount of unsafe code, if any.
Wouldn't that be by design? If it isn't pleasant, people will avoid using it if they can.
(but as others pointed out, unsafe should only be used at the 'edges', the vast majority should never need to use unsafe)
I don't think unsafe Rust has gotten any easier to write, but I'd also be surprised if there was much unsafe except in the low-level stuff (hard to write Vec without unsafe), and to interface with C which is actually not hard to write.
Mostly Rust has been used for drivers so far. Here's the first Rust driver I found:
https://github.com/torvalds/linux/blob/2137cb863b80187103151...
It has one trivial use of `unsafe` - to support Send & Sync for a type - and that is apparently temporary. Everything else uses safe APIs.
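For anyone wondering what that kind of "trivial" unsafe looks like, a hypothetical sketch (the type and its justification are invented here, not taken from that driver):

    // A manual Send/Sync assertion for a type holding a raw pointer.
    pub struct DeviceHandle {
        regs: *mut u32, // raw pointers are neither Send nor Sync by default
    }

    // SAFETY: in a real driver this assertion must be justified and reviewed,
    // e.g. "all access to the registers behind `regs` is serialized by a lock".
    unsafe impl Send for DeviceHandle {}
    unsafe impl Sync for DeviceHandle {}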
Drivers are interesting from a safety perspective, because on systems without an IOMMU sending the wrong command to devices can potentially overwrite most of RAM. For example, if the safe wrappers let you write arbitrary data to a PCIe network card’s registers you could retarget a receive queue to the middle of a kernel memory page.
> if the safe wrappers let you write arbitrary data to a PCIe network card’s registers
Functions like that can and should be marked unsafe in rust. The unsafe keyword in rust is used both to say “I want this block to have access to unsafe rust’s power” and to mark a function as being only callable from an unsafe context. This sounds like a perfect use for the latter.
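A minimal sketch of those two roles, with invented names and an invented register layout (not kernel code):

    /// # Safety
    /// `base.add(offset)` must be a valid, mapped device register that is safe
    /// to write with arbitrary data.
    pub unsafe fn write_reg(base: *mut u32, offset: usize, value: u32) {
        // SAFETY: guaranteed by the caller, per the contract above.
        unsafe { base.add(offset).write_volatile(value) }
    }

    /// Also an `unsafe fn`: it cannot promise more than `write_reg` does, so the
    /// contract is propagated rather than hidden behind a misleadingly safe signature.
    ///
    /// # Safety
    /// Same requirements as `write_reg`, with `offset == 0`.
    pub unsafe fn reset_device(base: *mut u32) {
        // SAFETY: offset 0 is assumed (in this sketch) to be the reset register.
        unsafe { write_reg(base, 0, 0x1) }
    }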
> Functions like that can and should be marked unsafe in rust.
That's not how it works. You don't mark them unsafe unless it's actually required for some reason. And even then, you can limit that scope to a line or two in majority of cases. You mark blocks unsafe if you have to access raw memory and there's no way around it.
So if you're writing a device driver in rust...
- Hardware access is unsafe
- Kernel interface is unsafe
How much remains in the layer in-between that's actually safe?
All logic, all state management, all per-device state machines, all command parsing and translation, all data queues, etc. Look at the examples people posted in other comments.
And yet, the Linux kernel's Rust code uses unstable features only available on a nightly compiler.
Not optimal for ease of compilation and building old versions of the Kernel. (You need a specific version of the nightly compiler to build a specific version of the Kernel)
> Not optimal for ease of compilation and building old versions of the Kernel. (You need a specific version of the nightly compiler to build a specific version of the Kernel)
It's trivial to install a specific version of the toolchain though.
Don't the C parts of Linux heavily depend on GCC extensions too? Seems depending on specific compiler features isn't really a blocker.
The difference probably is that GCC extensions have been stable for decades. Meanwhile Rust experimental features have breaking changes between versions. So a Rust version 6 months from now likely won't be able to compile the kernel we have today, but a GCC version in a decade will still work.
luckily downloading a specific nightly version is only a single rustup command
Besides supply chain attacks, what could go wrong ? /s
Kernel doesn't use Cargo.
What if I told you… there was a simple, constant environment variable you could set to make the stable compiler forget it isn’t a nightly?
the point is unstable features aren't guaranteed to not break after a compiler update. hence the specific version thing.
Can you please share more details about this?
https://rust-for-linux.com/unstable-features#unstable-featur...
really? I recently read that "A 100% Rust kernel is now upstream in Linux 7.4"
That was from an AI hallucinating HN 10 years from now: https://news.ycombinator.com/item?id=46205632
you're joking because of the other frontpage story with Gemini 3 hallucinating Hacker News 10 years in the future, but still, let's keep the hallucinations to that page.
That's what you get with slippery slopes.
The title sounded worse than it is.
C++ devs are spinning in their graves now.
Why should they?
Other platforms don't have a leader who hates C++ and then accepts a language that is also quite complex and even has two macro systems of Lisp-like wizardry (see Serde).
OSes have been written with C++ in the kernel since the late 1990s, and AI is being powered by hardware (CUDA) that was designed specifically to accommodate the C++ memory model.
Also, the Rust compiler depends on a compiler framework written in C++; without it there is no Rust compiler, and apparently they are in no hurry to bootstrap it.
The kernel is mostly C code, not C++.
Or installing Haiku!
The C++ devs that have the humility to realize it are using Rust.
Nah, they just smile when they see LLVM and GCC on Rust build dependencies. :)
Yeah they’re all just tools. From a technical perspective, rust, C and C++ work together pretty well. Swift too. LLVM LTO can even do cross language inlining.
I don’t think C++ is going anywhere any time soon. Not with it powering all the large game engines, llvm, 30 million lines of google chrome and so on. It’s an absolute workhorse.
By the way, M$$$ is already making the transition to writing its drivers in Rust:
https://github.com/microsoft/windows-drivers-rs
maybe this will be good for the rest of the kernels, IllumOS/HaikuOS/ReactOS.
maybe
Not a system programmer -- at this point, does C hold any significant advantage over Rust? Is it inevitable that everything written in C is going to be gradually converted to safer languages?
C currently remains the language of system ABIs, and there remains functionality that C can express that Rust cannot (principally bitfields).
Furthermore, in terms of extensions to the language to support more obtuse architecture, Rust has made a couple of decisions that make it hard for some of those architectures to be supported well. For example, Rust has decided that the array index type, the object size type, and the pointer size type are all the same type, which is not the case for a couple of architectures; it's also the case that things like segmented pointers don't really work in Rust (of course, they barely work in C, but barely is more than nothing).
Can you expand on bitfields? There’s crates that implement bitfield structs via macros so while not being baked into the language I’m not sure what in practice Rust isn’t able to do on that front.
Yeah, not sure what they're saying... I use bitfields in multiple of my rust projects using those macros.
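Roughly, such macros expand to accessor code along these lines; the field layout below is invented purely for illustration, not what any particular crate generates:

    #[derive(Clone, Copy, Default)]
    struct CtrlReg(u32);

    impl CtrlReg {
        const ENABLE_BIT: u32 = 0;   // bit 0
        const MODE_SHIFT: u32 = 1;   // bits 1..=2
        const MODE_MASK: u32 = 0b11;

        fn enabled(self) -> bool {
            ((self.0 >> Self::ENABLE_BIT) & 1) != 0
        }

        fn with_mode(self, mode: u32) -> Self {
            let cleared = self.0 & !(Self::MODE_MASK << Self::MODE_SHIFT);
            CtrlReg(cleared | ((mode & Self::MODE_MASK) << Self::MODE_SHIFT))
        }
    }

    fn main() {
        let r = CtrlReg::default().with_mode(0b10);
        assert!(!r.enabled());
        assert_eq!((r.0 >> 1) & 0b11, 0b10);
    }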
I'm not a Rust or systems programmer, but I think they meant that as an ABI or foreign function interface, bitfields are not stable or not intuitive to use, as they can't be declared granularly enough.
C's bit-fields ABI isn't great either. In particular, the order of allocation of bit-fields within a unit and alignment of non-bit-field structure members are implementation defined (6.7.2.1). And bit-fields of types other than `_Bool`, `signed int` and `unsigned int` are extensions to the standard, so that somewhat limits what types can have bitfields.
Across a binary library ABI, regardless of static or dynamic linking?
That first sentence though. Bitfields and ABI alongside each other.
Bitfield packing rules get pretty wild. Sure the user facing API in the language is convenient, but the ABI it produces is terrible (particularly in evolution).
I would like a revision to bitfields and structs to make them behave the way a programmer thinks, with the compiler free to suggest changes which optimize the layout, as well as some flag that indicates the compiler should not, because it's a finalized structure.
I'm genuinely surprised that usize <=> pointer convertibility exists. Even Go has different types for pointer-width integers (uintptr) and sizes of things (int/uint). I can only guess that Rust's choice was seen as a harmless simplification at the time. Is it something that can be fixed with editions? My guess is no, or at least not easily.
There is a cost to having multiple language-level types that represent the exact same set of values, as C has (and is really noticeable in C++). Rust made an early, fairly explicit decision that a) usize is a distinct fundamental type from the other types, and not merely a target-specific typedef, and b) not to introduce more types for things like uindex or uaddr or uptr, which are the same as usize on nearly every platform.
Rust worded its initial guarantee such that usize is sufficient to round-trip a pointer (making it effectively uptr), and there remains concern among several of the maintainers about breaking that guarantee, despite people on the only target that would be affected basically saying they'd rather see that guarantee broken. Sort of the more fundamental problem is that many crates are perfectly happy opting out of compiling for weirder platforms--I've designed some stuff that relies on 64-bit system properties, and I'd rather like to have the ability to say "no compile for you on platforms where usize-is-not-u64" and get impl From<usize> for u64 and impl From<u64> for usize. If you've got something like that, it also provides a neat way to say "I don't want to opt out of [or into] compiling for usize≠uptr" while keeping backwards compatibility.
If you want to see some long, gory debates on the topic, https://internals.rust-lang.org/t/pre-rfc-usize-is-not-size-... is a good starting point.
> ...not to introduce more types for things like uindex or uaddr or uptr, which are the same as usize on nearly every platform. ... there remains concern among several of the maintainers about breaking that guarantee, despite the fact that people on the only target that would be affected basically saying they'd rather see that guarantee broken.
The proper approach to resolving this in an elegant way is to make the guarantee target-dependent. Require all depended-upon crates to acknowledge that usize might differ from uptr in order to unlock building for "exotic" architectures, much like how no-std works today. That way "nearly every platform" can still rely on the guarantee with no rise in complexity.
> Is it something that can be fixed with editions? My guess is no, or at least not easily.
Assuming I'm reading these blog posts [0, 1] correctly, it seems that the size_of::<usize>() == size_of::<*mut u8>() assumption is changeable across editions.
Or at the very least, if that change (or a similarly workable one) isn't possible, both blog posts do a pretty good job of pointedly not saying so.
[0]: https://faultlore.com/blah/fix-rust-pointers/#redefining-usi...
[1]: https://tratt.net/laurie/blog/2022/making_rust_a_better_fit_...
Great info!
Personally, I like 3.1.2 from your link [0] best, which involves getting rid of pointer <=> integer casts entirely, and just adding methods to pointers, like addr and with_addr. This needs no new types and no new syntax, though it does make pointer arithmetic a little more cumbersome. However, it also makes it much clearer that pointers have provenance.
I think the answer to "can this be solved with editions" is more "kinda" rather than "no"; you can make hard breaks with a new edition, but since the old editions must still be supported and interoperable, the best you can do with those is issue warnings. Those warnings can then be upgraded to errors on a per-project basis with compiler flags and/or Cargo.toml options.
In what architecture are those types different? Is there a good reason for it there architecturally, or is it just a toolchain idiosyncrasy in terms of how it's exposed (like LP64 vs. LLP64 etc.)?
CHERI has 64-bit object size but 128-bit pointers (because the pointer values also carry pointer provenance metadata in addition to an address). I know some of the pointer types on GPUs (e.g., texture pointers) also have wildly different sizes for the address size versus the pointer size. Far pointers on segmented i386 would be 16-bit object and index size but 32-bit address and pointer size.
There was one accelerator architecture we were working that discussed making the entire datapath be 32-bit (taking less space) and having a 32-bit index type with a 64-bit pointer size, but this was eventually rejected as too hard to get working.
I guess today, instead of 128bit pointers we have 64bit pointers and secret provenance data inside the cpu, at least on the most recent shipped iPhones and Macs.
In the end, I’m not sure that’s better. Or maybe we should have had extra-large pointers again (the way that, back when 32-bit was so roomy, we stuffed other stuff in there), like CHERI proposes (though I think it still has a secret sidecar of data about the pointers).
Would love to see Apple get closer to CHERI. They could make a big change as they are vertically integrated, though I think their Apple Silicon moment for the Mac would have been the time.
I wonder what big pointers does to performance.
It's not secret, it just reuses some of the unused address bits.
> that C can express that Rust cannot
The reverse is probably more true, though. Rust has native SIMD support for example, while in standard C there is no way to express that.
'C is not a low-level language' is a great blog post about the topic.
AFAIK std::simd is still nightly only. You can use the raw intrinsics in std::arch, but that's not any better than "#include <immintrin.h>".
So use the fairly common SIMD extensions in C, that's not much of an argument.
Also you can't do self-referential structs.
Doubly-linked lists are also a pain to implement, and they're heavily used in the kernel.
> Also you can't do self-referential structs.
You mean in safe rust? You can definitely do self-referential structs with unsafe and Pin to make a safe API. Heck every future generated by the compiler relies on this.
There's going to be ~zero benefit from Rust's safety mechanisms in the kernel, which I anticipate will be littered with unsafe sections. I personally see this move not so much as a technical one as a political lobbying one (US gov). IMHO for Linux kernel development this is not good news, since it will inevitably introduce friction in development and consequently reduce velocity, and most likely steer away a certain number of long-time kernel developers too.
Don’t spread FUD, you can check some example code yourself.
https://git.kernel.org/pub/scm/linux/kernel/git/a.hindborg/l...
Sorry but what have I said wrong? The nature of code written in kernel development is such that using unsafe is inevitable. Low-level code with memory juggling and patterns that you usually don't find in application code.
And yes, I have had a look at the examples. Maybe one or two years ago there was a significant patch submitted to the kernel, and the number of unsafe sections made me realize at that moment that Rust, in terms of kernel development, might not be what it is advertised to be.
> https://git.kernel.org/pub/scm/linux/kernel/git/a.hindborg/l..
Right? Thank you for the example. Let's start by stating the obvious: this is not an upstream driver but a fork, and it is also considered by its author to be a PoC at best. You can see this acknowledged on its very web page, https://rust-for-linux.com/nvme-driver, which says "The driver is not currently suitable for general use.". So, I am not sure what point you were trying to make by giving something that is not even production-quality code.
Now let's move to the analysis of the code. The whole code, without crates, counts only 1500 LoC (?). Quite small but ok. Let's see the unsafe sections:
rnvme.rs - 8x unsafe sections, 1x SyncUnsafeCell used for NvmeRequest::cmd (why?)
nvme_mq/nvme_prp.rs - 1x unsafe section
nvme_queue.rs - 6x unsafe, not sections but complete trait impls
nvme_mq.rs - 5x unsafe sections, 2x SyncUnsafeCell used, one for IoQueueOperations::cmd second for AdminQueueOperations::cmd
In total, this is 23x unsafe sections/traits over ~1500 LoC, for a driver that is not even a production-quality driver. I don't have time, but I wonder how large this number would become if all the crates this driver is using were pulled into the analysis too.
Sorry, I am not buying that argument.
The idea behind the safe/unsafe split is to provide safe abstractions over code that has to be unsafe.
The unsafe parts have to be written and verified manually very carefully, but once that's done, the compiler can ensure that all further uses of these abstractions are correct and won't cause UB.
Everything in Rust becomes "unsafe" at some lower level (every string has unsafe in its implementation, the compiler itself uses unsafe code), but as long as the lower-level unsafe is correct, the higher-level code gets safety guarantees.
This allows kernel maintainers to (carefully) create safe public APIs, which will be much safer to use by others.
C doesn't have such an explicit split, and its abstraction powers are weaker, so it doesn't let maintainers create APIs that can't cause UB even if misused.
> I am not sure what point did you try to make by giving something that is not even a production quality code?
let's start by prefacing that 'production quality' C is 100% unsafe in Rust terms.
> Sorry, I am not buying that argument.
here's where we fundamentally disagree: you listed a couple dozen unsafe places in 1.5kLOC of code; let's be generous and say that's 10% - and you're trying to sell it as a bad thing, whereas I'm seeing the same numbers and think it's a great improvement over status quo ante.
> let's start by prefacing that 'production quality' C is 100% unsafe in Rust terms.
I don't know what one should even make from that statement.
> here's where we fundamentally disagree: you listed a couple dozen unsafe places in 1.5kLOC of code; let's be generous and say that's 10%
It's more than 10%. You didn't even bother to look at the code, but still presented what is in reality a toy driver example as something credible (?) to support your argument that I am spreading FUD. Kinda silly.
Even if it was only that much (10%), the fact it is in the most crucial part of the code makes the argument around Rust safety moot. I am sure you heard of 90/10 rule.
The time will tell but I am not holding my breath. I think this is a bad thing for Linux kernel development.
> I don't know what one should even make from that statement.
it's just a fact. by definition of the Rust language unsafe Rust is approximately as safe as C (technically Rust is still safer than C in its unsafe blocks, but we can ignore that.)
> you didn't even bother to look at the code but still presented
of course I did, what I've seen were one-liner trait impls (the 'whole traits' from your own post) and sub-line expressions of unsafe access to bindings.
> technically Rust is still safer than C in its unsafe blocks
This is quite dubious in a practical sense, since Rust unsafe blocks must manually uphold the safety invariants that idiomatic Safe Rust relies on at all times, which includes, e.g. references pointing to valid and properly aligned data, as well as requirements on mutable references comparable to what the `restrict` qualifier (which is rarely used) involves in C. In practice, this is hard to do consistently, and may trigger unexpected UB.
Some of these safety invariants can be relaxed in simple ways (e.g. &Cell<T> being aliasable where &mut T isn't) but this isn't always idiomatic or free of boilerplate in Safe Rust.
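A tiny example of the &Cell<T> point, for the record:

    use std::cell::Cell;

    // Shared references to a Cell may alias and still mutate, which `&mut T` forbids.
    fn bump_both(a: &Cell<u32>, b: &Cell<u32>) {
        a.set(a.get() + 1);
        b.set(b.get() + 1);
    }

    fn main() {
        let x = Cell::new(0u32);
        bump_both(&x, &x); // both arguments alias the same Cell: allowed
        assert_eq!(x.get(), 2);

        // The `&mut` version is rejected at compile time with aliasing arguments:
        //   fn bump_both_mut(a: &mut u32, b: &mut u32) { *a += 1; *b += 1; }
        //   bump_both_mut(&mut y, &mut y); // error: cannot borrow `y` as mutable twice
    }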
It's great that the Google Android team has been tracking data to answer that question for years now and their conclusion is:
-------
The primary security concern regarding Rust generally centers on the approximately 4% of code written within unsafe{} blocks. This subset of Rust has fueled significant speculation, misconceptions, and even theories that unsafe Rust might be more buggy than C. Empirical evidence shows this to be quite wrong.
Our data indicates that even a more conservative assumption, that a line of unsafe Rust is as likely to have a bug as a line of C or C++, significantly overestimates the risk of unsafe Rust. We don’t know for sure why this is the case, but there are likely several contributing factors:
-----
From https://security.googleblog.com/2025/11/rust-in-android-move...
So, an unsafe block every ~70 LoC in a 1500 LoC toy example? Sure, that's a strong argument.
How is that worse than everything being unsafe?
I've seen this argument thrown around often here on HN ("$IMPROVEMENT is still not perfect! So let's keep the status quo.") and it baffles me.
C is not perfect and it still replaced ASM in 99% of its use cases.
Yes it does, alongside C++; Rust isn't available everywhere.
There are industry standards based in C, or C++, and I doubt they will be adopting Rust any time soon.
POSIX (which does require a C compiler for certification), OpenGL, OpenCL, SYSCL, Aparavi, Vulkan, DirectX, LibGNM(X), NVN, CUDA, LLVM, GCC, Unreal, Godot, Unity,...
Then there are plenty of OSes, like the Apple ecosystem, among many other RTOSes and commercial endeavours for embedded.
Also, the official Rust compiler isn't fully bootstrapped yet, so it depends on C++ tooling for its very existence.
Which is why efforts to make C and C++ safer are also much welcomed; plenty of code out there is never going to be RIIR'd, and there are domains where Rust will be a runner-up for several decades.
I agree with you for the most part, but it's worth noting that those standards can expose a C API/ABI while being primarily implemented in and/or used by non-C languages. I think we're a long way away from that, but there's no inherent reason why C would be a going concern for these in the Glorious RIIR Future:tm:.
Every system under the Sun has a C compiler. This isn't remotely true for Rust. Rust is more modern than C, but has its own issues, among them very slow compilation times. My guess is that C will be around long after people have moved on from Rust to another newfangled alternative.
There is a set of languages which are essentially required to be available on any viable system. At present, these are probably C, C++, Perl, Python, Java, and Bash (with a degree of asterisks on the last two). Rust I don't think has made it through that door yet, but on current trends, it's at the threshold and will almost certainly step through. Leaving this set of mandatory languages is difficult (I think Fortran, and BASIC-with-an-asterisk, are the only languages to really have done so), and Perl is the only one I would risk money on departing in my lifetime.
I do firmly expect that we're less than a decade out from seeing some reference algorithm be implemented in Rust rather than C, probably a cryptographic algorithm or a media codec. Although you might argue that the egg library for e-graphs already qualifies.
Java hasn't ever been essential outside enterprise-y servers and those don't care about "any viable system".
We're already at the point where, in order to have a "decent" desktop software experience, you _need_ Rust too. Rust doesn't support some niche architectures because LLVM doesn't support them (those architectures are now exceedingly rare), and this means no Firefox, for instance.
A system only needs one programming language to be useful, and when there's only one it's basically always C.
A C compiler is also relatively easy to implement (compared to a Rust compiler) if you're making your own hobby OS with its own C compiler and libc.
> There is a set of languages which are essentially required to be available on any viable system. At present, these are probably C, C++, Perl, Python, Java, and Bash
Java, really? I don’t think Java has been essential for a long time.
Is Perl still critical?
From my experience of building LFS, it appears to be needed to build parts of the toolchain.
https://linuxfromscratch.org/lfs/view/development/chapter07/...
It's extremely hard to build any Unix-y userspace without Perl dependencies.
I really do wonder who (as in "which packages") still depends on perl. Surely it can't be that many?
Edit:
On my (Arch) system removing perl requires removing: auto{conf,make}, git, llvm, glibmm, among others.
critical at build time, not runtime
> Rust I don't think has made it through that door yet, but on current trends, it's at the threshold and will almost certainly step through.
I suspect what we are going to see isn't so much Rust making it through that door, but LLVM. Rust and friends will come along for the ride.
Removing Java from my desktop Arch system only prompts to remove two gui apps I barely use. I could do it right now and not notice for months.
Even Perl... It's not in POSIX (I'm fairly sure) and I can't imagine there is some critical utility written in Perl that can't be rewritten in Python or something else (and probably already has been).
As much as I like Java, Java is also not critical for OS utilities. Bash shouldn't be, per se, but a lot of scripts are actually Bash scripts, not POSIX shell scripts and there usually isn't much appetite for rewriting them.
Debian and Ubuntu enter the chat.....
I don't think Perl comes pre installed on any modern system anymore
I have yet to find one without it, unless you include Windows, consoles, tablet/phone OSes, or embedded RTOSes, which aren't proper UNIX derivatives anyway.
According to the internet (can't check now) Alpine doesn't come with perl installed.
So a distribution mostly used to deploy containers.
"Alpine Linux is a general purpose Linux distribution" https://wiki.alpinelinux.org/wiki/Tutorials_and_Howtos
I don't agree with "any modern system" but some modern systems certainly don't come with perl.
What other newfangled alternative to C was ever adopted in the Linux kernel?
I have no doubt C will be around for a long time, but I think Rust also has a lot of staying power and won’t soon be replaced.
I wouldn't be surprised to see zig in the kernel at some point
I would be. Mostly because while Zig is better than C, it doesn't really provide all that much benefit, if you already have Rust.
I personally feel that Zig is a much better fit for the kernel. Its C interoperability is far better than Rust's, it has a lower barrier to entry for existing C devs, and it doesn't have the constraints that Rust does. All whilst still bringing a lot of the advantages.
...to the extent I could see it pushing Rust out of the kernel in the long run. Rust feels like a sledgehammer to me where the kernel is concerned.
Its problem right now is that it's not stable enough. Language changes still happen, so it's the wrong time to try.
From a safety perspective there isn't a huge benefit to choosing Zig over C with the caveat, as others have pointed out, that you need to enable more tooling in C to get to a comparable level. You should be using -Wall and -fsanitize=address among others in your debug builds.
You do get some creature comforts like slices (fat pointers) and defer (goto replacement). But you also get forced to write a lot of explicit conversions (I personally think this is a good thing).
The C interop is good but the compiler is doing a lot of work under the hood for you to make it happen. And if you export Zig code to C... well you're restricted by the ABI so you end up writing C-in-Zig which you may as well be writing C.
It might be an easier fit than Rust in terms of ergonomics for C developers, no doubt there.
But I think long-term things like the borrow checker could still prove useful for kernel code. Currently you have to specify invariants like that in a separate language from C, if at all, and it's difficult to verify. Bringing that into a language whose compiler can check it for you is very powerful. I wouldn't discount it.
I’m not so sure. The big selling point for Rust is making memory management safe without significant overhead.
Zig, for all its ergonomic benefits, doesn’t make memory management safe like Rust does.
I kind of doubt the Linux maintainers would want to introduce a third language to the codebase.
And it seems unlikely they’d go through all the effort of porting safer Rust code into less safe Zig code just for ergonomics.
IMHO Zig doesn't bring enough value of its own to be worth bearing the cost of another language in the kernel.
Rust is different because it both:
- significantly improves the security of the kernel by removing the nastiest class of security vulnerabilities,
- and reduces the cognitive burden for contributors by allowing the invariants that must be upheld to be encoded in the type system.
That doesn't mean Zig is a bad language for a particular project, just that it's not worth adding to an already massive project like the Linux kernel. (Especially a project that already has two languages, C and now Rust.)
Pardon my ignorance, but I find the claim "removing the nastiest class of security vulnerabilities" to be a bold one. Is there ZERO use of "unsafe" Rust in kernel code??
Aside from the minimal use of unsafe being heavily audited and the only entry point for those vulnerabilities, it allows for expressing kernel rules explicitly and structurally whereas at best there was a code comment somewhere on how to use the API correctly. This was true because there was discussion precisely about how to implement Rust wrappers for certain APIs because it was ambiguous how those APIs were intended to work.
So aside from being like 1-5% unsafe code vs 100% unsafe for C, it’s also more difficult to misuse existing abstractions than it was in the kernel (not to mention that in addition to memory safety you also get all sorts of thread safety protections).
In essence it’s about an order of magnitude fewer defects of the kind that are particularly exploitable (based on research in other projects like Android)
"Unsafe" rust still upholds more guarantees than C code. The rust compiler still enforces the borrow checker (including aliasing rules) and type system.
You can absolutely write drivers with zero unsafe Rust. The bridge from Rust to C is where unsafe code lies.
And hardware access. You absolutely can't write a hardware driver without unsafe.
Not zero, but Rust-based kernels (see redox, hubris, asterinas, or blog_os) have demonstrated that you only need a small fraction of unsafe code to make a kernel (3-10%) and it's also the least likely places to make a memory-related error in a C-based kernel in the first place (you're more likely to make a memory-related error when working on the implementation of an otherwise challenging algorithm that has nothing to do with memory management itself, than you are when you are explicitly focused on the memory-management part).
So while there could definitely be an exploitable memory bug in the unsafe part of the kernel, expect those to be at least two orders of magnitude less frequent than with C (as an anecdotal evidence, the Android team found memory defects to be between 3 and 4 orders of magnitude less in practice over the past few years).
It removes a class of security vulnerabilities, modulo any unsound unsafe (in compiler, std/core and added dependency).
In practice you see several orders of magnitude fewer segfaults (like in Google Android CVE). You can compare Deno and Bun issue trackers for segfaults to see it in action.
As mentioned a billion times, seatbelts don't prevent death, but they do reduce the likelihood of dying in a traffic accident. Unsafe isn't a magic bullet, but it's a decent caliber round.
Zig as a language is not worth it, but as a build system it's amazing. I wouldn't be surprised if Zig gets in just because of the much better build system than C ever had (you can cross-compile not only across OSes, but also across architectures and C stdlib versions, including musl). And with that comes the testing system and seamless interop with C, which makes it really easy to start writing some auxiliary code in Zig... and eventually it may just be accepted for any development.
I agree with you that it's much more interesting than the language, but I don't think it matters for a project like the kernel that already has its build system sorted out. (Especially since, no matter how nice and convenient Zig makes cross-compilation when you start a project from scratch, even in Rust thanks to cargo-zigbuild, it would require a lot of effort to migrate the Linux build system to Zig, only to realize it doesn't support all the needs of the kernel from the start.)
As mentioned in another thread, POSIX requires the presence of a C compiler,
https://pubs.opengroup.org/onlinepubs/9799919799/utilities/c...
Though to be fair, POSIX is increasingly outdated.
GNU/Linux isn't the only system out there. POSIX's last update was in 2024 (when the C requirement got upgraded to C17), and yes, some parts of it are outdated on modern platforms, especially Apple, Google and Microsoft OSes and cloud infrastructure.
Still, whoever supports POSIX is expected to have a C compiler available if applying for the OpenGroup stamp.
I can’t think of many real world production systems which don’t have a rust target. Also I’m hopeful the GCC backend for rustc makes some progress and can become an option for the more esoteric ones
There aren't really any "systems programming" platforms anywhere near production that doesn't have a workable rust target.
It's "embedded programming" where you often start to run into weird platforms (or sub-platforms) that only have a c compiler, or the rust compiler that does exist is somewhat borderline. We are sometimes talking about devices which don't even have a gcc port (or the port is based on a very old version of gcc). Which is a shame, because IMO, rust actually excels as an embedded programming language.
Linux is a bit marginal, as it crosses the boundary and is often used as a kernel for embedded devices (especially ones that need to do networking). The 68k people have been hit quite hard by this, linux on 68k is still a semi-common usecase, and while there is a prototype rust back end, it's still not production ready.
There’s also, I believe, an effort to target C as a mid-level output, although I don’t know its state or how well it’ll work in an embedded space anyway, where performance really matters and these compilers have super old optimizers that haven’t been updated in three decades.
It's mostly embedded / microcontroller stuff. Things that you would use something like SDCC or a vendor toolchain for. Things like the 8051, stm8, PIC or oddball things like the 4 cent Padauk micros everyone was raving about a few years ago. 8051 especially still seems to come up from time to time in things like the ch554 usb controller, or some NRF 2.4ghz wireless chips.
Those don’t really support C by any real stretch; speaking from general experience with microcontrollers and closed vendor toolchains, it’s a frozen dialect of C from decades ago, which isn’t what people think of when they say C (usually people mean at least the 26-year-old C99 standard, but these often at best support C89, or come with their own limitations).
It’s still C though, and Rust is not an option. What else would you call it? Lots of C libraries for embedded target C89 syntax for exactly these reasons. Also, for what it’s worth, SDCC seems to support very modern versions of C (up to C23), so I also don’t think that critique is very valid for the 8051 or stm8. I would argue that C was built with targets like this in mind, and it is where many of its quirks that seem so anachronistic today come from (for example, int being different sizes on different targets).
Commercial embedded OSes, game consoles, for example.
Seems like game consoles for example has been accomplished by at least one dedicated team even if the vendor nor upstream provide official support: https://www.reddit.com/r/rust/comments/78bowa/hey_this_is_ky...
I’m sure if Rust becomes more popular in the game dev community, console support will become a solved problem, since these consoles generally run stock PC architectures with a conventional OS and there’s probably almost nothing wrong with the stock toolchain in terms of generating working binaries: PlayStation runs a FreeBSD-derived OS on x86, the Switch runs Nintendo's own Horizon OS on ARM (with some BSD-derived pieces), and Xbox is x86 Windows, all architectures Rust already targets.
Being supported on a games console means that you can produce binaries that are able to go through the whole approval process, and there is day 1 support for anything that is provided by the console vendor.
Otherwise it's yak shaving instead of working on the actual game code.
Some people like to do that, that is how new languages like Rust get adoption, however they are usually not the majority, hence why mainstream adoption without backing from killer projects or companies support is so hard, and very few make it.
Also, you will seldom see anyone throw away the comfort of Unreal, Unity or even Godot tooling to use Rust instead, unless they are more focused on proving the point that the game can be made in Rust than on the actual game design experience.
> Switch
Good luck shipping arbitrary binaries to this target. The most productive way for an indie to ship to Nintendo in 2025 is to create the game in Unity and build via the special Nintendo version of the toolchain.
How long do we think it would take to fully penetrate all of these pipeline stages with rust? Particularly Nintendo, who famously adopts the latest technology trends on day 1. Do we think it's even worthwhile to create, locate, awaken and then fight this dragon? C# with incremental GC seems to be more than sufficient for a vast majority of titles today.
Not only Unity, Capcom is using their own fork of .NET in their game engine.
Devil May Cry for the Playstation 5 was shipped with it.
One problem of Rust vs C, I rarely see mentioned is that Rust code is hard to work with from other languages. If someone writes a popular C library, you can relatively easily use it in your Java, D, Nim, Rust, Go, Python, Julia, C#, Ruby, ... program.
If someone writes a popular Rust library, it's only going to be useful to Rust projects.
With Rust I don’t even know if it’s possible to make borrow checking work across a language boundary. And Rust doesn't have a stable ABI, so unless you expose a plain C ABI, even a DLL will only work if compiled with the exact same compiler version. *sigh*
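To be fair, a Rust library can opt into a plain C ABI (extern "C" functions, #[repr(C)] types, built as a cdylib), and then any language with C FFI can load it like a C library; you just give up borrow checking at that boundary. A minimal sketch, with a made-up function:

    // lib.rs of a hypothetical crate built with crate-type = ["cdylib"],
    // so the produced .so/.dll exports a plain C symbol.
    // (Edition 2024 spells the attribute #[unsafe(no_mangle)].)
    #[no_mangle]
    pub extern "C" fn add_checked(a: i32, b: i32, out: *mut i32) -> bool {
        if out.is_null() {
            return false;
        }
        match a.checked_add(b) {
            Some(sum) => {
                // SAFETY: the caller promises `out` points to a writable i32.
                unsafe { *out = sum };
                true
            }
            None => false, // overflow is reported instead of silently wrapping
        }
    }

From C (or Python's ctypes, Java's FFI, etc.) you would declare it as "bool add_checked(int32_t a, int32_t b, int32_t *out);" and link against the produced library; no Rust toolchain needed on the consuming side.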
> very slow compilation times
That isn't always the case. Slow compilations are usually because of procedural macros and/or heavy use of generics. And even then compile times are often comparable to languages like typescript and scala.
Compared to C, Rust compiles much slower. This might not matter on performant systems, but when resources are constrained you definitely notice it. And if the whole world is rewritten in Rust, this will have a non-trivial impact on the total build time of a bunch of projects.
Runtime memory issues probably have a significantly higher total cost.
Of course that doesn't negate the cost, but building can always be done on another machine as a way to circumvent the problem.
typescript transpilation to js is nearly instant, it's not comparable
Good to know, my $DAYJOB project unfortunately isn’t as well informed and we have to use esbuild just to survive.
Doesn’t rustc emit LLVM IR? Are there a lot of systems that LLVM doesn’t support?
There are a number of oddball platforms LLVM doesn't support, yeah.
rustc can use a few different backends. By my understanding, the LLVM backend is fully supported, the Cranelift backend is either fully supported or nearly so, and there's a GCC backend in the works. In addition, there's a separate project to create an independent Rust frontend as part of GCC.
Even then, there are still some systems that will support C but won't support Rust any time soon: systems with old compilers or compiler forks, or systems with unusual data types which violate Rust's assumptions (like non-8-bit bytes, IIRC).
Many organizations and environments will not switch themselves to LLVM just to hamfist compiled Rust code in. Nor does the fact that LLVM supports something in principle mean that it's installed on the relevant OS distribution.
Using LLVM somewhere in the build doesn't require that you compile everything with LLVM. It generates object files, just like GCC, and you can link together object files compiled with each compiler, as long as they don't use compiler-specific runtime libraries (like the C++ standard library, or a polyfill compiler-rt library).
`clang-cl` does this with `cl.exe` on Windows.
If you're developing, you generally have control over the development environment (+/-) and you can install things. Plus that already reduces the audience: set of people with oddball hardware (as someone here put it) intersected with the set of people with locked down development environments.
Let alone the fact that, conceptually, people with locked-down environments are precisely those who would really want the extra safety offered by Rust.
I know that real life is messy but if we don't keep pressing, nothing improves.
> If you're developing, you generally have control over the development environment
If you're developing something individually, then sure, you have a lot of control. When you're developing as part of an organization or a company, you typically don't. And if there's non-COTS hardware involved, you are even more likely not to have control.
> My guess is that C will be around long after people will have moved on from Rust to another newfangled alternative.
if only due to the Lindy Effect
https://en.wikipedia.org/wiki/Lindy_effect
> Every system under the Sun has a C compiler... My guess is that C will be around long after people will have moved on from Rust to another newfangled alternative.
This is still the worst possible argument for C. If C persists in places no one uses, then who cares?
I think you didn't catch their drift
C will continue to be used because it always has been and always will be available everywhere, not only places no one uses :/
> C will continue to be used because it always has been and always will be available everywhere
Yes, you can use it everywhere. Is that what you consider a success?
I'm curious as to which other metric you'd use to define success. If it's actual usage, C still wins. Number of new lines pushed into production each year, or new projects started, C is still high up the list, would be my guess.
Languages like Rust are probably more successful in terms of age vs. adoption speed. There's just a good number of platforms which aren't even supported, and where you have no other choice than C. Rust can't target most platforms, and it compiles on even fewer. Unless Linux wants to drop support for a good number of platforms, Rust adoption can only go so far.
... yes?
> it always has been and always will be available everywhere
"Always has been" is pushing it. Half of C's history is written with outdated, proprietary compilers that died alongside their architecture. It's easy to take modern tech like LLVM for granted.
This might actually be a solved problem soonish; LLMs are unreasonably effective at writing compilers, and C is designed to be easy to write a compiler for, which also helps. I don’t know if anyone has tried, but there’s been related work posted here on HN recently: a revived Java compiler and the N64 decompilation project. Mash them together and you can almost expect to generate C compilers for obscure architectures on demand, given just some docs and binary firmware dumps.
You almost certainly have a bunch of devices containing a microcontroller that runs an architecture not targeted by LLVM. The embedded space is still incredibly fragmented.
That said, only a handful of those architectures are actually so weird that they would be hard to write a LLVM backend for. I understand why the project hasn’t established a stable backend plugin API, but it would help support these ancillary architectures that nobody wants to have to actively maintain as part of the LLVM project. Right now, you usually need to use a fork of the whole LLVM project when using experimental backends.
> You almost certainly have a bunch of devices containing a microcontroller that runs an architecture not targeted by LLVM.
This is exactly what I'm saying. Do you think HW drives SW or the other way around? When Rust is in the Linux kernel, my guess is it will be very hard to find new HW worth using, which doesn't have some Rust support.
In embedded, HW drives SW much more than the other way around. Most microcontrollers are not capable of running a Linux kernel as it is, even with NoMMU. The ones that are capable of this are overwhelmingly ARM or RISC-V these days. There’s not a long list of architectures supported by the modern Linux kernel but not LLVM/Rust.
How many of them are running Linux, and how many of them are running a modern kernel?
There are certain styles of programming and data structure implementations that end up requiring you to fight Rust at almost every step. Things like intrusive data structures, pointer manipulation and so on. Famously there is an entire book online on how to write a performant linked list in idiomatic Rust - something that is considered straightforward in C.
For these cases you could always use Zig instead of C
Given Zig's approach to safety, you can get the same in C with static and runtime analysis tools that many devs keep ignoring.
Already setting the proper defaults in a Makefile would get many people halfway there, without changing to a language that is yet to reach 1.0 and has no use-after-free story.
it is not straightforward in rust because the linked list is inherently tricky to implement correctly. Rust makes that very apparent (and, yeah, a bit too apparent).
I know, a linked list is not exactly super complex, and Rust makes it a bit tough. But the realisation one must have is this: building a linked list will break some key assumptions about memory safety, so trying to force that into Rust is just not going to work.
The problem, I guess, is that several of us have forgotten about memory safety, and it's a bit painful to be reminded of it by a compiler :-)
Can you elaborate, what key assumptions about memory safety linked lists break? Sure, double linked lists may have non-trivial ownership, but that doesn't compromise safety.
what is an intrusive data structure?
A container class that needs cooperation from the contained items, usually with special data fields. For example, a doubly linked list where the forward and back pointers are regular member variables of the contained items. Intrusive containers can avoid memory allocations (which can be a correctness issue in a kernel) and go well with C's lack of built-in container classes. They are somewhat common in C and very rare in C++ and Rust.
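A rough sketch of the shape (toy types, nothing like the kernel's real list_head machinery), just to show that the links live inside the item, so putting an item on a list allocates nothing:

    use std::ptr;

    // The links are fields of the item itself ("intrusive"), not of a
    // separate node type owned by the container.
    struct Links {
        prev: *mut Task,
        next: *mut Task,
    }

    struct Task {
        id: u32,
        run_queue: Links, // this task's membership in one particular list
    }

    impl Task {
        fn new(id: u32) -> Task {
            Task { id, run_queue: Links { prev: ptr::null_mut(), next: ptr::null_mut() } }
        }
    }

    fn main() {
        let mut a = Task::new(1);
        let mut b = Task::new(2);
        let pa: *mut Task = &mut a;
        let pb: *mut Task = &mut b;
        // Linking is just writing pointers into fields that already exist;
        // no Box, no separate node allocation. Convincing the borrow checker
        // that this aliasing is sound is exactly the hard part being discussed.
        unsafe {
            (*pa).run_queue.next = pb;
            (*pb).run_queue.prev = pa;
            println!("after task {} comes task {}", (*pa).id, (*(*pa).run_queue.next).id);
            println!("before task {} comes task {}", (*pb).id, (*(*pb).run_queue.prev).id);
        }
    }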
At least for a double linked list you can probably get pretty far in terms of performance in the non-intrusive case, if your compiler unboxes the contained item into your nodes? Or are there benefits left in intrusive data structures that this doesn't capture?
The main thing is that the object can be a member of various structures. It can be in a big general queue and in a priority queue, for example. Once you find it and deal with it, you can remove it from both without needing to search for it.
Same story for games where it can be in the list of all objects and in the list of objects that might be attacked. Once it's killed you can remove it from all lists without searching for it in every single one.
Storing the data in nodes doesn't work if the given structure may need to be in multiple linked lists, which iirc was a concern for the kernel?
And generally I'd imagine it's quite a weird form for data structures for which being in a linked list isn't a core aspect (no clue what specifically the kernel uses, but I could imagine situations where objects aren't in any linked list for 99% of the time, but must be able to be chained in even if there are 0 bytes of free RAM ("Error: cannot free memory because memory is full" is probably not a thing you'd ever want to see)).
> Storing the data in nodes doesn't work if the given structure may need to be in multiple linked lists
That is why kernel mostly (always?) uses intrusive linked lists. They have no such problem.
> if your compiler unboxes the contained item into your nodes
Are there known compilers that can do that?
A data structure that requires you to change the data to use it.
Like a linked list that forces you to add a next pointer to the record you want to store in it.
Or just build a tested unsafe implementation as a library. For example the Linked List in the standard library.
https://doc.rust-lang.org/src/alloc/collections/linked_list....
Yeah, if you need a linked list (you probably don't) use that. If however you are one of the very small number of people who need fine-grained control over a tailored data-structure with internal cross-references or whatnot then you may find yourself in a world where Rust really does not believe that you know what you are doing and fights you every step of the way. If you actually do know what you are doing, then Zig is probably the best modern choice. The TigerBeetle people chose Zig for these reasons, various resources on the net explain their motivations.
The point with the linked list is that it is perfectly valid to use unsafe to design said ”tailored data structure with internal cross-reference or what not” library and then expose a safe interface.
If you’re having trouble designing a safe interface for your collection then that should be a signal that maybe what you are doing will result in UB when looked at the wrong way.
That is how all standard library collections in Rust work. They’ve just gone to the length of formally verifying parts of the code to ensure performance and safety.
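A toy illustration of that split (not actual std code): the uninitialised-memory handling stays inside the module, and the public API refuses anything that would be unsound:

    use std::mem::MaybeUninit;

    // Fixed-capacity vector: unsafe is confined to a few lines,
    // and callers can only reach initialised elements.
    pub struct TinyVec {
        buf: [MaybeUninit<u64>; 8],
        len: usize,
    }

    impl TinyVec {
        pub fn new() -> Self {
            Self { buf: [MaybeUninit::uninit(); 8], len: 0 }
        }

        pub fn push(&mut self, v: u64) -> Result<(), u64> {
            if self.len == self.buf.len() {
                return Err(v); // the safe API refuses instead of overflowing
            }
            self.buf[self.len].write(v);
            self.len += 1;
            Ok(())
        }

        pub fn get(&self, i: usize) -> Option<u64> {
            if i < self.len {
                // SAFETY: every slot below `len` was initialised by `push`.
                Some(unsafe { self.buf[i].assume_init() })
            } else {
                None // never hands out uninitialised memory
            }
        }
    }

    fn main() {
        let mut v = TinyVec::new();
        v.push(7).unwrap();
        assert_eq!(v.get(0), Some(7));
        assert_eq!(v.get(1), None);
    }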
>>That is how all standard library collections in Rust works
Yeah, and that's what's not going to work for high-performance data structures, because you need to embed hooks into the objects, not just put objects into a bigger collection object. Once you think in terms of a collection that contains things, you have already lost that specific battle.
Another thing that doesn't work very well in Rust (from my understanding, I tried it very briefly) is using multiple memory allocators, which is also needed in high-performance code. Zig takes care to make it easy and explicit.
> If you’re having trouble designing a safe interface for your collection then that should be a signal that maybe what you are doing will result in UB when looked at the wrong way.
Rust is great, but there are some things that are safe (and you could prove them safe in the abstract), but that you can't easily express in Rust's type system.
More specifically, there are some things, and usage patterns of those things, that are safe when taken together. But the library can't force the safe usage pattern on the client with the tools that Rust provides.
If you can't create a safe interface and must have the function then create an unsafe function and clearly document the invariants and then rely on the user to uphold them?
Take a look at the unsafe functions for the standard library Vec type to see examples of this:
https://doc.rust-lang.org/std/vec/struct.Vec.html#method.fro...
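A tiny sketch of that style, with a made-up helper that mirrors how the std docs phrase the contract:

    /// Reads element `i` of `slice` without a bounds check.
    ///
    /// # Safety
    ///
    /// `i` must be strictly less than `slice.len()`, otherwise the behaviour
    /// is undefined. (Same contract style as `slice::get_unchecked`.)
    unsafe fn at_unchecked(slice: &[u32], i: usize) -> u32 {
        unsafe { *slice.as_ptr().add(i) }
    }

    fn main() {
        let data = [10, 20, 30];
        // SAFETY: 1 < data.len(), so the documented invariant holds.
        let x = unsafe { at_unchecked(&data, 1) };
        assert_eq!(x, 20);
    }

The caller is forced to write the word unsafe and, by convention, a SAFETY comment, so reviewers know exactly where the trusted reasoning lives.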
I think that misses the point though. C trusts you to design your own linked list.
It also trusts your neighbor, your kid, your LLM, you, your dog, another linked list...
Before we ask if almost all things old will be rewritten in Rust, we should ask if almost all things new are being written in Rust or other memory-safe languages?
Obviously not. When will that happen? 15 years? Maybe it's generational: how long before developers 'born' into memory-safe languages as serious choices will be substantially in charge of software development?
I don't know; I tend to come across new tools written in Rust, JavaScript or Python, but relatively little C. The times I see some "cargo install xyz" in the git repo of some new tool are definitely noticeable.
I'm a bit wary of whether this is hiding an ageist sentiment, though. I doubt most Rust developers were 'born into' the language, but instead adopted it on top of existing experience in other languages.
People can learn Rust at any age. The reality is that experienced people often are more hesitant to learn new things.
I can think of possible reasons: Early in life, in school and early career, much of what you work on is inevitably new to you, and also authorities (professor, boss) compel you to learn whatever they choose. You become accustomed to and skilled at adapting new things. Later, when you have power to make the choice, you are less likely to make yourself change (and more likely to make the junior people change, when there's a trade-off). Power corrupts, even on that small scale.
There's also a good argument for being stubborn and jaded: You have 30 years perfecting the skills, tools, efficiencies, etc. of C++. For the new project, even if C++ isn't as good a fit as Rust, are you going to be more efficient using Rust? How about in a year? Two years? ... It might not be worth learning Rust at all; ROI might be higher continuing to invest in additional elite C++ skills. Certainly that has more appeal to someone who knows C++ intimately - continue to refine this beautiful machine, or bang your head against the wall?
For someone without that investment, Rust might have higher ROI; that's fine, let them learn it. We still need C++ developers. Morbid but true, to a degree: 'Progress happens one funeral at a time.'
> experienced people often are more hesitant to learn new things
I believe the opposite. There's some kind of weird mentality in beginner/wannabe programmers (and HR, but that's unrelated) that when you pick language X then you're an X programmer for life.
Experienced people know that if you need a new language or library, you pick up a new language or library. Once you've learned a few, most of them aren't going to be very different and programming is programming. Of course it will look like work and maybe "experienced" people will be more work averse and less enthusiastic than "inexperienced" (meaning younger) people.
>Experienced people know that if you need a new language or library, you pick up a new language or library.
That heavily depends: if you tap into a greenfield project, yes. Or if you have free rein over a complete rewrite of existing projects. But these things are more the exception than the regular case.
Even on a greenfield project, the ecosystem and available talent per framework will be a consideration most of the time.
There are also other things, like being a parent and wanting to take care of your kids, that come into consideration later in life. So it's more that added responsibilities constrain perspectives and choices than that power corrupts in a purely egoistic fashion.
I still think you're off the mark. Again, most existing Rust developers are not "blank slate Rust developers". That they do not rush out to rewrite all of their past C++ projects in Rust may be more about sunk costs, and about wanting to solve new problems with from-scratch development.
> most existing Rust developers are not "blank slate Rust developers"
Not most, but the pool of software devs has been doubling every five years, and Rust matches C# on "Learning to Code" voters at Stack Overflow's last survey, which is crazy considering how many people learn C# just to use Unity. I think you underestimate how many developers are Rust blank slates.
Anecdotally, I've recently come across comments from people who've taught themselves Rust but not C or C++.
Steve Klabnik?
Either way, that survey (you could have linked to it) has some issues.
https://survey.stackoverflow.co/2025/technology#most-popular...
Select the "Learning to Code" tab.
> Which programming, scripting, and markup languages have you done extensive development work in over the past year, and which do you want to work in over the next year? (If you both worked with the language and want to continue to do so, please check both boxes in that row.)
It describes two check marks, yet only has a single statistic for each language. Did StackOverflow mess up the question or data?
The data also looks suspicious. Is it really the case that 44.6% of developers learning to code have worked with or want to work with C++?
Oh I agree the survey has issues, I was just thinking about how each year the stats get more questionable! I just think it shows that interest in Rust doesn't come only from people with a C++ codebase to rewrite. Half of all devs have got less than five years of experience with any toolchain at all, let alone C++, yet many want to give Rust a try. I do think there will be a generational split there.
> Steve Klabnik?
Thankfully no. I've actually argued with him a couple times. Usually in partial agreement, but his fans will downvote anyone who mildly disagrees with him.
Also, I'm not even big on Rust: every single time I've tried to use it I instinctively reached for features that turned out to be unstable, and I don't want to deal with their churn, so I consider the language still immature.
Steve Klabnik, why do you cite a survey, and then turn around in the next post and begin to be skeptical about that survey?
Steve has no active alter-egos on HN. Stop with your ridiculous behavior.
Okay I had upvoted you but now you're just being an asshole. Predictable from someone on multiple fucking throwaways created just to answer on a single post and crap on a piece of tech I suppose; I don't even care much about Rust. And I'm sorry to inform you I'm not Klabnik, but delusions are free: Maybe you think everyone using -nik is actually the same person and you've uncovered a conspiracy. Congrats on that.
I'd shove you a better data point but people aren't taking enough surveys for our sake, that's the one we've got. Unless you want to go with Jetbrains', which, spoilers, skews towards technologies supported by Jetbrains; I'm not aware of other major ones.
> Okay I had upvoted you but now you're just being an asshole.
Completely wrong, your description fits yourself, not me. Do better than constantly lying and being your own description, Steve Klabnik.
This behavior is weird. Your parent writes nothing like me.
The only alt I’ve made on hacker news is steveklabnik1, or whatever I called it, because I had locked myself out of this account. pg let me back in and so I stopped using it.
Giving you the benefit of the doubt at first was completely wrong, that much I agree.
That's fair; my claims are kept simplistic for purposes of space and time. However, I'm talking about new projects, not rewriting legacy code.
According to the strange data at https://survey.stackoverflow.co/2025/technology#most-popular... , 44.6% have responded positively to that question regarding C++. But there may be some issues, for the question involves two check boxes, yet there is only one statistic.
Sure, there are plenty of them, hence why you see remarks like wanting to use Rust but with a GC, or features being credited to Rust that most ML-derived languages already have.
> Obviously not
Is it obvious? I haven't heard of new projects in non-memory-safe languages lately, and I would think they would struggle to attract contributors.
New high-scale data infrastructure projects I am aware of mostly seem to be C++ (often C++20). A bit of Rust, which I’ve used, and Zig but most of the hardcore stuff is still done in C++ and will be for the foreseeable future.
It is easy to forget that the state-of-the-art implementations of a lot of systems software is not open source. They don’t struggle to attract contributors because of language choices, being on the bleeding edge of computer science is selling point enough.
There's a "point of no return" when you start to struggle to hire anyone on your teams because no one knows the language and no one is willing to learn. But C++ is very far from it.
There's always someone willing to write COBOL for the right premium.
I'm working on Rust projects, so I may have an incomplete picture, but from what I see, when devs have a choice they prefer working with Rust over C++ (if not due to the language, at least due to the build tooling).
Writing C++ is easier than writing Rust. But writing safe multithreaded code in C++?
I don't want to write multithreaded C++ at all unless I explicitly want a new hole in my foot. Rust I barely have any experience with, but it might be less frustrating than that.
Game development, graphics and VFX industry, AI tooling infrastructure, embedded development, Maker tools like Arduino and ESP32, compiler development.
https://github.com/tigerbeetle/tigerbeetle
Zig at least claims some level of memory safety in their marketing. How real that is I don't know.
About as real as claiming that C/C++ is memory safe because of sanitizers IMHO.
I mean, Zig does have non-null pointers. It prevents some UB. Just not all.
Which you can achieve in C and C++ with static analysis rules, breaking compilation if pointers aren't checked for nullptr/NULL before use.
Zig would have been a nice proposition in the 20th century, alongside languages like Modula-2 and Object Pascal.
I have heard different arguments, such as https://zackoverflow.dev/writing/unsafe-rust-vs-zig/ .
I'm unaware of any such marketing.
Zig does claim that it
> ... has a debug allocator that maintains memory safety in the face of use-after-free and double-free
which is probably true (in that it's not possible to violate memory safety on the debug allocator, although it's still a strong claim). But beyond that there isn't really any current marketing for Zig claiming safety, beyond a heading in an overview of "Performance and Safety: Choose Two".
Runtime checks can only validate code paths taken, though. Also, C sanitizers are quite good as well nowadays.
That's a library feature (not intended for release builds), not a language feature.
It is intended for release builds. The ReleaseSafe target will keep the checks. ReleaseFast and ReleaseSmall will remove the checks, but those aren't the recommended release modes for general software. Only for when performance or size are critical.
Out of curiosity, do the LLMs all use memory safe languages?
Whenever the public has heard about the language it's always been Python.
The language that implements Python's high-speed floating point has often been FORTRAN.
https://fortranwiki.org/fortran/show/Python
With lots of CUDA C++ libraries, among others.
Llama.cpp is called Llama.cpp, so there’s that…
C is fun to write. I can write Rust, but I prefer writing C. I prefer compiling C, I prefer debugging C. I prefer C.
It's a bit like asking with the new mustang on the market, with airbags and traction control, why would you ever want to drive a classic mustang?
It’s okay to enjoy driving an outdated and dangerous car for the thrill, because it makes a pleasing noise, as long as you don’t annoy other people too much with it.
> I prefer debugging C
I prefer not having to debug... I think most people would agree with that.
I prefer a billion dollars tax free, but here we are :(
In Rust dev, I haven't needed Valgrind or gdb in years, except some projects integrating C libraries.
Probably kernel dev isn't as easy, but for application development Rust really shifts the majority of problems from debugging to compile time.
If it wasn't clear, I have to debug Rust code waaaay less than C, for two reasons:
1. Memory safety - these can be some of the worst bugs to debug in C because they often break sane invariants that you use for debugging. Often they break the debugger entirely! A classic example is forgetting to return a value from a non-void function. That can trash your stack and end up causing all sorts of impossible behaviours in totally different parts of the code. Not fun to debug!
2. Stronger type system - you get an "if it compiles it works" kind of experience (as in Haskell). Obviously that isn't always the case, but I can sometimes write several hundred lines of Rust and once it's compiling it works first time. I've had to suppress my natural "oh I must have forgotten to save everything or maybe incremental builds are broken or something" instinct when this happens.
Net result is that I spend at least 10x less time in a debugger with Rust than I do with C.
Do you want some unicorns as well?
Didn't save Cloudflare. Memory safety is necessary, and memory safety guard rails can be helpful depending on what the guard rails cost in different ways, but memory safety is in no way, shape or form sufficient.
https://blog.cloudflare.com/incident-report-on-memory-leak-c...
better to crash than leak https keys to the internet
It’s available on more obscure platforms than Rust, and more people are familiar with it.
I wouldn’t say it’s inevitable that everything will be rewritten in Rust; at the very least this will take decades. C has been with us for more than half a century and is the foundation of pretty much everything; it will take a long time to migrate all that.
More likely is that they will live next to each other for a very, very long time.
Apple handled this problem by adding memory safety to C (Firebloom). It seems unlikely they would throw away that investment and move to Rust. I’m sure lots of other companies don’t want to throw away their existing code, and when they write new code there will always be a desire to draw on prior art.
That's a rather pessimistic take compared to what's actually happening. What you say should apply to the big players like Amazon, Google, Microsoft, etc the most, because they arguably have massive C codebases. Yet, they're also some of the most enthusiastic adopters and promoters of Rust. A lot of other adopters also have legacy C codebases.
I'm not trying to hype up Rust or disparage C. I learned C first and then Rust, even before Rust 1.0 was released. And I have an idea why Rust finds acceptance, which is also what some of these companies have officially mentioned.
C is a nice little language that's easy to learn and understand. But the price you pay for it is in large applications where you have to handle resources like heap allocations. C doesn't offer any help there when you make such mistakes, though some linters might catch them. The reason for this, I think, is that C was developed in an era when they didn't have so much computing power to do such complicated analysis in the compiler.
People have been writing C for ages, but let me tell you - writing correct C is a whole different skill that's hard and takes ages to learn. If you think I'm saying this because I'm a bad programmer, then you would be wrong. I'm not a programmer at all (by qualification), but rather a hardware engineer who is more comfortable with assembly, registers, buses, DRAM, DMA, etc. I still used to get widespread memory errors, because all it takes is a lapse in attention while coding. That strain is what Rust alleviates.
So you're trying to say C is for good programmers only and Rust lets the idiots program too? I think that's the wrong way to argue for Rust. Rust catches one kind of common problem, but it does not magically make logic errors go away.
No, they are not saying that at all??
> It seems unlikely [Apple] would throw away that investment and move to Rust.
Apple has invested in Swift, another high level language with safety guarantees, which happens to have been created under Chris Lattner, otherwise known for creating LLVM. Swift's huge advantage over Rust, for application and system programming is that it supports an ABI [1] which Rust, famously, does not (other than falling back to a C ABI, which degrades its promises).
[1] for more on that topic, I recommend this excellent article: https://faultlore.com/blah/swift-abi/ Side note, the author of that article wrote Rust's std::collections API.
Swift does not seem suitable for OS development, at least not as much as C or C++.[0] Swift by default manages a lot of memory using reference counting, as I understand it, which is not always suitable for OS development.
[0]: Rust, while no longer officially experimental in the Linux kernel, does not yet have major OSs written purely in it.
There's an allocation-free subset.
https://www.swift.org/get-started/embedded/
Rust's approach is overkill, I think. A lot of reference counting and stuff is just fine in a kernel.
But at least a lot of tasks in a kernel would require something else than reference counting, unless it can be guaranteed that the reference counting is optimized away or something, right?
What matters is what Apple thinks, and officially it is suitable, to the point that it is explicitly written in the documentation.
The practical reality is arguably more important than beliefs. Apple has, as it turns out, invested in trying to make Swift more suitable for kernel and similar development, like trying to automate away reference counting when possible, and also offering Embedded Swift[0], an experimental subset of Swift with significant restrictions on what is allowed in the language. Maybe Embedded Swift will be great in the future, and it is true that Apple investing into that is significant, but it doesn't seem like it's there.
> Embedded Swift support is available in the Swift development snapshots.
And considering Apple made Embedded Swift, even Apple does not believe that regular Swift is suitable. Meaning that you're undeniably completely wrong.
[0]:
https://github.com/swiftlang/swift-evolution/blob/main/visio...
You show a lack of awareness that ISO C and C++ are also not applicable, because on those domains the full ISO language standard isn't available, which is why freestanding is a thing.
But freestanding is not experimental, unlike Embedded Swift according to Apple. And there are full, large OS kernels written in C and C++.
You continue being undeniably, completely wrong.
Is it really? It always depends on which specific C compiler, and target platform we are talking about.
For Apple it suffices that it is fit for purpose for Apple itself, it is experimental for the rest of the world.
I love to be rightly wrong.
Apple is extending Swift specifically for kernel development.
Nothing wrong with using reference counting for OS development.
Even kernel development? Do you know of kernels where reference counting is the norm? Please do mention examples.
Linux? http://oldvger.kernel.org/~davem/skb.html
Is this even a fair question? A common response to pointing out that Oberon and the other Wirth languages were used to write several OSs (using full GC in some cases) is that they don't count, just like Minix doesn't count as proof of microkernels. The prime objection being that they are not large commercial OSs. So, if the only two examples allowed are Linux and Windows (and maybe macOS), then 'no', there are no widespread, production-sized OSs that use GC or reference counting.
The big sticking point for me is that for desktop- and server-style computing, hardware capabilities have increased so much that a good GC would be acceptable at the kernel level for most users. The other side of that coin is that OSs would then need to be built on different kernels for the large embedded/tablet/low-power/smartphone use cases. I think tech development has benefitted from Linux being used at so many levels.
A push to develop a new breed of OS, with a microkernel and using some sort of 'safe' language, should be on the table for developers. But outside of proprietary military/finance/industrial areas (and a lot of the work in those fields just uses Linux), there doesn't seem to be any movement toward a less monolithic OS situation.
Also, Swift Embedded came out of the effort to eventually use Swift instead for such use cases at Apple.
Rust still compiles into bigger binary sizes than C, by a small amount. Although it’s such a complex thing depending on your code that it really depends case-by-case, and you can get pretty close. On embedded systems with small amounts of ram (think on the order of 64kbytes), a few extra kb still hurts a lot.
A C compiler is easier to bootstrap than a Rust compiler.
It's a bit like asking if there is any significant advantage to ICE motors over electric motors. They both have advantages and disadvantages. Every person who uses one or the other, will tell you about their own use case, and why nobody could possibly need to use the alternative.
There's already applications out there for the "old thing" that need to be maintained, and they're way too old for anyone to bother with re-creating it with the "new thing". And the "old thing" has some advantages the "new thing" doesn't have. So some very specific applications will keep using the "old thing". Other applications will use the "new thing" as it is convenient.
To answer your second question, nothing is inevitable, except death, taxes, and the obsolescence of machines. Rust is the new kid on the block now, but in 20 years, everybody will be rewriting all the Rust software in something else (if we even have source code in the future; anyone you know read machine code or punch cards?). C'est la vie.
I think it'll be less like telegram lines- which were replaced fully for a major upgrade in functionality, and more like rail lines- which were standardized and ubiquitous, still hold some benefit but mainly only exist in areas people don't venture nearly as much.
Shitloads of already existing libraries. For example I'm not going to start using it for Arduino-y things until all the peripherals I want have drivers written in Rust.
Why? You can interact with C libraries from Rust just fine.
But you now have more complexity and no extra safety.
If you create wrappers that provide additional type information, you do get extra safety and nicer interfaces to work with.
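For instance, a minimal sketch wrapping a real libc call (getenv), only to show the shape; std::env::var already does this properly, and the same pattern is what wrapper crates do for C peripheral and driver libraries:

    use std::ffi::{c_char, CStr, CString};

    // Edition 2024 syntax; on older editions write just `extern "C"`.
    unsafe extern "C" {
        // The raw C interface: a possibly-NULL `char *`, no ownership info.
        fn getenv(name: *const c_char) -> *const c_char;
    }

    /// The wrapper adds the type information the C signature can't express:
    /// absence becomes None, and the result is an owned String.
    fn env_var(name: &str) -> Option<String> {
        let c_name = CString::new(name).ok()?; // interior NUL -> None, not UB
        // SAFETY: `c_name` is a valid NUL-terminated string for the call;
        // getenv returns either NULL or a NUL-terminated string.
        let ptr = unsafe { getenv(c_name.as_ptr()) };
        if ptr.is_null() {
            return None;
        }
        let value = unsafe { CStr::from_ptr(ptr) };
        Some(value.to_string_lossy().into_owned())
    }

    fn main() {
        println!("PATH = {:?}", env_var("PATH"));
    }

The unsafe calls stay inside the wrapper; everything downstream gets an Option and owned data instead of a raw pointer with an unstated lifetime.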
You have extra safety in new code.
Ubiquity is still a big one. There's many, many places where C exists that Rust has not reached yet.
You mean safer languages like Fil-C.
Depends of what you do.
Rust has a nice compiler-provided static analyzer that uses annotations to do lifetime analysis and borrow checking. I believe borrow checking to be a local-optimum trap when it comes to static analysis, and I often find it more annoying to use than I would like, but I won't argue: it's miles better than nothing.
C has better static analysers available. They can use annotations too. Still, all of that is optional in C, whereas it's part of Rust's core language. You know that Rust code has been successfully analysed; most C code won't give you this. But if it's your code base, you can reach this point in C too.
C also has a fully proven and certified compiler. That might be a requirement if you work in some niche safety critical applications. Rust has no equivalent there.
The discussion is more interesting when you look at other languages. Ada/Spark is for me ahead of Rust both in term of features and user experience regarding the core language.
Rust currently still has what I consider significant flaws: a very slow compiler, no standard, support for a limited number of architectures (it's growing), it's less stable than I consider it should be given its age, and most Rust code tends to use more small libraries than I would personally like.
Rust is very trendy however and I don't mean that in a bad way. That gives it a lot of leverage. I doubt we will ever see Ada in the kernel but here we are with Rust.
you can look at rust sources of real system programs like framekernel or such things. uefi-rs etc.
there u can likely explore well the boundaries where rust does and does not work.
people have all kind of opinions. mine is this:
if you need unsafe peppered around, the only thing rust offers is being very unergonomic. it's hard to write and hard to debug for no reason. Writing memory-safe C code is easy. The problems rust solves aren't bad, just solved in a way that's way more complicated than writing the same (safe) C code.
a language is not unsafe. you can write perfectly shit code in rust. and you can write perfectly safe code in C.
people need to stop calling a language safe and then reimplementing other peoples hard work in a bad way creating whole new vulnerabilities.
I disagree. Rust shines when you need to perform "unsafe" operations. It forces programmers to be explicit and to isolate their use of unsafe memory operations. This makes it significantly more feasible to keep track of invariants.
It is completely beside the point that you can also write "shit code" in Rust. Just because you are fed up with the "reimplement the world in Rust" culture does not mean that the tool itself is bad.
>does C hold any significant advantage over Rust
Yes, it's lots of fun. rust is Boring.
If I want to implement something and have fun doing it, I ll always do it in C.
For my hobby code, I'm not going to start writing Rust anytime soon. My code is safe enough and I like C as it is. I don't write software for martian rovers, and for ordinary tasks, C is more ergonomic than Rust, especially for embedded tasks.
For my work code, it all comes down to SDKs and stuff. For example, I'm going to write firmware for a Nordic ARM chip. The Nordic SDK uses C, so I'm not going to jump through an infinite number of hoops and incomplete Rust ports; I'll just use the official SDK and C. If it were the other way around, I would be using Rust, but I don't think that will happen in the next 10 years.
Just like C++ never killed C, despite being a perfect replacement for it, I don't believe that Rust will kill C, or C++, because it's an even less convenient replacement. It'll dilute the market, for sure.
> Just like C++ never killed C, despite being perfect replacement for it
I think C++ didn't replace C because C++ is a bad language. It did not offer any improvements on the core advantages of C.
Rust however does. It's not perfect, but it has a substantially larger chance of "replacing" C, if that ever happens.
A lot of C's popularity is with how standard and simple it is. I doubt Rust will be the safe language of the future, simply because of its complexity. The true future of "safe" software is already here, JavaScript.
There will be small niches leftover:
* Embedded - This will always be C. No memory allocation means no Rust benefits. Rust is also too complex for smaller systems to write compilers.
* OS / Kernel - Nearly all of the relevant code is unsafe. There aren't many real benefits. It will happen anyways due to grant funding requirements. This will take decades, maybe a century. A better alternative would be a verified kernel with formal methods and a Linux compatibility layer, but that is pie in the sky.
* Game Engines - Rust screwed up its standard library by not putting custom allocation at the center of it. Until we get a Rust version of the EASTL, adoption will be slow at best.
* High Frequency Traders - They would care about the standard library except they are moving on from C++ to VHDL for their time-sensitive stuff. I would bet they move to a garbage-collected language for everything else, either Java or Go.
* Browsers - Despite being born in a browser, Rust is unlikely to make any inroads. Mozilla lost their ability to make effective change and already killed their Rust project once. Google has probably the largest C++ codebase in the world. Migrating to Rust would be so expensive that the board would squash it.
* High-Throughput Services - This is where I see the bulk of Rust adoption. I would be surprised if major rewrites aren't already underway.
> No memory allocation means no Rust benefits.
This isn't really true; otherwise, there would be no reason for no_std to exist. Data race safety is independent of whether you allocate or not, lifetimes can be handy even for fixed-size arenas, you still get bounds checks, you still get other niceties like sum types/an expressive type system, etc. (There's a small sketch of this at the end of this comment.)
> OS / Kernel - Nearly all of the relevant code is unsafe.
I think that characterization is rather exaggerated. IIRC the proportion of unsafe code in Redox OS is somewhere around 10%, and Steve Klabnik said that Oxide's Hubris has a similarly small proportion of unsafe code (~3% as of a year or two ago) [0]
> Browsers - Despite being born in a browser, Rust is unlikely to make any inroads.
Technically speaking, Rust already has. There has been Rust in Firefox for quite a while now [1], and Chromium has started allowing Rust for third-party components.
[0]: https://news.ycombinator.com/item?id=42312699
[1]: https://old.reddit.com/r/rust/comments/bhtuah/production_dep...
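On the no-allocation point above, a minimal sketch using only core-compatible features (so the same code works under #![no_std]): no heap anywhere, yet you still get bounds checks, Option instead of sentinel values, and a lifetime tying the handed-out slice to the arena:

    /// Fixed-capacity bump arena living entirely on the stack or in a static.
    struct Arena<const N: usize> {
        buf: [u8; N],
        used: usize,
    }

    impl<const N: usize> Arena<N> {
        fn new() -> Self {
            Self { buf: [0; N], used: 0 }
        }

        /// Hands out a chunk, or None if the arena is exhausted.
        /// The returned slice borrows from `self`, so it can never outlive
        /// the backing storage.
        fn alloc(&mut self, len: usize) -> Option<&mut [u8]> {
            let start = self.used;
            let end = start.checked_add(len)?; // overflow -> None, not UB
            if end > N {
                return None;                   // bounds check, not a wild write
            }
            self.used = end;
            Some(&mut self.buf[start..end])
        }
    }

    fn main() {
        let mut arena = Arena::<64>::new();
        if let Some(chunk) = arena.alloc(16) {
            chunk.fill(0xAB);
        }
        assert!(arena.alloc(64).is_none()); // would not fit: 16 + 64 > 64
    }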
The Temporal API in Chrome is implemented in Rust. We’re definitely seeing more Rust in browsers including beyond Firefox.
> Google has probably the largest C++ codebase in the world. Migrating to Rust would be so expensive that the board would squash it.
Google is transitioning large parts of Android to Rust and there is now first-party code in Chromium and V8 in Rust. I’m sure they’ll continue to write new C++ code for a good while, but they’ve made substantial investments to enable using Rust in these projects going forward.
Also, if you’re imagining the board of a multi-trillion dollar market cap company is making direct decisions about what languages get used, you may want to check what else in this list you are imagining.
Unless rules have changed Rust is only allowed a minor role in Chrome.
> Based on our research, we landed on two outcomes for Chromium.
> We will support interop in only a single direction, from C++ to Rust, for now. Chromium is written in C++, and the majority of stack frames are in C++ code, right from main() until exit(), which is why we chose this direction. By limiting interop to a single direction, we control the shape of the dependency tree. Rust can not depend on C++ so it cannot know about C++ types and functions, except through dependency injection. In this way, Rust can not land in arbitrary C++ code, only in functions passed through the API from C++.
> We will only support third-party libraries for now. Third-party libraries are written as standalone components, they don’t hold implicit knowledge about the implementation of Chromium. This means they have APIs that are simpler and focused on their single task. Or, put another way, they typically have a narrow interface, without complex pointer graphs and shared ownership. We will be reviewing libraries that we bring in for C++ use to ensure they fit this expectation.
-- https://security.googleblog.com/2023/01/supporting-use-of-ru...
Also, even though Rust would be a safer alternative to using C and C++ with the Android NDK, that isn't part of the roadmap, nor does the Android team provide any support to those that go down that route. They only see Rust for Android internals, not app developers.
If anything, they seem more likely to eventually support Kotlin Native for such cases than Rust.
> Unless rules have changed Rust is only allowed a minor role in Chrome.
I believe the Chromium policy has changed, though I may be misinterpreting: https://chromium.googlesource.com/chromium/src/+/refs/heads/...
The V8 developers seem more excited to use it going forward, and I’m interested to see how it turns out. A big open question about the Rust safety model for me is how useful it is for reducing the kind of bugs you see in a SOTA JIT runtime.
> They only see Rust for Android internals, not app developers
I’m sure, if only because ~nobody actually wants to write Android apps in Rust. Which I think is a rational choice for the vast majority of apps - Android NDK was already an ugly duckling, and a Rust version would be taken even less seriously by the platform.
That looks only to be the guidelines on how to integrate Rust projects, not that the policy has changed.
Officially the NDK isn't for writing apps anyway; people try to do that because some don't want to touch Java or Kotlin, but that has never been the official point of view of the Android team since the NDK was introduced in version 2. Rather:
> The Native Development Kit (NDK) is a set of tools that allows you to use C and C++ code with Android, and provides platform libraries you can use to manage native activities and access physical device components, such as sensors and touch input. The NDK may not be appropriate for most novice Android programmers who need to use only Java code and framework APIs to develop their apps. However, the NDK can be useful for cases in which you need to do one or more of the following:
> Squeeze extra performance out of a device to achieve low latency or run computationally intensive applications, such as games or physics simulations.
> Reuse your own or other developers' C or C++ libraries.
So it would be expected that at least for the first scenario, Rust would be a welcomed addition, however as mentioned they seem to be more keen into supporting Kotlin Native for it.
> No memory allocation means no Rust benefits
There are memory safety issues that literally only apply to memory on the stack, like returning dangling pointers to local variables. Not touching the heap doesn't magically avoid all of the potential issues in C.
> Embedded - This will always be C. No memory allocation means no Rust benefits. Rust is also too complex for smaller systems to write compilers.
Embedded Dev here and I can report that Rust has been more and more of a topic for me. I'm actually using it over C or C++ in a bare metal application. And I don't get where the no allocation -> no benefit thing comes from, Rust gives you much more to work with on bare metal than C or C++ do.
In robotics too.
Rust is already making substantial inroads in browsers, especially for things like codecs. Chrome also recently replaced FreeType with Skrifa (Rust), and the JS Temporal API in V8 is implemented in Rust.
> No memory allocation means no Rust benefits.
Memory safety applies to all memory. Not just heap allocated memory.
This is a strange claim because it's so obviously false. Was this comment supposed to be satire and I just missed it?
Anyway, Rust has benefits beyond memory safety.
> Rust is also too complex for smaller systems to write compilers.
Rust uses LLVM as the compiler backend.
There are already a lot of embedded targets for Rust and a growing number of libraries. Some vendors have started adopting it with first-class support. Again, it's weird to make this claim.
> Nearly all of the relevant code is unsafe. There aren't many real benefits.
Unsafe sections do not make the entire usage of Rust unsafe. That's a common misconception from people who don't know much about Rust, but it's not like the unsafe keyword instantly obliterates any Rust advantages, or even all of its safety guarantees.
It's also strange to see this claim under an article about the kernel developers choosing to move forward with Rust.
> High Frequency Traders - They would care about the standard library except they are moving on from C++ to VHDL for their time-sensitive stuff. I would bet they move to a garbage-collected language for everything else, either Java or Go.
C++ and VHDL aren't interchangeable. They serve different purposes for different layers of the system. They aren't moving everything to FPGAs.
Betting on a garbage collected language is strange. Tail latencies matter a lot.
This entire comment is so weird and misinformed that I had to re-read it to make sure it wasn't satire or something.
> Memory safety applies to all memory. Not just heap allocated memory.
> Anyway, Rust has benefits beyond memory safety.
I want to elaborate on this a little bit. Rust uses some basic patterns to ensure memory safety. They are (1) RAII, (2) move semantics, and (3) borrow checking.
This combination however, is useful for compile-time-verified management of any 'resource', not just heap memory. Think of 'resources' as something unique and useful, that you acquire when you need it and release/free it when you're done.
For regular applications, that can be heap memory allocations, file handles, sockets, resource locks, remote session objects, TCP connections, etc. For OS and embedded systems, it could be a device buffer, bus ownership, config objects, etc. (There's a small sketch of this at the end of this comment.)
> > Nearly all of the relevant code is unsafe. There aren't many real benefits.
Yes. This part is a bit weird. The fundamental idea of 'unsafe' is to limit the scope of unsafe operations to as few lines as possible (The same concept can be expressed in different ways. So don't get offended if it doesn't match what you've heard earlier exactly.) Parts that go inside these unsafe blocks are surprisingly small in practice. An entire kernel isn't all unsafe by any measure.
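To make the resource point above concrete, a toy sketch (the "bus lock" is made up, standing in for any acquire/release resource): RAII puts the release in Drop so it cannot be forgotten, and move semantics turn a double release into a compile error rather than a runtime bug:

    struct BusLock {
        id: u32,
    }

    impl BusLock {
        fn acquire(id: u32) -> BusLock {
            println!("bus {id}: locked");
            BusLock { id }
        }

        fn transfer(&mut self, byte: u8) {
            println!("bus {}: wrote {byte:#04x}", self.id);
        }
    }

    impl Drop for BusLock {
        // Release lives here, so it runs exactly once, on every exit path.
        fn drop(&mut self) {
            println!("bus {}: released", self.id);
        }
    }

    fn do_io(mut bus: BusLock) {
        bus.transfer(0x2a);
    } // released here, even on early return or panic

    fn main() {
        let bus = BusLock::acquire(0);
        do_io(bus);
        // bus.transfer(0); // rejected at compile time: `bus` was moved into do_io
    }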
> Embedded - This will always be C. No memory allocation means no Rust benefits. Rust is also too complex for smaller systems to write compilers.
Modern embedded isn't your grandpa's embedded anymore. Modern embedded chips have multiple KiB of ram, some even above 1MiB and have been like that for almost a decade (look at ESP32 for instance). I once worked on embedded projects based on ESP32 that used full C++, with allocators, exceptions, ... using SPI RAM and worked great. There's a fantastic port of ESP-IDF on Rust that Espressif themselves is maintaining nowadays, too.
JavaScript does not have a separation between & and &mut.
> The true future of "safe" software is already here, JavaScript.
only in interpreter mode.
Rust isn't that complex unless you're pulling in magical macro libraries or dealing with things like Pin and that stuff, which you really don't need to.
It's like saying Python is complex because you have metaclasses, but you'll never need to reach for them.
> same Mike in an org wide email: thank you all for your support. Starting next week I will no longer be a developer here. I thank my manager blah blah... I will be starting my dream role as Architect and I hope to achieve success.
> Mike's colleagues: Aww.. We'll miss you.
> Mike's manager: Is this your one week's notice? Did you take up an architect job elsewhere immediately after I promoted you to architect ?!
Joke, enterprise edition..
I was confused but then noticed the actual headline of the submitted page: "The end of the kernel Rust experiment"
That was also the title of the HN post, before it was changed.
Kind of tells you something about how timidly and/or begrudgingly it’s been accepted.
IMO the attitude is warranted. There is no good that comes from having higher-level code than necessary at the kernel level. The dispute is whether the kernel needs to be more modern, but it should be about what is the best tool for the job. Forget the bells-and-whistles and answer this: does the use of Rust generate a result that is more performant and more efficient than the best result using C?
This isn’t about what people want to use because it’s a nice language for writing applications. The kernel is about making things work with minimum overhead.
By analogy, the Linux kernel historically has been a small shop mentored by a fine woodworker. Microsoft historically has been a corporation with a get-it-done attitude. Now we’re saying Linux should be a “let’s see what the group thinks- no they don’t like your old ways, and you don’t have the energy anymore to manage this” shop, which is sad, but that is the story everywhere now. This isn’t some 20th century revolution where hippies eating apples and doing drugs are creating video games and graphical operating systems, it’s just abandoning old ways because they don’t like them and think the new ways are good enough and are easier to manage and invite more people in than the old ways. That’s Microsoft creep.
The kind of Rust you would use in the kernel is no more high-level than C is.
Yeah, I don't know what the hell they are talking about.
I legit thought that rust is being removed.
Good meme!
This title is moderately clickbait-y and comes with a subtle implication that Rust might be getting removed from the kernel. IMO it should be changed to "Rust in the kernel is no longer experimental"
I absolutely understand the sentiment, but LWN is a second-to-none publication that on this rare occasion couldn't resist the joke, and also largely plays to an audience who will immediately understand that it's tongue-in-cheek.
Speaking as a subscriber of about two decades who perhaps wouldn't have a career (or at least would have a far lesser one) without the enormous amount of high-quality education provided by LWN content: let's forgive.
He didn't intend it as a joke and his intent matches the op's title revision request: https://lwn.net/Articles/1049840/
> on this rare occasion couldn't resist the joke
It was unintentional, per the author:
> Ouch. That is what I get for pushing something out during a meeting, I guess. That was not my point; the experiment is done, and it was a success. I meant no more than that.
The “Ouch.” was in reference to being compared to Phoronix.
Has anyone found them to be inaccurate, or fluffy to the point it degraded the content?
I haven’t, but then again, I’m probably predominantly reading the best posts shared on aggregators.
I don't know about the "ouch" but the rest of the comment seems pretty clear that they didn't intend to imply the clickbait.
Nah I used to read Phoronix and the articles are a bit clickbaity sometimes but mostly it's fine. The real issue is the reader comments. They're absolute trash.
Fair. But there’s even an additional difference between snarky clickbait and “giving the exact opposite impression of the truth in a headline” ;)
Hacker News generally removes clickbait titles regardless of provenance.
If it were being removed, the title would be "An update on Rust in the kernel".
I think on HN, people generally want the submission's title to match the page's title.
(I do agree it's clickbait-y though)
The guidelines say it's fine to editorialize in these cases:
> Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize.
https://news.ycombinator.com/newsguidelines.html
I think on HN, people waste too much time arguing about the phrasing of the headline, whether it is clickbait, etc. and not enough discussing the actual substance of the article.
I prefer improved titles, but not in this case. It is rather ironic, because LWN does not need clickbait.
This one is apparently a genuine mistake from the author. But I think we should leave it as it is. The confusion and the argument about it is interesting in itself.
It’s a bit clickbait-y, but the article is short, to the point, and frankly satisfying. If there is such a thing as good clickbait, then this might be it. Impressive work!
Might as well just post it:
This should just be the pinned comment.
Perhaps, except it can have the reverse effect. I was surprised, disappointed, and then almost moved on without clicking the link or the discussion. I'm glad I clicked. But good titles don't mislead! (To be fair, this one didn't mislead, but it was confusing at best.)
I agree, and this matches the author's intent: https://lwn.net/Articles/1049840/
He didn't mean to! That said, the headline did make me look.
I'm having déjà vu. Was there another quite similar headline here a few weeks ago?
Seems you do not understand the concept of fun. You can learn it; that makes your life easier.
Safety is good.
Unless it means sacrificing freedom.
Providing the default of additional safety with the ability to opt out that safety is pro-freedom.
freedom to shoot yourself in the foot?
how much?
That is why most of the world has not been using C/C++ for decades.
I'm not on the Rust bandwagon, but statements like this make absolutely no sense.
A lot of software was written in C and C++ because they were the only option for decades. If you couldn't afford garbage collection and needed direct control of the hardware, there wasn't much of a choice. Had there been "safer" alternatives, it's possible those would have been used instead.
It's only in the last few years that we've seen languages emerge that could actually replace C/C++, with projects like Rust, Zig, and Odin. I'm not saying they will, or that they should, just that we actually have alternatives now.
At least in the PC world you could have been using Delphi, for example, or Turbo Pascal before it.
And I'll refrain from listing all the other alternatives.
One could have rewritten curl in Perl 30 years ago. Or in Java, Golang, Python, you name it. Yet it remains written in C even today.
If you’re trying to demonstrate something about Rust by pointing out that someone chose C over Perl, I have to wonder how much you know about the positive characteristics of C. Let alone Rust.
Your comment comes across as disingenuous to me. Writing it in, for example, Java would have limited it to situations where a JVM is available, which is a minuscule subset of the situations that curl is used in today, especially if we're not talking about "curl, the CLI tool" but libcurl. I have a feeling you know that already and mostly want to troll people. And Golang is only 16 years old, according to Wikipedia, by the way.
As everybody knows not a single programming language was released between C++ and Rust.
That's not true when the topic is operating system kernels.
OS kernels? Everything from numpy to CUDA to NCCL is using C/C++ (doing all the behind-the-scenes heavy lifting), never mind classic systems software like web browsers, web servers, and the networking control plane (the list goes on).
Newer web servers have already moved away from C/C++.
Web browsers have been written in restricted subsets of C/C++ with significant additional tooling for decades at this point, and are already beginning to move to Rust.
There is not a single major browser written in Rust. Even Ladybird, a new project, adopted C++.
Firefox and Chrome already contain significant amounts of Rust code, and the proportion is increasing.
https://github.com/chromium/chromium : C++ 74.0%, Java 8.8%, Objective-C++ 4.8%, TypeScript 4.2%, HTML 2.5%, Python 2.4%, Other 3.3%
https://github.com/mozilla-firefox/firefox : JavaScript 28.9%, C++ 27.9%, HTML 21.8%, C 10.4%, Python 2.9%, Kotlin 2.7%, Other 5.4%
How significant?
According to https://4e6.github.io/firefox-lang-stats/, 12%.
I would love an updated version of https://docs.google.com/spreadsheets/d/1flUGg6Ut4bjtyWdyH_9e... (which stops in 2020).
For Chrome, I don't know if anyone has compiled the stats, but navigating from https://chromium.googlesource.com/chromium/src/+/refs/heads/... I see at least a bunch of vendored crates, so there's some use, which makes sense since in 2023 they announced that they would support it.
> Web browsers have been written in restricted subsets of C/C++ with significant additional tooling for decades at this point
So, written in C/C++? It seems to me you're trying to make a point that reality doesn't agree with, but you stubbornly keep pushing it.
> So, written in C/C++?
Not in the sense that people who are advocating writing new code in C/C++ generally mean. If someone is advocating following the same development process as Chrome does, then that's a defensible position. But if someone is advocating developing in C/C++ without any feature restrictions or additional tooling and arguing "it's fine because Chrome uses C/C++", no, it isn't.
Most software development these days is JS/TypeScript slop; popular doesn't equal better.
You can write slop in any language. And good software for that matter.
I've never met anything written in JS/TypeScript that I would call "well written".
I have. I personally really enjoy the recent crop of UI frameworks built for the web, tools like SolidJS or Svelte.
Whatever your thoughts are about React, the JavaScript community has been the tip of the spear for experimenting with new ways to program user interfaces. They’ve been at it for decades, ever since jQuery. The rest of the programming world is playing catch-up. For example, SwiftUI.
VS Code is also a wonderful little IDE, somehow much faster and more stable than Xcode despite being built on Electron.
Bait used to be credible.
Most of the world uses other languages because they’re easier, not because they’re safer.
They're easier because, amongst other improvements, they are safer.
People didn't use seatbelts before seatbelts were invented.
And when they were mandated, it made a lot of people very angry!
https://www.cbc.ca/player/play/video/1.3649589
Oh dear, can you imagine the crushing complexity of a future Rust kernel?
By most accounts the Rust4Linux project has made the kernel less complex by forcing some technical debt to be addressed and bad APIs to be improved.
This made me quite curious: is there a list somewhere of which bad APIs have been removed or improved and/or what technical debt has been addressed? Or, if not a list, some notable examples?
The Linux kernel is already very complex, and I expect that replacing much or all of it with Rust code will be good for making it more tractable to understand. Because you can represent complex states with more sophisticated types than in C, if nothing else.
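For example (hypothetical driver state, not actual kernel code), an enum with data makes invalid combinations unrepresentable, where C would typically use an int flag plus a couple of pointers that may or may not be valid, documented only in a comment:

    // There is no way to be Idle while still holding a descriptor, and the
    // compiler checks that every state is handled in the match.
    enum DmaTransfer {
        Idle,
        InFlight { descriptor: u32, bytes_remaining: usize },
        Failed { error_code: i32 },
    }

    fn poll(t: &DmaTransfer) -> &'static str {
        match t {
            DmaTransfer::Idle => "nothing to do",
            DmaTransfer::InFlight { bytes_remaining, .. } if *bytes_remaining == 0 => "complete",
            DmaTransfer::InFlight { .. } => "still running",
            DmaTransfer::Failed { .. } => "needs reset",
        }
    }

    fn main() {
        let t = DmaTransfer::InFlight { descriptor: 7, bytes_remaining: 0 };
        assert_eq!(poll(&t), "complete");
    }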
The complexity of Rust just codifies complexity that already exists.
I've been working on Rust bindings for a C SDK recently, and the Rust wrapper code was far more complex than the C code it wrapped. I ended up conceding and got reasonable wrappers by limiting how the API can be used, instead of modeling the C API's full capabilities. There are certainly sound, reasonable models of memory ownership that are difficult or impossible to express with Rust's ownership model.
Sure, a different model that was easy to express would have been picked if we had been using Rust from the start, but we were not, and the existing C model is what we need to wrap. Also, a more Rust-friendly model would have incurred higher memory costs.
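As an illustration of the "limit how it can be used" approach (not the SDK from this comment; the sdk_session_* functions below are hypothetical stand-ins for a C API):

    use std::ffi::c_void;
    use std::ptr::NonNull;

    // Hypothetical stand-ins for the C SDK. A real binding would declare these
    // in an extern "C" block, e.g. generated by bindgen.
    unsafe fn sdk_session_open() -> *mut c_void { std::ptr::null_mut() }
    unsafe fn sdk_session_close(_handle: *mut c_void) {}
    unsafe fn sdk_session_send(_handle: *mut c_void, _byte: u8) -> i32 { 0 }

    /// Owning wrapper: one session per value, closed exactly once on Drop.
    /// Deliberately narrower than the underlying C API; that narrowing is the
    /// trade-off described above.
    pub struct Session {
        handle: NonNull<c_void>,
    }

    impl Session {
        pub fn open() -> Option<Session> {
            let raw = unsafe { sdk_session_open() };
            NonNull::new(raw).map(|handle| Session { handle })
        }

        pub fn send(&mut self, byte: u8) -> Result<(), i32> {
            let rc = unsafe { sdk_session_send(self.handle.as_ptr(), byte) };
            if rc == 0 { Ok(()) } else { Err(rc) }
        }
    }

    impl Drop for Session {
        fn drop(&mut self) {
            unsafe { sdk_session_close(self.handle.as_ptr()) };
        }
    }

Everything the C API allows but the wrapper doesn't expose (closing a handle twice, using it after close) simply can't be written by the wrapper's users.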
I totally hear you about C-friendly APIs not making sense in Rust. GTK struggles with this; I tried to make a native UI in Rust a few months ago using GTK and it was horrible.
> Also, a more Rust-friendly model would have incured higher memory costs.
Can you give some details? This hasn’t been my experience.
I find that, in general, Rust’s ownership model pushes me toward designs where all my structs form a strict tree, which is very memory efficient since everything is packed together. In comparison, most C++ APIs I’ve used make a nest of objects with pointers going everywhere, and that style is both less memory efficient and less performant because of all the pointer chasing.
C has access to a couple of tricks that safe Rust is missing. But on the flip side, C’s lack of generics means lots of programs roll their own hash tables, array lists, and various other tools. And those are often either dynamically typed (and horribly inefficient as a result) or built from macro tricks with type parameters, which is ugly as sin. A lack of generics and monomorphization means C programs usually have slightly smaller binaries, but they often don’t run as fast. That’s often a bad trade on modern computers.
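A small illustration of the generics point; max_of here is just a made-up example function:

    // One generic function; the compiler stamps out a type-specialized copy per
    // concrete T, so there's no void*-style indirection and no hand-rolled macro.
    fn max_of<T: Ord + Copy>(items: &[T]) -> Option<T> {
        items.iter().copied().max()
    }

    fn main() {
        assert_eq!(max_of(&[3, 1, 4, 1, 5]), Some(5));
        assert_eq!(max_of(&["pear", "apple", "plum"]), Some("plum"));
    }

The usual C alternatives are a qsort-style void*-plus-comparator interface (dynamic, slower) or a macro that expands a copy per type, which is the "ugly as sin" option above.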
Which SDK? I've only written Rust FFI to pretty basic C APIs. I'm curious to get a sense of the limitations on something more complex
Do you have any quantitative measures to prove there would be an increase in complexity?
There’s one already — they seem to be doing decently well.
https://www.redox-os.org/
No thanks, it's under MIT
Rust in the kernel is a self-fulfilling prophecy.
I guess it's time to finally try FreeBSD.
Why? Are you allergic to running Rust code? You know you can’t actually tell what language a binary was written in when you execute it, right?
I don’t like programming in Go, but nothing stops me running go programs on my computer. My computer doesn’t seem to care.
Lua in the kernel!
BSD development mostly works through the classic approach of pretending that if you write C code and then stare at it really really hard it'll be correct.
I'm not actually sure if they've gotten as far as having unit tests.
(OpenBSD adds the additional step of adding a zillion security mitigations that were revealed to them in a dream and don't actually do anything.)
They don't do anything, yet they make some C turd from 'The Computational Beauty of Nature' segfault. You have no clue about any BSD, and it shows. The BSDs have nothing to do with each other.
They had me in the first half of the article, not gonna lie. I thought they resigned because Rust was so complicated and unsuitable, but it's the opposite
Why do Rust developers create so much drama everywhere? "Rewrite everything to Rust", "Rust is better than X" in all threads related to other languages, "how do you know it's safe if you don't use Holy Glorious Rust".
I'm genuinely not trolling, and Rust is okay, but only Rust acolytes exhibit this behaviour.
Linux has been headed in the wrong direction for some time. This just confirms the trend.
Completely tangential, but it would be nice if it could 'natively' run Wasm.
You can do this: https://gist.github.com/jprendes/64d2625912d6b67383c02422727... (one of the solutions; there are a few better approaches)
I’m curious how they’ll manage long term safety without the guarantees Rust brought. That tradeoff won’t age well.
You may want to read the article.
(Spoiler alert: Rust graduated to a full, first-class part of the kernel; it isn't being removed.)
It's late, but I'm having a hell of a time parsing this. Could you explain what you meant?
I think they read the title but not the article and assumed Rust was being removed, and (rightfully, if that were true) opined that that was a shortsighted decision.
(Of course, that's not what's happening at all.)
Ah! I hadn't considered that, thanks. That makes way more sense - having read the article I couldn't figure out what was being said here otherwise.
Rust in the kernel feels like a red herring. For fault tolerance and security, wouldn’t it be a superior solution to migrate Linux to a microkernel architecture? That way, drivers and various other components could be isolated in sandboxes.
I am not a systems programmer but, from my understanding, Torvalds has expressed strong opinions about microkernels over a long period of time. The concept looks cleaner on paper, but the complexity simply outweighs the potential benefits. The debate, from what I have followed, touches on themes similar to the monolith-vs-microservices debate in the wider software development arena.
I'm not a kernel developer myself, but I’m aware of the Tanenbaum/Torvalds debates in the early 90’s. My understanding is the primary reason Linus gave Tanenbaum for the monolithic design was performance, but I would think in 2025 this isn’t so relevant anymore.
And thanks for attempting to answer my question without snark or down voting. Usually HN is much better for discussion than this.
Linus holds many opinions chiefly based on 90's experience, many not relevant any more. So it goes.
>"My understanding is the primary reason Linus gave Tanenbaum for the monolithic design was performance, but I would think in 2025 this isn’t so relevant anymore."
I think that unlike user level software performance of a kernel is of utmost importance.
Microkernel architecture doesn't magically eliminate bugs; it just replaces a subset of kernel panics with app crashes. Bugs will still be there, they will keep impacting users, and they will need to be fixed.
Rust would still help to eliminate those bugs.
I agree it doesn’t magically eliminate bugs, and I don’t think rearchitecting the existing Linux kernel would be a fruitful direction regardless. That said, OS services split out into apps with more limited access can still provide a meaningful security barrier in the event of a crash. As it stands, a full kernel-space RCE is game over for a Linux system.
Then you can try the existing microkernels (e.g. Minix) or a Rust microkernel (Redox). You already have what you wish for.
Go ahead and do it!
Just use MINIX, or GNU Hurd.
Even better, be like the Amish, and build your own as much as you can.
You should develop a small experimental kernel with that architecture and publish it on a mailing list.
This is great because it means someday (possibly soon) Linux development will slowly grind to a halt and become unmaintainable, so we can start from scratch and write a new kernel.
Or you can take this as a sign that the Linux kernel is adopting modern programming languages so that more programmers can contribute :)
> the Linux kernel is adopting modern programming languages so that more programmers can contribute :)
I'm eagerly awaiting the day the Linux kernel is rewritten in Typescript so that more programmers can contribute :)
You can start from scratch and write a new kernel now! Nothing's stopping you.
Ideally in Rust.