Interesting project. I'm curious about the limits of formal verification of worst-case execution time. There are other formally verified kernels like seL4 and Atmosphere, as well as layers you can stack on top to get a mostly compatible POSIX-ish layer, like Genode. You can also go out and find completely compatible kernels with enough maturity that (full) formal verification isn't a major value-add, like QNX or VxWorks.
I'm not aware of much that combines WCET + formal verification + POSIX compatibility, though. The verification page here is mostly at stone level, which from my understanding of SPARK terminology just means the code is valid SPARK, but might still have runtime errors, which is where most of Ada's WCET nondeterminism comes from. I'm skeptical that this is actually production-usable for the hard real-time use cases all over their documentation at the current stage, but nothing on the website gives any clue as to the actual maturity short of reading the code myself.
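For reference, SPARK's assurance ladder (as AdaCore describes it) runs stone (valid SPARK subset), bronze (flow analysis), silver (proof of absence of runtime errors), gold (key integrity properties), platinum (full functional correctness). A rough sketch of the silver-vs-platinum gap, written here in C with Frama-C/ACSL-style contracts rather than SPARK itself, with made-up function names:

    #include <limits.h>

    /* Silver-level analogue: the contract only rules out runtime errors
     * (here, signed overflow); it says nothing about what is computed. */
    /*@ requires 0 <= a && 0 <= b && a <= INT_MAX - b;
        assigns \nothing; */
    int add_no_rte(int a, int b) { return a + b; }

    /* Platinum-level analogue: the postcondition pins down the result, so
     * an implementation returning a - b would fail the proof. WCET analysis
     * also gets easier once runtime checks are proven never to fire. */
    /*@ requires 0 <= a && 0 <= b && a <= INT_MAX - b;
        assigns \nothing;
        ensures \result == a + b; */
    int add_correct(int a, int b) { return a + b; }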
Any government can get RCE on any OS with the change in their couch. Formal verification of process isolation is REALLY important when lives depend on it. That's a huge value add!
My main concern is speed and the lack of capability-based security. seL4 is faster than Linux by a mile, and I'm guessing that this is much slower. You can put a POSIX layer on seL4, but POSIX is inherently flawed too. MAC separates privileges from code and is too clunky to use in practice (see SELinux).
> Any government can get RCE on any OS with the change in their couch.
Do you really believe that? That seems extremely implausible based on simple observations, like all governments using COTS OSes for military/intelligence work, or standard OSes being used for critical infrastructure like power/water/finance/transportation.
If your statement was even remotely true then why is this not used in conflicts to devastating effect?
The publicly available exploit prices put a browser zero-day at $200k-$500k. That's the same cost as firing a few Javelin missiles. OS RCE runs into $1-$2 million. Much less than a cheap Russian tank. [1]
The cost of internally developed exploits is probably much lower. They aren't one-shot assets either; they can be used until someone plugs the hole.
There are private companies selling devices to law enforcement that can extract information from locked phones [2]. Availability of that sort of access to anyone's phone by local law enforcement is absurdly cheap.
[1]: https://opzero.ru/en/prices/
[2]: https://arstechnica.com/gadgets/2025/10/leaker-reveals-which...
> [1]: https://opzero.ru/en/prices/
Those are the prices they are buying at; they do not indicate at all that such exploits are common, or how large the market for RCE on any OS is.
> [2]: https://arstechnica.com/gadgets/2025/10/leaker-reveals-which...
Those are (mostly) not RCE, and are for consumer devices configured in a default way.
---
The parent stated that "Any government can get RCE on any OS with the change in their couch."
That implies that Kiribati could currently, easily buy RCE on, for example, the hardened Linux or OpenBSD running the most sensitive infra in the world. I just don't buy that, since if it were true, any current conflict would look much different.
Of course there are security holes, and major fuckups do happen, but not at the scale the parent implied.
These prices are consistent with (actually higher than) the public bounties offered by (now defunct) Western exploit brokers and by manufacturers.
> Those are (mostly) not RCE, and are for consumer devices configured in a default way.
I'm more worried about activists and journalists in developing countries without the financial means to afford flagship phones. But even Google can't manage to keep out a pedestrian, mid-sized security outfit selling to the cops and the FBI.
When activists lobbying for a fucking sugar tax in Mexico get hacked, then the bar is too fucking low.
Let's not talk about the nightmare that is old networking equipment or IoT devices.
Come on, you said:
> Any government can get RCE on any OS with the change in their couch
If you were extremely hyperbolic for effect that's fine, that's why I asked if you actually believed that, but what you are saying now is not at all arguing the same point.
I was not being hyperbolic: a couple million dollars is very cheap for virtually any military. Both exploit broker bounties and corporate bug bounties are in that range.
What is your objection? “Extremely hyperbolic”, or relative? $50k-$150k+ is a low-to-medium-cost case for US law enforcement or military to carry out. Much like the $3 in change you could dig out of your couch or car to get a small drink or sandwich.
Nobody in this thread has provided anything that would lead me to believe that any government can easily buy RCE on any OS. Read the quote again:
> Any government can get RCE on any OS with the change in their couch
That is inanely pedantic. The municipal government of Monowi, Nebraska probably cannot buy an RCE in any OS, as they only govern a single person. That is also utterly meaningless to argue, as it bears no effect on the core thrust of the argument: that COTS operating systems in use by military and critical infrastructure are easily and cheaply hackable by potential adversaries. They are demonstrably grossly inadequate for purpose.
All my questions were under the assumption of a country-level government. I asked why, if this is so cheap, common, and easy, we do not see it used more.
Even if we restrict it to, for example, the G20, I still don't think they can easily and cheaply "RCE any OS".
We do see it! Do you not remember the Snowden leaks?
Shit hasn't changed much. We still have monolithic kernels written in portable assembly. Linus still doesn't tag bug fixes with potential security impact as such, because he is more worried about unpatched consumer garbage (which includes all low-end phones). When your mitigation for such problems is to not make them obvious, then your OS is not safe enough for safety-critical settings (which include consumer devices).
Process isolation would downgrade the vast majority of critical Linux CVEs to availability bugs (crash a server but not compromise it).
Just because governments don't need to reach for RCE every time doesn't mean that it is safe. The fact that such bugs are so cheap is an indication that your safety margin is too thin.
“I don’t get told, it isn’t happening/possible.”
Hopefully this helps as succinctly as possible. Genuinely.
This shouldn't be downvoted, because it's stating facts. RCEs for critical infrastructure/OSes are very rare; they don't just grow on trees. I agree that OP exaggerated by saying that any government can buy whatever RCE they want and get access to any system they want, like buying candy in a candy shop. That's not reality.
Thankfully, there are regulatory regimes that require physically segregated systems for most cars, airplanes, power stations, etc.
However, safety-critical is not limited to cars: it also includes the phones of activists and journalists living under authoritarian regimes.
Monolithic kernels written in portable assembly mean that such bugs DO grow on trees [1], and the lack of backporting means they just drop to the ground: the poor are sold phones that may never receive a security update. So even sugar-tax activists in Mexico are the target of spyware!
We have seen the sophistication of these attacks ramp up as cryptocurrency has made them profitable and the North Koreans have made a killing exploiting these bugs.
Maybe you are right and it is very difficult to find these bugs, but that just means low demand is what is keeping the price down. But that's probably because there are enough LPEs and known RCEs that they are not needed most of the time.
[1]: https://www.cvedetails.com/vulnerability-list/vendor_id-33/L...
Just because the market would buy something for $X doesn't mean you can buy it yourself just because you have more than $X.
Militaries have billion-dollar budgets.
You are claiming that every major OS is unhackable by governments. Can you point to literally any specific system that is demonstrably unhackable? Can you find literally anybody who would publicly claim their systems are unhackable by governments? Can you find literally anybody who would publicly claim that no competent team of 5, working 3 years full-time (~1 tank worth of dollars, not even a basic company, just 1 tank), could breach their systems? And that is just demonstrating it for a single vendor, let alone your claim that it is true for everybody.
Your proof is that it would be really bad if everything were horribly insecure therefore it must not be true. Proof by wishful thinking has never been a valid argument.
In contrast, a few years ago I worked with a vulnerability broker who had literally hundreds of unsold zero-days with tens in each major commercial OS with zero-click RCEs only being a few million each. That is just one vendor in a sea of vulnerability brokers. That is the state of reality. We just live in the metaphorical equivalent of the pre-9/11 world where you can easily kill a lot of people by flying a plane into a building, but nobody has figured it out yet.
> You are claiming that every major OS is unhackable by governments.
I did no such thing. I claimed that it's implausible that every government can buy RCE for every OS.
Yes you did. You said: "all governments using COTS OS for military/intelligence work" and then argued: "If your statement was even remotely true then why is this not used in conflicts to devastating effect?". You are clearly arguing that the operating systems they use, which you admit are standard COTS operating systems, must be unhackable by other governments, otherwise we would be seeing devastating effects (or must at least require more than pocket change for a potential US adversary to attack: more than a single tank (~10 M$), more than a single fighter jet (~100 M$), probably more than an aircraft carrier (~1 G$) before it stops being pocket change).
No, he didn't. Learn to discuss properly. OP stated that any government could get RCE for any OS. And that is highly unlikely, since a budget above market rates does not imply that you can easily get RCEs. The market rates are high because there is scarcity of such vulnerabilities.
Governments using COTS operating systems does not imply that these systems are unhackable. If OP's statement were true, we would see constant exploitation of RCE zero-days, or at least the impact of that. But that is not the case.
We do see constant exploitation of government and critical infrastructure systems. The US telecom network is literally actively compromised right now and has been for multiple years [1]. Like wishful thinking, ignorance is also not a valid argument.
It is frankly baffling that I even need to argue that COTS operating systems are easily hacked by governments and commercial hackers. It literally happens every day and not a single one of those companies or organizations even attempts to claim that they can protect against such threats. Government actors are literally what these companies peddling substandard security use to argue "nothing we could do". It has been literal decades of people trying to make systems secure against government actors and failing time and time again with no evidence of success.
I mean, seriously, go to Defcon and say that nobody there with a team of 5 people and 3 years (~10 M$, a single tank) could breach your commercially useful and functional Linux or Windows deployment, and that you are putting up a 10 M$ bounty to prove it. I guarantee they will laugh at you and then you will get your shit kicked in.
[1] https://en.wikipedia.org/wiki/Salt_Typhoon
Everybody thinks of Defcon et al. as a gathering of elite hackers. But it's more of a fucking drinking game.
The depressing fact is that you don't need an RCE to accomplish most goals.
I am aware. I was making a concrete example, pointing at a well-known conference where average industry professionals would find the very concept of these systems being secure laughable.
Somehow we have ended up in this bizarro land where everybody in software knows software, especially COTS operating systems, is horribly insecure due to the endless embarrassing failures yet somehow they also doublethink these systems must be secure.
I was agreeing with you! It's a drinking game because the infosec field is laughable. Who needs a zero day RCE when the president is using an EOL Samsung?
> why is this not used in conflicts to devastating effect?
The systems with devastating impact are air-gapped. They're designed, audited, validated and then never touched again. Ports are disabled by cutting the traces on the motherboard and adding tamper protection to the case, which is in a secure facility protected by vetted people with guns, who are in a security facility protected by different vetted people with guns.
No system is perfect, but the time and effort is better spent on the generic case that the military understands well.
> The systems with devastating impact are air-gapped.
You wish. More often than not, the people building these think they are very clever by using their bulletproof firewalls rather than a physical disconnect. Or SLIP over a serial port, because for some reason serial ports are fine.
I've seen this kind of crap in practice in systems that should be airgapped, that they said were airgapped but that in fact were not airgapped.
If I had a dollar for each time I was told that they would get me a firewall exception to get to the air-gapped system...
It does make it much easier to do stuff, but it kinda defeats the purpose.
And a firewall is not an airgap.
And a WiFi connection, even though it goes 'through the air', is not an airgap.
The same for BT and any other kind of connectivity.
An airgap is only an airgap if you need physical access to a device to be able to import or export bits using a physical connection, and the location of the device is secured by physical barriers. Preferably a building that is secure against non-military wannabe intruders.
> firewall exception to get to the air gapped system
Any system accessible with a firewall exception is not "air-gapped" by definition.
A level below that is diode networks, which are not air-gapped but provide much stronger system isolation than anything that is accessible with a "firewall exception".
Far below either of these is vanilla network isolation, which is what you seem to be talking about.
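To make the diode constraint concrete, here is what the software half of such a link tends to look like: fire-and-forget framing with redundancy instead of ACKs, since there is no return path. A minimal sketch only; the address, port, and framing are made up, and the one-way guarantee itself must come from hardware:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int diode_send(const uint8_t *buf, size_t len) {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        if (s < 0) return -1;

        struct sockaddr_in dst = {0};
        dst.sin_family = AF_INET;
        dst.sin_port = htons(7000);                  /* hypothetical port */
        inet_pton(AF_INET, "10.0.0.2", &dst.sin_addr);

        uint8_t frame[4 + 1024];
        uint32_t seq = 0;
        for (size_t off = 0; off < len; off += 1024, seq++) {
            size_t n = (len - off < 1024) ? len - off : 1024;
            memcpy(frame, &seq, 4);                  /* sequence header */
            memcpy(frame + 4, buf + off, n);
            /* No return path means no retransmission requests, so blind
             * redundancy substitutes for ACKs. */
            for (int r = 0; r < 2; r++)
                sendto(s, frame, 4 + n, 0,
                       (struct sockaddr *)&dst, sizeof dst);
        }
        close(s);
        return 0;
    }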
> Any system accessible with a firewall exception is not "air-gapped" by definition.
I completely agree. Maybe I should have put "air-gapped" in quotes.
Diode networks can be - and have been - used to exfiltrate data, though.
Definitely! I've worked on the design of these types of systems; there is more subtlety to the security models than people assume. Some of the designs in the wild have what I would consider notable weaknesses.
The most interesting subset of these systems are high-assurance bi-directional data paths between independent peers that are quasi-realtime. Both parties are simultaneously worried about infiltration and exfiltration. While obviously a misnomer, many people still call them diodes...
The entire domain is fascinating and less developed than you would think.
And even if you do get it right, there is always that one guy that takes a USB stick and plugs it into your carefully air-gapped systems. And cell modems are everywhere now, and so small even an expert could still overlook one, especially if it is dormant most of the time.
It's in a proto state due to anemic academic funding. We need to throw cash at the problem.
Yes, it is underfunded for sure. I have been underwhelmed by what academia has managed to produce, funding aside. It is a solvable problem, but you have to give the money to people who can solve it in an operational context, which rarely seems to happen.
It is a genuinely fun project for someone with sufficiently sophisticated skill but I suspect there is relatively little money in it, which colors the opportunity and outcomes.
The absence of clear commercial opportunity gives the domain a weird dynamic.
While I can't talk to all the systems out there, I am talking about systems I have worked on.
Yes, I believe that.
> If your statement was even remotely true then why is this not used in conflicts to devastating effect?
It has been, it continues to be.
Where have you been?
It really hasn't, to the scale that you imply. Why haven't Ukraine and Russia both used this to completely shut down each other's infrastructure? Why isn't Russia just hacking all the Ukrainian COTS drones? Why hasn't anyone hacked a nuclear power plant?
There is power in restricting access and air gapping helps a lot. A drone (for example) can fall back to basic cryptography to limit access.
Air gapping is a baseline requirement in most safety-critical systems. Nuclear power plants in particular have lots of redundant layers of safety. AFAIK Russia hasn't physically tried to cause a meltdown, presumably due to the political blowback (although they have attacked Chernobyl's sarcophagus). I assume this limits their digital espionage attacks too.
We do get glimpses of the use of such malware, like when Saudi Arabia hacked Jeff Bezos' phone. But we don't hear about most of it because there is a benefit to keeping a hack secret, so as to keep access.
Finally, it's usually cheaper to social engineer someone into loading a PowerPoint presentation and doing a local privilege escalation. They burn those for things as petty as getting embarrassing political information.
I doubt that most critical systems are air-gapped. Even if they are, most of Russia's economy is not, yet it still runs IT based on COTS systems. Why wouldn't Ukraine DoS or compromise the whole non-air-gapped IT infrastructure of Russia to hit the economy, if they had easy access to RCE simply by being a government?
I mean, they do all the time. The value is generally in keeping access, however, and operational security and access control is helpful. You can knock a system out but then you just get kicked out and have to start over.
Do you have any resources that go deeper into this? It's a fascinating frontier for war!
The USA was providing Ukrainian operatives with Russian officers' locations based on the soldiers' cellphone use:
https://oe.tradoc.army.mil/product/smart-phones-playing-prom...
> Do you really believe that? That seems extremely implausible based on simple observations, like all governments using COTS OSes for military/intelligence work, or standard OSes being used for critical infrastructure like power/water/finance/transportation.
I do, but have a slightly different take: even though COTS software is pretty much universally full of bugs that are exploitable and could be found, it is still possible to compose layers of security that complement each other in such a way that a compromise in any one layer wouldn't mean game over. Done very carefully, I think you can make a stack vastly more secure than the sum of its parts. Moreover, it's very possible to make exploiting the software both more annoying and easier to detect, which would dissuade attempts to use exploits.
> If your statement was even remotely true then why is this not used in conflicts to devastating effect?
I think the costs, risks, and incentives need to line up properly to actually see things play out. Even though exploits for COTS software are relatively cheap by government-money standards, they do still take time and money. Not to mention the actual software exploit may not even be the most expensive or complicated part of an operation, especially if you desperately need to evade detection for a long time, and especially if your adversary has sufficient auditing to know something is wrong early.
Stuxnet is old, but surely one of the most fascinating uses of malware in geopolitics. But wow, the amount of work involved and knowledge needed to make something like that happen makes the exploit part feel rather small.
Formally verified software seems to have a lot of promise, then, to make deep exploits even more convoluted, expensive and rare. Surely there will still be bugs, but it leaves a lot less room for error, and very well could shift the calculus on security threats a bit.
Knowledge that humans plug shit into computers without knowing what it is?
Stuxnet targeted the specific PLCs used at Iranian nuclear facilities, and had to be able to function in an air-gapped environment. I reckon the logistics were far and away more complicated than finding Windows exploits, especially at that time.
It's probably more impressive than that. It probably targeted a range of potential PLCs.
But what does all this have to do with the ongoing conversations about pwning Windows-based networks inside major consumer utility assets?
I responded to this thread of thought:
> Any government can get RCE on any OS with the change in their couch.
Mainly to agree with it. I believe it is still likely true.
Any resemblance to other discussion further up or down is unintentional.
In djb's course at UIUC, I recall he said that students were required to find a vulnerability as part of the course requirements.
Finding a vulnerability is not at all the same as "RCE on any OS". Vulnerabilities are common; ones with the implied impact are not.
Let me help a bit by trying to explain the situation. If you produce something that is a million lines of code, you will most likely have at least a few hundred to a few thousand bugs in there. Some of those cause crashes, some of them cause hangs, and a small percentage will let you increase your privileges. Combine enough of those and sooner or later you end up with RCE. The problem is that you as a defender don't necessarily have the budget to audit the code and close it all down to the degree that an attacker has.
You need to do an absolutely perfect job of always spotting those RCE-capable issues before an attacker does. And given the numbers involved, this becomes a game of statistics: if there are 200 ways to get RCE on OS 'X', then you need to find and fix all of them before attackers do. Meanwhile, your system isn't a million lines but a multiple of that, there are your applications to consider (usually of lesser quality than the OS), the risk of a purposeful insertion of a backdoor, and so on.
So I don't think it is unreasonable to presume that any OS that is out there most likely has at least a couple of these that are kept 'on ice'.
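The race can be put in rough numbers. Using the comment's figures (a million lines, bugs in the hundreds-to-thousands, ~200 escalation paths) plus an assumed per-bug defender win rate, the defender's odds of winning every race collapse. The rates below are assumptions; only the shape of the result matters:

    /* Back-of-envelope version of the argument above. Compile with -lm. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double lines = 1e6;          /* the comment's million-line system     */
        double bugs_per_kloc = 5.0;  /* assumed latent-defect density         */
        double escalatable = 0.04;   /* assumed: 4% usable for privilege gain */
        double paths = (lines / 1000.0) * bugs_per_kloc * escalatable;

        /* The defender must win EVERY race; the attacker needs just one. */
        double p_defender_first = 0.95;  /* assumed per-bug defender win rate */
        printf("chainable paths: %.0f\n", paths);              /* ~200        */
        printf("P(defender finds all first): %.2g\n",          /* ~3.5e-05    */
               pow(p_defender_first, paths));
        return 0;
    }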
I work in security. I know all of the above. But the parent said that "any government can buy RCE on any OS"; that is not at all the same as saying it is plausible that a few of the more advanced countries probably have a few critical exploits "on ice". They also stated it as a fact, not as a possibility.
It is used here and there, but unlike bullets, there is no armor to defend against these attacks while they remain unknown; on the other hand, they are single-use.
Since the 2010s, at least 140 countries have spent over $10 million a year on purely offensive cyber, and most of those countries spend astronomically more than that. That includes the purchase of attack tools and exploits.
Note: IPC performance isn't the only factor in overall OS performance. Especially for a "traditional microkernel", where programs are split into separate processes liberally, performance degrades due to the sheer number of cross-boundary interactions. A whole system is performant if the design of the whole system, not just the design of the kernel, is aligned with performance. This is not to put down seL4; on the contrary, it continues the trend of L4 microkernels demonstrating the viability of stricter designs. But keep in mind that more time and effort is necessary to implement larger systems well.
I'm bullish on capabilities too, but I don't know much about MAC. Can you explain your last sentence?
seL4 has the lowest IPC overhead of any kernel; it's an order of magnitude faster than Linux [1]. But you are correct: switching costs amount to noise when the system is architected correctly. LionsOS [2] (which is based on seL4) has some benchmarks showing improved performance over Linux [3].
I am betting you know what mandatory access control is ; ). MAC policies basically amount to a firewall placed on applications, restricting what they can do. The rules are generally written by downstream distros and are divorced from the implementation. The problem is that it's hidden control flow, so the program just dies and can't fall back gracefully. Capability-oriented APIs make broker processes and the narrowing of permissions tractable.
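A concrete way to see the contrast (paths are hypothetical; the capability half is in the style of Capsicum-flavored openat() brokering on POSIX):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* MAC style: ambient authority checked against a policy the program
         * can neither see nor negotiate with. A denial is just a dead end. */
        int fd = open("/var/lib/app/data.db", O_RDONLY);
        if (fd < 0 && errno == EACCES)
            fprintf(stderr, "denied by invisible policy; no graceful fallback\n");

        /* Capability style: a broker hands over a directory descriptor, and
         * the worker derives all further access from it, narrowed and explicit. */
        int dir = open("/var/lib/app", O_RDONLY | O_DIRECTORY); /* broker side */
        if (dir >= 0) {
            int db = openat(dir, "data.db", O_RDONLY);          /* worker side */
            if (db >= 0) close(db);
            close(dir);
        }
        if (fd >= 0) close(fd);
        return 0;
    }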
Though what exactly people mean by it is often vague.
E.g. both SELinux and AppArmor are technically MAC, but people tend to mention only SELinux when talking about how cumbersome it is, and treat AppArmor as something different because it's not so cumbersome.
That really depends on what formal verification means in context. I don't see any interesting specifications in the repo, just basic stuff like this MD5 spec [0] that doesn't verify whether the MD5 implementation is correct. This is one of the areas where formal verification is listed as completed/gold level.
It's common for the interesting OS proofs to take more code than the kernel itself. Take a look at the seL4 proofs [1], or those for CertiKOS as examples.
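A toy version of the distinction: a functional spec is itself an (obviously correct) executable definition, the optimized code is what ships, and verification means proving them equal for all inputs. The assert below only samples that equality; names are made up:

    #include <assert.h>
    #include <stdint.h>

    /* Behavioral spec: the obviously-correct reference definition. */
    static uint32_t popcount_spec(uint32_t x) {
        uint32_t n = 0;
        for (int i = 0; i < 32; i++) n += (x >> i) & 1u;
        return n;
    }

    /* Implementation: the optimized code you actually want to ship. */
    static uint32_t popcount_impl(uint32_t x) {
        x = x - ((x >> 1) & 0x55555555u);
        x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u);
        x = (x + (x >> 4)) & 0x0F0F0F0Fu;
        return (x * 0x01010101u) >> 24;
    }

    int main(void) {
        for (uint32_t x = 0; x < (1u << 20); x++)  /* testing samples inputs */
            assert(popcount_impl(x) == popcount_spec(x));
        return 0;                                  /* a proof covers all 2^32 */
    }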
If you're actually interested in alternative OSes in memory-safe languages, you might like TockOS, which is more production-ready at this point. Formal verification of its isolation model is still ongoing, because it's difficult work.
There is a company called Ironclad in the NDA/contract-management space as well. Beware the trademark/copyright terrorists.
That said, I am a huge fan of works like this. But in practice, the security layer that betrays all of this tends to be the firmware layer.
My dream is to have something like the Framework computer use verifiably secure EFI firmware, as well as similarly verified and audited firmware for every hardware component.
You might want to check out MNT Research if you haven’t yet. They make repairable laptops, too, but they also release their work as free software and open hardware.
That isn't how trademarks work. There can be multiple businesses with the same name, as long as they operate in different fields. Case in point: Apple Computer had to pay for the rights to the Beatles' label Apple Corps only when they entered the music industry (not that they didn't try to contest it!).
I think wary would have been a better word, but I really did mean "weary", as in I would find the ordeal tiresome or bothersome? I wouldn't disagree if you said that's bad grammar still.
Such a case would never end up in court. You can't sue someone for doing something that's perfectly legal... well, you can try, but it's going to be really hard to find a lawyer willing to waste their time (a lawyer you're going to have to pay), and the case would ultimately get thrown out long before trial.
Interesting. Ada is in the greater Wirthian family (it's Pascal-like), and until now, the only Unix-like kernel I was aware of in a Wirthian language was TUNIS:
> the only Unix-like kernel I was aware of in a Wirthian language was TUNIS
SPIN developed at the University of Washington in the nineties was written in Modula-3; it was a microkernel-based system and supported the Digital UNIX system call interface, allowing Unix applications to run. There was also Sol implemented at INRIA in a Pascal dialect in the eighties which offered a Unix-compatible environment; it was followed by Chorus (initially written in Pascal), also a microkernel-based system, compatible with Unix APIs.
I wouldn't kneecap an OS project I wished to see adopted by licensing it GPL. Look at glibc, which basically can't practically support static linking.
Make any of your OS standard libraries GPL and they become painful to use: nobody can statically link your code without being forced to license theirs GPL as well.
WRT kneecapping, history has shown that companies will bleed the commons dry and they need to be legally strong-armed into contributing back to the free software projects they make their fortunes off of.
Virality might suit the ego, but it doesn't make for a healthy project when its primary users are parasitic.
> history has shown that companies will bleed the commons dry and they need to be legally strong-armed into contributing back to the free software projects they make their fortunes off of.
Software is not a scarce good. Let companies use free software without contributing back, as much as they wish; it doesn't affect others in the least. There is no bleeding of the commons here, because even if companies take as much as they can without giving back, it doesn't reduce the resources available to others.
Software is rarely finished, and development has real costs.
When that development gets siloed away in proprietary systems, that is potential development lost upstream. If that happens enough, upstream becomes starved and anemic, with forks living on only in silos.
Apple, for example, has made trillions of dollars off of FreeBSD. To this day, FreeBSD still does not have a modern WiFi or Bluetooth stack.
Meanwhile, AMD, Intel, Microsoft, and even Apple, etc have full-time engineering roles and teams dedicated to upstreaming their improvements to Linux. And there are paid engineers at these companies that ensure WiFi and Bluetooth work on Linux.
Companies do worse than bleed the commons: they lock down weakly-licensed software and lock in users and devices. That absolutely reduces users' ability to benefit from FOSS and reduces funding for developers.
Rust's technical choices seem to make releasing GPL software with it cumbersome and unattractive. Also the implied goal of a lot of Rust projects is to replace GPL'ed programs with permissive ones.
Which technical choices are you thinking of here? My best guess is the crates ecosystem and the oft-discussed 'dependency hell' that pervasive package-manager usage seems to engender. Is there something else I'm missing that contributes to the (maybe purposeful) reluctance to publish GPL code?
The most important effort is seL4[0], the fastest OS kernel out there which also happens to be the most formally verified.
LionsOS[1] is its static scenario building framework, with some dynamic scenario support.
Genode[2] is an independent OS construction kit that can also use the seL4 kernel. Their general purpose OS, Sculpt, just had a very interesting multi-kernel release[3].
The systems group at ETH Zürich is building Kirsch[4], an effort combining seL4 and CHERI.
Managarm[5] is also building something with an interesting architecture, with some Linux software compatibility.
Do not miss the latest seL4 summit's state of seL4 talk by Gernot Heiser[0], which besides providing an update on the work done this year, goes into performance[1].
This is real-world throughput and latency where seL4 is crushing Linux, not some synthetic IPC benchmark.
I think I've got the gist now. Although I suspect Gernot Heiser wouldn't consider the following ideal, I think it's fair to say that true claims have undergone some sensationalization. I don't think people generally lie when they say their product has achieved some impressive performance, but those results exist in the context they were taken under. In the embedded roles LionsOS is being targeted at, I have no doubt they represent a real improvement over existing Linux systems, and probably over any Linux system short of a magical one. However, in a general-purpose OS (which is what I focus on), which is to say one where many distinct user bases are simultaneously involved, the kernel is far from the only load-bearing component. Also note that the functionality compared is not 1:1, nor is Linux the final contender among monolithic systems.
Something I want to explore, and which has some viability in the LionsOS model too, is that a general-purpose system may still liberally cut out unused functionality if highly modular and easily configurable. Like Legos.
In conclusion, props to the people at Trustworthy Systems as always, but it's safe to say that the OS field is still far from settled. My best compliment to seL4 is that it has raised the bar and simultaneously paved the way for future generations of advances. It's a seminal work that was desperately needed.
I will check those out tomorrow, but in the meantime: I don't mean to say that a microkernel-based system is necessarily worse on performance. However, I think a highly optimized monolithic system will probably always be somewhat faster than a highly optimized microkernel-based system. And note that the seL4 system is probably less mature, and that I have many criticisms of Linux in being a supposedly highly optimized system. I'm all for microkernels. I'm planning to write one myself. But there are some aspects that microkernel-based systems have to work harder on.
I have ideas as well, and have written about some of them (including some partial specifications), although I don't have a name for mine yet, so there is no repository or anything like that. Note that there are multiple parts, and different projects will have a different set of them: hardware, kernel, user/application programs; my ideas involve all three (there may be other parts, and different ways to divide them, too).
From their verification roadmap, it sure seems generous to refer to this as "formally verified". They clearly don't prove anything important about the kernel at all. It seems very disingenuous to describe it as they do, since it lacks the merits of other formally verified kernels like seL4 and Tock.
I haven’t fully given up on the hope that a fully verified kernel eventually catches on. It would be basically impossible to verify all of Linux at this point, but I could see seL4 eventually getting traction in something like the smartphone market.
Yeah, that's what I was getting at. I know seL4 is used in a bunch of places, but outside of a few hobbyist projects I have never heard of anyone using it as a "full" OS.
It would be nearly impossible to have the support for the extremely diverse set of hardware that desktop Linux has while staying formally verified, but for something a bit more constrained like a smartphone, I think something like seL4 could work as a base and the manufacturer could write their own drivers for whatever hardware is needed.
I mean, how cool would it be if every single part of the stack that can be verified was fully verified. I know about the halting problem, I know there are things that would be basically impossible to verify fully, but I still think it would be cool to live in a world where software engineers actually had a little assurance that what they were doing actually worked before unleashing it into the world.
I know at least one autonomous vehicle company is using it as their base OS in the autonomy stack, with efforts at extending some form of verification up to the high level code.
seL4 is being used in many places that we know about[0] and then there's those we don't or are still in the future, where we can only guess based on e.g. seL4 membership[1].
What about all the github links above that "ask about pricing"?
Commercial support is not free, and the pricing for that is almost always something you have to ask for. Hard to see how this piece of free software stands out.
What I like most about Ironclad is that it is fully POSIX-compliant, meaning you can already run a lot of UNIX programs on it, which is what "Gloire" does: https://github.com/Ironclad-Project/Gloire
There were a bunch of ports, like arm64, but with several bugs. So the maintainer removed all but x86_64, and then another group added riscv64.
From there, arm64 can be tried again.
A couple of weeks ago I was curious what the strictest programming language was. ChatGPT listed a couple, and it kicked off a short discussion where I asked it about the capabilities of stricter programming languages at low levels. Funnily enough, at the end it mentioned that SPARK/Ada was the strictest you could get at the lowest levels, same as Ironclad.
At one point while asking it about drivers, it said "ACL2's logic is [...] side-effect-free definitions with termination proofs when admitted to the logic. That is misaligned with effectful, interrupt-driven kernel code."
I'm not an OS or kernel dev, most of my work has been in Web Dev, ML, and a little bit of embedded. How accurate is the information that was presented to me? Here is the link to the discussion: https://chatgpt.com/share/691012a7-a06c-800f-9cc9-54a7c2c8b6...
I don't know SPARK or Ada, but it just bothers me to think that we can't...I guess...prove everything about our software before we run it (yes yes, I'm familiar with halting problem shenanigans, but other than that).
There's a lot to unpack here. You can always make a stricter programming language by having your compiler error on everything.
Lisps are perfectly usable for system level code as well. There was an entire lineage of Lisp Machines where virtually all OS code was written in lisp. Those probably could have used ACL2 had it existed.
There's an X-Y component to your questions though. The strictness of the programming language just isn't the main goal for OS formal verification. It just makes certain things easier. What's important is having executable semantics of the programming language, having a machine model, and a behavioral specification. All of these are typically written in proof languages, and the kernel code is proven to implement the behavior spec according to the machine and language semantics.
SPARK gives you executable semantics, but so do C (specifically the Clight subset), most of the Lisps, Rust, and many others. You're benefiting from certain errors being impossible in SPARK, but it's not a fundamentally different process.
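"Executable semantics" and "machine model" sound abstract, but in miniature they are just this: the machine is a state type and the semantics is a step function, so claims about programs become theorems about step(). A made-up two-instruction machine as a sketch:

    #include <stdint.h>

    typedef enum { INC, JNZ } op;                /* toy instruction set */
    typedef struct { op o; int arg; } insn;
    typedef struct { int pc; int32_t acc; } state;

    /* The executable semantics: one transition of the machine model. */
    static state step(state s, const insn *prog) {
        insn i = prog[s.pc];
        switch (i.o) {
        case INC: return (state){ s.pc + 1, s.acc + i.arg };
        case JNZ: return (state){ s.acc != 0 ? i.arg : s.pc + 1, s.acc };
        }
        return s;                                /* unreachable */
    }

    int main(void) {
        /* acc starts at 3; the program decrements and loops until zero.
         * "This loop terminates with acc == 0" is a theorem about step(). */
        const insn prog[] = { {INC, -1}, {JNZ, 0}, {INC, 0} };
        state s = { 0, 3 };
        while (s.pc < 2) s = step(s, prog);
        return s.acc;
    }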
Can we use Rust / Go / Java (GraalVM Native Image) or Flutter on Linux to build an executable that runs on an OS with an Ironclad kernel? Or is there special treatment that makes it incompatible with a "plain" Linux executable?
Ironclad has a POSIX API, so yes, in principle it should be possible. There is an OS on top of Ironclad (https://github.com/Ironclad-Project/Gloire) which uses GNU tools.
Ironclad isn't a microkernel, is it? At least the device drivers, file system and network stack seem to run in kernel space. I didn't find any strong architectural hints pointing to microkernel design in the implementation so far.
As far as I know, formal verification is another testing method, and as such it's only as good as the quality and extent of the "verification" (aka the tests).
The best way to verify that I know of is fuzzing + testing.
It is a method where a computer verifies a proof that the program adheres to its specification for _all_ inputs (subject to whatever limitations the particular method has).
Types are the simplest kind of formal verification, and with sufficiently advanced dependent type systems, they can be used to prove that programs obey arbitrarily complex specifications. However, this can be extremely laborious and requires significantly different skills than normal programming, so it is very rarely done in industry.
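Even plain nominal types catch real specification violations at compile time, no dependent types needed for the easy cases. A small sketch:

    /* Types as the cheapest formal method: wrapping raw numbers in distinct
     * struct types turns unit mix-ups into compile-time errors. */
    #include <stdio.h>

    typedef struct { double v; } meters;
    typedef struct { double v; } feet;

    static meters feet_to_meters(feet f) { return (meters){ f.v * 0.3048 }; }
    static meters add_m(meters a, meters b) { return (meters){ a.v + b.v }; }

    int main(void) {
        meters runway = { 3000.0 };
        feet   climb  = { 500.0 };
        /* add_m(runway, climb);  <- rejected by the compiler: feet != meters */
        meters total = add_m(runway, feet_to_meters(climb));
        printf("%.1f m\n", total.v);
        return 0;
    }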
Don't tell me you don't know this service. It's extremely useful to get to know the code and architecture of a project. You can even ask questions about the code and get the relevant references to the corresponding sections in the source files.
Okay, but that's not a wiki. Wiki implies pages can be collaboratively edited and linked. Otherwise it's not a wiki, it's just a website. The only collaboration here is baiting people into becoming maintainers and fixing the hallucinated content.
More than one maintainer has shown frustration at that site making up wrong documentation already.
It's just the name of the service, as imagined by its creator, Cognition Labs; it's called a "wiki" because it creates wiki-style documentation similar to Wikipedia's format. There are always people who complain about everything. I've been successfully using the service for a few months and have even applied it to my own compiler projects, and I think it's pretty good; of course there are errors, but in my experience far fewer than in the (mostly outdated, if available at all) design documentation you usually find for open-source projects.
Interesting project. I'm curious about the limits of formal verification of worst case execution time. There are other formally verified kernels like seL4 and atmosphere, as well as layers you can stack on top to get a mostly compatible posix-ish layer like genode. You can also go out and find completely compatible kernels with enough maturity that (full) formal verification isn't a major value-add, like QNX or VxWorks.
I'm not aware of much that combines WCET + formal verification + POSIX compatibility though. The verification page here is mostly at stone level, which from my understanding of SPARK terminology just means it passes validation, but might have runtime errors where most of Ada's WCET nondeterminism comes from. I'm skeptical that this is actually production usable for the hard real-time use cases all over their documentation at the current stage, but nothing on the website gives any clue as to the actual maturity short of reading the code myself.
Any government can get RCE on any OS with the change in their couch. Formal verification of process isolation is REALLY important when lives depend on it. That's a huge value add!
My main concern is speed and the lack of capability based security. seL4 is faster than Linux by a mile and I'm guessing that this is much slower. You can put a POSIX layer on seL4 but POSIX is inherently flawed too. MAC separates privileges from code and is too clunky to use in practice (see seLinux).
> Any government can get RCE on any OS with the change in their couch.
Do you really believe that? That seems extremely implausible based on just simple observations like all governments using COTS OS for military/intelligence work or standard OS:es being used for critical infrastructure like power/water/finance/transportation.
If your statement was even remotely true then why is this not used in conflicts to devastating effect?
The publicly available exploit prices put a browser zero day at $200k-$500k. That's the same cost as firing a few Javalin missiles. OS RCE runs into $1-$2 million. Much less than a cheap Russian tank. [1]
The cost of internally developed exploits is probably much lower. They aren't one shot assets either, they can be used until someone plugs the hole.
There are private companies selling devices to law enforcement that can extract information from locked phones [2]. Availability of that sort of access to anyone's phone by local law enforcement is absurdly cheap.
[1]: https://opzero.ru/en/prices/
[2]: https://arstechnica.com/gadgets/2025/10/leaker-reveals-which...
> [1]: https://opzero.ru/en/prices/
Those are the prices that they are buying for, they do not indicate at all that these are common or how large the market is for RCE on any OS.
> [2]: https://arstechnica.com/gadgets/2025/10/leaker-reveals-which...
Those are (mostly) not RCE, and are for consumer devices configured in a default way.
---
The parent stated that "Any government can get RCE on any OS with the change in their couch."
That implies that Kiribati currently could easily buy RCE on for example hardened Linux or OpenBSD running the most sensitive infra in the world. I just don't buy that, since if it was true any current conflict would look much different.
Of course there are security holes and major fuckups do happen, but not at the scale the parent implied.
These prices are consistent (actually more costly) than public bounties by (now defunct) western based exploit brokers and manufacturer bounties.
> Those are (mostly) not RCE, and are for consumer devices configured in a default way.
I'm more worried about activists and journalists in developing counties without the financial means to afford flagship phones. But even Google can't manage to keep out a pedestrian mid sized security outfit selling to the cops and the FBI.
When activists lobbying for a fucking sugar tax in Mexico get hacked, then the bar is too fucking low.
Let's not talk about the nightmare that is old networking equipment or IoT devices.
Come on, you said:
> Any government can get RCE on any OS with the change in their couch
If you were extremely hyperbolic for effect that's fine, that's why I asked if you actually believed that, but what you are saying now is not at all arguing the same point.
I was not being hyperbolic: a couple million dollars is very cheap for virtually any military. Both exploit broker bounties and corporate bug bounties are in that range.
What is your objection?
“Extremely hyperbolic”,
or relative?
$50k-$150k+ is a low-to-medium cost case to carry out for US law enforcement. or military.
Much like the $3 in change you could dig out of your couch or car to get a small drink or sandwich.
Nobody in this thread has provided anything that would lead me to believe that any government can easily buy RCE on any OS. Read the quote again:
> Any government can get RCE on any OS with the change in their couch
That is inanely pedantic. The municipal government of Monowi, Nebraska probably can not buy a RCE in any OS as they only govern a single person. That is also utterly meaningless to argue as it bears no effect on the core thrust of the argument that COTS operating systems in use by military and critical infrastructure are easily and cheaply hackable by potential adversaries. They are demonstrably grossly inadequate for purpose.
All my questions where with the assumption of a country-level government. I asked why, if this is so cheap, common and easy we do not see it used more.
Even if we said that we restrict it to for example the G20 I still don't think they can easily and cheaply "RCE any OS".
We do see it! Do you not remember the Snowden leaks?
Shit hasn't changed much. We still have monolithic kernels written in portable assembly. Linus still doesn't tag bug fixes with potential security impacts as such because he is more worried about unpatched consumer garbage (which compromise all low end phones). When your mitigation for such problems is to not make it obvious, then your OS is not safe enough in safety critical settings (which includes consumer devices).
Process isolation would downgrade the vast majority of critical Linux CVEs to availability bugs (crash a server but not compromise it).
Just because governments don't need to reach for RCE everytime doesn't mean that it is safe. Th fact that such bugs are so cheap is an indication that your safety margin is too thin.
“I don’t get told, it isn’t happening/possible.”
Hopefully this helps as succinctly as possible. Genuinely.
This shouldn't be downvoted because it's stating facts. RCEs for critical infrastructure/OSes are very rare, they don't just grow on trees. I agree that OP exaggerated by saying that any government can buy whatever RCE they want and get access to any system they want, like buying candy in a candy shop. That's not reality.
Thankfully, there are regulatory regimes that require physically segregated systems for most cars, airplanes, power stations, etc
However, safety critical is not limited to cars: it also includes the phones of activities and journalists living under authoritarian regimes.
Monolithic kernels written in portable assembly mean that such bugs DO grow on trees [1] and the lack backporting means they just drop to the ground: the poor are sold phones that may never receive a security update. So even sugar tax activists in Mexico are the target of spyware!
We have seen the sophistication of these attacks ramp up as cryptocurrency has made them profitable and the North Koreans have made a killing exploiting these bugs.
Maybe you are right and it is very difficult to find these bugs but that just means low demand is what is keeping the price down. But that's probably because there enough LPEs and known RCEs that they are not needed most of the time.
[1]: https://www.cvedetails.com/vulnerability-list/vendor_id-33/L...
Just because the market would buy something for X$, doesn't mean that you could buy that if you have more than X$.
Militaries have billion dollar budgets.
You are claiming that every major OS is unhackable by governments. Can you point to literally any specific system that is demonstrably unhackable? Can you find literally anybody who would publicly claim their systems are unhackable by governments? Can you find literally anybody who would publicly claim that no competent team of 5 working for 3 years full-time (~1 tank worth of dollars, not even a basic company, just 1 tank) could not breach their systems? And that is just demonstrating for a single vendor, let alone your claim that it is true for everybody.
Your proof is that it would be really bad if everything were horribly insecure therefore it must not be true. Proof by wishful thinking has never been a valid argument.
In contrast, a few years ago I worked with a vulnerability broker who had literally hundreds of unsold zero-days with tens in each major commercial OS with zero-click RCEs only being a few million each. That is just one vendor in a sea of vulnerability brokers. That is the state of reality. We just live in the metaphorical equivalent of the pre-9/11 world where you can easily kill a lot of people by flying a plane into a building, but nobody has figured it out yet.
> You are claiming that every major OS is unhackable by governments.
I did no such thing. I claimed that it's implausible that every government can buy RCE for every OS.
Yes you did, you said: "all governments using COTS OS for military/intelligence work" and then argued: "If your statement was even remotely true then why is this not used in conflicts to devastating effect?". You are clearly arguing that the operating systems they use, which you clearly admit are standard COTS operating systems, must be unhackable by other governments otherwise we would be seeing devastating effects (or at least require more than pocket change to a potential US adversary to attack, i.e. at least more than a single tank (~10 M$), at least more than a single fighter jet (~100 M$), probably at least more than a aircraft carrier (~1 G$) before not being pocket change).
No, he didn't. Learn to discuss properly. OP stated that any government could get RCE for any OS. And that is highly unlikely, since budget above market rates does not imply that you can easily get RCEs. The market rates are high because there is scarcity of such vulnerabilites.
Governments using COTS operating systems does not imply that these systems are unackable. If the statement of OP would be true, we would just see constant exploitation of RCE zero days, or at the least the impact of that. But that is not the case.
We do see constant exploitation of government and critical infrastructure systems. The US telecom network is literally actively compromised right now and has been for multiple years [1]. Like wishful thinking, ignorance is also not a valid argument.
It is frankly baffling that I even need to argue that COTS operating systems are easily hacked by governments and commercial hackers. It literally happens every day and not a single one of those companies or organizations even attempts to claim that they can protect against such threats. Government actors are literally what these companies peddling substandard security use to argue "nothing we could do". It has been literal decades of people trying to make systems secure against government actors and failing time and time again with no evidence of success.
I mean, seriously, go to Defcon and say that nobody there with a team of 5 people with 3 years (~10 M$, a single tank) could breach your commercially useful and functional Linux or Windows deployment and you are putting up a 10 M$ bounty to prove it. I guarantee they will laugh at you and then you will get your shit kicked in.
[1] https://en.wikipedia.org/wiki/Salt_Typhoon
Everything thinks of Defcon et al a a gathering of elite hackers. But it's more of a fucking drinking game.
The depressing fact is that you don't need an RCE to accomplish most goals.
I am aware. I was making a concrete example pointing at a well known conference where average industry professionals would find the very concept of these systems being secure to be laughable.
Somehow we have ended up in this bizarro land where everybody in software knows software, especially COTS operating systems, is horribly insecure due to the endless embarrassing failures yet somehow they also doublethink these systems must be secure.
I was agreeing with you! It's a drinking game because the infosec field is laughable. Who needs a zero day RCE when the president is using an EOL Samsung?
> why is this not used in conflicts to devastating effect?
The systems with devastating impact are air-gapped. They're designed, audited, validated and then never touched again. Ports are disabled by cutting the traces on the motherboard and adding tamper protection to the case, which is in a secure facility protected by vetted people with guns, who are in a security facility protected by different vetted people with guns.
No system is perfect, but the time and effort is better spent on the generic case that the military understands well.
> The systems with devastating impact are air-gapped.
You wish. More often than not the people building these think they are very clever by using their bullet proof fire walls rather than a physical disconnect. Or SLIP over a serial port because for some reason serial ports are fine.
I've seen this kind of crap in practice in systems that should be airgapped, that they said were airgapped but that in fact were not airgapped.
If I had a dollar for each time I was told that they would get me a firewall exception to get to the air gapped system...
It does make it much easier to do stuff but kinda defeats the purpose.
And a firewall is not an airgap.
And a WiFi connection even though it goes 'through the air' is not an airgap.
The same for BT and any other kind of connectivity.
An airgap is only an airgap if you need physical access to a device to be able to import or export bits using a physical connection, and the location of the device is secured by physical barriers. Preferably a building that is secure against non-military wannabe intruders.
> firewall exception to get to the air gapped system
Any system accessible with a firewall exception is not "air-gapped" by definition.
A level below that is diode networks, which are not air-gapped but provide much stronger system isolation than anything that is accessible with a "firewall exception".
Far below either of these is vanilla network isolation, which is what you seem to be talking about.
> Any system accessible with a firewall exception is not "air-gapped" by definition.
I completely agree. Maybe I should have put "air-gapped" in quotes.
Diode networks can be - and have been - used to exfiltrate data though.
Definitely! I've worked on the design of these types of systems, there is more subtlety to the security models than people assume. Some of the designs in the wild have what I would consider to be notable weaknesses.
The most interesting subset of these systems are high-assurance bi-directional data paths between independent peers that are quasi-realtime. Both parties are simultaneously worried about infiltration and exfiltration. While obviously a misnomer, many people still call them diodes...
The entire domain is fascinating and less developed than you would think.
And even if you do get it right, there is always that one guy that takes a USB stick and plugs it into your carefully air-gapped systems. And cell modems are everywhere now, and so small even an expert could still overlook one, especially if it is dormant most of the time.
It's in a proto state due to anemic academic funding. We need to throw cash at the problem.
Yes, it is underfunded for sure. I have been underwhelmed by what academia has managed to produce, funding aside. It is a solvable problem but you have to give the money to the people that can solve it in an operational context, which rarely seems to happen.
It is a genuinely fun project for someone with sufficiently sophisticated skill but I suspect there is relatively little money in it, which colors the opportunity and outcomes.
The absence of clear commercial opportunity gives the domain a weird dynamic.
While I can't talk to all the systems out there, I am talking about systems I have worked on.
Yes, I believe that.
> If your statement was even remotely true then why is this not used in conflicts to devastating effect?
It has been, it continues to be.
Where have you been?
It really hasn't to the scale that you imply. Why hasn't ukraine and russia both used this to completely shut down each others infrastructure? Why isn't russia just hacking all the ukrainian COTS drones? Why hasn't anyone hacked a nuclear power plant?
There is power in restricting access and air gapping helps a lot. A drone (for example) can fall back to basic cryptography to limit access.
Air gapping is a baseline requirement in most safety critical systems. Nuclear power plants in particular have lots of redundant layers of safety. AFAIK Russia hasn't physically tried to cause a meltdown, presumably due to the political blow back (although they have attacked Chernobyl's sarcophagus). I assume this limits their digital espionage attacks too.
We do get glimpses of the use of such malware, like when Saudi Arabia hacked Jeff Bezos' phone. But we don't hear about most of it because there is a benefit to keeping a hack secret, so as to keep access.
Finally, it's usually cheaper to social engineer someone into loading a PowerPoint presentation and doing a local privilege escalation. They burn those for things as petty as getting embarrassing political information.
I doubt that most critical systems are air gapped. Even if there are, most part of Russians economy is not, but is still using IT based on COTS systems. Why wouldn't the Ukraine DoS or compromise the whole non air-gapped IT infrastructure of Russia to hit the economy if they could have easy access to RCE just because they are a government?
I mean, they do all the time. The value is generally in keeping access, however, and operational security and access control is helpful. You can knock a system out but then you just get kicked out and have to start over.
Do you have any resources that go deeper into this? It's a fascinating frontier for war!
USA was providing Ukrainian operatives Russian officer locations via soldier's using their cellphones
https://oe.tradoc.army.mil/product/smart-phones-playing-prom...
> Do you really believe that? That seems extremely implausible based on just simple observations like all governments using COTS OS for military/intelligence work or standard OS:es being used for critical infrastructure like power/water/finance/transportation.
I do, but have a slightly different take: even though COTS software is pretty much unilaterally full of bugs that will be exploitable and could be found, it is still possible to compose layers of security that compliment each other in such a way that a compromise in any one layer wouldn't mean game over. Done very carefully, I think you can make a stack vastly more secure than the sum of its parts. Moreover, it's very possible to make exploiting the software both more annoying and easier to detect, which would dissuade attempting to use exploits.
> If your statement was even remotely true then why is this not used in conflicts to devastating effect?
I think the costs, risks, and incentives need to line up properly to actually see things play out. Even though exploits in COTS software are relatively cheap by government-money standards, they do still take time and money. Not to mention the actual exploit may not even be the most expensive or complicated part of an operation, especially if you desperately need to evade detection for a long time, and especially if your adversary has sufficient auditing to know something is wrong early.
Stuxnet is old, but surely one of the most fascinating uses of malware in geopolitics. But wow, the amount of work involved and knowledge needed to make something like that happen makes the exploit part feel rather small.
Formally verified software seems to have a lot of promise, then, to make deep exploits even more convoluted, expensive and rare. Surely there will still be bugs, but it leaves a lot less room for error, and very well could shift the calculus on security threats a bit.
Knowledge that humans plug shit into computers without knowing what it is?
Stuxnet targeted the specific PLCs used at Iranian nuclear facilities, and had to be able to function in an airgapped environment. I reckon the logistics were far and away more complicated than finding Windows exploits, especially at that time.
It's probably more impressive than that. Probably targeted a range of potential PLCs.
But what's all this have to do with the ongoing conversations about pwning Windows-based networks inside major consumer utility assets?
I responded to this thread of thought:
> Any government can get RCE on any OS with the change in their couch.
Mainly to agree with it. I believe it is still likely true.
Any resemblance to other discussion further up or down is unintentional.
In djb's course at UIC, I recall he said that students were required to find a vulnerability as part of the course requirements.
Finding a vulnerability is not at all the same as "RCE on any OS". Vulnerabilities are common, the ones that have the impact implied are not.
Let me help a bit by trying to explain the situation. If you produce something that is a million lines of code, you will most likely have at least a few hundred to a few thousand bugs in there. Some of those cause crashes, some cause hangs, and a small percentage will let you escalate privileges. Combine enough of those and sooner or later you end up with RCE. The problem is that you as a defender don't necessarily have the budget to audit the code and close it all down to the degree that an attacker does.
You need to do an absolutely perfect job of spotting those RCE-capable issues before an attacker does. And given the numbers involved, this becomes a game of statistics: if there are 200 ways to get RCE on OS 'X', you need to find and fix all of them before attackers find any one of them. Even if you win each individual race with probability 0.99, the chance of winning all 200 is 0.99^200, roughly 13%. Meanwhile, your system isn't a million lines but a multitude of that, there are your applications to consider (usually of lesser quality than the OS), the risk of a purposeful insertion of a backdoor, and so on.
So I don't think it is unreasonable to presume that any OS that is out there most likely has at least a couple of these that are kept 'on ice'.
I work in security. I know all of the above. But the parent said that "any government can buy RCE on any OS", which is not at all the same as saying it is plausible that a few of the more advanced countries probably have a few critical exploits "on ice". They also stated it as a fact, not as a possibility.
You are not arguing the same point.
1) Those things are being hardened right now
2) You haven’t seen a hot conflict yet
It is used here and there, but unlike bullets, these attacks have no armor to defend against them as long as they remain unknown; once used, they're effectively single-use.
Since the 2010s at least, more than 140 countries have spent over $10 million a year on purely offensive cyber. Most of those countries spend astronomically more than that, and that includes the purchase of attack tools and exploits.
Note: IPC performance isn't the only factor in overall OS performance. Especially for a "traditional microkernel", where programs are split up into separate processes liberally, performance degrades due to the sheer number of cross-boundary interactions. A whole system is performant if the design of the whole system, not just the design of the kernel, is aligned with performance. This is not to put down seL4; on the contrary, it continues the trend of L4 microkernels demonstrating the viability of stricter designs. But keep in mind that more time and effort is necessary to implement larger systems well.
I'm bullish on capabilities too, but I don't know much about MAC. Can you explain your last sentence?
seL4 has the lowest IPC overhead of any kernel; it's an order of magnitude faster than Linux [1]. But you are correct: switching costs amount to noise when the system is architected correctly. LionsOS [2] (which is based on seL4) has some benchmarks showing improved performance over Linux [3].
I am betting you know what mandatory access controls are ; ). They basically amount to a firewall placed on applications, restricting what they can do. The rules are generally written by downstream distros and are divorced from the implementation. The problem is that it's hidden control flow, so the program just dies and can't fall back gracefully. Capability-oriented APIs make broker processes and the narrowing of permissions tractable.
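To make that concrete, here's a toy sketch in Ada (hypothetical types, nothing from Ironclad or seL4): authority is an explicit value that a broker can narrow before handing it to less-trusted code, and a missing right is an in-band condition the program can handle, unlike a MAC denial.

    with Ada.Text_IO;

    --  Toy, hypothetical capability sketch; not a real kernel API.
    procedure Caps_Sketch is

       type Rights is record
          Read, Write : Boolean;
       end record;

       type File_Cap is record
          Name : String (1 .. 10);
          R    : Rights;
       end record;

       --  Narrowing can only remove rights, never add them.
       function Narrow (C : File_Cap; To : Rights) return File_Cap is
         (C.Name, (Read  => C.R.Read and To.Read,
                   Write => C.R.Write and To.Write));

       Full      : constant File_Cap := ("report.txt", (True, True));
       --  The broker hands the worker a weakened capability.
       Read_Only : constant File_Cap := Narrow (Full, (True, False));

    begin
       if Read_Only.R.Write then
          Ada.Text_IO.Put_Line ("worker may write");
       else
          --  The missing right is visible in-band, so the program can
          --  degrade gracefully instead of being killed by policy.
          Ada.Text_IO.Put_Line ("worker falls back to read-only mode");
       end if;
    end Caps_Sketch;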
[1]: https://sel4.systems/performance.html
[2]: https://lionsos.org/
[3]: https://trustworthy.systems/publications/papers/Heiser_25%3A...
they mean "mandatory access controls (MAC)" https://en.wikipedia.org/wiki/Mandatory_access_control
Though what exactly people mean by it is often vague.
For example, both SELinux and AppArmor are technically MAC, but people tend to only mention SELinux when speaking about how cumbersome it is, and treat AppArmor as something different since it's not so cumbersome.
It does not have to remain at stone level, and it can get legit certifications, too.
Looking forward to it. A formally verified OS is a great step towards better security.
That really depends on what formal verification means in context. I don't see any interesting specifications in the repo, just basic stuff like this MD5 spec [0] that doesn't verify whether the MD5 implementation is correct. This is one of the areas where formal verification is listed as completed/gold level.
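For contrast, here's roughly what the step up to functional correctness looks like, as a toy SPARK sketch (my own example, not code from the Ironclad tree). Proving absence of runtime errors only means discharging overflow and range checks; the Post contract below additionally ties the result to a mathematical spec for every input:

    --  Toy SPARK sketch, not Ironclad code. Without the Post, GNATprove
    --  only has to show no overflow or range violation can occur; with
    --  it, the loop must provably compute the closed-form spec.
    function Sum_To (N : Natural) return Natural
      with SPARK_Mode,
           Pre  => N <= 30_000,  --  keeps N * (N + 1) within Integer
           Post => Sum_To'Result = N * (N + 1) / 2
    is
       S : Natural := 0;
    begin
       for I in 1 .. N loop
          --  Carries the spec through the loop for the prover.
          pragma Loop_Invariant (S = (I - 1) * I / 2);
          S := S + I;
       end loop;
       return S;
    end Sum_To;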
It's common for the interesting OS proofs to take more code than the kernel itself. Take a look at the seL4 proofs [1], or those for CertiKOS as examples.
If you're actually interested in alternative OSes in memory safe languages, you might like TockOS, which is more production-ready at this point. Formal verification work of the isolation model is still ongoing, because it's difficult work.
[0] https://codeberg.org/Ironclad/Ironclad/src/branch/main/sourc...
[1] https://github.com/seL4/l4v
There is none in [0]. :P
There is an NDA-related company called Ironclad as well. Beware the trademark/copyright terrorists.
That said, I am a huge fan of works like this. But in practice, the security layer that betrays all of this tends to be the firmware layer.
My dream is to have something like the Framework computer use verifiably secure EFI firmware, as well as similarly verified and audited firmware for every hardware component.
Ironclad is also the name of the chief cryptographic library for Common Lisp: https://github.com/sharplispers/ironclad/
You might want to check out MNT Research if you haven’t yet. They make repairable laptops, too, but they also release their work as free software and open hardware.
https://mnt.re/
The MNT is too small for my usage, but it's a great effort. I think their goal is to make open hardware right now, not necessarily a verifiable one.
That isn't how trademarks work. There can be multiple businesses with the same name, as long as they operate in different fields. Case in point: Apple Computer had to pay the Beatles' label, Apple Corps, for the rights only when they entered the music industry (not that they didn't try to contest it!).
Copyright is something different entirely!
https://xkcd.com/386/
That makes sense. I'd still be weary though; you can win in court, but the cost of getting sued isn't small. Nintendo's lawsuits come to mind.
Normally I wouldn't say anything, but since we're on the topic of mixing up two different concepts:
I suspect you meant to say "wary." Wary means "cautious," "weary" means "tired."
I think wary would have been a better word, but I really did mean "weary", as in I would find the ordeal tiresome or bothersome? I wouldn't disagree if you said that's bad grammar still.
Weary and wary are also homophones, in certain dialects at least
Such a case would never end up in court. You can't sue someone for doing something that's perfectly legal... well, you can try, but it's going to be really hard to find a lawyer willing to waste their time (a lawyer you're going to have to pay), and the case would ultimately get thrown out long before trial.
Check this out: https://www.suedbynintendo.com/
If a gaming company can sue a local supermarket over trademark, I don't know what to say.
You need a different kernel for firmware verification. But it should be regulated at this point.
Interesting. Ada is in the greater Wirthian family (it's Pascal-like), and until now, the only Unix-like kernel I was aware of in a Wirthian language was TUNIS:
https://en.wikipedia.org/wiki/TUNIS
It was implemented in Concurrent Euclid.
https://en.wikipedia.org/wiki/Concurrent_Euclid
> the only Unix-like kernel I was aware of in a Wirthian language was TUNIS
SPIN developed at the University of Washington in the nineties was written in Modula-3; it was a microkernel-based system and supported the Digital UNIX system call interface, allowing Unix applications to run. There was also Sol implemented at INRIA in a Pascal dialect in the eighties which offered a Unix-compatible environment; it was followed by Chorus (initially written in Pascal), also a microkernel-based system, compatible with Unix APIs.
Fascinating! Thank you!
Building new operating systems seems so ambitious to me. Radiant Computer (https://radiant.computer/) was also recently posted.
What other exciting projects like these exist?
https://asterinas.github.io/ (Linux compatible Kernel) and https://redox-os.org/ are two promising ones.
Asterinas looks cool, but they literally are involved with SUSTech; what a name for an organization!
I wonder why none of these use GPLv2?
I wouldn't kneecap an OS project I wish to be adopted by licensing it GPL. Look at glibc, which basically can't practically support static linking.
Make any of your OS standard libraries GPL and they become painful to use: programs can't statically link them without being forced to also be licensed GPL.
That's the viral property some people find desirable.
WRT kneecapping, history has shown that companies will bleed the commons dry and they need to be legally strong-armed into contributing back to the free software projects they make their fortunes off of.
Virality might suit the ego, but it doesn't make for a healthy project when its primary users are parasitic.
> history has shown that companies will bleed the commons dry and they need to be legally strong-armed into contributing back to the free software projects they make their fortunes off of.
Software is not a scarce good. Let companies use free software without contributing back as much as they wish; it doesn't affect others in the least. There is no bleeding of the commons here, because even if companies take as much as they can without giving back, it doesn't reduce the resources available for others.
Software is rarely finished, and development has real costs.
When that development gets siloed away in proprietary systems, that is potential development lost upstream. If that happens enough, upstream becomes starved and anemic, with forks living on only in silos.
Apple, for example, has made trillions of dollars off of FreeBSD. To this day, FreeBSD still does not have a modern WiFi or Bluetooth stack.
Meanwhile, AMD, Intel, Microsoft, and even Apple, etc have full-time engineering roles and teams dedicated to upstreaming their improvements to Linux. And there are paid engineers at these companies that ensure WiFi and Bluetooth work on Linux.
Companies do worse than bleeding the commons dry: they lock down weakly licensed software and lock in users and devices. That absolutely reduces users' ability to benefit from FOSS and reduces funding for developers.
Isn't this what made Linux successful?
If vendors could sell it closed without releasing the source, the closing-up of the Android ecosystem would look like the 'good old times', no?
We would just get a bunch of closed, outdated, company-controlled binaries, but now for everything, not only drivers.
Rust's technical choices seem to make releasing GPL software with it cumbersome and unattractive. Also the implied goal of a lot of Rust projects is to replace GPL'ed programs with permissive ones.
Which technical choices are you thinking of here? My best guess is the crates ecosystem and the oft-discussed 'dependency hell' that pervasive package-manager usage seems to engender. Is there something else I'm missing that contributes to the (maybe purposeful) reluctance to push GPL code?
> Also the implied goal of a lot of Rust projects is to replace GPL'ed programs with permissive ones.
People really have to stop with this crazy nonsense.
https://serenityos.org/
This looks perfect. I just wonder how the hardware/software support is going.
Not new, but alternative https://www.haiku-os.org/
The most important effort is seL4[0], the fastest OS kernel out there, which also happens to be the most thoroughly formally verified.
LionsOS[1] is its static scenario building framework, with some dynamic scenario support.
Genode[2] is an independent OS construction kit that can also use the seL4 kernel. Their general purpose OS, Sculpt, just had a very interesting multi-kernel release[3].
The systems group at ETH Zürich is building Kirsch[4], an effort combining seL4 and CHERI.
Managarm[5] is also building something with an interesting architecture and some Linux software compatibility.
0. https://sel4.systems/
1. https://trustworthy.systems/projects/LionsOS/
2. https://genode.org/
3. https://genodians.org/alex-ab/2025-11-02-sculpt-multi-kernel
4. https://sockeye.ethz.ch/kirsch/
5. https://managarm.org/
Do not miss the latest seL4 Summit's "State of seL4" talk by Gernot Heiser[0], which, besides providing an update on the work done this year, goes into performance[1].
This is real-world throughput and latency that seL4 is crushing Linux on, not some synthetic IPC benchmark.
0. https://www.youtube.com/watch?v=wP48V34lDhk
1. https://youtu.be/wP48V34lDhk?t=1199
I think I've got the gist now. Although Gernot Heiser probably wouldn't consider this framing ideal, I think it's fair to say that true claims have undergone some sensationalization. I don't think people generally lie when they say their product has achieved some impressive performance, but those results exist in the context they were taken under. In the embedded roles LionsOS is being targeted at, I have no doubt that they represent a real improvement over existing Linux systems, and probably over any Linux system short of a magical one. However, in a general-purpose OS (which is what I focus on), which is to say one where many distinct user bases are simultaneously involved, the kernel is far from the only load-bearing component. Also note that the functionality compared is not 1:1, nor is Linux the final word in monolithic systems.
Something I want to explore, and which has some viability in the LionsOS model too, is that a general-purpose system may still liberally cut out unused functionality if highly modular and easily configurable. Like Legos.
In conclusion, props to the people at Trustworthy Systems as always, but it's safe to say that the OS field is still far from settled. My best compliment to seL4 is that it has raised the bar and simultaneously paved the way for future generations of advances. It's a seminal work that was desperately needed.
I will check those out tomorrow, but in the meantime: I don't mean to say that a microkernel-based system is necessarily worse on performance. However, I think a highly optimized monolithic system will probably always be somewhat faster than a highly optimized microkernel-based system. Note, too, that the seL4-based system is probably less mature, and that I have many criticisms of Linux as a supposedly highly optimized system. I'm all for microkernels; I'm planning to write one myself. But there are some aspects that microkernel-based systems have to work harder on.
I have ideas as well, and have written about some of them (including some partial specifications), although I do not have a name for my own yet, so there is no repository or anything like that. Note that there are multiple parts, and different projects will have a different set of them: hardware, kernel, and user/application programs; my ideas involve all three (there may be other parts, and different ways to divide them, too).
ReactOS continues to move forward! I know it's based on something extant and not net new, but it's still a new OS in my eyes.
https://reactos.org/blogs/
There is always Plan9
it seems to be little more than a mission statement... no?
From their verification roadmap, it sure seems generous to refer to this as "formally verified". They don't clearly prove anything important about the kernel at all. It seems very disingenuous to describe it as they do, since it lacks the merits of other formally verified kernels like seL4 and Tock.
I haven’t fully given up on the hope that a fully verified kernel eventually catches on. It would be basically impossible to verify all of Linux at this point, but I could see seL4 eventually getting traction in something like the smartphone market.
A guy can dream, at least.
It has been used for a while in the Secure Enclave operating system: https://en.wikipedia.org/wiki/L4_microkernel_family#:~:text=...
But to my knowledge, not for the more general user facing OSes.
Yeah, that's what I was getting at. I know seL4 is used in a bunch of places, but outside of a few hobbyist projects I have never heard of anyone using it as a "full" OS.
It would be nearly impossible to have the support for the extremely diverse set of hardware that desktop Linux has while staying formally verified, but for something a bit more constrained like a smartphone, I think something like seL4 could work as a base and the manufacturer could write their own drivers for whatever hardware is needed.
I mean, how cool would it be if every single part of the stack that can possibly be verified were fully verified? I know about the halting problem, and I know there are things that would be basically impossible to verify fully, but I still think it would be cool to live in a world where software engineers actually had a little assurance that what they were doing actually worked before unleashing it on the world.
I know at least one autonomous vehicle company is using it as their base OS in the autonomy stack, with efforts at extending some form of verification up to the high level code.
That's cool as hell! I didn't know that but it makes me happy to see it getting a bit more love.
seL4 is being used in many places that we know about[0], and then there are those we don't know about, or that are still in the future, where we can only guess based on e.g. seL4 Foundation membership[1].
0. https://sel4.systems/use.html
1. https://sel4.systems/Foundation/Membership/
For typical end users, a kernel on its own is useless. So here is an example of an OS which uses the Ironclad kernel:
https://codeberg.org/Ironclad/Gloire
SPARK’s “ask about pricing” stickers indicate this is “free” software that’s a different kind of free.
What about all the github links above that "ask about pricing"?
Commercial support is not free, and the pricing for that is almost always something you have to ask about. It's hard to see how this piece of free software stands out.
That guy in Nebraska needs to buy food and shelter.
The MACs are good: https://ironclad-os.org/manual/Mandatory-access-control-_002...
What I like most about Ironclad is that it is fully POSIX-compliant, meaning that you can already run a lot of UNIX programs on it, which is what "Gloire" does: https://github.com/Ironclad-Project/Gloire
> It is written in SPARK and Ada, and is comprised of 100% free software.
I thought SPARK was a paid (not free) license. Am I mistaken?
Very cool project btw.
> I thought SPARK was a paid (not free) license. Am I mistaken?
Similar model to Qt: a permissively licensed open-source version, with a commercial 'Pro' offering.
https://en.wikipedia.org/wiki/SPARK_(programming_language)
https://alire.ada.dev/transition_from_gnat_community.html
CuBit is another operating system in SPARK/Ada.
1. https://blog.adacore.com/cubit-a-general-purpose-operating-s...
2. https://github.com/docandrew/CuBit
Is there a technical reason it only supports x86_64, riscv64, and not arm64?
There were a bunch of ports, like arm64, but they had several bugs. So the maintainer removed all but x86_64, and then another group added riscv64. From there, arm64 can be tried again.
> Ironclad – formally verified, real-time capable, Unix-like OS kernel
What filesystems are supported?
OK can someone smarter than me educate me?
A couple of weeks ago I was curious what the strictest programming language was. ChatGPT listed a couple, which kicked off a short discussion where I began asking it about the capabilities of stricter programming languages at low levels. Funnily enough, at the end it mentioned that SPARK/Ada was the strictest you could get at the lowest levels, the same as Ironclad.
At one point, while asking it about drivers, it said: "ACL2's logic is [...] side-effect-free definitions with termination proofs when admitted to the logic. That is misaligned with effectful, interrupt-driven kernel code."
I'm not an OS or kernel dev; most of my work has been in web dev, ML, and a little bit of embedded. How accurate is the information that was presented to me? Here is the link to the discussion: https://chatgpt.com/share/691012a7-a06c-800f-9cc9-54a7c2c8b6...
I don't know SPARK or Ada, but it just bothers me to think that we can't... I guess... prove everything about our software before we run it (yes, yes, I'm familiar with halting-problem shenanigans, but other than that).
There's a lot to unpack here. You can always make a stricter programming language by having your compiler error on everything.
Lisps are perfectly usable for system-level code as well. There was an entire lineage of Lisp Machines where virtually all OS code was written in Lisp. Those probably could have used ACL2, had it existed.
There's an X-Y component to your questions though. The strictness of the programming language just isn't the main goal for OS formal verification. It just makes certain things easier. What's important is having executable semantics of the programming language, having a machine model, and a behavioral specification. All of these are typically written in proof languages, and the kernel code is proven to implement the behavior spec according to the machine and language semantics.
SPARK gives you executable semantics, but so do C (specifically the Clight subset), most of the Lisps, Rust, and many others. You're benefiting from certain errors being impossible in SPARK, but it's not a fundamentally different process.
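As a tiny illustration of the "certain errors being impossible" part, a toy SPARK sketch (mine, not Ironclad's): GNATprove generates a verification condition for every potential runtime error, and contracts are how you discharge them for all callers.

    --  Toy SPARK sketch. GNATprove emits an index check for A (Index);
    --  the precondition lets it prove the check can never fail, so the
    --  out-of-bounds access is impossible rather than merely untested.
    function Get (A : String; Index : Positive) return Character
      with SPARK_Mode,
           Pre => Index in A'Range
    is
    begin
       return A (Index);
    end Get;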
It says "(partially) formally verified" right on the page: https://ironclad-os.org/formalverification.html
Can we use Rust / Go / Java (GraalVM Native) or Flutter Linux to build an executable that runs on an OS with an Ironclad kernel? Or is there special treatment that makes it incompatible with a "plain" Linux executable?
Ironclad has a POSIX API, so yes, in principle it should be possible. There is an OS on top of Ironclad (https://github.com/Ironclad-Project/Gloire) which uses GNU tools.
So if I find a bug, or a security bug, does that speak to the usefulness of the formal verification claim?
Yay! Take that, Minix and GNU Hurd! \o/
Ironclad isn't a microkernel, is it? At least the device drivers, file system, and network stack seem to run in kernel space. I haven't found any strong architectural hints pointing to a microkernel design in the implementation so far.
"Formally verified" what does that means?
As far as I know formal verification is another testing method and as as such it's is as good as the quality and the extent of the "verification" (aka tests).
Best way to verify that I know of is Fuzzing + testing.
Formal verification is explicitly NOT testing.
It is a method where a computer verifies a proof that the program adheres to its specification for _all_ inputs (subject to whatever limitations the particular method has).
Types are the simplest kind of formal verification, and with sufficiently advanced dependent type systems, they can be used to prove that programs obey arbitrarily complex specifications. However, this can be extremely laborious and requires significantly different skills than normal programming, so it is very rarely done in industry.
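As a toy SPARK illustration of the difference (my sketch, nothing from Ironclad): a test suite samples a function at a handful of points, whereas the prover must discharge the postcondition for every possible input, or reject the program.

    --  Toy SPARK sketch. A unit test checks a few (A, B) pairs; here
    --  GNATprove must show the Post holds for all 2**64 combinations.
    function Max (A, B : Integer) return Integer
      with SPARK_Mode,
           Post => Max'Result >= A and then Max'Result >= B
                   and then (Max'Result = A or else Max'Result = B)
    is
    begin
       if A >= B then
          return A;
       else
          return B;
       end if;
    end Max;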
Here is the deepwiki documentation: https://deepwiki.com/Ironclad-Project/Ironclad
Is that an AI-generated page pretending to be a wiki?
Don't tell me you don't know this service. It's extremely useful for getting to know the code and architecture of a project. You can even ask questions about the code and get relevant references to the corresponding sections in the source files.
Okay, but that's not a wiki. Wiki implies pages can be collaboratively edited and linked. Otherwise it's not a wiki, it's just a website. The only collaboration here is baiting people into becoming maintainers and fixing the hallucinated content.
More than one maintainer has shown frustration at that site making up wrong documentation already.
It's just the name of the service, as imagined by its creator, Cognition Labs; it's called a "wiki" because it creates wiki-style documentation similar to Wikipedia's format. There are always people who complain about everything. I've been successfully using the service for a few months and have even applied it to my own compiler projects, and I think it's pretty good; of course there are errors, but in my experience far fewer than in the (mostly outdated, if available at all) design documentation you usually find for open-source projects.