Not strictly related, but I was thinking about this the other day: how long will it be before some group or company creates a truly new operating system that takes off?
I mean, the Windows kernel is going on 20 years old. The Mac OS kernel is based on Unix, which is even older. Linux is also based on Unix, as are Android and iOS.
The more we add to these operating systems, the harder it becomes to walk away from them because we have so much invested in them.
Does this mean that in 200 years we'll still be using the descendants of these early operating systems? Under what circumstances would someone decide to start something truly new? And what would it take to ever reach feature parity with the existing options?
And to be clear, I'm not saying there's a reason to walk away from these. I'm not an operating system programmer; I don't even really know that much about it. I'm just wondering if it will always make sense to just keep adding to what we already have.
There was a quote on slashdot long ago that has stuck with me:
> When I was walking into NEC a couple months ago with my good friend at Red Hat, I asked him why he worked at a Linux company. He told me, "Because it will be the last OS". It took me a while for that to really sink in -- but I think it has a strong chance at becoming true. Any major advances in security, compartmentability, portability, etc. will wind up in Linux. Even if they are developed in some subbranch or separate OS (QNX, Embedded, BSD), the features and code concepts could (and most likely will) find their way into Linux.
I think it's mostly true, and for me at least, Debian is the last distribution, because it's so well put together, IME. Same goes for Emacs, 'the 100 year editor'[0] and I've recently been getting into Common Lisp, 'the 100 year language'[1].
The Linux kernel has changed enough that you can't really say it's the same thing. Heck, even userland has changed substantially - who here remembers having to run MAKEDEV for userspace access to devices? But you can probably still find static binaries compiled in the late 1990s that will run on modern Linux. ABI-wise, the Linux kernel is functionally backwards compatible, and that's nothing to idly dismiss. I think operating systems will keep morphing until someone makes a radical leap of progress that they can't adapt to.
That's not to say that there are no advances still to be made. But as many observed in the discussion on DevOps[2], much of the activity in the sphere of information technology looks like busy work and not progress.
> But you can probably still find static binaries compiled in the late 1990's that will run on modern Linux. ABI wise, the Linux kernel is functionally backwards compatible, and that's nothing to idly dismiss.
This would be one of my big objections to systemd - I seem to have gone from a very decoupled kernel/userland (e.g. I can boot almost any media and then chroot into my system) to one where the kernel version and systemd are pretty tightly tied together, making things more difficult.
>But you can probably still find static binaries compiled in the late 1990's that will run on modern Linux.
On the other hand, it's random chance whether a glibc-dependent binary compiled today will run on a Linux install that's only 5 years old. And given the rapid addition of new compiler features, it's getting to the point where GCC on a 5-year-old install can't even compile a quarter of the programs being written today.
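To make that concrete: the reason a freshly built binary fails on an older install is glibc's versioned symbols - each dynamic symbol is tagged with the glibc release that introduced it, and the loader refuses to start if the installed glibc is older than the newest tag. Here's a crude Python version of the classic `strings app | grep GLIBC_` trick, run against a synthetic byte blob standing in for a real ELF file:

```python
import re

# Stand-in for the bytes of a dynamically linked ELF binary; real files
# carry markers like printf@GLIBC_2.2.5 in their dynamic symbol tables.
blob = (b"\x7fELF...printf@GLIBC_2.2.5..."
        b"pthread_cond_wait@GLIBC_2.3.2..."
        b"fstatat64@GLIBC_2.33...")

# Collect the distinct GLIBC_x.y[.z] tags and sort them numerically.
needed = sorted({m.decode() for m in re.findall(rb"GLIBC_\d+(?:\.\d+)*", blob)},
                key=lambda v: tuple(map(int, v.split("_")[1].split("."))))

# The newest tag is the minimum glibc release the binary will run on.
print(needed[-1])  # GLIBC_2.33
```

This is only a sketch for illustration - for real binaries you'd use `objdump -T` or `readelf -V` to read the version requirements properly rather than grepping for strings.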
Containers outside of their useful server-side context, containers for desktop applications, are the fever of this future shock.
I think you are underestimating what AI will do to computer science once it can reason better about the source code of a software system. GPT-3's prose-to-code ability amplified 1000x doesn't seem so far off. That would likely trigger a sort of Cambrian explosion of software systems.
I agree about Linux, but I disagree about Debian. It's too easy to create a new distro which is mostly compatible with everything. But to create a new OS, you are expecting everyone to rewrite all their software for the new OS.
As for Common Lisp, it's not even popular now, so I wouldn't expect it to suddenly become popular in 100 years.
It's more likely for these systems to slowly evolve into something new rather than being replaced entirely.
MacOS and Linux, at some level, are already very, very different than earlier unix systems. The BSD's are more conservative, but there's still systems being reworked across all of them (HAMMER in Dragonfly, pledge/unveil in openbsd, etc.). The ideas of unix will probably be with us forever, but the precise details of implementation are transient.
I suppose one could imagine something related to dataflow architectures, for example. But history still suggests betting on something that is an offshoot/evolution of basically all the mainstream operating systems we have today rather than a radical ground-up rethink.
The driver problem is even worse now than it was 20 years ago, in that there is quite a bit of hardware that we expect to work as a baseline these days. Each day things like FreeBSD become less and less of an option for general use, due to a lack of drivers. For example, it's a poor choice for newer laptops, as it doesn't have 802.11ac support.
New kernels and OSes that show up are at a massive disadvantage compared to Linux because of this. I think that an OS that prioritizes being able to borrow drivers from Linux or one of the BSDs would give itself an advantage.
The security model of today's desktop OSes is pretty lacking. There's no reason every application you run should have all of your authority right from the start, but generally right now they do. (Sandboxing can improve the situation but getting sandboxes right without restricting the user too much is difficult.)
A personally fascinating situation that doesn’t get talked about much:
1. App is sandboxed, and has to ask for access to every bit of your information (photos, contacts, etc)
2. App asks at a time when it seems reasonable (I’m taking a photo and need to save it)
3. Now the app can exfiltrate everything you just gave it access to (like all photos), and in the majority of cases it can do so without on-device oversight
There's been talk about having the file picker dialog reside outside the specific app sandbox, and the app only receiving (revocable) capability to act on whatever the user authorized.
That is, on file save the app would be handed essentially an open, writable and closeable, file descriptor -- the app might not even know the name of the file it's writing to.
On file open, the app might be handed an open, read-only, file descriptor.
Making such a mechanism usable for both the end user and the app developer gets complicated for sure, but the idea of not just permanently allowing "read+write ~/Photos forever" is definitely out there.
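The core idea - hand the app an already-open descriptor instead of a path plus blanket directory access - can be sketched in a few lines. This is a toy illustration, not any real portal or sandbox API; all the names are made up:

```python
import os
import tempfile

# --- trusted side: a file picker living OUTSIDE the app sandbox ---
# The user picks a file; the picker opens it and keeps the path to itself.
workdir = tempfile.mkdtemp()
chosen_path = os.path.join(workdir, "vacation.jpg")
with open(chosen_path, "wb") as f:
    f.write(b"fake image bytes")

picked_fd = os.open(chosen_path, os.O_RDONLY)  # a read-only capability

# --- untrusted side: the sandboxed app ---
def app_on_file_opened(fd: int) -> bytes:
    # The app can read through the descriptor it was handed, but it holds
    # no file name it could reuse to reopen the file or browse siblings.
    with os.fdopen(fd, "rb") as f:
        return f.read()

print(app_on_file_opened(picked_fd))  # b'fake image bytes'
```

Real systems that work roughly this way do exist (e.g. the desktop portal pattern on Linux), where revocation just means closing or invalidating the descriptor rather than auditing the app's filesystem access after the fact.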
Right, so there are a few layers to this, and lots of them involve the system not following the principle of least authority. For example:
- If you're just taking a photo and need to save it, why does the app need access to all your photos? Surely an append-only capability would be sufficient.
- This depends on the app, but if you're just taking a photo why does the app need internet access? If the app is a typical camera app, it sure doesn't - you might often want to pass the data to an app that does (via e.g. a share sheet) but in general the camera app itself has no need to reach out to the internet. (And if it does, does it really need to be unrestricted access to the internet?)
- Why is it so easy for apps to request access to everything and so hard for the user to say "no, actually you only get to see this"? (iOS has been improving this lately but it's still a pretty rare feature.)
But yes also as you allude to, it's not obvious to the user what access a program has after it's been granted.
The latest iOS lets you grant access to only a limited subset of photos (and I believe separates that from write access). It's still annoying enough that you'll eventually grant access to everything though (i.e. the second time you use the photo picker it doesn't re-prompt for permissions, it just shows the one photo you already granted access to).
I think the answer to that lies in the past: what made us walk away from the OSes of the 70s/80s and move to new things? And in some cases, what made us stay?
Application inertia is hard to overcome. You really need a killer feature and developer adoption to have a chance.
I’d take exception with the idea that internet/web did a better job of defining interfaces. The web equivalent of Unix (i.e. something we’ll probably never move from, and will haunt us forever) is Javascript.
I think it's different because the internet is literally interconnecting devices, some of which might not even be running an OS per se.
Whereas Apple can release a new device on new OS with some custom hardware/drivers and as long as the internet + web parts are compatible with those interfaces there is no issue for customers.
Maybe you're right though, but also isn't that what Posix was meant for?
Also for things like Wi-Fi, they define those standards but the underlying vendor implementation is totally custom and can get really ugly still, just like an OS.
The true benefit of hard interfaces is enforcing modularity, and the benefit of modularity is being able to upgrade a system. (At minimum, emulate & migrate over time)
Even "relatively" simple monolithic systems are nightmarishly complex to change.
Apple doesn't really give a shit about backwards compatibility, past the bare minimum to keep their platform devs from revolting.
POSIX, originally, was intended to provide a solution to every-vendor's-Unix problem. A nice side effect of it was absolutely that we got (mostly) standard interfaces.
I’d argue that Apple’s graphics-first approach did that.
OSX set them back in that sense (yes; and also ahead. Don’t @ me), but I personally think there are some current opportunities around making graphics-first (and task-first) operating systems.
I think some of that depends on what you count as "new" - if you're going to hold "inspired by" or "based on" against it, then there will probably never be anything new under the sun. This is both because, indeed, throwing away all existing standards/programs/drivers/interfaces requires an absurd investment and makes adoption unlikely, and because at some level it has all been done before - there just aren't that many ways to coherently store data, and files+databases+tagging turns out to cover pretty much everything.
> how long will it be before some group or company creates a truly new operating system that takes off?
You won't get another Windows / Linux / BSD. They do their jobs very well, and as WSL emulating Linux proves, they are near interchangeable. I have no doubt Linux could emulate the Windows kernel near perfectly too if we had the Windows source to see what is required. When you've already got a slew of interchangeable parts, why create another one?
If something new is to displace them, it's going to have to do something very different. The only thing I can see is not a replacement, but a security kernel that lets us establish a trust chain from the hardware to some application. That wasn't necessary when the hardware sat on your lap or desk, but now that we tend to rent CPU cycles from a machine on the other side of the planet and still want some assurance of privacy, it's becoming kind of essential. There are a number of proprietary ones out there now, but I think those are doomed to fail. No one in their right mind would put their faith in a binary that could be in cahoots with anybody, from the US government to a ransomware gang.
You can have totally different models based on the same kernels. Android (Linux), Chromium OS (Linux including bits of Gentoo), and iOS (Darwin / XNU / Mach and BSD) all come to mind as very different OSes from traditional UNIX systems in terms of architecture, especially with regard to package management, updates, user accounts and access control, sharing files or making IPCs across applications, resource management, etc.
The kernels are a small part in the overall picture. Userspace apps today are abstracted under layers upon layers of runtimes and libraries, to the point your interaction with the OS is minimal if any.
There’s simply no incentive to upend the current order; and to do that, you either need to control your hardware stack completely or get buy-in from vendors to develop drivers.
Mainstream OS platforms (on desktop, server, mobile) are driven by path dependency, inertia and network effect. Embedded side is slightly less constrained but heavily weighed by those as well. Whether it would technically make sense to start a new OS has only peripheral relevance unless there are giant, overwhelming advantages (and few disadvantages) vs incumbent ones.
Any credible attempt will have to be designed from the start to focus on compatibility with whatever it replaces, have huge investments in the ecosystem transition from the corporate owner, have a credible commitment to the phaseout of the superseded OS, and be prepared for a very long transition period.
Will be interesting to see what happens with Fuchsia. Its success might be a big loss to open computing though.
By open computing I meant platforms that are user-controllable and support general-purpose computing (instead of walled gardens).
Android has been steering away from that position even though Google does still choose to publish AOSP as code drops. Being based on Linux has definitely helped keep it open a while longer. It feels likely that Fuchsia would in the best case have a role similar to Darwin's.
I have a feeling that quantum computing will require this change. Right now it's just proprietary tools to write and load software, but it will likely one day become a full OS, with a different paradigm than today.
It's not harder to walk away from them now than it was 20 years ago, probably easier now with so many things written in cross platform/web app format. I can't really agree with you. If someone comes up with a new OS that has some OMG 10X advantage (due to software or some new leap in technology) over current ones then it will "win" the OS war. I just don't share your concerns.
Here are my thoughts. They're not too well developed, to be honest. I'm probably wrong about some of this and need to research more.
The "new thing after Unix" is already here and has been for some time. It's called Hurd. It depends on a microkernel whose only job is to pass messages between different components, but microkernels don't work well (at least not better than monolithic kernels) on x86 due to context-switching overhead.
x86 CPUs maintain their architecture for software compatibility. That's what we're really stuck on. A lot of improvements have been bolted onto x86 - caches, all the instruction-stream acrobatics, SIMD, virtualization, long mode, etc. - but it's still "heavy"; things are done to make it look fundamentally the same to existing programs.
When CPUs start becoming cheap little tiny things set in a superfast fabric to talk and cooperate with potentially thousands of other cores (like GPUs) as fast as they can chew local instructions, and all the stuff with I/O is worked out, we're ready to take the next step. You can see a little bit of the future with things like the Cell BE, but it needs to be much higher scale to change what OS is dominant.
Unix itself is ~20 years older than Linux's start.
NeXTSTEP implemented Unix atop a microkernel for security benefit.
Hurd splits all the subsystems of Unix into independent facilities that are connected via the microkernel. One of them can crash and not affect the rest of the system, but also I got the idea reading about Hurd that there's no particular requirement one subsystem lives on a specific CPU or even machine as long as they can communicate.
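The fault-isolation idea can be shown with a toy sketch. This is in no way how Mach or Hurd actually implement IPC - it's just a few functions and queues playing the roles of the kernel router and two servers, to show that a fault in one server need not take down the others:

```python
import queue

# Toy model: the "kernel" only routes messages between isolated servers.
inboxes = {"fs": queue.Queue(), "net": queue.Queue()}

def fs_server(msg: str) -> str:
    raise RuntimeError("disk driver bug")  # simulated fault in one server

def net_server(msg: str) -> str:
    return f"net ok: {msg}"

handlers = {"fs": fs_server, "net": net_server}

def kernel_send(server: str, msg: str) -> str:
    # Deliver the message and run the target server; a crash is contained
    # to that server (in a real system it could be restarted in isolation).
    inboxes[server].put(msg)
    try:
        return handlers[server](inboxes[server].get())
    except Exception:
        return f"{server} crashed; others unaffected"

print(kernel_send("fs", "read /etc/motd"))     # fs crashed; others unaffected
print(kernel_send("net", "ping example.com"))  # net ok: ping example.com
```

In a monolithic kernel the equivalent of that `fs_server` fault is a kernel panic; here the "net" service keeps answering, which is the robustness argument for the microkernel split.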
> but microkernels don't work well (at least not better than monolithic kernels)
In terms of performance, isn't "at least not better" (meaning, as good as) sufficient? Because the point of microkernels AFAIK isn't better performance, but security (edit - and robustness).
> on x86 due to context switching overhead
How high is the extra overhead of Hurd over a more conventional OS?