What is sad is that even though Linux now has hardware support that is miles ahead of Windows, we've exchanged one problem for another: nowadays most of the hardware I see is supported on Linux and nothing else.
Even on PCs, latest-generation AMD graphics cards (already >1yr old) are not supported in _anything_ other than Linux (and Windows). This is just sad.
FreeBSD uses a compatibility layer to run the Linux graphics drivers, though it lags behind Linux. So if FreeBSD currently does not support the graphics cards, it will soon. It looks like they are currently porting over 6.11: https://github.com/FreeBSDFoundation/proj-laptop/issues/41
It is the _only_ OSS operating system that supports AMD cards from this decade, and it does so by having to emulate the Linux kernel API, and yet _still_ it lags years behind Linux itself. I've chosen this example for a reason -- this is exactly what I'm sad about.
Not to sound like a goalpost mover, but once you've tried 802.11be @ 6 GHz, you never go back (assuming whatever your AP is connected to can handle it).
That's my problem with FreeBSD on non-servers - eventually it's supported, usually via a Linux shim, but it's too late. By the time FreeBSD started supporting (on CURRENT) the GPU that forced me to switch, I had already upgraded twice.
eh I've got 802.11be @ 6 GHz at home (U7 Pro AP and QCNCM865 client) and it is truly impressive, but I only notice the difference once every 1-2 years when I'm transferring a full-disk image over the network for backup. We've long passed the point where the speed improvements matter for daily usage (browsing the web, streaming video, remote desktop, installing updates). For those use cases, you wouldn't notice a slowdown on 802.11ac, and I'd argue even 802.11n would be fine.
The one exception I can think of would be video content creators since they end up with large amounts of raw video that would benefit from transferring at much-faster-than-streaming speeds.
And I guess steam downloads if you don't plan ahead at all, but if I'm planning to play a game later, I'll tell steam to install it hours or days in advance.
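To put rough numbers on why the gap only shows up for full-disk transfers, here's a back-of-envelope sketch; the throughput figures are assumed round numbers for illustration, not benchmarks of any particular gear:

```python
# Rough time to move a 500 GB disk image at plausible real-world
# wireless throughputs (assumed figures, not measurements).
image_gb = 500
rates_mbps = {"802.11n": 150, "802.11ac": 600, "802.11be": 4000}

for std, mbps in rates_mbps.items():
    seconds = image_gb * 8000 / mbps  # GB -> megabits, then divide by rate
    print(f"{std}: ~{seconds / 60:.0f} min")
```

At these assumed rates the image takes roughly 7.5 hours on 802.11n, under 2 hours on 802.11ac, and well under half an hour on 802.11be - a difference you feel for backups, and not at all for streaming or browsing.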
I often have to transfer large files between machines on the LAN. 802.11ac and 802.11n are definitely enough most of the time, but for those transfers, 802.11ac makes you remember that it's wireless, while 802.11be makes you forget.
I have the same setup for a Framework mainboard next to the AP, and it's reliably faster than using their USB-C Ethernet expansion card.
I wouldn’t mind faster wifi speeds, but reverse engineering has always been slow (e.g. Asahi Linux, and they only have a few devices to investigate).
Windows is cleaning up a lot of legacy drivers. A bunch of printers (+ scanners) that predate updates to the printer driver framework in recent versions of Windows just don't have functioning drivers anymore, despite the hardware being perfectly functional.
All these devices work out of the box on linux, more or less.
Most devices that you can buy for under $400 now run on ARM chips (frequently Mediatek). We're talking tablets (with keyboards), convertibles, even outright laptops (i.e. "netbooks"). These things qualify as computers. They are replacing traditional laptops, just as those replaced desktops.
If we're looking at sub-$400 computers, especially on ARM, it seems like we have to include the large segment of ChromeOS devices that only run Linux out of the box (or at all, generally).
Referring to Intel Chromebooks (i.e. laptops), that segment is now dwindling in size much as its predecessor (Intel Windows netbooks) did a few years ago. Most low-end ChromeOS devices now run on ARM. And Android is nipping at their heels.
Netbooks were originally Linux. MSFT created a special licensing class just to try to undercut it. It wasn't great, but because of Windows and Microsoft licensing, it quickly took off. People realized Windows on netbooks sucks, thought that meant netbooks sucked, and eventually netbooks died. Until, arguably, ChromeOS arrived.
Sure. And all of those devices run Linux. Some of them even run other Linux OSs decently; one of my daily drivers is an ARM Chromebook running postmarketOS.
It is not trivial to get FOSS Linux onto a write-protected Intel Chromebook, compared to a Windows netbook of yore. It is harder still to get it onto an ARM Chromebook or Android tablet. PostmarketOS is a bit simpler (or at least better documented) but it is not a full Linux distro.
Installing a fully-fledged FOSS OS on low-end general-purpose computing hardware is getting harder. Certainly for the non-techies who have to be part of FOSS if it is to survive.
I think it's better and worse in slightly different ways. On the one hand, yeah a Chromebook won't let you touch the default OS without switching to developer mode, and won't let you install a new OS without disabling the write-protect screw or firmware option. On the other hand, every ChromeOS device allows you to do exactly that, and then you can run whatever you want and you should have at least some support for upstream Linux because ChromeOS upstreams their drivers. (I will happily agree that the Android device situation is awful.)
> PostmarketOS is a bit simpler (or at least better documented) but it is not a full Linux distro.
By what definition is PMOS not a "full" distro? It's Alpine plus some extra stuff, including device tweaks and out-of-the-box desktop environments.
Most devices in that class I see run some vendor flavor of Android or ChromeOS and not Windows, so definitionally speaking they do run Linux out of the box.
Yes but it's a bit academic. The problem is that getting a FOSS distro of Linux onto low-end general-purpose computing hardware is harder now than it was a decade ago. I speak from bitter recent experience.
Oh, I know perfectly well what you mean. The move to the SoC paradigm has serious implications for the future of computing freedom. I can't imagine how we might be able to fight this crap, realistically.
There is only one area where Windows excels: recent PC-based hardware. For everything else, primarily including anything that is not a PC, and anything from more than a decade and a half or so ago, Linux is miles ahead, and there's no discussion possible.
Whether this is any helpful to us is another story.
I think of it more in the reverse, the choice being removed is the hardware you can use. It has been the case from the dawn of computing that you start from a usecase (which correlates to software, which maps to an operating system) and then look at your options for hardware. The more specific your usecase, the more specific your software, which correlates to a specific choice of hardware. There is no, and can be no, "have it all". It's a fundamental principle of mathematics, the postulates you choose radically change the set of proofs you have access to, and the set of proofs you choose entail the axioms and structures you can take.
Now it can be better or worse, and right now it's never been better. There was a time when your language, your shell and your operating system were specific to the exact model of computer (not just a patched kernel, everything was fully bespoke) and you had a very limited set of peripherals. That we suffer from more esoteric operating systems lagging behind the bleeding edge of extremely complicated peripherals is a very good place to be in. That there's always room for improvement shouldn't be cause for sadness.
> Now it can be better or worse, and right now it's never been better. There was a time when your language, your shell and your operating system were specific to the exact model of computer
No, it is not. There was a small period of time between the 90s and the 2010s where you could grab almost every 386 OS and have it run your hardware mostly decently, and if not, drivers could easily be written from manufacturer specifications. That time was definitely better than what we have today, or what we had before then. I am writing this as someone who has written serial port controller drivers for the BeOS.
> That we suffer from more esoteric operating systems lagging behind the bleeding edge of extremely complicated peripherals is a very good place to be in.
This is the wrong logic, because operating systems become esoteric since they can't support the hardware, and hardware becomes complicated and under-specified because there's basically only one operating system to take care of. You may _think_ you have no reason to be sad if you're a user of Windows or Linux, but you have plenty anyway.
> There was a small period of time between the 90s and the 2010s where you could grab almost every 386 OS and have your hardware mostly decently run for it
And prior to that, you could grab every OS running on IBM clones and not have to worry about graphics drivers at all, because graphics acceleration wasn't a thing. The era you refer to had already introduced software contingency on hardware within x86. This disparity was further compounded in the mid-2010s as GPUs exploded in complexity and their drivers ballooned to tens of millions of lines of code, eclipsing kernels themselves. This is not distinguishable from the introduction of graphics drivers in any generalized manner. They were driven by the same process.
An important thing I want to point out as well; you're doing a lot of heavy lifting by limiting the pool to x86 computers, which is already giving up and admitting to a very strong restriction of hardware choice. Don't take that as pedantry, it's a very well hidden assumption that you've accidentally overlooked, or in the case that you think it's irrelevant, I'm letting you know that I don't consider it irrelevant in the slightest. When I think of computers, I'm not just thinking of x86 PCs. In the 90s I'm thinking of SGI workstations, Acorns, Amiga, Macs. I'm thinking of mainframes and supercomputers and everything else.
> This is the wrong logic, because operating systems become esoteric since they can't support the hardware
On the contrary, I assure you that this logic rests on faulty premises. As a general principle it's clearly false since most operating systems (which are long forgotten) predate it by decades, and in the specific context of Linux winning over FreeBSD, it's still not applicable as that happened smack dab in this era you describe.
> You may _think_ you have no reason to be sad if you're a user of Windows or Linux, but you have plenty anyway.
I'm a user of Linux, FreeBSD and 9Front. I just don't buy (and never have bought) hardware at random. You can reason your way into sadness any which way, but rationalization isn't always meaningfully justified. I just don't find it sad that my second desktop can't have an RX 9000 whatever in it. Where's the cut off line for that? Why not be sad that I can't jam a Fujitsu ARM processor into a PCIe slot as another type of satellite processor? The incompatibility is of the same effect, but I don't see you lamenting or even considering the latter, as though mounting a processor to a PCB is somehow fundamentally less possible than writing a modern graphics driver.
> And prior to that, you could grab every OS running on IBM clones and not have to worry about graphics drivers at all, because graphics acceleration wasn't a thing. The era you refer to had already introduced software contingency on hardware within x86. This disparity was further compounded in the mid-2010s as GPUs exploded in complexity and their drivers screamed into tens of millions of lines of code, eclipsing kernels themselves.
Not at all; I excluded this early era because you could _not_ be sure to find an OS that would support your graphics card at all, other than maybe what the BIOS supported. I am talking about the 90s because GPUs already had plenty of non-BIOS-supported features, like multiple CRTCs, weird fixed acceleration pipelines, weird coprocessors with random ISAs, and yet you could still find operating systems with 3rd party drivers supporting them.
It is a _perfectly_ distinguishable era. See how many OSes support 3D cards from the era like i9xx. Heck, FreeBSD itself qualifies, but also BeOS and many others.
In addition, I am talking about the _kernel_ part, which by any logic should be ridiculously simple. E.g. this is not a compiler for a random ISA or anything like that. It is what in Linux you would call a DRM driver, and the only reason they are complex and millions of LoC is that they are under-specified, by AMD and the rest. Most lines of AMD driver code in Linux are the register indices for each and every card submodel (yes, really, header files!), when clearly before they would just have standardized on one set and abstracted from it. Compare AtomBIOS support in cards from a decade ago and cards from today. It is literally easier today for a 3rd party to implement support for the more complicated parts of the GPU (e.g. running compute code!), which AMD more or less documents, than it is to implement basic modesetting as it was in the 00s. This has happened!
Hardware may be more complicated, but interfaces needn't be more complicated. This, I believe, is a symptom, not the cause.
> I just don't find it sad that my second desktop can't have an RX 9000 whatever in it. Where's the cut off line for that? Why not be sad that I can't jam a Fujitsu ARM processor into a PCIE slot as another type of satellite processor?
You do not find it sad that there is no longer any operating system other than Linux supporting any amount of hardware, simple or not?
Also, you call every non-Linux OS "esoteric" as a counter-argument to my point, yet you try to use support for definitely esoteric hardware (which would even be hard to acquire!) as an argument for your point, whatever it is? When I'm complaining that I can no longer rely on FreeBSD, literally the open OS with the 2nd-most hardware support, to support basic hardware (!) from this decade, when in the past I could more or less rely on _all_ BSDs supporting it, as well as a myriad of other OSes, the argument that "oh well, it never supported hardware that is impossible to find in stores anyway, so I don't care" sounds pretty hollow.
Certainly even slightly deviating from the popular hardware has always resulted in diminishing returns, but today it is much worse, _except_ for Linux.
You boot an operating system on the machine, you have access to all unencrypted files - what is so strange about this? You can do the same thing with Terminal. And it smells of GenAI...
Actually, this is a distinction worth clarifying: in Recovery Mode, Terminal does require mounting the data volume first, which typically prompts for an admin password. Safari bypassed this entirely, writing directly to protected system locations without any authentication. Furthermore, no GenAI was used in writing the article. I come from an Egyptian-speaking background, so my English may be a bit funky, sorry :)
I concur, that is the normal behavior without FDE. But besides, you can still use the Terminal of _any_ other bootable OS X disk, not just the recovery itself. With FDE, neither of these will work.
Yep. While the Terminal is not an option from the 4 apps listed in the initial screen, it's available from Utilities → Terminal at the top. They even provide a convenient way to access the hard drive from another computer: https://support.apple.com/guide/mac-help/macos-recovery-a-ma...
You're right that Terminal is accessible via Utilities, but Target Disk Mode and Terminal both require an admin password. Safari bypassed that authentication entirely, writing directly to protected system locations with no admin password.
Apple tries to lock down access at the very least. They also patched the vulnerability twice (they restricted Safari for some reason and they also disabled the settings in the new version of Safari). It seems like Apple cares at the very least. Which is weird, because they also give you a terminal?
Lots of people I've met were surprised that I was able to get their photos from their windows laptops without ever needing their password. Especially these days in the age where even phones and Windows 11 will enable encryption by default, it's a tad weird that disk encryption isn't on by default on macOS. I, at the very least, was surprised that disk encryption isn't mandatory and always on on macOS, seeing the way Apple controls both the OS and the TPM firmware so that they're pretty much immune to the dreaded "BIOS update made my laptop ask for bitlocker" problem you get on Windows.
I don't really get why this would be AI generated, what makes you think that?
Completely agree on the encryption point. Apple controls the entire stack and could mandate FileVault encryption by default. The fact that it's opt-in is a weird decision that hasn't caught up with their security posture elsewhere.
On the Terminal point, it's worth clarifying that the Recovery Terminal does require mounting the data volume first, which typically prompts for an admin password. Safari bypassed that step entirely, which is what made it interesting.
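For concreteness, a minimal sketch of what that mounting step looks like from the Recovery Terminal (Utilities → Terminal); the volume identifier below is illustrative, and `diskutil apfs unlockVolume` is what prompts for a user passphrase when FileVault is on:

```shell
# In Recovery, a FileVault-protected Data volume starts out locked.
diskutil apfs list                       # find the APFS "Data" volume
diskutil apfs unlockVolume /dev/disk3s5  # illustrative identifier; prompts for a passphrase
```

Without FileVault there is nothing to unlock, which is why both the Terminal and the Safari trick can read user data freely.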
Interesting point on the missing admin password, that does pose a slightly higher risk.
Though IIRC, at least the Intel Macbooks still support some kind of Target Disk Mode that should also bypass the admin password? I don't know if that requires an admin password but none of the guides I can find online state that it's required.
I come from an Arabic-speaking household so my English can be a bit funky sometimes, sorry. However I did use Claude to help format the CVSS tables and polish the grammar in the formal Apple submission (I was 17 submitting to a major company's security team for the first time). The research and findings however are entirely original.
EDIT: The person I replied to entirely rewrote their comment (with no indication they did so) so mine seems weird now, apologies for that.
Apple fixed the issue, it seems, but did kind-of-sort-of ignore it. The argument from the OP is that while it requires physical access, you don't need to convince the user to do anything; the attacker can do it...
...which Apple pointed out (in the article you're commenting on) that if FileVault was enabled this wouldn't be possible, which is true.
And if you have physical access and no encryption, then it's kind of game over anyway. But still, kind of neat to find something like this and Apple fixed it regardless
Cool, but isn’t this “the next time”, after “the next time”, after “the next time” already? These companies have been threatened, sued, and incentivized in numerous ways over many years, which has yet to be successful, and yet it seems like you are suggesting “just one more time” will be the impetus for change…this time…you swear…probably?
Note: I don’t disagree or agree, rather, I’m pointing out how flawed the logic is that just one more time will be what it takes.
> isn’t this “the next time”, after “the next time”, after “the next time” already?
No, it’s the time that it worked. The cable company upgraded. That’s all that matters. Whether it’s happened many times or not is irrelevant. The next time will come next time.
> which has yet to be successful
OP said they laid the fiber. It was literally successful. Preëmptively striking your service provider because they might screw you in the future is silly.
> We end with a real-life offer for any PC OEM that's willing to challenge the monopoly: Load the BeOS on the hard disk so the user can see it when the computer is first booted, and the license is free. Help us put a crack in the wall.