Note that most of this text is from at least 2012 (check the wiki history, which starts in 2013 with an indication that this is an import of existing content).
"Commercial Unix" was a defacto phrase used by places like Gartner and IDG that meant the crop of things like AIX, HPUX, Solaris, and so on. It always excluded Linux and MacOS.
Google for phrases like "idg unix shipments solaris" and you'll see some of the context from that time period.
It still is. It's just bundled with the hardware, and always has been. What changed is that upgrades used to cost money, and now don't. You know it's an upgrade because you aren't allowed to install it on anything that didn't ship with MacOS X out of the factory.
Taken in combination, the two words "Commercial Unix" historically refer to Unix from Sun Microsystems, HP, IBM or SGI. They are not a reference to all variants of Unix that are sold.
I don't think Android is an officially licensed version of UNIX though? (I wouldn't normally nitpick, but I assumed that's what was meant by "commercial UNIX".)
macOS as of Leopard is “certified UNIX” (Single UNIX Specification v3). Darwin is the “Unix core” that macOS is built on, but it hasn’t been certified by itself[0].
It’s POSIX conforming but nobody, no distribution, has ponied up the money or taken the time to do the conformance testing necessary to get branded as UNIX. Additionally, it sounds like this testing would have to happen with every new release.
Why spend all that money and time when Linux has so dominated the certified Unices for some time?
> [Shanghai, China, September 9, 2016] Today Huawei announced that its EulerOS 2.0 operating system for its KunLun mission critical servers has earned UNIX 03 certification from The Open Group. This means that Huawei EulerOS 2.0 has been officially certified as a UNIX system.
> Inspur K-UX 2.0 and 3.0 for x86-64 were officially certified as UNIX systems by The Open Group.
[snip]
> Inspur K-UX 2.0 was one of six commercial operating systems that have versions certified to The Open Group UNIX 03 standard (the others being macOS, Solaris, IBM AIX, HP-UX, and EulerOS).
> Systems that have been licensed to use the UNIX trademark include AIX,[35] EulerOS,[36] HP-UX,[37] Inspur K-UX,[38] IRIX,[39] macOS,[40] Solaris,[41] Tru64 UNIX (formerly "Digital UNIX", or OSF/1),[42] and z/OS.[43] Notably, EulerOS and Inspur K-UX are Linux distributions certified as UNIX 03 compliant.[44][45]
Darwin possibly still shares the few files identified in the infamous Unix wars lawsuits, but it doesn't really share code with Solaris (which is System V).
Only the ISO C standard library is part of the public API for Android apps; use anything else at the peril of the app crashing across devices or being killed by Android security measures that prevent access to private APIs.
EDIT: To make it more explicit, given that people keep mistaking Android for Linux:
The Android ISO C standard library is derived from the BSD libc, which is derived from the UNIX libc, so there's clear evidence of UNIX in Android userland.
Isn’t Android a fork of the Linux kernel wrapped in a whole lot of Java? It unquestionably dominates its market, smartphones and it’s-a-laptop-and-definitely-not-a-netbook kinds of devices, but even as a Linux variant it seems very much in its own little world. Not the most popular commercial Unix implementation so much as the most popular OS ever, one that also happens to be more Linux-like than Unix-like.
Its primary interface for users and developers is indeed JVM-based languages. However, Android is more than that. It is a capability-based, microkernel-like OS implementation on top of Linux. Everything in Android goes through IPC. Every process, drivers included, is heavily isolated using SELinux and permissions.
Android isn't unix. It has a Linux kernel (which may or may not be unix, a combustible discussion I'd rather avoid), but the user space is definitely not.
Solaris is a SysV Unix, whereas macOS is a BSD, not sure they were thinking of that distinction. Anyhow, when you’re talking about mounting /usr from the network, it’s safe to assume server workloads where macOS is simply nonexistent.
I think a lot of people who think "macOS isn't Unix" (or similarly, "z/OS isn't Unix") aren't aware of the legal status of the term "Unix" – that it is a registered trademark owned by The Open Group, and legally a trademark means whatever the trademark owner says it means – and The Open Group is very clear that "Unix" means any system, irrespective of its marketing or heritage or other functions, which passes their test suites and whose vendor pays the licensing fee.
MacOS is a commercial OS that happens to have a Unix substrate, but does not tend to show it too much. The Unix part is an implementation detail, much like the CPU architecture. It is not marketed as a Unix workstation, much like Android phones are not marketed as Linux handhelds.
MacOS is a certified UNIX, and Android phones cannot be marketed as Linux handhelds, because there is zero Linux in their Java, Kotlin, ISO C and ISO C++ userspace for app developers.
And for a couple of years after they sorted out the compliance, they really pushed UNIX in their advertising materials. After all, they paid more than $20mn to sort out compliance.
> MacOS is a commercial OS that happens to have a Unix substrate
I am not sure what you’re implying. There always have been a lot of UNIXes that were “commercial OS”. Historically, the free branches of the UNIX family tree have been way out-numbered by commercial distributions.
> but does not tend to show it too much
That does not really matter (besides the fact that the terminal has always been there). The fact that there is a GUI shell on top does not change the architecture of the OS.
> The Unix part is an implementation detail.
These days, UNIX is a certification. Otherwise it refers to the architecture and origins of the OS, from the initial AT&T UNIX. The only purity test is the certification.
> It is not marketed as a Unix workstation, much like Android phones are not marketed as Linux handhelds.
How it is marketed and what it actually is are orthogonal concepts. It is UNIX, whether this annoys some people or not.
> With the merged /usr directory we can offer a read-only export of the vendor supplied OS to the network, which will contain almost the entire operating system resources.
Back in 2005 or 2006, I made a Linux that exported the filesystem this way: it mounted / as a network volume, doing a particular dance with "pivot_root" at the beginning and a union mount with a tmpfs volume. We used it to play LAN games in the computer lab. Because the network-booting Linux didn't touch the local hard drive, you could clean up after yourself just by shutting down the machines. It was also nice and fast, because the machines all had gigabit ethernet and the file server was serving from cache most of the time. It ended up being noticeably better than running off the local hard disk... this was the era before SSDs took over, obviously.
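For anyone curious, the dance looks roughly like this in modern terms (overlayfs standing in for the union mount we used back then; the server name and mount points are made up):
    mkdir -p /mnt/lower /mnt/rw /mnt/newroot
    # Read-only root from the file server, plus a tmpfs for writes
    mount -t nfs -o ro,nolock fileserver:/export/rootfs /mnt/lower
    mount -t tmpfs tmpfs /mnt/rw
    mkdir -p /mnt/rw/upper /mnt/rw/work
    mount -t overlay overlay \
        -o lowerdir=/mnt/lower,upperdir=/mnt/rw/upper,workdir=/mnt/rw/work \
        /mnt/newroot
    cd /mnt/newroot
    mkdir -p mnt                 # put_old for pivot_root
    pivot_root . mnt             # old root ends up under ./mnt
    exec chroot . /sbin/init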
I got a couple of extensions on homework assignments in college on account of the entire computer lab being dead because of Solaris and NFS getting into a cage match with each other. The admins had to restart things Apollo 13 style.
It was also used to share /usr in a large number of virtual guests. Large as in thousands of guests, for example "your personal web server with root account and perl/php" running on an s390x mainframe.
> /bin/ User utilities fundamental to both single and multi-user environments. These programs are statically compiled and therefore do not depend on any system libraries to run.
> /sbin/ System programs and administration utilities fundamental to both single and multi-user environments. These programs are statically compiled and therefore do not depend on any system libraries to run.
That distinction only matters when you have / and /usr separate, which basically never makes any sense in practicality.
There is some case for throwing /usr/share on a separate partition, as some packages dump quite a lot of data there (KiCad for example dumps part libraries there, which are like 5+ GB), but that's the single case in "basically never" where I had something mounted in or under /usr.
But honestly, if you have a case of "I REALLY need a rescue environment that's statically compiled".... just throw a busybox binary on your root partition somewhere. Most distros have it in "busybox-static" package.
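Something like this, assuming a Debian-ish system (the package name and paths vary by distro):
    apt-get install busybox-static     # statically linked /bin/busybox
    cp /bin/busybox /rescue-busybox    # keep a copy directly on the root partition
    /rescue-busybox sh                 # emergency shell with no library dependencies
    # inside, run other tools as applets: /rescue-busybox ls, /rescue-busybox mount, ...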
> That distinction only matters when you have / and /usr separate, which basically never makes any sense in practicality.
No? The distinction could also be self-imposed, i.e. you just put statically linked "rescue" binaries in /sbin just for the sake of order/principle, even if it happens to be in the same disk or the same partition as /usr.
It may be absolutely pointless, but then so would be literally any other partitioning of files you can think about. E.g. what's the point of creating a directory for each program ala Windows/OSX? What's the point of making sbin vs bin? What's the point of putting data files in share instead of lib? What's the point of creating any hierarchy whatsoever instead of just throwing everything inside the same directory and let the package manager handle it?
You can find very thin arguments for all of these, most of them having to do with historical reasons or subjective reasons, like some human abstractly-defined sense of simplicity, "cleanliness" and order.
And since then some /sbin programs have become dynamically linked (for security) and the quoted text has changed:
> /sbin/ System programs and administration utilities fundamental to both single and multi-user environments. Most of these programs are statically compiled and therefore do not depend on any system libraries to run.
That's a nice distinction, but it's not really practical on Linux. Statically linking glibc is explicitly not supported (and for arguably good reasons).
Basically, glibc uses dlopen to open certain shared libraries for things like nsswitch and locales. Using dlopen in a program that's statically linked to glibc causes all kinds of issues, as you'll likely end up with multiple copies of potentially different versions of glibc in the same process.
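A quick sketch of how this bites in practice (the file name is made up): getpwnam() goes through NSS, which dlopen()s the libnss_* modules at run time, so a "statically linked" glibc binary still needs the matching shared glibc installed.
    cat > nss_demo.c <<'EOF'
    #include <pwd.h>
    #include <stdio.h>
    int main(void) {
        struct passwd *pw = getpwnam("root");  /* NSS lookup -> dlopen at run time */
        if (pw) puts(pw->pw_dir);
        return 0;
    }
    EOF
    gcc -static -o nss_demo nss_demo.c
    # glibc's link step warns here that the "static" binary still needs the
    # matching shared glibc (and NSS plugins) present at run time.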
What would it take to put out a new edition of FHS, that permits (doesn't require, just permits) layouts with everything in /usr and symlinks from the directories in / to the directories in /usr?
(Assume someone was willing to write and supply the text.)
I mean, as someone whose job systemd made noticeably easier (a lot of what it does is a blessing for ops, and a lot of "simple" init scripts are actually subtly broken, despite how "easy" they are to write), I still don't like his attitude and the "it works on my laptop, clearly that's what the rest of the world needs" design decisions around the stuff he touches.
I have absolute confidence that he's entirely focused on stuff working well on his laptop and that's about that.
You can just smell that the man has attachment issues with his design decisions and code. Reimplementing stuff that works fine, but badly, also seems to be his fucking hobby.
> That means scripts/programs written for other Unixes or other Linuxes and ported to your distribution will no longer need fixing for the file system paths of the binaries called, which is otherwise a major source of frustration
I don't know what this guy is smoking, but this has never been a problem.
I used to work at a startup where one (1) engineer ran FreeBSD. When he joined, he sent a pull request editing the shebangs of all our shell scripts from #!/bin/bash to #!/usr/bin/env bash – because evidently FreeBSD has bash under /usr/bin, but all our systems agreed on where to put env.
I still write env bash shebangs to this day, in his honor. Happy to hear there's progress towards ending this archaic separation.
The main reason to use env shebangs is versioning. MacOS ships Bash v3, but Homebrew installs Bash v5. Which shebang will you use - #!/bin/bash, #!/usr/bin/bash, #!/opt/homebrew/bin/bash, #!/usr/local/homebrew/bin/bash ? Same for different versions of Python (in venvs, pyenvs, etc) and other applications where you may want to run different versions. With the env shebang you can always change your PATH to choose your version.
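A minimal sketch of how the PATH-based selection plays out (the Homebrew path assumes a typical Apple Silicon install):
    #!/usr/bin/env bash
    # Resolved through PATH, so whichever bash comes first wins:
    #   PATH=/opt/homebrew/bin:$PATH ./which-bash.sh   # Homebrew bash 5
    #   PATH=/bin:$PATH ./which-bash.sh                # macOS system bash 3.2
    echo "running bash ${BASH_VERSION}"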
That’s probably a good practice anyway. /bin/sh is guaranteed to exist, but there’s no reason to assume bash is installed at all. If it is installed, /usr/bin, /usr/local/bin, /opt/bin are all reasonable locations.
Even /bin/sh is not guaranteed to exist, and when it does exist, it is not guaranteed to be useful. It is not a location specified by POSIX, and implementations exist that do not place the standard sh utility in /bin. The big one was SunOS/Solaris, where /bin/sh did exist but was the pre-POSIX implementation, and /usr/xpg4/bin/sh existed for when you wanted POSIX sh. Android puts it in /system/bin/sh and not all of them have the compatibility symlink /bin -> /system/bin. There are probably other special cases as well.
POSIX explicitly says "Applications should note that the standard PATH to the shell cannot be assumed to be either /bin/sh or /usr/bin/sh, and should be determined by interrogation of the PATH returned by getconf PATH [...]"
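In script form, the lookup POSIX is pointing at looks roughly like this (getconf and command -v are both standard utilities):
    # Resolve the standard sh without assuming /bin/sh exists or is POSIX
    sh_path=$(PATH=$(getconf PATH) command -v sh)
    echo "standard sh lives at: $sh_path"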
And that’s the way it is in FreeBSD… /bin/sh exists, but it isn’t bash. Instead, bash must be installed by the user, and is thus installed in /usr/bin.
It's sort of funny, AFAIK the usual way to effect the merger is to place a symlink from /bin to /usr/bin. This is how e.g. Arch Linux does it. So ironically, the archaic #!/bin/bash continues to work normally on /usr merged systems.
On a slightly related subject, what’s the best way to solve the shell path problem when you are using LDAP with the userShell attribute? If I create a user and set its shell to /bin/bash but that user logs into a system where bash is in another location (say macOS or whatever)?
Bash is distributed by GNU, not FreeBSD. FreeBSD distributes instructions for fetching, compiling, and packaging Bash (the ports tree and the Makefile for the bash package), but that is not the same as distributing Bash. Additionally, Bash is developed by GNU without being sponsored by FreeBSD; it is neither first- nor second-party software, leaving third-party software as the only option left.
To assume otherwise would have to assume that all ports come from FreeBSD and that's a bit absurd. I don't think anyone's going to say that Bash, or say KDE, is a "FreeBSD project" any time soon.
And all that just to point out that /usr/local on FreeBSD is intended for non-OS software installs (it's not the only option you have on the system, /opt isn't unheard of, but mixing up /usr/bin with non-OS software is a fast recipe for disaster).
I know about this, and in fact I wrote an alternative solution for GNU Coreutils env in 2017: a complete patch with updated documentation and test cases.
Here is a problem: interpreters take arguments such as options ... but, separately, so do interpreted programs. Sometimes, you'd like to control both.
I made an extension to env whereby you could do:
#!/usr/bin/env :interp:--foo:{}:--bar
This would find the "interp" interpreter using a PATH search and then pass it the arguments --foo <scriptname> --bar, followed by arguments that came from the invocation of the hash-bang script. If the special argument {} does not appear, then interp will be invoked with --foo --bar <scriptname>: the script name is not relocated into the arguments.
However, the maintainer expressed a preference for compatibility with FreeBSD -S.
I took a look at the requirements and saw that FreeBSD's env -S was doing a whole lot of stuff: interpreting C-like character escape sequences like \n, and interpolating dollar-sign-sigiled variables like $USER. In spite of all that, I didn't see the feature of being able to insert the script name into the middle of the arguments, like in my solution.
It amounted to throwing away my patch and implementing some FreeBSD stuff I had no interest in and a lot of which I thought was a bad idea, while failing to achieve my original functionality. So, I just did the first part: throwing away my patch.
The typical use case for env -S is a script passing arguments to the interpreter (#!/usr/bin/env -S interp --foo → interp --foo ./scriptname), and the user can always pass arguments to the script (./scriptname --bar → interp --foo ./scriptname --bar).
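For concreteness, a small sketch of that pattern (-S exists in FreeBSD env and in GNU coreutils 8.30+; the file name is made up):
    #!/usr/bin/env -S bash -eu
    # env -S splits "bash -eu" into words before exec, so running
    #   ./strict.sh --bar
    # becomes: bash -eu ./strict.sh --bar   (and --bar lands in "$1")
    echo "args: $*"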
But why would a script ever need to use a shebang to pass an argument like --bar to itself? Can’t it just modify its own argv, or act as if it had?
> It is useful because it allows the hash bang to specify some arguments after the script (which could be arguments belonging to the script rather than to the interpreter, for instance).
but doesn’t really explain why the script itself would want to specify that.)
In any case, compatibility is important here, in order for it to be possible to write cross-platform scripts.
Here is a use case not involving the motivation of passing canned options to the script.
Suppose that the interpreter is such that the script name must be passed as an option. For instance, Awk implementations are like this:
awk -f <script>
The problem is that this kind of option is allowed to be followed by more options.
awk -f <script> -Z # error under GNU awk: -Z is an invalid option
See where this is headed? If we have
#!/usr/bin/awk -f
awk script here
and we call this as
$ ./script -Z
it will pass that option to Awk. But the script wanted to handle that option!
$ ./script -- -Z # process -Z as argument to the script
and wouldn't it be nice if we could hide this "--" argument in the hash bang?
With my proposal you could do that:
#!/usr/bin/env :awk:-f:{}:--
Now speaking of Awk, GNU awk has solved this particular problem itself. It has the option -E/--exec which works like -f, but is the last option to be processed.
(Any of these kinds of issues can be solved locally for a given interpreter, if you control its implementation.)
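To make the gawk-specific fix concrete, a small sketch (assumes gawk is installed at /usr/bin/gawk):
    #!/usr/bin/gawk -E
    # With -E, option processing stops at the script file, so "./script -Z"
    # no longer errors out; -Z shows up in ARGV for the script to handle.
    BEGIN {
        for (i = 1; i < ARGC; i++)
            print "script argument:", ARGV[i]
    }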
You could always fix up all of your scripts so they point to the correct place in PATH. This also has the benefit of notifying you if a program doesn't exist. Do note that env is unportable (FreeBSD has -P, I think), which makes parsing a PITA; identifying where the program actually is can be slightly challenging. If you ask me, it's better to throw the program at sh and let it eval to whatever. sh is the true portable launcher.
I wrote [0] in Lua a while back (mind the comments); I didn't destroy anything, I think. I wrote [1] just now in gawk, so it's not as recommended.
The writer of the hash bang script retains the control not to have PATH lookup take place: they write an interpreter name which contains one or more slashes.
I'm guessing that Zsh is doing this in its fallback strategy for when the exec fails.
The exec fails when a shell tries to run an executable file that has no hash bang header, but also in a case like this when the header doesn't give a path that resolves to an interpreter.
Without the #!echo, I'm guessing Zsh would have interpreted the file as a Zsh script.
I’d suspect the problem is a stripped-down container where the profile isn’t set up how you’re expecting. The interactive shell (bash, I assume) will set up the profile/env properly. I’m not sure your entrypoint cmd will set up the environment, as IIRC the init process would normally take care of this setup. But an entrypoint in a container has different rules.
Perhaps if you started your entrypoint with “/bin/bash cmd”??
If PATH isn't set up right then #!/usr/bin/env -S interpreter arg, which everyone uses, also won't work. It doesn't amount to a good argument for why #!interpreter arg shouldn't be able to do that path search without the env shim or its need for the -S mechanism.
I have encountered it. When writing a shell script do you write #!/bin/bash or #!/usr/bin/bash at the top? I have had scripts fail when moving to another machine because of having the wrong one. Same thing with /bin/env vs /usr/bin/env
It's really not a huge issue for me, but it is at least a minor frustration. (And for people who maintain scripts across different distros / *nixes, I'm sure that it is a larger frustration).
Ditto. I laughed so hard on that one that I nearly choked. The Linux community as a whole hasn't been interested in compatibility with Solaris for nearly two decades, and pretending otherwise is nothing short of comedy.
I’m confused about your link. It says “The long-anticipated demise of the /usr/ucb and /usr/ucblib directories is planned for the next major version of Solaris.” What does it mean by “next major version”? Solaris 12? Well, not long after this blog post was written, Solaris 12 was cancelled, so if that’s what it means, it is talking about something that’s never actually going to happen.
I recall, with some amusement, the uproar this caused when proposed a decade ago. Luckily sanity prevailed over the kneejerkers in the end, and today our systems are a little bit simpler, and another tiny gratuitous portability issue is no more.
Hmm, I end up dealing with differences caused by this all the time.
I'd say the biggest issue is that the one benefit I'm finally reaping, and which was apparently mentioned (badly) at the time, is the one benefit that all the mentions of merging explicitly disavowed as impossible, while telling dinosaurs like me to go and bury themselves.
Namely, a separate /usr partition made read-only. Serving a read-only FS or a shared FS for systems has been harder with unmerged /usr because of /etc (among others).
However, the arguments for merging that I heard were "nobody does separate /usr anymore you dinosaur", "nobody wants to share filesystem between multiple system instances", and finally from the proposers themselves, "split / and /usr mean I have to keep track of where I'm putting files in RPM depending on whether they are early boot or not, boohoo".
So recently I've been working on building a customised Gentoo, based on ChromeOS/CoreOS/Flatcar changes, which tells emerge to use /usr as the target... And for the first time in forever I have a use for merged /usr, and it's a use that was ridiculed when merged /usr was proposed (but one that was explicitly (ab)used by Solaris, btw).
"Myth #11: Instead of merging / into /usr it would make a lot more sense to merge /usr into /.
Fact: This would make the separation between vendor-supplied OS resources and machine-specific even worse, thus making OS snapshots and network/container sharing of it much harder and non-atomic, and clutter the root file system with a multitude of new directories."
Yea, the real goal here is to be able to mount the whole OS in one operation. To that end, /dev is now autopopulated by udev and you can even leave /etc completely empty. The goal is for everything to have good enough defaults that having a config file in /etc is unimportant unless the admin wants to override something. Probably the only things in there when you install a new system would be the root password in /etc/shadow and your user and group information in /etc/passwd and /etc/group.
It should make a lot of tasks a lot easier. The one I like most is trivially setting up containers or VMs that have your base OS in them plus some new software that you are testing. For example, at work we use Debian with a bunch of extra packages installed, plus our own package containing all manner of software and configuration that we’ve written. Although we do a fair amount of unit testing, and the Rust compiler really has our back, it is very difficult to really test things outside of production. Currently we have to painstakingly set up a test environment on a production machine, using tricks like altering our PATH so that the test version of the tools we are working on will be called, or editing things so that they call the version under test instead. It’s doable but laborious.
If we were using Fedora instead of Debian, we would be able to use systemd-nspawn to spawn a container with the existing OS, read-only mounts containing our databases and other large datasets, and an overlayfs to redirect any writes to those file systems to an alternate location (roughly as in the sketch below). Installing and running our software inside that container would work exactly as if we were in production, and testing everything would be so much simpler.
systemd-nspawn is available on Debian, but running it requires keeping extra copies of the OS for each container you want to spawn, so we haven’t actually gotten around to doing it yet. Oh, and each container needs a properly set up /etc directory. We can’t just leave it empty, and we can’t just blindly copy the host’s either.
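For reference, the sketch mentioned above would look something like this (the flags are real systemd-nspawn options; the paths are made up):
    systemd-nspawn \
        --directory=/var/lib/machines/base-os \
        --ephemeral \
        --bind-ro=/srv/datasets \
        --overlay=/srv/db:/var/tmp/db-upper:/srv/db \
        /bin/bash
    # --ephemeral throws the container's own writes away on exit; --bind-ro
    # exposes the big datasets read-only; --overlay redirects writes under
    # /srv/db into an upper directory instead of touching the real data.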
This myth, however, was inverted by people pushing for the /usr merge on various media platforms at the time.
A common example of "what's wrong with split /usr" that was pushed was things like the pci.ids database being dropped in /usr/share while being required by code that runs before /usr is mounted, and the claim that mounting / and /usr separately in the initrd was wasteful complexity.
In fact, today is the first time I have heard reasoning that supported /usr on a separate partition, and I thought it was just an inversion done later by people who noticed possible benefits in exchange for some extra initrd complexity. (Btw, dracut and systemd-oriented initrd scripts are stupidly complex, including generating systemd units by text templating in udev rules as part of the dracut boot process. Wtf.)
I don't think that was quite the right argumentation. Or at least not clear enough.
In general, the desire is to put the system image into /usr. A single partition that holds literally everything you need to launch an OS. If you can mount a /usr, everything else will populate. If you want to atomically swap system images, just swap /usr.
Imagine the counter: what would happen if we just swapped /bin? Shit would be a mess. /lib and /usr/lib wouldn't get updated in sync. /include and /share wouldn't be in sync. The idea is that the Unified System Resources ought to be swapped out as a unit, and the system-specific resources (/bin, /lib, /include, et cetera) ought not to exist; they provide no real contemporary value and make image swapping much, much harder to accomplish well.
Yea, it’s pretty funny. It used to hold the home directories, but the disk holding the OS ran out of space, so they started putting new utilities in /usr/bin. The distribution tapes they sent out to everyone who ordered a copy were made directly from the live system, so we all inherited the same quirk. We’ve moved the home directories elsewhere, and now the whole OS is climbing into /usr in order to be atomically mountable. I keep thinking of that Gilbert and Sullivan song with the line “it gradually got so!”.
Even funnier, now a lot of software pollutes your home directory with shared files and dotfiles, so that space is invaded too. I resort to creating a separate files directory, either in /files, my home directory or in /mnt/files, and store all my local files there.
I’m not as bothered about that. I do have a git repository called dotfiles with some of the more important ones. I then use GNU Stow to install them from there to their customary locations. That helps me keep them in sync across computers.
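In case it helps anyone, the Stow workflow amounts to roughly this (the directory names are just examples):
    cd ~/dotfiles
    ls                                  # e.g. bash/ git/ vim/, each mirroring $HOME
    stow --target="$HOME" bash git vim  # creates ~/.bashrc -> dotfiles/bash/.bashrc, etc.
    stow --delete --target="$HOME" vim  # and removes the symlinks again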
CoreOS Container Linux (I haven't checked out Fedora CoreOS) and its fork Flatcar Linux do that. I couldn't find any good doc about this, but the idea is that:
- A read-only /usr contains the whole system; the rest is populated with tmpfiles.d
- There are two /usr partitions; the active one is selected with the GPT priority attribute (roughly as in the sketch below)
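The GPT-attribute dance looks something like this with the cgpt tool used by ChromeOS/Flatcar (the partition index is made up):
    cgpt add -i 4 -P 2 -T 1 -S 0 /dev/sda   # USR-B: raise priority, allow 1 boot try, not yet "successful"
    cgpt show /dev/sda                      # inspect priority / tries / successful flags
    # The bootloader picks the highest-priority partition; if the new image
    # boots, the update agent marks it successful (-S 1), otherwise the old
    # /usr wins again on the next boot.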
Generally the idea is more for atomic system upgrades, and less for package managers. Why have package managers when you can just mmdebstrap a new image & try starting it, but keep all your /etc and /home stuff? That's the dream. Package managers become semi-obsolete. I think there are a variety of ideas for how package managers might come into compliance & make use of this though! A lot of possibility. But the notion of OSes that can swap images, that transcend the idea of one special bucket of bits that we slowly move forward, is kind of new. Containers are 100% there, are making it clear. But the conventional OS is still lagging, learning, behind. I recommend checking out systems like Wyrcan to see a little deeper how we can eliminate some of the historical OS layers & build a more modular, more secure, more atomic, more application-centric OS[1].
A lot of it relies on better filesystems like ZFS or Btrfs which have snapshots. I myself twiddled around with writing some Debian-Live boot tools that used btrfs snapshots, copied onto ram-drives[2], that would potentially work well with package managers (boot either your starting image, or your "new" snapshot image).
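The snapshot side of that is pretty compact with btrfs, something along these lines (the subvolume paths are illustrative):
    btrfs subvolume snapshot / /snapshots/pre-upgrade   # cheap, copy-on-write copy of the root
    # ... apply the upgrade to the live root (or to a writable snapshot) ...
    btrfs subvolume list /                              # pick which one to boot next time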
These ideas predate (the deeply reviled by some haters) Lennart's post, "Revisiting How We Put Together Linux Systems"[3]. But Lennart here, in mid 2014, understood what containers were doing, understood how atomic system upgrades were trending, understood what was happening. This isn't a Lennart idea, but Lennart as usual semi got the gist of what made sense, what would help, where we were going. He captured the good idea of what was occurring already into a post. And calling out /usr as "the system image" was a clear & obvious win for that day, back in 2014, and it's still a clear & obvious win today to consolidate the OS into one directory today. May it be so. This is the way. So say we all. It is known.
The "usr" meaning user is seemingly in fact true. But it's also, imho, wrong? The backronym is more truthful, better describes what role /usr plays & has played for multiple decades now. "user" doesn't really mean anything reasonable, clear, or cogent.
You could achieve the same thing with sym links, which is basically what Apple does with newer versions of macOS. The main OS partition (/) is read only, essentially immutable (unless you disable SIP.) There is a writeable volume mounted on /private (in a funky way, using "firm links".) /etc, /var, /home, and a couple others are symlinks to /private/etc, /private/var, /private/home ...
It's the standard upgrade method on the most widely deployed desktop/laptop Linux, a certain Gentoo derivative known as ChromeOS, and from there it's also the common update mechanism of Google COS (the default for GKE nowadays) and Flatcar (not sure if Fedora's mangling of CoreOS uses it anymore, as they reportedly use rpm-ostree).
Another idea: what if we don't ever swap images like this?
It looks like a bad idea that I would never want to do.
This is what your chroots and jails are for.
But if I wanted to do it in style, I'd hack up the cool kernel support to atomically mount multiple directories as part of one atomic swap.
How can you call it "atomic" if you have executables still running from the old /usr/bin? Say there is some script running that is executing external programs. At some unspecified point in its execution, /usr/bin/whatever changes meaning; all newly run programs are different versions on a different libc and so on.
If you have enough control of the system that there are no such processes, then you can easily do the swap directory by directory: just do it with system calls out of a single process, rather than by invoking tools, so that nothing trips up at the point in time when you have a mismatched /bin and /lib.
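To illustrate the "single process, system calls only" point: on Linux, renameat2() with RENAME_EXCHANGE atomically exchanges two directory trees on the same filesystem (needs a reasonably recent kernel and glibc; /usr.new is a made-up staging path):
    cat > swap_usr.c <<'EOF'
    #define _GNU_SOURCE
    #include <stdio.h>    /* renameat2(), RENAME_EXCHANGE */
    #include <fcntl.h>    /* AT_FDCWD */
    #include <errno.h>
    #include <string.h>
    int main(void) {
        /* Atomically exchange the staged tree with the live one */
        if (renameat2(AT_FDCWD, "/usr.new", AT_FDCWD, "/usr", RENAME_EXCHANGE) < 0) {
            fprintf(stderr, "swap failed: %s\n", strerror(errno));
            return 1;
        }
        return 0;
    }
    EOF
    gcc -o swap_usr swap_usr.c && sudo ./swap_usr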
Nothing you've talked about sounds like a remotely normal thing that is documented or discussed anywhere, and I doubt there's more than a fistful of orgs on the planet that have jury-rigged that up. Meanwhile there are at least a dozen distros doing atomic image upgrades; Fedora Silverblue for one example.
These upgrades often involve a kexec restart. For fleets of servers (the target audience) it's expected that machines will go down & rolling restarts should be fine. That's how true atomic behavior is gotten. But I think you are nitpicking & being fussy with your complaining about old processes being left around; even if you do an in-place swap, but have some old processes around, at least the whole system is together & consistent. More broadly, this set of problems in general doesn't change much whether you are using a merged /usr or not.
I genuinely don't get any of your disgust at having a /usr image that contains the functioning system image. You talk about chroots, as though they're an alternative somehow, but this practice of having a self-contained image seems like it would greatly assist in quickly cloning new images to chroot into.
If "atomic" means "stop the world running one system, and reboot into a different system", we can easily arrange to reshuffle two or more directories. The program which does that can be easily designed not to be affected by the intermediate states, like where you have a new /lib, but still the old /bin.
Not really. A common spiel of the time was that having to mount / and /usr, due to packager laziness, started requiring it be handled in the initramfs, making it more complex. Previously / was supposed to be able to boot into a minimal standalone system that could mount the rest of itself.
So the common argument was that one should accept that a read-only separate /usr, snapshots, and whatever else you wanted weren't going to be a thing, because Fedora didn't want to maintain support for mounting /usr in the initramfs.
Hurray! Finally we're slowly making progress towards c:/ !!!
I cannot wait until /etc also ceases to exist in favour of an SQLite variant run by systemd. We are really living in the future. Monolithic central administration tools, fewer system folders... Yes, this is it. I really hope everybody else is as happy as me.
/etc /proc /sys /home /root /usr /bin /sbin /mnt /boot: who needs all of these anyway? Am I right? Let's simplify stuff, make it easy. Also, just as an added note, please kill the man pages system; just make it so that I can click a link that automatically opens my browser to the documentation I need or known forums! Screw living in the terminal, make it all invisible and make sure I do not need to know how it actually works. That way I can just call RedHat support the same way we do with Microsoft support.
The Unix way will prevail. I am happy the BSDs continue with the philosophy. Old does not mean outdated and tradition does not mean backwards. Do one thing only and do it well.
> Improved compatibility with other Unixes/Linuxes in behavior:
Well, some of them.
> Improved compatibility with other Unixes (in particular Solaris) in appearance:
I guess, but if you supported Linux you already supported Solaris, so if you think of Solaris as second class then nothing gained.
Also: Solaris is (to me) clearly on its way out. Not worth a migration to converge with it.
> Improved compatibility with GNU build systems:
This one is pure hogwash. Either you need to support the systems that don't do this (e.g. OpenBSD), or you only care about Linux. (this is a bit simplified, but pretty true)
So your build system still needs to support non-merged.
Except now you're going to bake in Linux-specific merge assumptions into your build system, so that they'll be even more broken.
> Improved compatibility with current upstream development:
This seems more like tech debt to me, in the name of expedience.
I'm here not saying it shouldn't be done. I'm saying these are terrible reasons.
> > Improved compatibility with GNU build systems:
> This one is pure hogwash. Either you need to support the systems that don't do this (e.g. OpenBSD), or you only care about Linux. (this is a bit simplified, but pretty true)
> So your build system still needs to support non-merged.
> Except now you're going to bake in Linux-specific merge assumptions into your build system, so that they'll be even more broken.
Well considering
> Not implementing the /usr merge in your distribution will isolate it from upstream development. It will make porting of packages needlessly difficult, because packagers need to split up installed files into multiple directories and hard code different locations for tools; both will cause unnecessary incompatibilities. Several Linux distributions are agreeing with the benefits of the /usr merge and are already in the process to implement the /usr merge. This means that upstream projects will adapt quickly to the change, thus making portability to your distribution harder.
I think they intended this to play out like SystemD/logind - Red Hat maintainers will only care about merged /usr and if you want to use anything they have their hands in, you better adapt.
> I think they intended this to play out like SystemD/logind - Red Hat maintainers will only care about merged /usr and if you want to use anything they have their hands in, you better adapt.
The argument this article seems to make is that you can essentially hard-code the paths to binaries in your build system, because /lib is /usr/lib and /bin is /usr/bin (from the build script's point of view it doesn't matter which is a symlink to which).
It seems to be saying that your build systems no longer need to search for the libraries.
But that's not true. You still do.
And if you search for it then it doesn't really matter where it is. Clearly if you search for `bash` you'll search /bin and /usr/bin. For at least two reasons:
1) Maybe you're running a system that isn't merged. Then it could be in either place. Doesn't matter if RedHat is merged.
2) You'll need to check other locations anyway. Like /usr/local/bin/bash, to support some BSDs. Why does it matter if it's two, three, or $(echo $PATH | sed 's/[^:]//g' | wc -c) paths?
And for libraries you may need to search in /opt, and run pkgconfig for any compile and link flags needed. Why does it matter that `pkg-config --libs foo` contains an -L flag? You still need to run it.
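Put differently, merged or not, a build script ends up doing something like this anyway (the path list is just an example):
    # Locate bash wherever this particular system put it
    for p in /bin /usr/bin /usr/local/bin /opt/bin; do
        if [ -x "$p/bash" ]; then
            BASH="$p/bash"
            break
        fi
    done
    : "${BASH:?bash not found in any expected location}"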
Why not go in reverse and have /bin and /lib directories again? Why merge into /usr? I feel like the suckless people and fellows like them will be happy.
I think even /usr/share can get its own directory, why not?
Because then you can mount /usr from wherever, which is much harder to do with /. thus merging in /usr provides utility which are harder and less convenient to provide the other way around.
Because if you merge you can put the entire operating system in /usr, which allows you to mount it read-only, or from an NFS share, or various other options, easily.
I wish we could just unify /usr/local/bin, /usr/bin, /usr/sbin, /bin, and /opt. I've been a Linux user for 15 years and I still don't know if there's a good modern reason to have so many different directory trees for binaries, and what that reason is. It's all historical cruft that makes it so hard to find things, as far as I am concerned.
The OS promises not to touch /usr/local other than to set it and its subdirectories up. This allows two things:
1. /usr/local is usually set up to take priority over /usr, so if you want a newer C compiler or whatever you put it in /usr/local but keep the system compiler in /usr because you might need it for an OS patch or something.
2. If you see an error from something in /usr/local you know not to bother your OS vendor about it because it came from an outside source.
It's largely the same theory as /opt, except stuff in /opt shouldn't link to any libraries in /usr while stuff in /usr/local can (stuff in /opt should be self-contained).
Also, in those days /usr was kinda sorta like $HOME today for "user files", until one day someone replicated the / hierarchy into there to get more space.
On Hurd you probably could, just mount a merge translator over /bin that merges all the directories you want. IIRC on Hurd translator namespaces are per-process in the libc and are inherited from parent to child process. So like containers/namespaces on Linux but implemented in the libc.
I'm more BSD, but I have come to like 'local' and wish it were further refined so we can mount usr & bin as read-only and be able to rebuild machines with a backup of local and I guess etc.
- If you have your own system, with very tightly controlled specifications, you can make any change you want, and its impact will be easy to manage. When you change the specifications of random people's systems, random things happen. Doesn't matter what the change is. A file showing up where it didn't exist before, or being a symbolic link rather than a regular file, or duplicate files, will cause logic bugs.
- There isn't a single mention of any downside to this fundamental shift in the expectations of the filesystem of major distributions. Yet you can be guaranteed that this will break compatibility with some applications/systems, and still they make no attempt whatsoever to estimate this.
- It can't be stated enough that this is a Fedora / RedHat / Lennart Poettering thing. Quite frankly, anything they suggest should ring giant alarm bells. This group is singly responsible for every incompatible, over-designed, pain-in-the-ass change to Linux distributions in the past 10+ years. If it causes problems, and it didn't come from a hardware vendor, it came from these people.
- Who in the actual fuck cares about filesystem-level porting from Solaris?! If you're already dealing with all the rest of the portability issues, file paths are literally the very last and least-difficult consideration. "Oh no! In what directory will I find 'pkill' ?! It's so difficult!" Only a company obsessed with converting whales to their platform that wants one more bullet point to add to their sales deck cares. I don't actually care if this goes through, but just the fact that RedHat is still shilling their unnecessary changes as if anyone else other than them wants it makes my skin crawl.