The main problem here is wanting to hang on to the "bespoke version soup" attitude that language package managers encourage (and which is totally unsustainable). The proposed alternative, Mise, doesn't appear to have any ability to understand version constraints between packages, and it certainly doesn't run tests for each installed package to ensure it works correctly with the surrounding versions. So you're not getting remotely the same thing.
Bespoke version soup is unsustainable, but part of why people keep doing it is that it tends to work fine. It tends to work fine in part because OS-level libraries come from a different, much more conservative world, in which breaking backwards compatibility is something you try to avoid as much as possible.
So they can take a stable, well-managed OS as a base, use tools like mise and asdf to build a bespoke version soup of tools and language runtimes on top, then run an app on top of that. It will almost never break. When it does break, they fiddle with versions and small fixes until it works again, then move on. The fact that it broke is annoying, but unimportant. Anything that introduces friction, requires more learning, or requires more work is a waste of time.
Others would instead look for a solution to stop it from breaking ever again. This solution is allowed to introduce friction, require more learning, or require more work, because they consider the problem important. These people want Nix.
Most people are in the first group, so a company like Railway that wants to grow ends up with a solution that fits that group.
Package maintainers often think in terms of constraints like "I need 1.0.0 <= pkg1 < 2.0.0 and 2.5.0 <= pkg2 < 3.0.0". This tends to make total sense in the micro context of a single package, but IMO it always falls apart in the macro context. The problem is:
- constraints are not always right (say pkg1==1.9.0 actually breaks things)
- the combined constraints of all dependencies end up leaving very few degrees of freedom in constraint solving, so you can’t in fact just take any pkg1 and use it (a small sketch follows this list)
- even if you can use a given version, your package may have a hidden dependency on one of pkg1’s dependencies that only becomes apparent once you start changing pkg1’s version
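To make the second point concrete, here is a small sketch (the candidate releases and constraint bounds are made up) that intersects two consumers’ constraints on pkg1 using Nix’s builtin version comparison; the combined ranges admit only one of the five releases:

    let
      # hypothetical releases of pkg1 that exist upstream
      candidates = [ "1.4.0" "1.6.2" "1.9.0" "1.9.1" "2.1.0" ];
      inRange = lo: hi: v:
        builtins.compareVersions v lo >= 0 && builtins.compareVersions v hi < 0;
      # consumer A declares 1.0.0 <= pkg1 < 2.0.0; consumer B declares 1.6.0 <= pkg1 < 1.9.0
      satisfiesBoth = v: inRange "1.0.0" "2.0.0" v && inRange "1.6.0" "1.9.0" v;
    in
      builtins.filter satisfiesBoth candidates  # evaluates to [ "1.6.2" ]

(You can check it with nix-instantiate --eval --strict.) And if that single survivor happens to be a broken release, as in the first point, you have nowhere left to go.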
Constraint solving is really difficult and while it’s a cool idea, I think Nixpkgs takes the right approach in mostly avoiding it. If you want a given version of a package, you are forced to take the whole package set with you. So while you can’t say take a version of pkg1 from 2015 and use it with a version of pkg2 from 2025, you can just take the whole 2015 Nixpkgs and get pkg1 & pkg2 from 2015.
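As a minimal sketch of that "whole set" approach (pkg1 and pkg2 are placeholder attribute names, and a snapshot that old may need an older Nix to evaluate cleanly):

    let
      # pin an entire 2015-era package set by importing that branch of nixpkgs
      nixpkgs2015 = import (fetchTarball
        "https://github.com/NixOS/nixpkgs/archive/release-15.09.tar.gz") { };
    in {
      # both packages come from the same snapshot, so they were built and tested together
      inherit (nixpkgs2015) pkg1 pkg2;
    }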
There’s no clear definition (in most languages) of major/minor/patch versioning. Amazon did this reasonably well when I was there, though the patch version was implicitly assigned and the major and minor required humans to follow the convention:
You could not depend on a patch version directly in source. You could force a patch version in other ways, but each package would depend on a specific major/minor, and the patch version was decided at build time. It was expected that differences in the patch version were binary compatible.
Minor version changes were typically source compatible, but not necessarily binary compatible. You couldn’t just arbitrarily choose a new minor version for deployment (well, you could, but you shouldn’t expect it to go well).
Major versions were reserved for source or logic breaking changes. Together the major and minor versions were considered the interface version.
There was none of this pinning to arbitrary versions or hashes (though, you could absolutely lock that in at build time).
Any concept of package (version) set was managed by metadata at a higher level. For something like your last example, we would “import” pkg2 from 2025, bringing in its dependency graph. The 2025 graph is known to work, so only packages that declare dependencies on any of those versions would be rebuilt. At the end of the operation you’d have a hybrid graph of 2015, 2025, and whatever new unique versions were created during the merge, and no individual package dependencies were ever touched.
The rules were also clear. There were no arbitrary expressions describing version ranges.
For the record, Amazon's Builder Tools org (or ASBX or whatever) built a replacement system years ago, because this absolutely doesn't work for a lot of projects and is unsustainable. They have been struggling for years to figure out how to move people off it.
Speaking at an even higher level, their system has been a blocker to innovation, and it introduces unique challenges to solving software supply chain issues.
Not saying there aren't good things about the system (I like cascading builds, reproducibility, buffering from 3p volatility) but I wouldn't hype this up too much.
> Constraint solving is really difficult and while it’s a cool idea, I think Nixpkgs takes the right approach in mostly avoiding it. If you want a given version of a package, you are forced to take the whole package set with you.
Thank you, I was looking for an explanation of exactly why I hate Nix so much. It takes a complicated use case, and tries to "solve" it by making your use-case invalid.
It's like the Soylent of software. "It's hard to cook, and I don't want to take time to eat. I'll just slurp down a bland milkshake. Now I don't have to deal with the complexities of food. I've solved the problem!"
> I was looking for an explanation of exactly why I hate Nix so much
Note that the parent said "I think Nixpkgs takes the right approach in mostly avoiding it". As others have already said, Nix != Nixpkgs.
If you want to go down the "solving dependency version ranges" route, then Nix won't stop you. The usual approach is to use your normal language/ecosystem tooling (cabal, npm, cargo, maven, etc.) to create a "lock file"; then convert that into something Nix can import (if it's JSON that might just be a Nixlang function; if it's more complicated then there's probably a tool to convert it, like cabal2nix, npm2nix, cargo2nix, etc.). I personally prefer to run the latter within a Nix derivation, and use it via "import from derivation"; but others don't like importing from derivations, since it breaks the separation between evaluation and building. Either way, this is a very common way to use Nix.
(If you want to be even more hardcore, you could have Nix run the language tooling too; but that tends to require a bunch of workarounds, since language tooling tends to be wildly unreproducible! e.g. see
http://www.chriswarbo.net/projects/nixos/nix_dependencies.ht... )
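For a rough idea of the lock-file route, here’s a minimal sketch (assuming a v1-style package-lock.json next to the file, in which every entry carries "resolved" and "integrity" fields) that turns each locked npm dependency into a fixed-output fetch:

    let
      pkgs = import <nixpkgs> { };
      lock = builtins.fromJSON (builtins.readFile ./package-lock.json);
      # npm's "integrity" field is already an SRI hash, which fetchurl accepts as `hash`
      tarballs = pkgs.lib.mapAttrs
        (name: entry: pkgs.fetchurl { url = entry.resolved; hash = entry.integrity; })
        lock.dependencies;
    in
      tarballs  # an attrset of store paths, one per locked dependency

The dedicated converters do a lot more on top of this (unpacking, wiring up node_modules, and so on), but the import step itself really is just builtins.fromJSON.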
I mean, you can do it in Nix using overlays and overrides. But it won’t be cached for you, and there’s a lot of extra fiddling required. I think it’s pretty much the same as how Bazel and Buck work. This is the future, like it or not.
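As a minimal sketch of the kind of fiddling meant here, an overlay that forces a different version of a (made-up) package; lib.fakeHash is a stand-in until the real source hash is known:

    final: prev: {
      somepkg = prev.somepkg.overrideAttrs (old: rec {
        version = "1.2.3";
        src = prev.fetchurl {
          url = "https://example.org/somepkg-${version}.tar.gz";
          hash = prev.lib.fakeHash;  # build once, then paste in the hash Nix reports
        };
      });
    }

Everything downstream of somepkg in the package set now rebuilds against the new version, which is exactly the "won’t be cached for you" cost.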
It's the idea that every application can near-arbitrarily choose a bespoke-but-exact mix of versions of every underlying package and assume they all work together. This is the same attitude that leads to seemingly every application on planet earth needing to individually duplicate the work of reacting to every single dependabot update for their thousands of underlying packages, and to deal with the fallout of conflicts when they arise.
Packages in nixpkgs follow the "managed distribution" model: almost all package combinations can be expected to work together, they remain reasonably stable (on the stable branch) for 6 months while receiving security backports, and then you do all your major upgrades when you jump to the next stable branch once it is released.
Nix, generally speaking, has a global "nixpkgs" version (I'm greatly over-simplifying here ofc) in which there is a single version of each package.
This is likely the source of their commit-based versioning complaint/issue, i.e. the commits in question are probably https://github.com/NixOS/nixpkgs revisions, if they aren't maintaining their own overlay of derivations.
This is in contrast to systems that allow all of the versions to move independently of each other.
i.e. in the Nix world you don't just update one package, you move atomically to a new set of package versions. You can have full control over this by using your own derivations to customise the exact set of versions; in practice, though, most folks using Nix aren't deep enough into it for that.
Put out fewer versions of things. It is entirely possible to write a piece of software and only change its interface at rare intervals. The best solution I can think of, though, would be to allow one version of a package to provide multiple versions of its interface. Suppose you want to increment the minor version number of your code and this involves changing the signatures of a number of functions: you could design a programming-language packaging system such that both versions are defined in the same package, sharing code where need be.
Which leads to the exact problems described in this article: many programs using many library versions. It would be much better, from both a security and a size perspective, if these disparate packages could be served by a single shared object using versioned symbols.
Hmm, not sure I agree. Most of those arguments get populated by taking the fixed point, along with any given overlays; so it's easy to swap out a library everywhere, just by sticking in an overlay. The exceptions mostly seem to be things that just don't work with the chosen version of some dependency; and even that's quite rare (e.g. it's common for Nixpkgs maintainers to patch a package in order to make the chosen dependency work, though that can cause other problems!)
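A small sketch of what "swap out a library everywhere" looks like in practice, assuming a hypothetical local patch; every package in the fixed point that takes openssl as an argument picks up the patched build:

    final: prev: {
      openssl = prev.openssl.overrideAttrs (old: {
        # hypothetical local fix; all dependents now build against the patched library
        patches = (old.patches or [ ]) ++ [ ./my-local-fix.patch ];
      });
    }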
You can actually have both if you do it right. It's trivial to build a rust package with Nix from a Cargo.lock file, for example. Nixpkgs is contrary to bespoke version soup, but Nix itself can be fine with it.
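A minimal sketch of the Rust case (pname, version, and src are placeholders):

    { rustPlatform }:
    rustPlatform.buildRustPackage {
      pname = "myapp";    # placeholder
      version = "0.1.0";  # placeholder
      src = ./.;          # crate source with Cargo.lock checked in
      # vendor exactly the dependency versions recorded in the lock file
      cargoLock.lockFile = ./Cargo.lock;
    }

Call it with pkgs.callPackage and the crate dependencies come straight from Cargo.lock, while the toolchain and everything else still come from whichever nixpkgs set you're on.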