GNU Guix 0.8 released (lists.gnu.org)
163 points by davexunit on Nov 18, 2014 | 72 comments


Cross-post from Reddit to clarify what exactly Guix is:

GNU Guix is a Guile-based, Nix-derived purely functional package manager and, at the same time, a package recipe database. (The two ship as one, so the recipe language doesn't need to be frozen for compatibility.) Development of the Guile-based init system DMD is closely tied to it.

Its last few releases include stuff to install a stand-alone GNU/Linux-libre system which might be sanctioned as the GNU Operating System if RMS approves. Since it builds heavily on GNU software like Guile (GNU's official scripting/extension language) for its init system and package manager / deployment tool, prioritizes GNU packages in its database, and remains ostensibly kernel-agnostic with the hopes of running on the Hurd in the future, it can be seen literally as the GNU OS. (GRUB, GCC, Autotools, Glibc, Bash, coreutils, etc. all go without saying.)

(Until approval of the name GNU, the OS simply has the same name as the package manager: Guix.)

And homepage: http://www.gnu.org/software/guix/


Package management is probably, today, a very well defined problem. Isn't it time to solve this once and for all? Or are we forever doomed to have multiple package managers for the dot product of [operating system]X[development platform]?

It irks me that PHP has pear and composer, Node has npm, Ruby has gems, and Python has pip. It irks me that deb and rpm solve the exact same problem, not to mention all the other smaller package formats/managers like pacman, yaourt, or portage. It looks pretty obvious that it is the same problem, albeit attacked from different angles (source build versus binary distribution, notably).

It even looks like it could be cross-platform, solving in one fell swoop at least the unix systems (namely Linux along with the BSDs) and, in a dream world, non-unix systems (Windows, notably).

Daydreaming, yes. The status quo is too well established. But it justifies my jaded look regarding Guix: a latecomer to a party that is already stale and over.


Nix is attempting to be the solution you're looking for. Look at the nixpkgs repository: https://github.com/NixOS/nixpkgs

You will see packages for tons of stuff, including:

    * Python packages from PyPI: https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/python-packages.nix
    * Haskell packages from Hackage: https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/haskell-packages.nix
    * Perl packages from CPAN: https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/perl-packages.nix
    * NodeJS packages: https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/node-packages.nix
    * Lua packages: https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/lua-packages.nix
All these packages in one place, all installed through the same system and automatically built and tested on the Hydra build farm. Nix users have developed tools which automate (as much as possible) the creation of these packages.

I really believe NixOS is the future of the Linux distro, and it's tragic that it's still relatively obscure. It revolutionizes the OS in a myriad of ways; getting everything into the same package repository is just the beginning. For instance, it's fully configuration-managed out of the box, using the Nix package manager as its configuration management system. Puppet, ansible, saltstack, chef, etc. are all made obsolete. Sure, it still has some rough edges, but it's a great community to contribute to and is improving at an impressive clip. If you're interested in the other ways NixOS is better than whatever you're currently using, I refer you to the website: https://nixos.org/


Guix is based on / inspired by Nix, but using Scheme instead of a custom DSL.


> curl https://nixos.org/nix/install | sh

Installation instructions for nix. Although I'm not sure that blindly running shell scripts from the internet is a good thing. When I install an OS X app from the internet, at least it's signed.


That script does a lot of work that boils down to "go to https://nixos.org/releases/nix/nix-1.7 and do the obvious".

Of course, if that's not signed you still have a problem. The eternal chicken-and-egg issue: how do you install gpg to verify a gpg signature while verifying the gpg signature of gpg?


How easy is it to build your own packages from, say, the tip of the master branch of github repository? How automated is building the dependencies?

Building packages on Debian like systems is a huge pain. Arch is much better but I need Debian/Ubuntu-like reliability.


Very easy.

    git clone git://github.com/NixOS/nixpkgs.git
    cd nixpkgs
    nix-build -A packagename
Building the dependencies is completely automated. For more information, see the nixpkgs contributor's guide: http://nixos.org/nixpkgs/manual/

Note that the above command will simply download cached binaries where they are available, but if you make any changes to the build scripts, nix will notice and rebuild locally instead. This gives you the benefits of both source-based and binary-based package managers.
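That cache-or-rebuild decision can be sketched as a toy model (illustrative Python only, not the real Nix machinery; the name `realise` is just borrowed from Nix terminology):

```python
# Toy model of Nix's substitution logic: a build is identified by the hash
# of its recipe and inputs; if a trusted cache already holds that hash, the
# binary is downloaded ("substituted"), otherwise it is built locally.
import hashlib

def drv_hash(recipe, inputs):
    h = hashlib.sha256(recipe.encode())
    for i in sorted(inputs):
        h.update(i.encode())
    return h.hexdigest()[:12]

def realise(recipe, inputs, cache):
    key = drv_hash(recipe, inputs)
    if key in cache:
        return ("substituted", cache[key])   # cached binary reused
    cache[key] = f"built-{key}"              # pretend we compiled it here
    return ("built-locally", cache[key])

cache = {}
assert realise("gcc-4.9", ["glibc"], cache)[0] == "built-locally"
assert realise("gcc-4.9", ["glibc"], cache)[0] == "substituted"    # unchanged
assert realise("gcc-4.9-patched", ["glibc"], cache)[0] == "built-locally"
```

Editing the recipe changes the hash, so the cache misses and a local rebuild happens, which is exactly the behavior described above.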


Is there no Ubuntu 14.10 nix package?


Just one question: how much work does it take to run Bumblebee, zsh, and Steam on that?


> Package management is probably, today, a very well defined problem.

I don't think it is. I think "package management" is a name for a general class of solutions that addresses a range of problems (varying, at a minimum, along a continuum between "I want a way to pull a set of implementations of particular capabilities onto any of a variety of different systems, for which the appropriate implementations of those capabilities may be different" and "I want to be able to reproduce from a list the same set of implementations across different machines").

There's also a range of priorities about certainty of results versus freedom to mix and match and ease of contributing to the globally accessible repository.

I very much don't think the problem addressed by package management is "well-defined", and I'm not even sure it is coherent in the sense that were the problems different people look to package management to solve all collected, there would be a single workable solution that would address all of them optimally.


The imperfect analogy I've used is the comparison between paints and pencils. They perform similar general functions (placing something on a medium) and even use a similar set of tools (a long stick held in the hand). But the end result and even the goal could be totally different (I may want to hang a painting on a wall in a frame, and I may want my pencil drawings to be reproduced in a book).

It's not perfect, but it helps non-users (i.e. managers) understand that one tool does not fit all.


Guix is not late in that there aren't many purely functional package managers out there. It's the second after Nix if I'm not mistaken, and it doesn't reinvent the purely functional wheel either; it builds on top of existing core parts of Nix. Why not use Nix? Most importantly because it's not GNU; NIH is sadly a real problem for GNU. But Nix also uses C++/Perl and a stand-alone package recipe language; Guix uses Scheme as much as possible. ISTR Guix also has some features that Nix lacks, such as unprivileged package management. Don't quote me on it though; mayhaps it's trivial to add to Nix or has already been done.

Cooperating with existing infrastructure is, I imagine, very difficult for a purely functional package manager, given the whole "purely functional" aspect. :-)

It's also extremely unlikely that Debian will just give up apt, and Red Hat will just give up rpm.

And every programming language platform seems to have its own package manager precisely for that reason: there is no one package manager on all systems, but a proglang platform wants to offer package management on all systems, so each bakes its own, implemented in and special-made for the language.

It's a sad state, but that's the world of software for you.

http://xkcd.com/927/


Late clarification because I wasn't careful and forgot that "NIH" has very negative connotations:

When I say "NIH is sadly a real problem for GNU," what I mean is that NIH can be an actual valid reason for GNU to fork or reinvent something, not that they're guilty for doing so. As in, it's unfortunate that they'd have to do it, but sometimes they have to. (Often it's not "reinvent" at all though, instead merely maintain an upstream compatible fork such as IceCat or Linux-libre.)

The reason is that if a non-GNU project is heavily relied on by GNU, the project might one day decide that they don't exactly agree with GNU anymore, so it's best to have a fork under GNU which can be fully trusted. Sometimes I find this non-trusting stance sad, but it seems to be necessary; just see Mozilla adding DRM to Firefox. :-(


Nix does have unprivileged package management via a CLI tool, though the absolutely fantastic describe-the-whole-system-in-one-file wonder that's configuration.nix[1] takes elevated privileges.

[1] E.g: https://github.com/Fuuzetsu/nix-project-defaults/blob/master...


>Guix is not late in that there aren't many purely functional package managers out there.

Why does almost anyone not involved in writing it care what language the package manager is written in?

>Cooperating with existing infrastructure is, I imagine, very difficult for a purely functional package manager, given the whole "purely functional" aspect. :-)

I don't follow. Why is that difficult?

>http://xkcd.com/927/

You can have different implementations of the same standard. To make your own standard is a step beyond.


Being functional has nothing to do with the language it is written in.

http://www.gnu.org/software/guix/manual/html_node/Introducti...

"The term functional refers to a specific package management discipline. In Guix, the package build and installation process is seen as a function, in the mathematical sense. That function takes inputs, such as build scripts, a compiler, and libraries, and returns an installed package. As a pure function, its result depends solely on its inputs—for instance, it cannot refer to software or scripts that were not explicitly passed as inputs. A build function always produces the same result when passed a given set of inputs. It cannot alter the system’s environment in any way; for instance, it cannot create, modify, or delete files outside of its build and installation directories. This is achieved by running build processes in isolated environments (or containers), where only their explicit inputs are visible."

Of course, this approach has some disadvantages:

http://www.gnu.org/software/guix/manual/html_node/Security-U...

"when a package is changed, every package that depends on it must be rebuilt. This can significantly slow down the deployment of fixes in core packages such as libc or Bash, since basically the whole distribution would need to be rebuilt."


Thanks for the clarification. Further clarification:

I need to refresh my knowledge before I'm sure, but I think that statement regarding dependencies only goes for build-time dependencies. An update to a run-time dependency wouldn't require a rebuild of the dependent.

And the build-time dependencies' up-to-date status is not very important most of the time. "Does it matter whether I used Bash 4.0 or 4.2 to run that ./configure script?" Nope. I'm not sure to what extent this carries over to practice so far; the recent security patches to Bash did cause a total rebuild in Guix. I hope that will change by the time 1.0 is released. Theoretically, you could use the same build-time-dependency packages for years (e.g. a "bash-for-building" package), yet quickly deploy bleeding edge versions to users (proper "bash" package).


> "when a package is changed, every package that depends on it must be rebuilt. This can significantly slow down the deployment of fixes in core packages such as libc or Bash, since basically the whole distribution would need to be rebuilt."

I'm just wondering, wouldn't it be possible to have an additional checksum for just the public API/ABI/headers of the package? Would this be enough to at least eliminate cascading rebuild for upgrades to shared libraries?
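The idea in this question can be sketched as a hypothetical two-hash scheme (illustrative Python; neither Nix nor Guix worked this way at the time, and the package names here are made up):

```python
# Hypothetical "interface hash" scheme: keep two hashes per package --
# one over everything, one over just the public API/ABI.  Dependents
# would need rebuilding only when the *interface* hash changes.
import hashlib

def h(*parts):
    return hashlib.sha256("|".join(parts).encode()).hexdigest()[:10]

def package(name, public_abi, implementation):
    return {"full": h(name, public_abi, implementation),
            "iface": h(name, public_abi)}

v1     = package("libssl", "EVP_EncryptInit", "impl-1")
v1_fix = package("libssl", "EVP_EncryptInit", "impl-1-cve-fix")
v2     = package("libssl", "EVP_EncryptInit;EVP_Foo", "impl-2")

def dependents_need_rebuild(before, after):
    return before["iface"] != after["iface"]

assert not dependents_need_rebuild(v1, v1_fix)  # pure bugfix: no cascade
assert dependents_need_rebuild(v1, v2)          # ABI changed: rebuild all
```

Under such a scheme a security fix to Bash's implementation would not cascade, but strict functional purity is lost: the rebuilt-against-v1 binaries now run against v1_fix.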


Oh, sorry, I got confused by the mention of Scheme.

Though that means the XKCD comic doesn't apply. This is a standard with a very specific goal, and it very much does not want to incorporate a lot of the earlier standards.


The comic was applied to the situation somewhat liberally. :-)

I guess you could say it's about de-facto standards here, rather than standards. Deb, RPM, pacman, Nix, etc., as well as all the language-specific PMs all solve the broad problem of package management, so one of them could become the de-facto standard and embrace the others, but it ain't gonna happen.


"Purely functional" here doesn't refer to the language the package manager is written in - it refers to the package management itself.

In Nix (and therefore I assume Guix), this means that each package is built in an environment that only contains the package's explicitly-declared dependencies (to prevent undeclared dependencies) and is not modified after installation. Therefore, the result of the installation is (or should be, anyway) a deterministic function of its dependencies and the install process. Nix thinks of a package as a function whose input is folders (containing its dependencies) and whose output is another folder.

This also means that Nix package installs should always be reproducible across any number of machines, even if the machines contain different sets of packages. It takes some of the benefit of using a single master image for the OS (e.g. CoreOS), but lets you mix and match parts of the system.

I remember looking at Nix briefly a while ago. As far as I can tell, the difficulty of this approach is building system-wide configuration files that need to know about which packages are installed (sadly I can't think of an example right now), because a thing like that can't be installed once, as its own package, and then left alone. However, simple executables and libraries are very easy to model in Nix.


I would argue that it's far from a well defined problem. deb/rpm have very fundamental differences from gem/pip/npm etc. CoreOS takes a very different approach to packaging, and so does Nix/Guix.

If it truly was a well defined problem, everybody would be using RPM. All the package managers you mentioned were created because somebody perceived a problem or limitation with previous formats.


> I would argue that it's far from a well defined problem. deb/rpm have very fundamental differences from gem/pip/npm etc.

I know the internals of pear, pip, rpm and deb. I don't recognize fundamental differences between them, although this evaluation depends on the definition of fundamental.

At the very core, all of them catalog files on the filesystem, script configuration of files once written and trigger reactions from other packages to the appearance of a new package.

The key difference is one of domain. Development platform specific package managers do their operations from inside the development language, with the niceties that come about from that (better APIs to change the development environment). This is not fundamental.

As for being well defined, I state the problem as: if you locked the lead developers of deb and pip in a room for a couple of hours, would they be able to come up with a unified package manager? It's, for me, clear that they would. Rinse and repeat, and you get a full unified spec.


"It's, for me, clear that they would."

I highly doubt it. The deb developer is a sysadmin. The pip developer is a programmer. That's a big gap.

For example, you can install many different versions of a gem simultaneously, which is very useful to a programmer. A deb can only install a single version at a time. That's best from a sysadmin perspective.


You can have multiple versions of the same deb package. Look for installed Linux kernels on your system and you'll likely find an example right there.

This is actually a good example of the good definition of the problem space. All corner cases have been dug up. It's not a priori obvious that an operating system needs multiple versions of a package, but in some cases it does. And the package manager supports it.


> You can have multiple versions of the same deb package. Look for installed Linux kernels on your system and you'll likely find an example right there.

Not so much - they have to have different names. I couldn't have a package named, e.g., linux-kernel and install version 3.4, 3.5, etc, instead I have to install linux-kernel-$version, each of which is a different package.


In the obviously far superior ;-) RPM world it's possible to have multiple "linux-kernel" packages with different versions installed at the same time.


You're getting caught up in implementation details. It fulfills the functional requirements in Debian, shows that the problem is common to all package managers, reveals that this corner case has been handled by deb, and in the end reinforces my original points:

a) The requisites for most package managers out there are not that different (i.e. it's the same problem).

b) We have the problem so beaten down that we should be able to properly define the solution boundary.

Is the deb solution suboptimal? Probably. It's not relevant to the main point being argued.


>You're getting caught up in implementation details.

But if a package manager is anything, it's an implementation. It's not a theoretical concept; it's a tool that a user has to smooth the process of their work. Both kinds of user could just install and configure packages by hand, but it's the implementation that makes a package manager useful (for one, being able to run multiple versions; for another, being explicitly discouraged from doing so). Dismissing implementation details in a tool that exists to be an implementation is missing the point.


This is a pretty common point of view, but I wonder, how would this be enforced in a free-and-open software development ecosystem? Who would go around stopping people from launching their own take on a solution to the same problem apart from the "blessed" solution? In a world where everyone is free to choose which problem they'd like to solve and how to go about it, and it is impossible for everyone to agree about even the most trivial of options, you cannot ever have only one text editor, package manager, init system, etc.

Seeing how well most developers welcome change in their favorite OSes, editors, etc, I imagine a world where every developer bends their efforts towards the one true chosen solution, and are not allowed or able to choose to "waste" their efforts on another solution wouldn't yield as vibrant a software ecosystem as we have now.


> This is a pretty common point of view, but I wonder, how would this be enforced in a free-and-open software development ecosystem?

It wouldn't be enforced. The beauty of OSS is that there is no enforcing of solutions. Even when you touch something as core as the init system or the graphics system, the solution must prevail by technical merits. There is marketing, but thankfully this is a world of technical evaluation first, selling second and never one of enforcement.

Change is more difficult, but so are mandated blunders.


Nix (and by extension Guix) are meant to be this solution you're looking for. The problem, as with most unified solutions, is adoption.


Then that is the very first thing they must state. Instead, the opening line is: "GNU Guix is the functional package manager for the GNU system and a distribution thereof."

In addition, this problem can only be solved by luring established platforms, and this can only be achieved by providing a transition path. It should be able to coexist with other package managers (composer, or pip, or npm or, if shooting for the moon, rpm and deb).


Much like how many of us use pip, gem, npm, composer, etc. on our Debian/Red Hat/etc. machines, you can also use Guix in the same way. For example, say you are deploying a production web application and Guix has newer versions of some dependencies than your stable host system. You could use Guix to create a profile (a collection of installed packages, a bundle in Ruby terminology) that includes those dependencies for your web application to use.

It's possible to use pip and friends on a Guix system, too.

I personally do all of my Guix hacking on a Debian machine. They play very nicely together.


Having multiple system package managers coexisting is pretty thorny. If very carefully delineated so they manage completely independent bits of software installed in their own prefixes, I could see it working (with some careful path management). But if you're actually installing/upgrading system software with it, having some mix of apt, rpm, and guix managing your packages sounds like a recipe for pain.


I'm not sure it really matters that rpm and deb solve the same problem unless you use different distros.

However npm, gem, pip, composer mostly solve the same problem, but a different one to rpm, in that we want per-project dependencies rather than per-system, and want them to be committable. Also, many projects combine languages, so it would be good if these were combined.


It's well defined, but there's no good solution to this problem yet! See: http://www.well-typed.com/blog/2014/09/how-we-might-abolish-... It's true for every language out there, not just Haskell.


Don't forget the classic non-package-maintainer mistake of thinking policy = package file type/format.

It's pretty trivial to "force" an rpm to install on a Debian box. That doesn't mean that magically any of it will follow Debian policy, or that any of the paths or dependencies will be correct. But technically files can be thrown onto the filesystem, even if they can't actually be used where they land. And vice versa, if you shove a deb onto a Red Hat box.

My biggest "NIH" surprise is that GNU has both this and the Emacs package system, and at first glance they have nothing in common. The idea of installing a local copy of Rails or Leiningen off Emacs MELPA is kinda weird. I wonder if emacs twenty...something will use a user-level version of this system. That would be interestingly consistent.


Guix is a big effort at making a purely functional package manager with a large recipe database and all kinds of bells and whistles you want in a full operating system.

Emacs's minimal package manager exists to install Elisp libraries (and nothing else) on every platform where Emacs runs, including MS Windows, and you can be happy it does dependency tracking at all.

The two can't really be compared. Maybe in 10 years Guix will run on all platforms and have its components separated in such a way that it's sensible for Emacs to ship a version of Guix with it for Elisp package management, but don't hold your breath.

So I don't think NIH applies there.


I hope this ends the fad of every new programming language writing its own package manager.


Adopting a GPL'ed tool written in Guile to install packages for their own languages? I don't see it as likely.


GPL'ed means everybody can potentially use it. It's a stand-alone tool the user has already installed; it does not have to be distributed or linked against. And I don't see why these people wouldn't write a few lines of Guile. Right now they usually write a few lines of shell.


I'm always confused to see this point brought up. Whenever there are lots of options, folks talk about the harms of fragmentation and wasted effort and focus. Whenever there aren't a lot of options, it seems like the same crowd demands disruption and competing choices.


Different people, probably. I'm all for consolidation whenever the problem space is very well defined, and divergence whenever the problem space is ill defined. This is a case of the former.


Package management may be defined but maybe not well named. More and more when I see package I see language stack, lexical scope and such...

Maybe the Systems vs Languages split is detrimental to the solution here.


>are we forever doomed to have multiple package managers for the dot product of [operating system]X[development platform]?

Did you, by any chance, mean Cartesian or direct product?


Mandatory XKCD comic: http://xkcd.com/927/


You beat me to it. :-)


Interestingly enough, the adoption of a new technology standard is a very similar problem to the adoption of a new product. There is a learning/switching cost involved, and for someone to make the move the pain has to be absolutely unbearable, the new solution has to be 10x better, and the cost has to be as low as possible. We often end up in local minima, and paradoxically the only way to overcome those is to run competing searches (aka multiple products/standards) or simply to restart the search. Then there is also the problem of existing solutions being already "good enough" (think python27, IE8, IPv4, dpkg, cars that run on oil, and electricity that comes from coal).


Purely functional package management (PFPM) gives us reproducible builds; other package management systems don't. 1/0 = ∞. Therefore PFPM is infinitely better than non-PF PM. ;-P

And my reasons for choosing Guix over Nix would be: fully-free GNU system, doesn't use systemd (no hate, just a preference), uses Scheme/Guile for everything from init system to PM.

(Add possible future Guile/Emacs integration to the mix, and that Guile might enter even more GNU software (e.g. it's in GCC, GDB, and Make already), and we get an interesting picture of a pseudo-Lisp Machine... I'm thrilled, personally.)


As someone who ran NixOS for a time, I wonder if this might allow me to proceed further - with Nix(OS) I had some issues grasping the recipes/the language. Scheme looks much more accessible to me.

Off-putting is the strong focus on libre in the announcement. While I actually like that, I often find that projects that emphasize their relation to free/libre software are... failing in the real world. Need to figure out if I could actually install useful media codecs, flash if needed, heck - maybe decss or whatever I deem necessary.


> Off-putting is the strong focus on libre in the announcement. While I actually like that, I often find that projects that emphasize their relation to free/libre software are... failing in the real world.

It's a GNU project. GNU is run by the FSF. The stated goal of the FSF is the eventual elimination of all proprietary software. None of this should come as a surprise.


Adobe Flash is an outright offense if you ask me. A distro providing a separate repo for non-free software is one thing, forcing me to run uninspectable code, which is known to have a horrible security record, under my user's privileges, is another thing. As OpenBSD people say, it's a feature that it's missing. :-P

I don't know what problem with media codecs there are. FFmpeg, as free software, implements H.264 just fine, even though it's a patented format. I don't know how exactly the legal aspects of that work, but I'm happy it somehow works, because otherwise I couldn't watch all my [spoiler]Chinese cartoons[/spoiler].

Now WiFi cards are a problem from what I know. You can buy a WiFi dongle with free software drivers though to get around that issue, or replace your laptop's WiFi card with one that has free drivers. I suspect that will be the one major issue with the GNU OS, and the only thing we can do is ask manufacturers of WiFi hardware to be more cooperative.

The other big problem tends to be the BIOS, but GRUB/Linux-libre at least boot just fine on proprietary BIOS, so it's no issue for those who don't care. Those who want to take the further step for software freedom may want to look into LibreBoot: http://www.libreboot.org/


Well, let's put flash aside. I'm not a fan and Shumway works for most stuff that I need.

Dropbox? BTSync? (I .. don't care about the latter, and try to replace the former with SyncThing atm, but I'm trying to make a point: It's more than just a single binary blob to watch cat movies or porn in a browser)

What I really like to install is Steam.

What I'm trying to say here is: If there's no middle ground, no way to fulfill my needs, the system just isn't for me. I might support the idealistic idea, but I cannot (or, more precise: I don't want to) work with a number of limitations.

I didn't even think about wifi or anything hardware related. If that doesn't "just work" it's again impossible for me to use the project and I need to bury the idea of trying Guix right next to my failed attempt to run nixos as my host system.


I like the idea a lot. However what's lacking is good package repositories to go with it. Nix has exactly the same issue.

A question I have is: why don't they use the Debian / Fedora repositories? Is there some crucial metadata missing? Are the build scripts for .deb / .rpm made in such a way that it is not possible to add multi-version support on top?

To clarify, I'm not asking them to interoperate with apt-get/yum, just to use the package repositories as a source.


>why don't they use the Debian / Fedora repositories?

They are fundamentally incompatible.

Among other reasons, Debian and Fedora package builds are not reproducible. That is, given the corresponding source code to the package, you're not likely to get the same binary if you build it yourself. Sometimes maintainer uploaded packages may not build at all for someone else, because the binary built on the maintainer's machine relied on some extra software not specified in the build recipe.

Nix/Guix are source-based distributions. They build packages in an isolated chroot, in which only the explicit dependencies of a package are available. This means that no user must rely on a server to give them binaries. Any user can opt to build any package on their own system instead, and the result will be nearly bit-identical to what's on the build farm.

Hope this helps.
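The chroot isolation described above can be modeled as a toy sketch (illustrative Python, not the actual chroot/container mechanism; the package names are made up):

```python
# Toy model of an isolated build environment: the builder can only "see"
# paths that were declared as inputs, so an undeclared dependency makes
# the build fail loudly instead of silently leaking in from the host.
def isolated_build(declared, needed):
    missing = [n for n in needed if n not in set(declared)]
    if missing:
        raise LookupError(f"undeclared dependencies: {missing}")
    return "ok"

assert isolated_build({"gcc", "glibc"}, ["gcc", "glibc"]) == "ok"
try:
    isolated_build({"gcc"}, ["gcc", "openssl"])   # openssl never declared
    raise AssertionError("should have failed")
except LookupError:
    pass   # caught: the recipe must be fixed to declare openssl
```

This is exactly the failure mode the comment mentions for some Debian/Fedora packages: a build that works on the maintainer's machine because of software that was present but never declared.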


"Nix/Guix are source-based distributions. They build packages in an isolated chroot, in which only the explicit dependencies of a package are available. This means that no user must rely on a server to give them binaries. Any user can opt to build any package on their own system instead, and the result will be nearly bit-identical to what's on the build farm."

While the RPM package format itself doesn't enforce this, the culture of Fedora/RHEL is that packages are built using mock (as part of the koji build system), which does build everything in a chroot. And if a package would build differently (such as pulling in different libraries) for someone using the same tools, it is considered a bug and will be fixed by the maintainer.

I don't know how specific guix/nix get in this regard: Do they specify the build dependencies down to some sort of file-based checksum level? Or will they build against any compatible version of a library? If the latter, how is that different from RPM when using mock? One could just as readily end up with a very different build of a package, based on which library version is available in the repo against which the package is being built.

One could lock down the remote repository to create predictable builds (i.e. only use the Fedora version 20 repo, with no updates; or the Guix version whatever repo with no updates), and presumably either would produce very predictable builds (I know the Fedora case would).

So, what do you mean when you say "reproducible"? Is it really specific and unforgiving of new versions of things? Or, does it allow for newer versions easily? RPM allows one to specify things with extreme specificity (down to the exact version and package revision), but one rarely does that if the package will build successfully with a range of versions.

I'm not sure I'm convinced this is a unique feature of Guix/Nix, is what I'm trying to say.


>Do they specify the build dependencies down to some sort of file-based checksum level?

Yes, Emacs 24.4 built with X support has a different hash than Emacs 24.4 built without X support. They are completely different in the eyes of Nix/Guix.

>So, what do you mean when you say "reproducible"? Is it really specific and unforgiving of new versions of things? Or, does it allow for newer versions easily?

In Nix/Guix, you have to specify the exact dependencies, nothing is implicit. Packages are built as a pure function. A package's identity is the hash of all its build inputs: source code, build system, and other dependencies. Bumping the version of a dependency triggers rebuilds of packages that depend on it.

I know of no other package management systems that treat package builds in this manner.
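The hash-of-all-inputs idea can be sketched in a few lines of Python (a toy model, not the actual Nix/Guix derivation format):

```python
import hashlib

def package_id(name, version, source_hash, dep_ids):
    """Toy model: a package's identity hashes *all* of its build inputs."""
    material = "|".join([name, version, source_hash] + sorted(dep_ids))
    return hashlib.sha256(material.encode()).hexdigest()[:12]

# Same Emacs source, different inputs -> different identities
# (e.g. built with or without an X dependency):
libx = package_id("libx", "1.0", "src-aaa", [])
emacs_with_x = package_id("emacs", "24.4", "src-bbb", [libx])
emacs_no_x = package_id("emacs", "24.4", "src-bbb", [])
assert emacs_with_x != emacs_no_x

# Bumping a dependency changes its identity, which propagates upward,
# triggering rebuilds of everything that depends on it:
libx_new = package_id("libx", "1.1", "src-ccc", [])
assert package_id("emacs", "24.4", "src-bbb", [libx_new]) != emacs_with_x
```

Note how rebuild propagation falls out for free: a dependent's hash embeds its dependencies' hashes, so any change ripples upward automatically.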


"In Nix/Guix, you have to specify the exact dependencies, nothing is implicit."

Do you do pseudo-packages, like the Debian model of the build-essential package, which pulls in a standard cloud of normal stuff? No need to list libc6-dev by name because that's a dependency of build-essential, but if you want libpcap-dev you need to pull it in specifically by name because it's not in build-essential.


While the phrase "nothing is implicit" was used, there actually are so-called "implicit inputs" in Guix. :-P Each package uses a certain "build system", e.g. "gnu", which has autotools, bash, etc. as implicit inputs, so every package using that build system gets those dependencies.
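The "implicit inputs" idea can be modeled like this (a hypothetical Python sketch; names are illustrative, and Guix's real build systems are Scheme objects, not sets):

```python
# Hypothetical model: a build system bundles a standard set of inputs
# that every package using it receives, on top of what it lists itself.
GNU_BUILD_SYSTEM_INPUTS = {"gcc", "make", "autotools", "bash", "coreutils"}

def all_inputs(explicit_inputs, build_system_inputs=GNU_BUILD_SYSTEM_INPUTS):
    # The full input set that participates in the package's hash:
    # what the recipe lists explicitly, plus what the build system implies.
    return set(explicit_inputs) | set(build_system_inputs)

hello = all_inputs({"libc"})
assert "bash" in hello and "libc" in hello  # implicit and explicit together
```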


This doesn't really sound like a feature. Or, at least, not a feature I've ever wished I had.

From a purely pragmatic perspective, it means huge swaths of software have to be rebuilt any time anything gets updated. And it now seems to me that every user is building all their own software (as with ports or Gentoo ebuilds, both of which I consider unacceptable package managers for servers for that very reason). Given that, and given that rebuilds must often span huge piles of software, occasionally with exploding dependency chains, I'm feeling a little queasy imagining trying to administer such a system.

I understand the appeal, from a security perspective (and particularly in a world in which the NSA and other highly capable attackers are willing to exert resources to compromise software at the vendor level), of being able to duplicate a package from source. That's cool. But why force every user to do so every time they update their software? Am I missing something? Are there also binary packages, and is verifying package builds optional?


Users do not need to build everything for themselves. They can download binaries from the build farm, just as if they were using a binary-based distro. The system is transparent: If a binary is available from an authorized remote machine, Guix will pull from it. If not, Guix will build it locally.
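The fallback logic is roughly this (a hypothetical Python sketch; dict lookups stand in for querying substitute servers over the network):

```python
# Hypothetical sketch of the substitute-or-build fallback: try each
# authorized substitute server first, build locally only as a last resort.
def ensure_package(pkg_hash, substitute_servers, build_locally):
    for server in substitute_servers:
        binary = server.get(pkg_hash)  # None if the farm has no binary
        if binary is not None:
            return binary              # transparent binary substitution
    return build_locally(pkg_hash)     # fall back to a local build

build_farm = {"abc123": b"prebuilt emacs"}
assert ensure_package("abc123", [build_farm], lambda h: b"built locally") == b"prebuilt emacs"
assert ensure_package("zzz999", [build_farm], lambda h: b"built locally") == b"built locally"
```

Because a package is identified by the hash of its inputs, a substitute fetched from the farm and a local build are interchangeable: they name the same thing.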

To respond to security updates of core packages a lot faster, we use a technique called 'grafting' that doesn't involve full rebuilds.
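As I understand it, a graft builds only the patched replacement and then rewrites references to the old dependency's store path inside already-built outputs. A toy Python sketch (hypothetical, not Guix's actual code):

```python
# Hypothetical sketch of grafting: since the rewrite is a raw byte
# substitution over built artifacts, both store paths must have the
# same length or binary offsets would shift.
def graft(binary: bytes, old_path: bytes, new_path: bytes) -> bytes:
    if len(old_path) != len(new_path):
        raise ValueError("graft requires equal-length store paths")
    return binary.replace(old_path, new_path)

old = b"/gnu/store/aaaaaaaaaaaa-openssl-1.0.1i"
new = b"/gnu/store/bbbbbbbbbbbb-openssl-1.0.1j"
patched = graft(b"links against " + old, old, new)
assert new in patched and old not in patched
```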


It's true, and it's an interesting problem. Maybe in the future a package's identity will depend on only some of its inputs, the other inputs being second-class variables that don't affect the build in the same way. Kinda like the difference between parameters and variables in math: F_a(x) is still F whether a = 1 or a = 2, ...


Thanks! Would it be possible to have a declarative system (mostly, I'm interested in multi-versioning and rollbacks) using the .deb / .rpm binaries instead of the sources?


Can you clarify what's wrong with nixpkgs? https://github.com/NixOS/nixpkgs

Granted, it's not yet perfect, but my experience with it so far has left me very impressed.


Yeah, I think there is metadata missing from those repositories. Nix uniquely identifies a package by hashing all of the inputs that went into building it.


Boot to Guile FTW!


So it's a Scheme layer on top of Nix?


To some degree, yes. Guix is compatible with the low-level Nix package format, and it currently uses the upstream Nix daemon. Everything else has been replaced. Guix uses its own client-side and build-side code. The Nix language is replaced entirely with Guile Scheme, build scripts are Guile programs instead of shell scripts, the UI is different, etc.


On top of the Nix daemon certainly, though not Nix as a whole, which has its own package recipe language etc.


Thank goodness, I was concerned that we didn't have enough package management options!



