I've thought the same for a while, and now I'm looking to Nix[1] as a solution. Upside: strong fundamental design, extreme reproducibility, great Haskell support. Downside: small community, middling OS X support, essentially non-existent Windows support.
It's still squarely in early adopter territory and the documentation is woeful, but if you're willing to put up with that it really does feel like the "one true package manager" I've been searching for. I've been using it at work and while the setup was painful and some things are still awkward (e.g. statically linking Haskell executables), it's been incredible overall. The up-front work of learning the system and getting it running has paid off already, and it'll pay off even more as I continue using Nix for other things.
I know of at least three reasons why makefiles do not work well for common use cases. Many people seem unaware of them, even though they come up quite often.
• make(1) has a hard time rebuilding target files when source files have changed. The problem is that users have to declare the dependencies of a target before the build, but due to file search paths, some dependencies are only known after a target has been built. To know about all dependencies, make(1) would have to build every target at least twice. (A sketch of the usual workaround follows this list.)
• make(1) by default does not rebuild a target when its build rules change. While this seems really weird to me, it seems to be a consequence of having a single makefile. When a build rule is changed, the makefile changes. So should all targets depend on the makefile implicitly? One could argue that they should – but then each change in the makefile would rebuild all targets (the blunt fix shown in the sketch below).
• make(1) cannot handle non-existence dependencies. Imagine a file foo.h being searched for by a compiler in the directories bar and baz (in that order). If the header file baz/foo.h is found, then bar/foo.h should be recorded as a non-existence dependency: if it exists at any future point in time, the target should be rebuilt.
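For the curious, the usual workaround for the first two points looks roughly like the sketch below. File and target names are invented, and recipe lines must be tab-indented; it also illustrates the problem more than it solves it, because the generated .d files only exist after the first build.

    # ask gcc to write foo.d (header dependencies) as a side effect of compiling foo.o
    CFLAGS += -MMD -MP
    OBJS   := main.o util.o

    prog: $(OBJS)
            $(CC) $(CFLAGS) -o $@ $(OBJS)

    # listing the makefile itself as a prerequisite is the blunt fix for the
    # second point: any edit to the makefile now rebuilds every object
    %.o: %.c Makefile
            $(CC) $(CFLAGS) -c -o $@ $<

    # the .d files do not exist before the first compile, hence -include,
    # which silently skips missing ones; the dependency information is
    # produced while building and only consulted on the next run
    -include $(OBJS:.o=.d)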
I think that all of these are limitations not only of make(1), but of all utilities that expect dependencies to be fully known before a target is built. What makes you think they are not?
How would you solve all three problems I listed using make(1)?
A makefile consists of rules. Each rule contains a dependency line which names a target and an enumeration of prerequisites. This means that the dependencies have to be known before the target is built. By design, it is impossible for a single-pass make(1) invocation to derive the dependencies of a C program, because the full list of dependencies is only output by the compiler while it compiles.
By contrast, redo builds the target first and then records what was used to build it. For example, when compiling a C file with “gcc -M”, gcc will output dependency information. With redo, you normally record those dependencies after the target has been built. With make(1), that information has to end up in the makefile somehow, possibly leading to further builds.
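As a concrete sketch, this is essentially the default.o.do example from the apenwarr redo documentation ($1 is the target, $2 the target without extension, $3 the temporary output file):

    # default.o.do: build any .o from the matching .c
    redo-ifchange $2.c
    gcc -MD -MF $2.d -c -o $3 $2.c
    read DEPS <$2.d
    # record what gcc actually read, after the compile has happened
    redo-ifchange ${DEPS#*:}

redo also has redo-ifcreate for the non-existence case mentioned earlier in the thread: the target is rebuilt if a currently missing file shows up later.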
I mentioned m4 before as a way of running more passes, which is how the Linux kernel approaches this, but looking more closely at make, I'm not even sure you need it.
Because you mention `foo.o' but do not give a rule for it, make will automatically look for an implicit rule that tells how to update it. This happens whether or not the file `foo.o' currently exists.
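Concretely, assuming a foo.c sits next to the makefile, the quoted behaviour means this is already enough:

    # with no rule for foo.o given, make uses its built-in implicit rule,
    # roughly: $(CC) $(CPPFLAGS) $(CFLAGS) -c -o foo.o foo.c
    prog: foo.o
            $(CC) -o $@ foo.o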
I do not understand. I know that make(1) has many implicit rules – but which of the problems I mentioned does this solve and how? Please be specific about your solution.
Because "limiting" has a broader meaning than what is merely possible with enough effort. The experience of practically everybody who hasn't already put in the effort to learn make is that they get much further much faster in more modern build systems - indeed, framework specific build systems often do exactly what you need them to do with no or very little configuration at all. That is a feeling of not being limited by the tool.
They are different tools, each specific to its job, and they wouldn't combine well. apt-get only exists on Linux, requires root and is designed around that; npm doesn't. apt-get installs the latest version, while npm expects more specific versions since it is for code. npm is built for Node's ecosystem, and apt-get is better for system packages.
Welcome to dependency hell! Having worked for years in different "PHP environments", I've learnt that most (web) developers I met in this field tend to adopt as many fancy new tools as they can without thinking about business value and long-term impact. At the end of the day you might end up with more than four dependency systems...
To answer your initial question of whether we really need so many dependency systems: no, we don't!
As the author already needs composer to set up PHP dependencies, why not use composer for the other required packages as well? It's not a problem to get the commonly needed libraries via composer, which dramatically reduces dependency complexity, deployment, and build scripts because you don't need as many tools.
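For example (the package name here is just an illustration), adding a library through the tool that is already in place is a single command:

    # pull a common library via composer instead of introducing yet another package manager
    composer require guzzlehttp/guzzle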
npm's advantage (or disadvantage, depending on whom you ask) is that the project owner has direct control over the release cycle. The owner can decide when to push an update, and it will be available to everyone immediately, regardless of their operating system.
Why can't npm be turned into ppa:nodejs and pip be turned into ppa:python?