What do you find maddening about micro packages? I hate depending on a giant stack of frameworks (they slow installs and builds, and add behavioral surface area) when I just need a function or two, so micro packages are a joy for me.
Reading the article, it seems to me that this micro-packages approach is what slows everything down.
I have never, ever seen all these problems, even in not-so-state-of-the-art dependency managers like Maven or NuGet.
Seriously, up to now it was impossible to have a build server isolated from the internet unless you checked in all the dependencies?
Simply crazy. I really can't understand how people can even consider using a dependency-management system that doesn't satisfy the essential requirement of keeping your CI server sandboxed.
> this micro packages approach is what slows down everything
Not really. The npm client is just flaky: very slow, buggy, with some major design flaws in its v2 incarnation and a completely different set of major design flaws in its v3 incarnation.
The core architecture works fine for small packages.
> Seriously up to now it was impossible to have a build server isolated from the internet if you didn't want to check-in all the dependencies
Of course not, although I admit the linked article implied it was if you didn't read closely. There are a number of solutions, including running a private registry, a copy of the public registry, a local caching proxy of the public registry, a custom npm client that hits a local cache, etc. Some of these solutions work quite well, and in fact Yarn itself is just a re-implementation of some existing ones.
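For the private-registry or caching-proxy route, the simplest version is just repointing the npm client at an internal mirror. A minimal sketch (the registry URL is a placeholder for whatever host your intranet actually uses):

```shell
# Point npm at a hypothetical internal mirror/caching proxy
# (substitute your own intranet registry URL):
npm config set registry http://npm-mirror.internal.example/

# Confirm which registry the client will use:
npm config get registry
```

Installs then resolve against the mirror, so the CI box never needs direct internet access.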
> I really can't understand how people can even think to use a dependency manager system that doesn't satisfy the essential requirement of having your CI server sandboxed.
As the article noted, Facebook was sandboxing their CI server while using npm; they just didn't like the existing solutions. (Nor should they; they were a bit naff.) But that doesn't mean (nor is anyone claiming) that there were no existing solutions. Yarn looks great, but it's an extremely incremental change.
JavaScript is very easy to work with if you just stick to micro packages. What slows it down is that the language has been forked to an incompatible spec (ES6), so there is essentially a hard fork in the pool of libraries. To deal with this (and to calm anxieties around the lack of inheritance, trauma over having to type function over and over, etc.), layers of pre-processing have been added: JSX, Sass, asset compilation, compatibility layers, framework booting. This pipeline distances developers from the DOM and other low-level protocols, increases the surface area of the build process, and frequently disables debugging and introspection tools. In addition, megaframeworks like Angular, Ember, and React add mandatory layers of indirection and runtime dependencies between each snippet of code, introducing behavioral complexity and confusion from non-deterministic builds.
All of this is why people are excited about Yarn, but to me it's a band-aid on several architectural mistakes.
Just say no to frameworks. Just say no to ES6 and transcompilation. Just say no to asset pipelines. Viva la JavaScript. Viva la Netscape. Viva la Node.
Where would a CI server retrieve dependencies from, if not either over the network or from within the repository? Do you keep a module/library cache on the CI server itself? In other words, what do you supply as input to your CI process, besides a Git URL?
From a local Nexus/NuGet repository on the intranet; I'm certainly not allowing direct internet access from a CI server.
And apart from the obvious and scary security concerns, it is also much faster.
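As a concrete sketch of that setup, the NuGet client can be pointed at an internal Nexus-hosted feed like this (the URL is a made-up placeholder for an intranet host):

```shell
# Register a hypothetical internal Nexus-hosted NuGet feed
# so package restores never leave the intranet:
nuget sources add -Name internal-nexus \
  -Source http://nexus.internal.example/repository/nuget-group/

# List configured sources to verify:
nuget sources list
```

Maven supports the same pattern via a `<mirror>` entry in settings.xml pointing at the Nexus group repository.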
Got it. You can also set up an npm registry like that, but I don't think that's commonly done by small dev teams. This does lead to hilarity like the left-pad clusterfork.