The io.js fork and subsequent merge back into Node.js, including the birth of the Node Foundation, has been one of the best examples of the power of open source I have ever seen in action.
The situation went from bad, to worse, to the best possible outcome, and that's remarkable to say the least.
Congratulations to everyone involved and thank you for the hard work.
As somebody who uses Node only at the periphery of my role (for things like the Zombie browser and SASS parsing), the massive instability has put me off from relying on it for anything core.
Documentation is quickly out of date, libraries seem to have incompatibilities which only become obvious when they don't work, there's no 'recommended' approach for even basic tasks.
I won't pretend I'm speaking for everyone, as clearly there are lots of people doing great things with Node. However, it doesn't seem to be a language like Ruby or Python where I would feel comfortable using it for a small but important project.
I'm always grateful to everyone who works on open source projects, but I wouldn't hold Node up as a great example of the open source community.
Instability is the very thing this merge and the foundation seek to address. Where there was once an uncertain future, there is now a Board of Directors and a Technical Steering Committee bound by an Open Governance Model.
> It promises a stable future, so let's say I'm being cautiously optimistic.
This doesn't make sense in light of your initial comment. One of the big goals of IO was to move to semver and address unpredictability/instability that you describe.
If anything, the merging of Node and IO should give you confidence that project governance will be on the right track moving forward.
Also, when you consider the advancements in transpiled js, the js/node community is looking very wise compared with other language maintainers.
I'm not sure whether you're trolling, but in case you have serious concerns: node.js 4.0.0 incorporates the history of io.js, and io.js released 1.0.0 in early 2015.
I mean, Ruby was created in 1995, Python in 1991. Node was first released in 2009. That's not an entirely fair comparison since, of course, JavaScript was created long before Node, but Node introduced the language to a new environment, and a lot of complications needed (and in some cases, still need) to be sorted out.
As for no recommended approach - that's deliberate. I don't think most people working on Node want a Rails equivalent.
> As for no recommended approach - that's deliberate. I don't think most people working on Node want a Rails equivalent
This is a useful way to understand what Node is. Node's core API is for building any sort of network service: A DNS server, an SMTP server, HTTP server, socket listener, etc. and so it applies to a wide and varied set of problems. Rails is for building a very specific type of database-backed web application.
Exactly. I don't think Node is a good fit for monolithic CRUD apps. These are already solved problems in various other languages. Instead Node lets you compose small purpose-built modules.
Now our only problem is how to build decoupled/distributed microservices.
I'd argue only Erlang and Elixir are going to survive into the multi core multi server future, but that would be little more than a slightly educated opinion unless I take the time to write a very large explanation with cited references. I don't have time for that this week.
Personally I prefer having a monolithic app at the core and using Node as the glue to tie everything together (e.g. to provide "real-time" features or to integrate third party services).
I think what's holding Erlang back is mostly its rather obscure syntax. Only a mother could love that.
We are an (almost) completely node.js team and are having quite a bit of success using it for microservices. gRPC makes communication fast and easy, while Docker and Kubernetes make deployment and composition just as simple.
I agree - I think I could get away with arguing that Ruby became popular because of Rails (that may not be completely true, but I and a lot of other devs I know learned about ruby because of rails at least). Node, OTOH, has no central framework that people learn of and then say, "oh, I have to learn node to use XYZ".
There are plenty of libraries that provide whatever level of support you want, but one big reason people seem to use Node is for the ease of the DIY process. Honestly, I don't need an all-in-one system, because my system has needs that aren't easily modeled in ActiveRecord (or at least weren't during Rails 3 days when I moved over to Node). Instead, I am doing mix-and-match with smaller libraries to help database access, realtime (websocket) services, OAUTH, and a trad REST backend.
I also think it's fair to compare the release of node and the release date of ruby or python - it was really only then that we could start talking about needing frameworks for building web applications in the javascript language, whereas the other two languages could have done so right away (or ... later, once the web application world became more than just a cgi gateway on top of a c++ stack, like the lunch menu randomizer I had running back in 1995). Still, no matter how you slice it, node and server-side javascript are still relatively new on the scene, and not necessarily well-suited for an all-in-one, highly-opinionated framework.
> ... Ruby was created in 1995, Python in 1991. Node was first released in 2009. That's not an entirely fair comparison since, of course, JavaScript was created long before Node, but Node introduced the language to a new environment...
Actually, no, it didn't. JavaScript was first introduced "on the server side" about 20 years ago in the Netscape web server[1].
> We learn from history that we learn nothing from history. -- George Bernard Shaw[2]
The JS of 2009 was an entirely different language from the JS Netscape originally created. Plus Netscape's web server was an evolutionary dead end and a proprietary closed-source product.
Node was in part influenced by CommonJS, which tried to unify the various competing but relatively obscure (mostly non-browser) JS environments. To say that it learned nothing from the history of SSJS (or JS environments in general) is, frankly, wrong.
That said, Netscape's SSJS is entirely unrelated to Node or anything like it[0]. Node lets you write application servers, Netscape's SSJS effectively was more like Rhino-meets-JSP or maybe GWT (i.e. the heavy lifting was apparently implemented in Java, not JS).
With utmost respect, I respond to each point you mention below.
> The JS of 2009 was an entirely different language from the JS Netscape originally created. Plus Netscape's web server was an evolutionary dead end and a proprietary closed-source product.
The reason I cited Netscape's server was to unequivocally show that NodeJS did not introduce JavaScript to "a new environment" as the GP stated. Why previous efforts failed to gain traction or if it is even a good idea to use JavaScript to define server-side logic is left as an exercise to the reader.
> Node was in part influenced by CommonJS, which tried to unify the various competing but relatively obscure (mostly non-browser) JS environments. To say that it learned nothing from the history of SSJS (or JS environments in general) is, frankly, wrong.
It would seem we have differing perspectives of history. CommonJS was started in 2009[1], the same year as NodeJS[2]. The lessons of history I implied regarded attempts to use JavaScript as a server-side solution over the past 20-ish years. While you make the distinction:
> Node lets you write application servers, Netscape's SSJS effectively was more like Rhino-meets-JSP or maybe GWT (i.e. the heavy lifting was apparently implemented in Java, not JS).
I suggest that this is irrelevant, as the details of how the JavaScript logic is executed are orthogonal to the fact that systems such as these use JavaScript to define behaviour itself. The fact remains that there has been more than one previous project/offering/effort which had JavaScript as a key component. IMHO, it behooves interested parties to do whatever postmortem examinations possible before committing to the same or significantly similar path.
I don't think how Netscape's SSJS worked is irrelevant at all. There is a huge difference[0] between Node as a JS environment for writing applications and Netscape's SSJS as an environment for writing "server pages".
Node is for writing small standalone services. Netscape's SSJS was for creating dynamic web pages -- just like CGI scripts or (at the time) PHP.
SSJS may be considered prior art for "JS on the server" but it's barely recognizable. As far as I can tell the execution was entirely synchronous; but Node is defined by its evented -- and therefore asynchronous -- I/O. It's a data point for comparison but it's not a good lesson to learn from, other than what not to do.
A lot of the shortcomings of Node come from the limitations of JS at the time. Callbacks and streams (arguably two of the biggest issues with Node) are a result of Node being asynchronous. At the time, event listeners and callbacks were the best tool JS offered and the best thing the JS community had.
I can only imagine you're trying to say that Node should have taken more inspiration from other languages that solve problems better (like gradual typing, or the Maybe monad or the actor model). But none of this has anything to do with Netscape's SSJS.
You explicitly point out Netscape's SSJS as a part of history Node should have learned from. What specifically do you think Node should have learned from it? Are you just trying to say "SSJS is a bad idea (because Netscape's SSJS was bad) and shouldn't have been attempted again, even in an entirely different way"?
Sure, and after that we had Rhino and Adobe has been using JavaScript for a very long time as a scripting language (ExtendScript) and there are probably many other examples of JavaScript outside of a browser. But I don't see what that has to do with untog's main point.
I saw "untog"'s post as having two points, of which I responded to the former. You identify the latter point as being the main one and that's perfectly fine.
Just different importance discerned from the same post IMHO.
How long have you been waiting to use this pedantic comment and the obnoxious condescending quote about it to prove someone wrong and make yourself feel smart?
This has nothing to do with the parent comment about node deliberately differing from rails, it's typical hn/reddit pedantry.
I believe that Node can and will become more stable. It feels like it has too much momentum not to. But until it does achieve stability, it's difficult to use it for anything serious without investing a lot of time into it.
Perhaps Go is a better comparison than Ruby and Python.
In fact, to my surprise, I've just tested our app (v0.10.x in production) on v4.0.0: all our unit tests pass, the server works...
I'm amazed. A single module had to be bumped (Elasticsearch@8) while we directly rely on 50+ third party modules... Our npm-shrinkwrap is 2600 lines long.
That's not really a sign of instability to me, but rather of great work.
mongodb and redis are two compiled modules and we didn't have problems with those. Honestly, there may be others, I don't even know!
We try hard to keep our dependencies up to date to limit the risks of an upgrade. We progressively apply dependencies updates to dev->staging->sandbox->production environments.
Was I worried when node.js forked? Sure! But the situation is much better now. I think node.js can move forward smoothly.
You seemingly have much more experience with Node than me. Perhaps my amateur anecdotes are just bad luck, but certainly my Bash scripts that talk to Node break frequently.
I think the npm project should document best practices - like when to use a "*" version, when to use npm-shrinkwrap, etc. - to limit problems.
NPM is a very powerful tool. In fact, it's our deploy tool: we run `npm install` on servers (private Sinopia npm repository) to deploy. But to do that, you must follow many many rules that are written nowhere.
We used to do this and it caused a lot of issues. Shrinkwrap is a must to keep versions stable. NPM and network unreliability can cause failed installs, as well as other issues.
Our first major improvement to this work flow was a tar of the app with all of the dependencies coupled with `npm rebuild` after unpacking. This worked quite well.
More recently, we have switched to using Docker to generate these sealed packages, which is working great.
We've never really had a problem... Our sinopia npm server is probably isolating some network problems, plus we don't start/update servers that often.
Our NPM workflow also ensures builds are easy to replicate. Most problems in development occur because of obsolete versions of some packages (our apps contain many private modules)... Clean npm-installs shields us from that in production.
> it's difficult to use it for anything serious without investing a lot of time into it.
Isn't that true for many platforms?
Either way, I have multiple apps running Node in production, and I've never had the issues described. You don't have to be as breakneck as Node itself - some things I have are still running v0.10.x just fine.
I use Java/Spring, Ruby, and Node.js in production, and Node.js is by far the hardest to stay up to date with without introducing major problems. The only framework I have used that is harder to update is Finagle.
It's not just Node itself to blame. There seems to be a culture of breaking things in the Node.js ecosystem. Breaking changes in third-party libraries are common. I have to specify exact dependency versions in my package.json, and almost every time I update one of them, something goes wrong.
In my experience, Ruby (particularly rubygems) is quite a pain in the ass with regard to package upgrades, especially when the previous developer/ops has leveraged rvm to install multiple combinations of ruby/gem versions. I can't tell you how many times I've had to trace down problems in a project because a single gem fails to upgrade/pull/compile/resolve on 2 out of 4 machines even though the project has an identical gemfile and available ruby version on every server.
I contrast this with node/npm where my projects seem to behave identically across deployments (and in some cases, when in a bind, I've even been able to simply tar & scp the project src and have it run without further effort beyond installing node, something I've never been able to do with ruby).
> I have to specify exact dependency versions in my package.json, and almost every time I update one of them, something goes wrong.
That's where semantic versioning comes in. Most libraries follow it - if the major version number changes, it's a breaking change. If it doesn't, you're safe to upgrade.
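As a sketch of the convention (a simplified illustration, not npm's actual resolver - real caret ranges also special-case 0.x versions), a caret range like `^1.2.3` accepts any later version that keeps the same major number:

```javascript
// Toy model of the semver caret convention: ^X.Y.Z matches any version
// >= X.Y.Z with the same major X. A major bump signals a breaking change.
// (Simplified sketch only -- npm's real resolver handles 0.x, prereleases, etc.)
function satisfiesCaret(version, range) {
  const parse = (v) => v.split('.').map(Number);
  const [vMaj, vMin, vPat] = parse(version);
  const [rMaj, rMin, rPat] = parse(range.replace(/^\^/, ''));
  if (vMaj !== rMaj) return false; // different major: breaking, rejected
  if (vMin !== rMin) return vMin > rMin; // newer minor is fine, older is not
  return vPat >= rPat;
}

console.log(satisfiesCaret('1.4.0', '^1.2.3')); // true: minor upgrade, safe
console.log(satisfiesCaret('2.0.0', '^1.2.3')); // false: major bump, breaking
```

The upshot: with ranges like `^1.2.3` in package.json instead of exact pins, compatible patches and minors flow in automatically, while breaking majors require a deliberate upgrade.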
Semantic versioning doesn't solve compatibility problems, it just makes them more transparent. In some ways semver encourages breaking changes that might otherwise be avoided because, hey, a major version bump gives users sufficient warning.
There are some major libraries that are on version 15 or greater... there's nothing good from a user's perspective about having a library they use do a major breaking release every other month.
What's worse is that many libraries take advantage of the "it's the wild west until 1.0" rule in semver and simply never release a 1.0.
I would say that the ROI is lower for time spent learning Node. I've found that with all the other languages I've learnt, in a couple of days I've been able to build something useful, leave it running in production and reuse skills learned a year later.
With Node, even the name of the language is not stable! A few weeks back I went to install Zombie, and found I had to install io.js and not Node. Presumably it's now Node again. If you live and breathe the language, subscribe to the mailing lists, go to the meetups, etc, these things might be obvious. But when you're just dipping in and out, it really dents your confidence in the language.
Your latter paragraph seems a bit fuddy as people 'just dipping in and out' would never have to have contended with the iojs 'name' any more than Python/Ruby/PHP people have to switch to a different VM every time one is announced (e.g. PyPy/Rubinius/Hack).
I'd agree with your first paragraph in that Node is a bit lower-level out of the box and probably takes more effort to get started building a web-app, but I think the time invested jumping in at that low-level is actually useful vs diving in at a higher 'framework level' with some of the other popular web languages.
Also it does have superior installer/package-manager/deployment stories compared to say ... Python & Django, which does have an impact on ROI.
> Your latter paragraph seems a bit fuddy as people 'just dipping in and out' would never have to have contended with the iojs 'name' any more than Python/Ruby/PHP people have to switch to a different VM every time one is announced (e.g. PyPy/Rubinius/Hack).
One of my main use cases for Node is Zombie, and that required io.js after v3. No mainstream PHP, Ruby or Python application has required a different interpreter.
The io.js situation was more akin to Python 3 vs Python 2 than Python vs PyPy. It might not have looked like a major difference at first glance, but there were notable differences, including some breaking under-the-hood changes.
Actually, IIRC it's the exact same reason as the io.js split. The maintainer of jsdom had changes that weren't being merged into node's VM module.
So jsdom 3.x used a lib called "contextify". When io.js was released with those changes included, node support was dropped. Now that node 4.0.0 is out and has those changes, jsdom works on node again.
It was. The maintainer of jsdom decided to no longer support node after 3.x and made the library throw a condescending exception if you tried to run it with node. The docs were also filled with childish "tm" symbols whenever node was mentioned.
I see things like this all the time and it really puts me off OSS.
> I don't think most people working on Node want a Rails equivalent
It wouldn't really be possible anyway. Async programming is hard, and the shortcuts one can take with synchronous Ruby are impossible with async JavaScript. Node.js has a "fibers" lib, though it doesn't look like it is widely popular; AFAIK it is the only way to truly abstract async programming.
ECMAScript 7 has brought true async/await into the language, which currently works in Babel on top of Promises. As far as a regular user is concerned, define a function as "async" and then "await" on the result you're after inside of it. Hardly difficult, in my opinion!
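To make that concrete, here's a minimal sketch of the style (the function names are made up; in 2015 this needed Babel's transform on top of Promises, while modern Node runs it natively):

```javascript
// `fetchUser` is a made-up stand-in for any Promise-returning call,
// e.g. a database query or an HTTP request.
function fetchUser(id) {
  return Promise.resolve({ id, name: 'ada' });
}

// Marking the function `async` lets us `await` the Promise inline,
// so the body reads like synchronous code.
async function greet(id) {
  const user = await fetchUser(id);
  return `hello, ${user.name}`;
}

greet(1).then((msg) => console.log(msg)); // prints "hello, ada"
```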
async/await is nice syntax, but it doesn't change the fundamental difference between synchronous and asynchronous code and the viral nature of async functions. Sync functions cannot properly compose async functions in general.
No, because you can't write a library whose APIs are thunks. And your code needs to run in a generator, something you omitted. Fibers are a better alternative: no need for yield and generators everywhere.
Well, I was earnestly sharing run-of-the-mill Koa code since you made async abstractions sound elusive. Didn't realize you just had an agenda.
The thing is that people are actually using Koa/co to solve this problem. I've yet to see someone use fibers as their core webapp control flow abstraction.
What agenda do I have? I have none. I'm just saying that generators do not abstract async calls; the only way to do that is with fibers. I'm not pushing for anything, just saying that something like Rails couldn't be written in Node.js (and no, Sails is nothing like Rails in terms of complexity).
I think there are two separate things in your comment: the recent instability is due to the leadership power struggle that has been resolved, but the "no recommended approach" problem is more the result of a philosophy that encourages tiny pieces, new ideas, and reinvention, which is both a blessing and a curse, and seems unlikely to change.
You can have both. An 'officially recommended approach' does not preclude 1000 unofficial but working approaches. However, it does make it a lot easier for someone dipping their toes in.
With Node for core and basic problems, I find myself thinking "Should I follow this blog post written by a highly respected developer but which is a year old and therefore forever in Node-land, or this blog post written last week by a nobody?". I don't like thinking that way about production code I'm writing for clients, so I use languages where I don't have to think that way.
When you say languages, do you really mean frameworks? It seems like languages are fairly general purpose and don't go out of their way to tell you how to build your application. Node is more in the "language" camp than the "framework" camp, and there are bunch of frameworks on top of it that give you the structure you're looking for.
Anything. I'm a pragmatic developer who just wants solutions that help me pay the bills. It's precisely for that reason that Node has been a problem for me - it's caused much more than its fair share of headaches, when compared to Ruby, Python, PHP, Bash, Go.
Perhaps using a framework would make things easier, but I'm reticent to rely on it in the main part of the project when it causes so many problems just in the small parts.
Can you give an actual example of the 'so many problems' you describe?
Your previous comments address the paradox-of-choice idea, but the uncontroversial path in Node is to 'just use Express', right? How is that a worse situation than in other languages where there are also trendy frameworks competing against the default option? You name PHP as a better situation, but from what I gather PHP is even worse at casting off legacy frameworks. Laravel is the new trendy, whereas it was CodeIgniter/Symfony before that and EE/Wordpress before that. And in Ruby, there's similar competition over an alternative to Rails [0].
Sure. There was the decision a couple of years back to stop using self signed certificates. I spent a couple of hours wondering why a server I was running that compiled SASS on deployment suddenly stopped working, and it turned out to have been this.
A more recent example comes from Zombie, which I use for functional testing. When they moved to v3, they required io.js instead of Node. So eventually I decided to go with Zombie 3 and moved to io.js, but then another library broke which required Node instead of io.js.
They're the two big ones that stand out, but there have been lots of other small annoyances.
True enough, but they didn't tell anyone they'd done it for hours after they pushed out the changes. A notice on the terminal after the update should have been possible, or at least a big message on the homepage. But nothing - only an angry GitHub issues queue with random suggested causes and fixes.
In our shop we actually use all of Ruby, Node, Go, Python and Bash (in about that order of code volume). Honestly, for instability of 3rd party libraries Node is pretty bad, but Ruby is not really far behind. I am sure that I have spent more time debugging other people's code in Ruby than in Javascript.
Having said that, the IO split has been a royal PITA. We got pegged to Node 0.10.x because JSDom only supports IO and Jest (which is what we wanted JSDom for) only supports Node :-P. I've been waiting a year for it to get cleared up.
It sounds like we made the right choice, though, because in this thread the people who are complaining about Node seem to be the ones who tried switching back and forth between IO and Node. There is actually only one bug in Node that has impacted me (related to the way connections are shut down) and it isn't fixed yet in master so not upgrading hasn't really impacted us. We just have to live without some of the ES6 features.
Personally, I'm a big fan of Go (old C++ programmer). We started using it a little over 2 years ago. The language definition has changed in that time, so I wouldn't call it stable ;-) The core libraries are rock solid, though. We barely use any 3rd party libraries (I think we use 2) so it's not really fair to compare that with other languages.
Although we have a fair amount of python in our shop, I don't seem to work on those projects very much (a month here and there). I can't say too much about it other than our Python code breaks due to 3rd party libraries more often than any of our other code. However, I think this is probably due to injudicious choices for our 3rd party libraries ;-)
Overall, I would say that my experiences don't match yours, but I wonder if it's because we never tried moving to IO.
Software development is not about chasing the latest shiny fad or trend on the market, it's about solving real-world problems by researching available solutions. So, if you think that Node doesn't fit or not equipped enough to solve your problems, you're welcome to check other solutions till you find the one most suitable to the problem at hand.
So, if you think that Ruby or Python could get things done for you and you're very familiar with them, go for either and get it done. But to pick on Node just because you don't quite grasp its inner workings is really odd, and the whole reasoning behind your critique is rather absurd.
It's not absurd to say "I'd like to use Node because of its performance and because it's easier to hire for, but only if it were more stable and had better documentation". As somebody who has to consider trade-offs between short-term and long-term costs for clients, these are very real problems for me.
Node's documentation really is terrible. The fact that it doesn't show what arguments functions take, what types are expected, etc. is pretty sad in 2015. The documentation is barely better than your typical side-project readme.md file; at best you get one example of an API's usage (always the most basic use case only, of course).
Preach. I inherited a Node project for which Rails was a much better solution for our use case. It's been fun learning a new platform, but for our needs Rails would have been fine. The only conceivable reason I can see they chose Node is because it's cool.
Doesn't preclude them, but doesn't encourage them, which was the philosophy. An officially blessed approach would deprive alternatives of oxygen (though to be fair, it might also enlarge the pie, by attracting extra developers like yourself).
The flexible experimental approach inevitably conflicts with the constraint of back-compatibility (e.g. of x86, java).
Node seems more like the Drosophila-like JS frameworks.
I can't reply to the grand-child of this post, so here's my reply.
It seems that you don't like Node and are intent not to use it. I respect that and am not trying to push it. I think asynchronous programming in the reactor pattern is a good solution pattern for many types of problems, and Node.js is a solid application of that pattern. But it certainly is more complicated than synchronous programming in the kinds of environments you listed.
I do wonder however what compels you to shit all over a thread celebrating a project milestone with an opinion that comes from an admitted lack of knowledge.
I'm not 'shitting all over Node'. I do use it for certain applications, and would love to use it more because it is a fantastic solution for some problems. As a back-end for websockets it's in a league of its own, and 'Isomorphic JavaScript' is very exciting if you ever need to hire developers.
I am using this discussion as a means of saying "here's what I think Node needs to get more people using it in production". My opinion is that they need to do more work on stability of the API, and improve documentation, to win over us enterprise types who value long-term stability over short-term features.
Just because I think you'll find this useful (and not because I want to denigrate Node, which I think is great tech, especially with the upgrades in this announcement), have you seen Phoenix[0]? Despite immaturity, I think it is already close to being in Node's league as a back-end for websockets, but seems to have a philosophy you may enjoy more. That is, it reminds me a lot more of Rails and Django than anything in Node. (I also think Elixir is awesome to work with, but that's somewhat beside the point.)
This is an announcement about the Node open source community finally overcoming their differences and making strides towards platform stability, yet your first response is to complain about "massive instability" (which turns out to be more about individual libraries than the platform itself) and how you "wouldn't hold Node up as a great example of the open source community". I can't see this as anything more than a slap in the face.
My opinion? You probably should withhold your judgement of the "open source community" if you only have superficial knowledge about their efforts.
>I think asynchronous programming in the reactor pattern is a good solution pattern for many types of problems, and Node.js is a solid application of that pattern.
Asynchronous programming a la Node (with callbacks etc) is an anti-pattern.
We've had better ways to handle that for 4 decades now.
> For the benefit of any who are not sure what you are talking about, could you please elaborate on what some of these "better ways to handle that" are?
While I am not certain what the GP specifically references, I can say that about 40 years ago the Actor model[1] was officially documented. The literature documenting benefits of an Actor model over a callback approach are easily found. For understanding the pain which callback-based systems often produce, a person can familiarize themselves with the Win32 API (disclaimer: this is a masochistic exercise not recommended for anyone).
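For readers unfamiliar with the contrast, here is a deliberately toy mailbox-style "actor" in JavaScript - only a sketch of the core idea (a behavior processing one message at a time from a queue); real actor systems like Erlang's add isolation, supervision, and distribution on top:

```javascript
// Toy actor: a private mailbox plus a behavior that handles one message
// at a time. Callers never invoke the behavior directly -- they only send
// messages -- which is the key contrast with a tangle of raw callbacks.
function spawn(behavior) {
  const mailbox = [];
  let processing = false;
  function drain() {
    processing = true;
    while (mailbox.length > 0) behavior(mailbox.shift());
    processing = false;
  }
  return {
    send(msg) {
      mailbox.push(msg);
      if (!processing) drain(); // re-entrant sends just queue up
    },
  };
}

const log = [];
const counter = spawn((msg) => {
  if (msg.type === 'inc') log.push(log.length + 1);
});
counter.send({ type: 'inc' });
counter.send({ type: 'inc' });
// the actor processed both messages in order: log is now [1, 2]
```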
Even if you don't go the actor route, if you're going to use continuation passing style as much as Node wants you to, having a language with better CPS support would be much nicer.
People associate threads with shared-memory concurrency, which can be really hard to deal with, but shared heaps are not the important part - blocking is.
Blocking threads means that all your functions can call each other, all control flow constructs just work, and debugging is sane. With blocking threads functions don't need to belong to a special class of async functions that in general need to be called from other async functions.
It would have been great if node invested in allowing many concurrent blocking JS threads. To see what that could have looked like, see Dart's Fletch project: https://github.com/dart-lang/fletch
Generators are just iterators, and are synchronous. The consumer asks for a value with next() and the value must be computable (even if that value is a Promise).
If you have an async function (one that returns a Promise or in the future a Stream), and you need to call it from a sync function, and somehow incorporate the future value into the return value of the sync function, you're stuck. You have to async-ify your functions all the way up the call chain.
With blocking this isn't a problem. We just need environments that make blocking cheap.
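A contrived sketch of that "viral" property (the function names here are made up): once a value arrives via a Promise, every caller that needs it must itself return a Promise.

```javascript
// Stand-in for any async source of data, e.g. fs or the network.
function readConfig() {
  return Promise.resolve({ port: 8080 });
}

// A "sync" caller cannot unwrap the resolved value: there is no way to
// block here, so it is forced to return a Promise itself...
function getPort() {
  return readConfig().then((cfg) => cfg.port);
}

// ...and so the async-ness propagates all the way up the call chain.
getPort().then((port) => console.log(port)); // logs 8080
```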
In my experience, in practice there is no big difference, really; it's just a matter of a few keywords.
As for "having to change an entire call stack", that just stopped happening after a while. The rule of thumb was that if the function is impure (e.g. relying on mutable state) then it will likely become async, too. That lets me plan ahead.
> the massive instability has put me off from relying on it for anything core.
> Documentation is quickly out of date, libraries seem to have incompatibilities which only become obvious when they don't work, there's no 'recommended' approach for even basic tasks.
Do you have examples? My counter to your anecdotal evidence is my own - we've successfully built and managed multiple projects in production, servicing millions of customers, which were written in nodejs, and have been doing so since 2013. The issues with Io caused no discernible disruption to our business.
I agree that it's possible to build successful Node apps, but of all the languages I use, it's the most difficult to keep up to date with and most likely to cause problems.
If you're reading the mailing lists, going to meetups, etc, these things might not seem like huge issues. But if you're just trying to run a small Node project as one amongst many bits of software in many languages, Node causes a disproportionate number of problems.
I've given a couple of examples below - Zombie switching to io.js, and the whole self-signed certificate debacle from a couple of years ago.
The Node ecosystem is a great example of the open source community. The large number of libraries released on npm, the active participation of the community, and the frequent Node-related events around the world prove it. Not to mention, of course, the long list of companies using it.
I'm not sure about your problems with Node and the 'massive instability' issues you've had, or whether they stem from a third-party library rather than Node core.
Geez, that's a bit harsh, don't you think? I've used node on several high-volume production apps, and I haven't seen any instability. And if the documentation is poor, then it's only poor insofar as one is unwilling to delve into the code.
Node has its place, and it's not for everything, but for quickly whipping together sturdy, flexible web APIs, it's great.
> I've used node on several high-volume production apps, and I haven't seen any instability.
Not sure if this is what you meant, but I read the ancestor comment not as being about "will the process crash randomly?" but "will an upgrade break something in my app?"
It's just that I've seen whole comment threads go by with people tripping over this misunderstanding without ever acknowledging it.
npm coupled with drastically and frequently changing libraries (and transitive dependency clashes) does make a hell of a dependency mess; I agree.
But, on the plus side, most node libraries are easy to modify. In fact, I have a private repository full of github forks that I've had to make over the years (and not had the time or patience to submit pull requests). Luckily, npm supports private repos easily.
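For what it's worth, npm can install straight from a Git URL, so a package.json entry can point at a private or personal fork (names below are hypothetical):

```json
{
  "dependencies": {
    "some-lib": "git+https://github.com/yourname/some-lib.git#my-fix-branch"
  }
}
```

npm also understands the shorter `yourname/some-lib` GitHub shorthand for public forks.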
Do you have an example of the out of date documentation? The Node.js built-in modules have remained largely unchanged (as far as my use goes) since around 0.6.0 if not earlier. Streams underwent some big changes but most heavily used modules were updated relatively quickly. This went far more smoothly than the Python 3 transition IMHO.
Documentation in Ruby is bad not because of the amount of resources available, but because of the community's love of the language's meta-programming. Modules are dynamically included and methods are dynamically generated, so statically generated documentation can only give a partial view of the running objects.
Out of curiosity, what about the Ruby documentation makes you say that? I personally rank Ruby's documentation (at least for the core language) as being pretty damn good (only a few I can think of are better), and node's core documentation is, in my opinion, a great example of how _not_ to write documentation. A personal favourite is how the first line of documentation for ClientRequest and IncomingMessage is about what creates them rather than in terms of what abstraction they represent[1]. Also, nothing in the documentation documents the types of things. In that regard, the Ruby docs are a bit weak (with the type documentation being a bit more informal than I'd like), but at least they put in the effort to document it. Also, I personally find the code samples in the Ruby docs to be much nicer than those in the Node docs (though YMMV).
That really smells like FUD. Sass, for example, is Ruby, not Node. Ruby has been much more unstable than Node AFAIK. And Python hasn't even been able to move to version 3, so I'm not sure it's the best example. I really like both Python and Ruby, BTW. Haters gotta hate...
As a Node coder I used to avoid SASS since it was Ruby-based; now there's a fully-featured SASS parser in Node that keeps up with the specification. Bootstrap is moving from LESS to SASS, and they've always had their build step in Node.
What mission critical software do you write? Does your code review process require multiple meetings that go over a single line of code in excruciating detail?
Oh, I remember having to use kgcc on red hat to build a kernel because egcs couldn't... Recently worked on bits of ancient C++ that still require gcc 2.7 for that very reason.
I am so glad that gcc got its house in order. As a bonus the compiler got significantly better in pretty much all the ways it could (at least for users, no idea about internals.)
The more examples I see of open source in action -- ones similar to this -- the more I start to realize that what makes the `free market` special isn't the `market` part at all.
These interactions strongly mimic the interactions of the market. What's the true driving force?
Don't forget about bitcoin core and bitcoin-xt! People are screaming bloody murder about that, it will be very, very interesting to see how it plays out.
"Lets hope the spork gets spooned so that no projects get knifed. If not then I guess we have to hope for a knork so we don't get stuck with a couple of chopsticks."
A group of contributors got tired of Node not using semver when the rest of the ecosystem does (among other gripes). They forked it and shipped io.js v1.0.0. In keeping with semver, they bumped the major version number on every breaking change. According to iojs.org, the most recent version is v3.3.0.
Joyent and the io.js team reconciled and merged their codebases, maintaining everyone's versions. Since io.js had used versions 1 - 3, the merged version number is 4.
No api stability promises before 1.0 is technically part of semver. It sounds like io.js should have stayed below 1.0 if they broke backwards compatibility three times in less than a year!
I always found the naming-and-proclaiming of semver in the javascript community silly. Old style C libs have been doing what semver describes for decades, and while the js community talks it up like crazy, the compatibility and stability is terrible (and as a result they find the need for private recursive sub-dependencies, and the resulting thousands of unique libs/copies is impossible to audit or patch).
The Node library is massive, and tightly coupled to V8. The changes that bumped the major version weren't major changes for most people, just for those who happened to be using the parts of the library that changed. Maybe that's a deficiency in semver (or the way io.js used it), but going to v4 in one year isn't as unstable as it sounds; it's just following Chrome's example of not caring how big the version number gets, so long as the semantics make sense.
I believe the io.js community was in the "ship or get off the pot" camp, and found it silly that Node was a critical piece of infrastructure at firms like PayPal while still technically not having had a GM release. By forcing the issue, they got Joyent to commit to major version changes twice a year, which sounds like a reasonable compromise between everything-changing-all-the-time and we-can't-have-new-things-because-legacy.
It's a good argument for adding another level in addition to semver: A human-oriented release number.
After all, when a project goes from 2.0 to 3.0, you expect something big to have happened -- not just some breakage.
Some projects use release names (e.g., Ubuntu and Android), but if every project starts using release names we'll have a lot of (mostly silly) names to deal with on a daily basis. Plus, there's no inherent ordering in names.
> ... but going to v4 in one year isn't as unstable as it sounds; it's just following Chrome's example of not caring how big the version number gets, so long as the semantics make sense.
Except that Chrome is mainly an application for end users and not a library, so the implications of "breaking changes" are a lot less severe anyway.
In the parts where it does provide APIs (e.g. to websites or add-ons), the compatibility policies are a lot stricter than semver would require.
python also has a pretty big standard library. I don't have much specific experience before python 2.5, but I do know that code written for python 2.5 (released in 2006) is just about 100% compatible with python 2.7 which still has support and patch releases. Backwards compatibility was broken once in the last decade, for 3.x.
Or consider gtk+. 2.x was backwards ABI (binary!) compatible for over a decade. Compatibility was broken once in the last decade, for 3.x (for applications. for themes and windowing environments is a different story unfortunately.) So programs written for 2.10 or so still work today with (still continuing) recent releases of 2.x.
Joyent were incompetent stewards of the Node project, so it was forked as "io.js". The io.js project adopted the semantic versioning system, where a new major version number indicates backwards-incompatible changes (which mainly happen when they update the version of V8).
Eventually Joyent recognised they were on the losing side, and so they agreed to merge back together, under a new foundation which was set up with the help of the Linux Foundation.
Node.js v4 is the first release from the newly merged projects and they used v4 as iojs had already made it to v3.
NodeJS was forked into io.js, which continued ahead onto 2.0 and 3.0. Now the two projects are merging back and they are continuing with io.js version numbering rather than Node's, starting with version 4.0 today.
Guys, does V8 still deoptimize on ES6 features? For example, would using, say, const/let in a function prevent V8 from optimizing it as a whole? That was the case some time ago when these features were still behind a flag.
Yeah, this is rad, as I imagine there's some overhead to running the transpiled babel ES6 code compared to running the native ES6 features. At least some things may be faster - I remember lexical block scoping (let and fat arrow) causing perf issues in traceur.
Now I just need to be able to rely on these features being present in the browser so I can just write native ES6 everywhere and skip that build step entirely.
We've been using a concept of an "esnextguardian" as what our package.json main points to; it then tries to load the ES6+ version and, if that fails, falls back to the compiled ES5 version. It's been working quite well for all our different Bevry projects. More info: https://github.com/bevry/base#esnextguardian
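The core of the "guardian" idea can be sketched as a feature test: probe whether the runtime parses ES6 syntax, then load the matching build (paths here are hypothetical):

```javascript
// Probe: constructing a function from ES6 source throws a SyntaxError
// on ES5-only engines, so this doubles as a cheap capability check.
function supportsES6() {
  try {
    new Function('(a = 0) => a'); // default params + arrow function
    return true;
  } catch (err) {
    return false;
  }
}

// Pick the entry point; a real guardian would require() it here.
const entry = supportsES6() ? './es6/index.js' : './es5/index.js';
console.log(entry);
```

The real implementation linked above also handles things like environment-variable overrides; this only shows the fallback decision.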
ES6 support is nice, especially if there is no performance hit for using ES6 features. I slightly prefer TypeScript, but the extra language features in ES6 really make JavaScript development more fun for me. Thanks to the newly re-combined Node team!
Good news for you: ES2016 will support optional type annotations, clearly inspired by TypeScript. The idea is we shouldn't need a damn transpiler just to get the features most people want out of a language...
Interesting, can you give a link? I tried to find it, it's not here https://github.com/tc39/ecma262 Only mention is "Typed objects" but that is not TS-like annotation.
As I understand it, that still involves importing the whole file, rather than just the part you need. If that is an issue, lodash is parted out by method, so you can do this instead:
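Something along these lines (a sketch; the exact per-method path depends on the lodash version):

```javascript
// Pulls in the whole library just to get one function:
// import { map } from 'lodash';

// Pulls in only the one method's file:
import map from 'lodash/map';
```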
Just a nit, but that isn't destructuring. Named import syntax has several differences, since name aliasing uses a different format and you cannot nest or provide default values.
Adding support for `import` will be a huge breaking change unless all existing packages have to opt-into it somehow. I don't think anybody has a good plan for this yet, apart from treating it as syntactic sugar for CommonJS.
The only problem are "default exports", i.e. overriding `module.exports` directly. In Node the exported names are just properties of the `module.exports` object but in ES6 the named exports are entirely separate from the default export. IIRC module.exports was not actually part of CommonJS although nowadays most people just use that word to mean "whatever Node does".
ES6 imports are declarative, so dependencies are resolved and executed before the body of the module is executed. That means you don't conceptually have to think about whether your dependencies are loading synchronously or asynchronously.
If you're building isomorphic code for the browser using webpack, you can quite happily chunk certain modules into separate bundles which will be loaded asynchronously when needed. It's pretty awesome.
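For example, with the webpack 1-era split-point API (`./heavy` is a hypothetical module):

```javascript
// require.ensure marks a split point: webpack emits "./heavy" as a
// separate chunk, fetched over the network only when this code runs.
require.ensure(['./heavy'], function (require) {
  var heavy = require('./heavy');
  heavy.run();
});
```

On the server the same code simply loads the module synchronously, so it works in both environments.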
This is a crazy fast release... it's almost scary that you have to keep up with all the changes...
For anyone who wants to upgrade, whether from Node 0.12 or io.js 3.3.0: all you have to do is re-install it with the GUI installer and you're good to go... on your personal computer, at least!
nvm maintainer here: the only thing that's done over HTTP is the initial redirect on http://nvm.sh, which will soon be HTTPS. This URL is only used if you type it in (because it's easy to remember).
Everything else is done over GitHub's super-secure HTTPS - so there's really nothing to be paranoid about.
nvm maintainer here: you do not need the `v`, it will always be automatically added if you've provided a version number that doesn't already match to an existing alias.
nvm maintainer here: this is true - but please use `node`, since the concept of "stable" and "unstable" has died with the advent of node v4.0 using semver.
All node versions are now stable, and version numbers now communicate breakage, not stability. As it should be.
Does anyone have a good source of information about the major changes between Node and IO? I haven't kept up with the community recently and I'm curious as to what the delta really is and what merging IO back into Node will mean practically.
The io.js site mentioned nothing about the merge when I checked yesterday, and neither does the Node.js site, which is a bit odd.
the new version number is using iojs instead of nodejs's existing version scheme, which is interesting too.
I just began a PHP device-configuration-management project and was strongly persuaded by a senior PHP developer to use Node.js instead, as he thinks Node.js _is_ the future and many big guns are using it in real deployments (Netflix, LinkedIn, PayPal...). Just in time to try the fresh Node.js release for the new project.
> the new version number is using iojs instead of nodejs's existing version scheme, which is interesting too.
Well, it's a follow-on to both pre-merge io.js and pre-merge Node.js, and has breaking changes relative to both, so the most semver-consistent version number is the first major version greater than the greater of pre-merge io.js's last version number and pre-merge Node's last version number -- which is exactly what they chose.
No software is bug-free, but AFAIK Node v4 was very thoroughly tested on a vast array of platforms and configurations, which provides a certain degree of certainty when talking about stability.
At any rate, if you ran into issues with 0.12, the best you can do is give 4.0 a go and report issues!
If you're concerned with stability, the best thing you can do is wait for the LTS release to come off the 4 branch.
A quick glance at outstanding issues will show a number of integration bugs are still outstanding, and the new v8 was landed only a few days previously - incompatibilities with new versions of v8 can take a while to surface in node.
It's important to remember that semver makes no promises about stability. While we're used to "N.0" meaning "stable and ready for upgrade," that's a non-semver idea that is explicitly disregarded in the semver release model. Node moving from 3.* to 4.0.0 only indicates that there are breaking changes(1).
If you have concerns about stability, you should take signals on that from the node LTS group, and pick LTS releases.
1 - whether or not this is good, bad, or merely a different permutation of the things that are turning your hair grey, I leave as an exercise to the reader.
Just curious..where are you tracking these outstanding issues?
In GitHub issues (for nodejs/node), I can see the 5.0.0 milestone has 5 open/1 closed issue; but my understanding is that the LTS release will be cut from 4.x; and 5.x is for rapid iteration post-LTS (i.e. changes that would have previously gone to io.js).
I recall reading that the 4.x LTS release is planned for a couple of weeks after 4.0.0 (which should be around the end of September, or early October).
I'm interested in tracking progress in the lead up to LTS, as this is when the node Homebrew formula is expected to bump from 0.12.x -> 4.x.
Node's API stability is a wonderful and many-layered thing. While more and more of the API is moving towards stability in the colloquial "is this going to change between versions?" sense, it's still a mixture.
Those are classifications, but are they definitions? It's still not clear to me that "stability" has much to do with the software performing in a consistent manner. Either way, it's a careless choice of wording; I've never encountered anybody who saw the word "unstable" attached to a Node.js release and understood that it might refer to the API (see surrounding comments).
PHP stalled in pre-6.0 land and then jumped straight to 7.0 because the release was just not going to happen. The closest thing in Node I can think of is ES4 (which failed, resulting in a jump to 5 and ActionScript diverging further).
Node stalled in pre-1.0 land with what was effectively a feature freeze before io.js split off and jumped to 1.0. Io then went on a regular release schedule strictly following semver leading to 2.0 and then 3.0. These aren't backwards incompatible in the sense that PHP 5 is to 4 or Python 3 is to 2 -- most code will likely still work; they mostly propagated breaking changes caused by updates to the underlying V8 engine, breaking some native extensions. The "jump" from 0.12 to 4.0 for Node is because instead of merging io.js back into Node, Node 0.12 was merged into io.js and io.js became the new Node 4.
Why out of scope? It's a DB api that has no browser/ui dependencies.
I see that the LevelDB implementation you pointed to can use a backend built on IndexedDB. So theoretically this LevelDB API could work across both browser and server.
But argh. IndexedDB is a standardized API. It has a usable open source implementation in WebKit. Why not just go with that?? Why create a different API that does the same thing? I've been building an entire app around IndexedDB, but now I have to port it to a different API to run on a server? Why?
If you'd used LevelDB from the beginning, you wouldn't have this problem, as it will happily work on the server and in the browser: https://www.npmjs.com/package/level-js
Yes, I see that. But if IndexedDB were supported in Node, I also wouldn't have this problem. So the question is: why can't Node just support IndexedDB (which is formally specified in a standards document) instead of inventing an extremely similar but incompatible API?
Because IndexedDB is a W3C spec[0] intended for web browsers and LevelDB is a third-party npm module[1].
Node is just a JS environment. Implementing IndexedDB is as much out of scope as implementing XHR[2] or the File API[3]. In fact, it provides the building blocks for developers to implement any of these on top of Node should they need to (like node-fetch[4] implementing the Fetch API[5] for isomorphic apps).
This just begs the question. The question is, what about IndexedDB, XHR, the File API, etc. make them unsuitable for pure-JS environments?
Because there is nothing about the problem domains (indexed key/value store, asynchronous web requests, filesystem I/O) that is specific to web browsers.
If a problem domain is common across browsers and pure-JS environments, then it should follow that there can be common APIs. If some part of the API is necessarily specific to one or the other, then ideally these differences should be localized to small parts of the API.
That's an intuitive assumption but it's naive (i.e. it lacks understanding of the actual domain concerns).
JS in the browser needs to be sandboxed by default and has to handle concerns like cross-origin policies and interactively seeking user permissions. It also has some fairly browser-specific singletons (e.g. a shared global cookie storage).
The equivalent built-in node APIs are much more low-level, allowing developers to use abstractions that are useful in their problem domain.
Having an IndexedDB implementation in node core would be an incredibly pointless effort (most apps out there won't use it) and bring with it several complications (e.g. pluggable storage backends and concurrency conflicts if you want multiple node processes to share the same database). Plus it would mean the Node Foundation would have to get involved in the standardization process to make its concerns heard and likely introduces concerns that are irrelevant to everyone else (i.e. browser vendors).
Don't forget that Node is not a web framework. It's a JS runtime environment. It is primarily used for things that talk over the web or that generate content for the web, but it's not at all unreasonable to implement other things in it (e.g. mail servers). The web specs carry a lot of overhead that is simply unnecessary for most node applications even if it is perfectly necessary in browsers.
The only spec I can think of that I'd like to see in node is the Fetch API and for that we have node-fetch, which just wraps node's low-level http module.
What I'm trying to say is that node doesn't need these high-level APIs because it can give you the low-level APIs to implement them with. Browsers can't do this, so they need to work at an entirely different layer of abstraction. Plus node allows you to easily include native extensions whereas in the browser you can't have that (except for NaCl).
I agree with you that it'd be nice, but I don't think there are any plans for it.
I wrote https://github.com/dumbmatter/fakeIndexedDB which does run in Node (albeit really slowly and only in memory). It wouldn't be that much work to make it run on a real DB backend (LevelDB like Chrome, SQLite like Firefox, etc), at which point it would be everything you want. PRs are welcome :)
Technically Node 4 is io.js 4 after Node 0.12 was merged into io.js 3.
So it's really 0.11.x -> 1.x -> 2.x -> 3.x -> 4.0.0. Except that 1.x, 2.x and 3.x weren't called "Node" at the time because Joyent owns the trademark and went on to release their own 0.x release(s) until the merge happened.