Node v4.0.0 (nodejs.org)
1248 points by mmebane on Sept 8, 2015 | 267 comments


The io.js fork and subsequent merge back into Node.js, including the birth of the Node Foundation, has been one of the best examples of the power of open source I have ever seen in action.

The situation went from bad, to worse, to the best possible outcome, and that's remarkable to say the least.

Congratulations to everyone involved and thank you for the hard work.


As somebody who uses Node only at the periphery of my role (for things like the Zombie browser, and SASS parsing) the massive instability has put me off from relying on it for anything core.

Documentation is quickly out of date, libraries seem to have incompatibilities which only become obvious when they don't work, there's no 'recommended' approach for even basic tasks.

I won't pretend I'm speaking for everyone, as clearly there are lots of people doing great things with Node. However, it doesn't seem to be a language like Ruby or Python where I would feel comfortable using it for a small but important project.

I'm always grateful to everyone who works on open source projects, but I wouldn't hold Node up as a great example of the open source community.


Instability is the very thing this merge and the foundation seek to address. Where there was once an uncertain future, there is now a Board of Directors and a Technical Steering Committee bound by an Open Governance Model.

The future has never looked more stable for Node, just take a peek at the names of the companies sitting on the board: https://nodejs.org/en/blog/announcements/foundation-elects-b...


It promises a stable future, so let's say I'm being cautiously optimistic.


> It promises a stable future, so let's say I'm being cautiously optimistic.

This doesn't make sense in light of your initial comment. One of the big goals of IO was to move to semver and address unpredictability/instability that you describe.

If anything, the merging of Node and IO should give you confidence that project governance will be on the right track moving forward.

Also, when you consider the advancements in transpiled js, the js/node community is looking very wise compared with other language maintainers.


"On the right track" does not mean "production ready now". Some time needs to pass in order to create the confidence necessary.


Tell that to the big companies who use it in production right now. Microsoft, Netflix, PayPal, Walmart.


I suspect he means one-man, set-and-forget production.


A lot of people (myself included) have been using io.js in production for a while now.


> One of the big goals of IO was to move to semver

And of course they failed right from the start. From semver.org: Version 1.0.0 defines the public API.

So their jumping to 4.0.0 from 0.x.x means they don't care to have a public API!


I'm not sure whether you're trolling, but in case you have serious concerns: node.js 4.0.0 incorporates the history of io.js, and io.js defined 1.0.0 in early 2015.

See https://github.com/nodejs/node/blob/master/CHANGELOG.md and search for 1.0.0


I mean, Ruby was created in 1995, Python in 1991. Node was first released in 2009. That's not an entirely fair comparison since, of course, JavaScript was created long before Node, but Node introduced the language to a new environment, and a lot of complications needed (and in some cases, still need) to be sorted out.

As for there being no recommended approach - that's deliberate. I don't think most people working on Node want a Rails equivalent.


> As for there being no recommended approach - that's deliberate. I don't think most people working on Node want a Rails equivalent

This is a useful way to understand what Node is. Node's core API is for building any sort of network service: A DNS server, an SMTP server, HTTP server, socket listener, etc. and so it applies to a wide and varied set of problems. Rails is for building a very specific type of database-backed web application.


Exactly. I don't think Node is a good fit for monolithic CRUD apps. These are already solved problems in various other languages. Instead Node lets you compose small purpose-built modules.


Now our only problem is how to build decoupled/distributed microservices.

I'd argue only Erlang and Elixir are going to survive into the multi core multi server future, but that would be little more than a slightly educated opinion unless I take the time to write a very large explanation with cited references. I don't have time for that this week.


Personally I prefer having a monolithic app at the core and using Node as the glue to tie everything together (e.g. to provide "real-time" features or to integrate third party services).

I think what's holding Erlang back is mostly its rather obscure syntax. Only a mother could love that.


We are an (almost) completely node.js team and are having quite a bit of success using it for microservices. gRPC makes communication fast and easy, while Docker and Kubernetes make deployment and composition just as simple.


I agree - I think I could get away with arguing that Ruby became popular because of Rails (that may not be completely true, but I and a lot of other devs I know learned about ruby because of rails at least). Node, OTOH, has no central framework that people learn of and then say, "oh, I have to learn node to use XYZ".

There are plenty of libraries that provide whatever level of support you want, but one big reason people seem to use Node is for the ease of the DIY process. Honestly, I don't need an all-in-one system, because my system has needs that aren't easily modeled in ActiveRecord (or at least weren't during Rails 3 days when I moved over to Node). Instead, I am doing mix-and-match with smaller libraries to help database access, realtime (websocket) services, OAUTH, and a trad REST backend.

I also think it's fair to compare the release date of Node against the release dates of Ruby or Python - it was really only then that we could start talking about needing frameworks for building web applications in JavaScript, whereas the other two languages could have done so right away (or ... later, once the web application world became more than just a CGI gateway on top of a C++ stack, like the lunch menu randomizer I had running back in 1995). Still, no matter how you slice it, Node and server-side JavaScript are still relatively new on the scene, and not necessarily well-suited for an all-in-one, highly-opinionated framework.


> ... Ruby was created in 1995, Python in 1991. Node was first released in 2009. That's not an entirely fair comparison since, of course, JavaScript was created long before Node, but Node introduced the language to a new environment...

Actually, no, it didn't. JavaScript was first introduced "on the server side" about 20 years ago in the Netscape web server[1].

  We learn from history that we learn nothing
  from history.

  George Bernard Shaw[2]
1 - http://docs.oracle.com/cd/E19957-01/816-6411-10/getstart.htm

2 - http://www.wisdomquotes.com/quote/george-bernard-shaw-19.htm...


The JS of 2009 was an entirely different language from the JS Netscape originally created. Plus Netscape's web server was an evolutionary dead end and a proprietary closed-source product.

Node was in part influenced by CommonJS, which tried to unify the various competing but relatively obscure (mostly non-browser) JS environments. To say that it learned nothing from the history of SSJS (or JS environments in general) is, frankly, wrong.

That said, Netscape's SSJS is entirely unrelated to Node or anything like it[0]. Node lets you write application servers; Netscape's SSJS was effectively more like Rhino-meets-JSP or maybe GWT (i.e. the heavy lifting was apparently implemented in Java, not JS).

[0]: http://docs.oracle.com/cd/E19957-01/816-6411-10/jsserv.htm


With utmost respect, I respond to each point you mention below.

  The JS of 2009 was an entirely different language
  from the JS Netscape originally created. Plus
  Netscape's web server was an evolutionary dead end
  and a proprietary closed-source product.
The reason I cited Netscape's server was to unequivocally show that NodeJS did not introduce JavaScript to "a new environment" as the GP stated. Why previous efforts failed to gain traction or if it is even a good idea to use JavaScript to define server-side logic is left as an exercise to the reader.

  Node was in part influenced by CommonJS, which
  tried to unify the various competing but
  relatively obscure (mostly non-browser) JS
  environments. To say that it learned nothing
  from the history of SSJS (or JS environments
  in general) is, frankly, wrong.
It would seem we have differing perspectives of history. CommonJS was started in 2009[1], the same year as NodeJS[2]. The lessons of history I implied regarded attempts to use JavaScript as a server-side solution over the past 20-ish years. While you make the distinction:

  Node lets you write application servers,
  Netscape's SSJS effectively was more like
  Rhino-meets-JSP or maybe GWT (i.e. the heavy
  lifting was apparently implemented in Java,
  not JS).
I suggest that this is irrelevant, as the details of how the JavaScript logic is executed are orthogonal to the fact that systems such as these use JavaScript to define behaviour itself. The fact remains that there has been more than one previous project/offering/effort which had JavaScript as a key component. IMHO, it behooves interested parties to do whatever postmortem examinations are possible before committing to the same or a significantly similar path.

1 - https://en.wikipedia.org/wiki/CommonJS

2 - https://en.wikipedia.org/wiki/Node.js


I don't think how Netscape's SSJS worked is irrelevant at all. There is a huge difference[0] between Node as a JS environment for writing applications and Netscape's SSJS as an environment for writing "server pages".

Node is for writing small standalone services. Netscape's SSJS was for creating dynamic web pages -- just like CGI scripts or (at the time) PHP.

SSJS may be considered prior art for "JS on the server" but it's barely recognizable. As far as I can tell the execution was entirely synchronous; but Node is defined by its evented -- and therefore asynchronous -- I/O. It's a data point for comparison but it's not a good lesson to learn from, other than what not to do.

A lot of the shortcomings of Node come from the limitations of JS at the time. Callbacks and streams (arguably two of the biggest issues with Node) are a result of Node being asynchronous. At the time, event listeners and callbacks were the best tool JS offered and the best thing the JS community had.

I can only imagine you're trying to say that Node should have taken more inspiration from other languages that solve problems better (like gradual typing, or the Maybe monad or the actor model). But none of this has anything to do with Netscape's SSJS.

You explicitly point out Netscape's SSJS as a part of history Node should have learned from. What specifically do you think Node should have learned from it? Are you just trying to say "SSJS is a bad idea (because Netscape's SSJS was bad) and shouldn't have been attempted again, even in an entirely different way"?

[0]: http://stackoverflow.com/questions/18350910/netscape-enterpr...


Sure, and after that we had Rhino and Adobe has been using JavaScript for a very long time as a scripting language (ExtendScript) and there are probably many other examples of JavaScript outside of a browser. But I don't see what that has to do with untog's main point.


I saw "untog"'s post as having two points, to which I responded to the former. You identify the latter point as being the main one and that's perfectly fine.

Just different importance discerned from the same post IMHO.


How long have you been waiting to use this pedantic comment and the obnoxious condescending quote about it to prove someone wrong and make yourself feel smart?

This has nothing to do with the parent comment about node deliberately differing from rails, it's typical hn/reddit pedantry.


Living up to his username.


I believe that Node can and will become more stable. It feels like it has too much momentum not to. But until it does achieve stability, it's difficult to use it for anything serious without investing a lot of time into it.

Perhaps Go is a better comparison than Ruby and Python.


In fact, to my surprise, I've just tested our app (v0.10.x in production) on v4.0.0 and all our unit tests pass, the server works...

I'm amazed. A single module had to be bumped (Elasticsearch@8) while we directly rely on 50+ third party modules... Our npm-shrinkwrap is 2600 lines long.

That's not really a sign of instability to me, but rather a sign of great work.


Most of the instability is related to compiled modules. Hopefully they'll be updated to support v4.0.0 shortly.


mongodb and redis are two compiled modules and we didn't have problems with those. Honestly, there may be others, I don't even know!

We try hard to keep our dependencies up to date to limit the risks of an upgrade. We progressively apply dependency updates to dev->staging->sandbox->production environments.

Was I worried when node.js forked? sure! but the situation is much better now. I think node.js can move forward smoothly now.


You seemingly have much more experience with Node than me. Perhaps my amateur anecdotes are just bad luck, but certainly my Bash scripts that talk to Node break frequently.


Bash scripts talking to Node sounds like a recipe for disaster.


Why?


Please explain.


I think the npm project should document best practices, like when to use a "*" version, when to use npm-shrinkwrap, etc... to limit problems.

NPM is a very powerful tool. In fact, it's our deploy tool: we run `npm install` on servers (private Sinopia npm repository) to deploy. But to do that, you must follow many many rules that are written nowhere.


We used to do this and it caused a lot of issues. Shrinkwrap is a must to keep versions stable. NPM and network unreliability can cause failed installs, as well as other issues.

Our first major improvement to this work flow was a tar of the app with all of the dependencies coupled with `npm rebuild` after unpacking. This worked quite well.

However, recently, we have switched to using Docker to generate these sealed packages, which is working great.


We've never really had a problem... Our sinopia npm server is probably isolating some network problems, plus we don't start/update servers that often.

Our NPM workflow also ensures builds are easy to replicate. Most problems in development occur because of obsolete versions of some packages (our apps contain many private modules)... Clean npm-installs shields us from that in production.


> it's difficult to use it for anything serious without investing a lot of time into it

Isn't that true for many platforms?

Either way, I have multiple apps running Node in production, and I've never had the issues described. You don't have to be as breakneck as Node itself - some things I have are still running v0.10.x just fine.


I use Java/Spring, Ruby, and Node.js in production, and Node.js is by far the hardest to stay up to date with without introducing major problems. The only framework I have used that is harder to update is Finagle.

It's not just Node itself to blame. There seems to be a culture of breaking things in the Node.js ecosystem. Breaking changes in third-party libraries are common. I have to specify exact dependency versions in my package.json, and almost every time I update one of them, something goes wrong.
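To make the pinning concrete, here is a hypothetical package.json fragment with exact versions — no `^` or `~` range operators for npm to resolve loosely (module names and versions are illustrative only):

```json
{
  "dependencies": {
    "express": "4.13.3",
    "lodash": "3.10.1"
  }
}
```

The trade-off is that every update becomes a deliberate, manual version bump, which is exactly the burden being described.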


In my experience, Ruby (particularly rubygems) is quite a pain in the ass with regard to package upgrades, especially when the previous developer/ops has leveraged rvm to install multiple combinations of ruby/gem versions. I can't tell you how many times I've had to trace down problems in a project because a single gem fails to upgrade/pull/compile/resolve on 2 out of 4 machines even though the project has an identical gemfile and available ruby version on every server.

I contrast this with node/npm where my projects seem to behave identically across deployments (and in some cases, when in a bind, I've even been able to simply tar & scp the project src and have it run without further effort beyond installing node, something I've never been able to do with ruby).


Identical Gemfile.lock and same OS? Then you shouldn't be having any problems. What kind of errors were you getting?


> I have to specify exact dependency versions in my package.json, and almost every time I update one of them, something goes wrong.

That's where semantic versioning comes in. Most libraries follow it - if the major version number changes, it's a breaking change. If it doesn't, you're safe to upgrade.
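A deliberately simplified sketch of that rule (hypothetical helper; real resolution is done by npm's `semver` package, which also special-cases 0.x versions):

```javascript
// Does `candidate` satisfy a caret-style range on `base`? Under semver,
// anything with the same major version and a greater-or-equal minor/patch
// is considered a compatible, non-breaking upgrade.
function satisfiesCaret(base, candidate) {
  const [bMaj, bMin, bPat] = base.split('.').map(Number);
  const [cMaj, cMin, cPat] = candidate.split('.').map(Number);
  if (cMaj !== bMaj) return false;       // major bump signals a breaking change
  if (cMin !== bMin) return cMin > bMin; // a newer minor is a safe upgrade
  return cPat >= bPat;                   // a newer patch is a safe upgrade
}

console.log(satisfiesCaret('1.2.3', '1.4.0')); // true  -- safe upgrade
console.log(satisfiesCaret('1.2.3', '2.0.0')); // false -- breaking change
```

Of course, this only works if library authors actually honor the convention, which is the complaint in the replies below.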


Semantic versioning doesn't solve compatibility problems, it just makes them more transparent. In some ways semver encourages breaking changes that might otherwise be avoided, because, hey, a major version bump gives you sufficient warning.

There are some major libraries that are on version 15 or greater... there's nothing good from a user's perspective about having a library they use do a major breaking release every other month.

What's worse is that many libraries take advantage of the "it's the wild west until 1.0" rule in semver and simply never release a 1.0.


> Isn't that true for many platforms?

I would say that the ROI is lower for time spent learning Node. I've found that with all the other languages I've learnt, in a couple of days I've been able to build something useful, leave it running in production and reuse skills learned a year later.

With Node, even the name of the language is not stable! A few weeks back I went to install Zombie, and found I had to install io.js and not Node. Presumably it's now Node again. If you live and breathe the language, subscribe to the mailing lists, go to the meetups, etc., these things might be obvious. But when you're just dipping in and out, it really dents your confidence in the language.


Your latter paragraph seems a bit fuddy, as people 'just dipping in and out' would never have had to contend with the io.js 'name' any more than Python/Ruby/PHP people have to switch to a different VM every time one is announced (e.g. PyPy/Rubinius/Hack).

I'd agree with your first paragraph in that Node is a bit lower-level out of the box and probably takes more effort to get started building a web-app, but I think the time invested jumping in at that low-level is actually useful vs diving in at a higher 'framework level' with some of the other popular web languages.

Also it does have superior installer/package-manager/deployment stories compared to say ... Python & Django, which does have an impact on ROI.


> Your latter paragraph seems a bit fuddy as people 'just dipping in and out' would never have to have contended with the iojs 'name' any more than Python/Ruby/PHP people have to switch to a different VM every time one is announced (e.g. PyPy/Rubinius/Hack).

One of my main use cases for Node is Zombie, and that required io.js after v3. No mainstream PHP, Ruby or Python application has required a different interpreter.


The io.js situation was more akin to Python 3 vs Python 2 than Python vs PyPy. It might not have looked like a major difference at first glance, but there were notable differences, which came with some breaking under-the-hood changes.


Sounds to me like your issue is with the author of Zombie.


Actually it's with jsdom, Zombie's major dependency, which dropped Node support pretty soon after io.js was released.


That sounds political, more than anything else.


Actually, IIRC it's the exact same reason as the io.js split. The maintainer of jsdom had changes that weren't being merged into node's VM module.

So jsdom 3.x used a lib called "contextify". When io.js was released and the changes included, node support was dropped. Now node 4.0.0 is out and has those changes, jsdom now works on node again.


It was. The maintainer of jsdom decided to no longer support node after 3.x and made the library throw a condescending exception if you tried to run it with node. The docs were also filled with childish "tm" symbols whenever node was mentioned.

I see things like this all the time and it really puts me off OSS.


> I don't think most people working on Node want a Rails equivalent

It wouldn't really be possible anyway. Async programming is hard, and the shortcuts one can take with synchronous Ruby are impossible with async JavaScript. Node.js has a "fibers" lib, though it doesn't look like it is widely popular; AFAIK it is the only way to truly abstract async programming.


Try bluebird's Promise.coroutine, I use it everywhere and it works great.

https://github.com/petkaantonov/bluebird/blob/master/API.md#...


Or if you want to use native promises, I threw this together

https://github.com/Dashron/roads-coroutine


ECMAScript 7 has brought true async/await into the language, which currently works in Babel on top of Promises. As far as a regular user is concerned, define a function as "async" and then "await" on the result you're after inside of it. Hardly difficult, in my opinion!


async/await is nice syntax, but it doesn't change the fundamental difference between synchronous and asynchronous code and the viral nature of async functions. Sync functions cannot properly compose async functions in general.

A great essay on this is "What Color is Your Function": http://journal.stuffwithstuff.com/2015/02/01/what-color-is-y...
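A hedged illustration of the point, using hypothetical functions: the async version returns a promise, and anything that needs its result is pushed into the async world too.

```javascript
// A synchronous ("blue") function: callers get the value immediately.
function doubleSync(n) {
  return n * 2;
}

// An asynchronous ("red") function: callers get a promise and must await
// (or .then) it, which in turn forces *them* to become async as well --
// the viral property the essay describes.
async function doubleAsync(n) {
  const value = await Promise.resolve(n); // stands in for real async I/O
  return value * 2;
}

console.log(doubleSync(21));                 // 42, right now
doubleAsync(21).then((v) => console.log(v)); // 42, but only on a later turn
```

No amount of syntax makes `doubleAsync` callable from a plain synchronous function without giving up the immediate return value.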


Using abstractions like http://koajs.com/ and https://github.com/tj/co make Node code feel like every other language.

    var user;
    try {
      user = yield db.insertUser('foo')
    } catch(err) {
      if (err.code === '23505') {
        this.flash['error'] = 'Username taken';
        this.redirect('/register');
        return;
      }
      throw err;
    }


No, because you can't write a library whose APIs are thunks. And your code needs to run in a generator, something you omitted. Fibers are a better alternative: no need for yield and generators everywhere.


Well, I was earnestly sharing run-of-the-mill Koa code since you made async abstractions sound elusive. Didn't realize you just had an agenda.

The thing is that people are actually using Koa/co to solve this problem. I've yet to see someone use fibers as their core webapp control flow abstraction.
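For context on how co drives a generator, here is a minimal sketch of the idea — a hypothetical runner, not co's actual implementation:

```javascript
// Drive a generator that yields promises: each resolved value is fed back
// into the generator as the result of the corresponding `yield`, and a
// rejection is re-raised inside the generator via it.throw().
function run(genFn) {
  const it = genFn();
  return new Promise((resolve, reject) => {
    function step(method, arg) {
      let result;
      try {
        result = it[method](arg);      // resume the generator
      } catch (err) {
        return reject(err);            // the generator threw synchronously
      }
      if (result.done) return resolve(result.value);
      Promise.resolve(result.value).then(
        (value) => step('next', value),
        (err) => step('throw', err)
      );
    }
    step('next', undefined);
  });
}

// Usage: straight-line-looking code over promises.
run(function* () {
  const a = yield Promise.resolve(20);
  const b = yield Promise.resolve(22);
  return a + b;
}).then((sum) => console.log(sum)); // 42
```

As the parent comment notes, the caller still has to know it is dealing with a generator; the runner hides the plumbing, not the asynchrony.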


What agenda do I have? None. I'm just saying that using generators does not abstract async calls. The only way to do that is by using fibers. I'm not pushing for anything; I'm just saying that something like Rails couldn't be written in Node.js (and no, Sails is nothing like Rails in terms of complexity).


It's your fiber agenda. My mother used it to try to make me eat my brussels sprouts and I thwarted that, too.


With async functions you interface with everything using promises, it's great.


There is http://sailsjs.org/ for people who want a Rails equivalent.


I think there are two separate things in your comment: the recent instability is due to the leadership power struggle that has been resolved, but the "no recommended approach" problem is more the result of a philosophy that encourages tiny pieces, new ideas, and reinvention, which is both a blessing and a curse, and seems unlikely to change.


You can have both. An 'officially recommended approach' does not preclude 1000 unofficial but working approaches. However, it does make it a lot easier for someone dipping their toes in.

With Node, for core and basic problems, I find myself thinking "Should I follow this blog post written by a highly respected developer but which is a year old, and therefore ancient in Node-land, or this blog post written last week by a nobody?". I don't like thinking that way about production code I'm writing for clients, so I use languages where I don't have to.


When you say languages, do you really mean frameworks? It seems like languages are fairly general purpose and don't go out of their way to tell you how to build your application. Node is more in the "language" camp than the "framework" camp, and there are bunch of frameworks on top of it that give you the structure you're looking for.


Anything. I'm a pragmatic developer who just wants solutions that help me pay the bills. It's precisely for that reason that Node has been a problem for me - it's caused much more than its fair share of headaches, when compared to Ruby, Python, PHP, Bash, Go.

Perhaps using a framework would make things easier, but I'm reticent to rely on it in the main part of the project when it causes so many problems just in the small parts.


Can you give an actual example of the 'so many problems' you describe?

Your previous comments address the paradox-of-choice idea, but the uncontroversial path in Node is to 'just use Express', right? How is that a worse situation than in other languages, where there are also trendy frameworks competing against the default option? You name PHP as a better situation, but from what I gather PHP is even worse at casting off legacy frameworks. Laravel is the new trendy option, whereas it was CodeIgniter/Symfony before that and EE/Wordpress before that. And in Ruby, there's similar competition over an alternative to Rails [0].

[0] https://blog.engineyard.com/2015/life-beyond-rails-brief-loo...


Sure. There was the decision a couple of years back to stop using self-signed certificates. I spent a couple of hours wondering why a server I was running that compiled SASS on deployment suddenly stopped working, and it turned out to have been this.

A more recent example comes from Zombie, which I use for functional testing. When they moved to v3, they required io.js instead of Node. So eventually I decided to go with Zombie 3 and moved to io.js, but then another library broke which required Node instead of io.js.

They're the two big ones that stand out, but there have been lots of other small annoyances.


I don't have a dog in this fight but the certificate thing sounds a lot like they corrected a significant security issue rather than API instability.


True enough, but they didn't tell anyone they'd done it for hours after they pushed out the changes. A notice on the terminal after the update should have been possible, or at least a big message on the homepage. But nothing - only an angry GitHub issues queue with random suggested causes and fixes.


In our shop we actually use all of Ruby, Node, Go, Python and Bash (in about that order of code volume). Honestly, for instability of 3rd party libraries Node is pretty bad, but Ruby is not really far behind. I am sure that I have spent more time debugging other people's code in Ruby than in Javascript.

Having said that, the IO split has been a royal PITA. We got pegged to Node 0.10.x because JSDom only supports IO and Jest (which is what we wanted JSDom for) only supports Node :-P. I've been waiting a year for it to get cleared up.

It sounds like we made the right choice, though, because in this thread the people who are complaining about Node seem to be the ones who tried switching back and forth between IO and Node. There is actually only one bug in Node that has impacted me (related to the way connections are shut down) and it isn't fixed yet in master so not upgrading hasn't really impacted us. We just have to live without some of the ES6 features.

Personally, I'm a big fan of Go (old C++ programmer). We started using it a little over 2 years ago. The language definition has changed in that time, so I wouldn't call it stable ;-) The core libraries are rock solid, though. We barely use any 3rd party libraries (I think we use 2) so it's not really fair to compare that with other languages.

Although we have a fair amount of python in our shop, I don't seem to work on those projects very much (a month here and there). I can't say too much about it other than our Python code breaks due to 3rd party libraries more often than any of our other code. However, I think this is probably due to injudicious choices for our 3rd party libraries ;-)

Overall, I would say that my experiences don't match yours, but I wonder if it's because we never tried moving to IO.


Software development is not about chasing the latest shiny fad or trend on the market, it's about solving real-world problems by researching available solutions. So, if you think that Node doesn't fit or not equipped enough to solve your problems, you're welcome to check other solutions till you find the one most suitable to the problem at hand.

So, if you think that Ruby or Python could get things done for you and you're very familiar with them, go for either and get it done but to pick on Node just because you don't quite grasp its inner workings is really odd of you and the whole reasoning behind your critique is rather absurd.


It's not absurd to say "I'd like to use Node because of its performance and because it's easier to hire for, but only if it were more stable and had better documentation". As somebody who has to consider trade-offs between short-term and long-term costs for clients, these are very real problems for me.


Node's documentation really is terrible. The fact that it doesn't show what arguments functions take, what types are expected, etc. is pretty sad in 2015. The documentation is barely better than your typical side-project readme.md file, at best you get 1 example of an api's usage (always the most basic use-case only, of course).


Preach. I inherited a Node project for which Rails was a much better solution for our use case. It's been fun learning a new platform, but for our needs Rails would have been fine. The only conceivable reason I can see they chose Node is because it's cool.


Doesn't preclude them, but doesn't encourage them, which was the philosophy. An officially blessed approach would deprive alternatives of oxygen (though to be fair, it might also enlarge the pie, by attracting extra developers like yourself).

The flexible experimental approach inevitably conflicts with the constraint of back-compatibility (e.g. of x86, java).

Node seems more like the Drosophila-like JS frameworks.


I can't reply to the grand-child of this post, so here's my reply.

It seems that you don't like Node and are intent not to use it. I respect that and am not trying to push it. I think asynchronous programming in the reactor pattern is a good solution pattern for many types of problems, and Node.js is a solid application of that pattern. But it certainly is more complicated than synchronous programming in the kinds of environments you listed.

I do wonder however what compels you to shit all over a thread celebrating a project milestone with an opinion that comes from an admitted lack of knowledge.


I'm not 'shitting all over Node'. I do use it for certain applications, and would love to use it more because it is a fantastic solution for some problems. As a back-end for websockets it's in a league of its own, and 'Isomorphic JavaScript' is very exciting if you ever need to hire developers.

I am using this discussion as a means of saying "here's what I think Node needs to get more people using it in production". My opinion is that they need to do more work on stability of the API, and improve documentation, to win over us enterprise types who value long-term stability over short-term features.


Just because I think you'll find this useful (and not because I want to denigrate Node, which I think is great tech, especially with the upgrades in this announcement), have you seen Phoenix[0]? Despite immaturity, I think it is already close to being in Node's league as a back-end for websockets, but seems to have a philosophy you may enjoy more. That is, it reminds me a lot more of Rails and Django than anything in Node. (I also think Elixir is awesome to work with, but that's somewhat beside the point.)

[0]: http://www.phoenixframework.org/


This is an announcement about the Node open source community finally overcoming their differences and making strides towards platform stability, yet your first response is to complain about "massive instability" (which turns out to be more about individual libraries than the platform itself) and how you "wouldn't hold Node up as a great example of the open source community". I can't see this as anything more than a slap in the face.

My opinion? You probably should withhold your judgement of the "open source community" if you only have superficial knowledge about their efforts.


>I think asynchronous programming in the reactor pattern is a good solution pattern for many types of problems, and Node.js is a solid application of that pattern.

Asynchronous programming a la Node (with callbacks etc) is an anti-pattern.

We've had better ways to handle that for 4 decades now.


For the benefit of any who are not sure what you are talking about, could you please elaborate on what some of these "better ways to handle that" are?

It would make your comment much more helpful, IMHO.


> For the benefit of any who are not sure what you are talking about, could you please elaborate on what some of these "better ways to handle that" are?

While I am not certain what the GP specifically references, I can say that about 40 years ago the Actor model[1] was officially documented. The literature documenting the benefits of an Actor model over a callback approach is easily found. For understanding the pain which callback-based systems often produce, a person can familiarize themselves with the Win32 API (disclaimer: this is a masochistic exercise not recommended for anyone).

1 - https://www.cypherpunks.to/erights/history/actors/AIM-410.pd...


Even if you don't go the actor route, if you're going to use continuation passing style as much as Node wants you to, having a language with better CPS support would be much nicer.


Blocking threads.

People associate threads with shared-memory concurrency, which can be really hard to deal with, but shared heaps are not the important part - blocking is.

Blocking threads means that all your functions can call each other, all control flow constructs just work, and debugging is sane. With blocking threads functions don't need to belong to a special class of async functions that in general need to be called from other async functions.

It would have been great if node invested in allowing many concurrent blocking JS threads. To see what that could have looked like, see Dart's Fletch project: https://github.com/dart-lang/fletch


Now that we have ES6 generators, this is much less of an issue.


Generators don't help with the sync/async split.

Generators are just iterators, and are synchronous. The consumer asks for a value with next() and the value must be computable (even if that value is a Promise).

If you have an async function (one that returns a Promise or in the future a Stream), and you need to call it from a sync function, and somehow incorporate the future value into the return value of the sync function, you're stuck. You have to async-ify your functions all the way up the call chain.

With blocking this isn't a problem. We just need environments that make blocking cheap.
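A minimal sketch (not from the thread) of the split being described: once a value comes from a Promise, a plain synchronous function has no way to unwrap it, so the Promise propagates up the call chain.

```javascript
// A synchronous function cannot incorporate an async result into its
// return value; the best it can do is return the Promise itself.
function readConfigSync() {
  var pending = Promise.resolve({ port: 8080 }); // stand-in for an async read
  // There is no way to block here, unwrap `pending`, and return { port: 8080 }.
  return pending; // so every caller is forced to become async as well
}

readConfigSync().then(function (config) {
  console.log(config.port); // the async-ness has propagated to the caller
});
```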


In my experience there is no big difference in practice, really; it's just a matter of a few keywords.

As for "having to change an entire call stack", that just stopped happening after a while. The rule of thumb was that if the function is impure (e.g. relying on mutable state) then it will likely become async, too. That lets me plan ahead.


Anything other: blocking in an Erlang like system, actors, CSP, real continuations, async/await, etc.


What are the differences between what you reference as "actors" and "CSP"?


As a back end Python / Perl dev, everything in JavaScript seems this way to me.


> the massive instability has put me off from relying on it for anything core.

> Documentation is quickly out of date, libraries seem to have incompatibilities which only become obvious when they don't work, there's no 'recommended' approach for even basic tasks.

--

Do you have examples? My counter to your anecdotal evidence is my own--we've successfully built and managed multiple projects in production, that service millions of customers, which were written in nodejs, and have been doing so since 2013. The issues with Io led to no discernible conflicts to our businesses.


I agree that it's possible to build successful Node apps, but of all the languages I use, it's the most difficult to keep up to date with and most likely to cause problems.

If you're reading the mailing lists, going to meetups, etc, these things might not seem like huge issues. But if you're just trying to run a small Node project as one amongst many bits of software in many languages, Node causes a disproportionate number of problems.

I've given a couple of examples below - Zombie switching to io.js, and the whole self-signed certificate debacle from a couple of years ago.


The Node ecosystem is a great example of the open source community. The large number of libraries released on npm, the active participation of the community, and the frequent events around the world related to Node prove it, not to mention the long list of companies using it.

Not sure about your problems with Node and the 'massive instability' issues you've had, or whether they are related to a third-party library rather than Node core.


I meant stability not in the sense of 'frequently crashing', in the sense of the API and 'how things work' remaining the same over time.


It's 0.x.x for a reason, don't you think?

The leading 0 denotes probable instability in the API that devs are willing to take the risk for and act upon.

You're familiar with SemVer, aren't you?


Node.js has never followed SemVer. That's why when io.js forked they adopted 1.0.0 and now Node.js went from 0.12.x to 4.0.0...


Geez, that's a bit harsh, don't you think? I've used node on several high-volume production apps, and I haven't seen any instability. And, if the documentation is poor, then it's only poor insofar as one is unwilling to delve into the code.

Node has its place, and it's not for everything, but for quickly whipping together sturdy, flexible web APIs, it's great.


> I've used node on several high-volume production apps, and I haven't seen any instability.

Not sure if this is what you meant, but I read the ancestor comment not as being about "will the process crash randomly?" but "will an upgrade break something in my app?"

It's just that I've seen whole comment threads go by with people tripping over this misunderstanding without ever acknowledging it.


Makes sense, on a rereading.

npm coupled with drastically and frequently changing libraries (and transitive dependency clashes) does make a hell of a dependency mess; I agree.

But, on the plus side, most node libraries are easy to modify. In fact, I have a private repository full of github forks that I've had to make over the years (and not had the time or patience to submit pull requests). Luckily, npm supports private repos easily.


Do you have an example of the out of date documentation? The Node.js built-in modules have remained largely unchanged (as far as my use goes) since around 0.6.0 if not earlier. Streams underwent some big changes but most heavily used modules were updated relatively quickly. This went far more smoothly than the Python 3 transition IMHO.


I didn't know what bad documentation was until I used Ruby. I'll take Node's docs over Ruby's any day.


> I didn't know what bad documentation was until I used Ruby.

Really? I mean, you probably could have made any other point and have it hold even the slightest amount of water.

OK, let's help you out on finding some good documentation on Ruby:

http://ruby-doc.com/docs/ProgrammingRuby/

http://ruby-doc.org/

https://en.wikibooks.org/wiki/Ruby_Programming

http://mislav.uniqpath.com/poignant-guide/book/chapter-1.htm...

http://rubylearning.com/satishtalim/tutorial.html


Documentation in Ruby is bad not because of the amount of resources available, but because of the community's love of the language's meta-programming.

Modules are dynamically included. Methods are dynamically generated. Statically generated documentation can only give a partial view of the running objects.


And btw that's one of the winning points of PHP. Let me find some great documentation for you: http://php.net . That's it, one link is enough.


Out of curiosity, what about the Ruby documentation makes you say that? I personally rank Ruby's documentation (at least for the core language) as being pretty damn good (only a few I can think of are better), and node's core documentation is, in my opinion, a great example of how _not_ to write documentation. A personal favourite is how the first line of documentation for ClientRequest and IncomingMessage is about what creates them rather than in terms of what abstraction they represent[1]. Also, nothing in the documentation documents the types of things. In that regard, the Ruby docs are a bit weak (with the type documentation being a bit more informal than I'd like), but at least they put in the effort to document it. Also, I personally find the code samples in the Ruby docs to be much nicer than those in the Node docs (though YMMV).

[1] - https://nodejs.org/api/http.html#http_http_incomingmessage


That really smells like FUD. Sass, for example, is Ruby, not Node. Ruby has been much more unstable than Node, AFAIK. And Python hasn't even been able to move to version 3, so I'm not sure it's the best example. I really like both Python and Ruby, BTW. Haters gotta hate...


As a Node coder I used to avoid SASS since it was Ruby-based; now there's a fully-featured SASS parser in Node that keeps up with the specification. Bootstrap is moving from LESS to SASS, and they've always had their build step in Node.


I am a bit curious about what "massive instability" you have in mind?


- Ruby: 20 years old

- Python: 24 years old

- Node: 6 years old

Maybe your comparison is somewhat unfair...


- language

- language

- io loop + vm


- language + environment + ecosystem

- language + environment + ecosystem

- language + environment + ecosystem


In that case "Node" is 20 years old, and "Ruby" (for web development) is 11 years old.


What?

It's fair to say Ruby didn't take off until 1.8 which came out in 2003, but how in any way is Node 20 years old?


The logic being used is that JavaScript is 20 years old.


This is why we won't use it for client work.

I'd use it for a personal project, or hackathon, but I'd be really cautious of developing mission-critical software in it.


What mission critical software do you write? Does your code review process require multiple meetings that go over a single line of code in excruciating detail?


Why instability? How many times has your Node app shut down in production? Your concept of instability is not related to software development.


The same episode happened with GCC and EGCS in the 90s. It's the power of free software.


Oh, I remember having to use kgcc on red hat to build a kernel because egcs couldn't... Recently worked on bits of ancient C++ that still require gcc 2.7 for that very reason.

I am so glad that gcc got its house in order. As a bonus the compiler got significantly better in pretty much all the ways it could (at least for users, no idea about internals.)


Wasn't there also one with Rails and Merb?


The more examples I see of open source in action -- that are similar to this one -- the more I start to realize that what makes the `free market` special isn't the `market` part at all.

These interactions strongly mimic the interactions of the market. What's the true driving force?


Unlike Bitcoin Core / Bitcoin XT / BIP 120392?


Alright, now everybody stare at ffmpeg and libav!


Don't forget about bitcoin core and bitcoin-xt! People are screaming bloody murder about that, it will be very, very interesting to see how it plays out.


There are many ways a large fork can go in an open source project. A great description of the io.js/node split is: https://news.ycombinator.com/item?id=8884874

"Lets hope the spork gets spooned so that no projects get knifed. If not then I guess we have to hope for a knork so we don't get stuck with a couple of chopsticks."


Here is a timeline diagram showing how the LTS and Stable branches will work:

https://nodesource.com/assets/blog/essential-steps-lts/nodej...

From this NodeSource post:

https://nodesource.com/blog/essential-steps-long-term-suppor...


What happened with the version numbers? I could have sworn Node was on v0.10 just a couple of months back?


A group of contributors got tired of Node not using semver when the rest of the ecosystem does (among other gripes). They forked it and shipped io.js v1.0.0. In keeping with semver, they bumped the major version number on every breaking change. According to iojs.org, the most recent version is v3.3.0.

Joyent and the io.js team reconciled and merged their codebases, maintaining everyone's versions. Since io.js had used versions 1 - 3, the merged version number is 4.


No api stability promises before 1.0 is technically part of semver. It sounds like io.js should have stayed below 1.0 if they broke backwards compatibility three times in less than a year!

I always found the naming-and-proclaiming of semver in the javascript community silly. Old style C libs have been doing what semver describes for decades, and while the js community talks it up like crazy, the compatibility and stability is terrible (and as a result they find the need for private recursive sub-dependencies, and the resulting thousands of unique libs/copies is impossible to audit or patch).


The Node library is massive, and tightly coupled to V8. The changes that bumped the major version weren't major changes for most people, just for those who happened to be using the parts of the library that changed. Maybe that's a deficiency in semver (or the way io.js used it), but going to v4 in one year isn't as unstable as it sounds; it's just following Chrome's example of not caring how big the version number gets, so long as the semantics make sense.

I believe the io.js community was in the "ship or get off the pot" camp, and found it silly that Node was a critical piece of infrastructure at firms like PayPal while still technically not having had a GM release. By forcing the issue, they got Joyent to commit to major version changes twice a year, which sounds like a reasonable compromise between everything-changing-all-the-time and we-can't-have-new-things-because-legacy.


It's a good argument for adding another level in addition to semver: A human-oriented release number.

After all, when a project goes from 2.0 to 3.0, you expect something big to have happened -- not just some breakage.

Some projects use release names (e.g., Ubuntu and Android), but if every project starts using release names we'll have a lot of (mostly silly) names to deal with on a daily basis. Plus, there's no inherent ordering in names.


> ... but going to v4 in one year isn't as unstable as it sounds; it's just following Chrome's example of not caring how big the version number gets, so long as the semantics make sense.

Except that Chrome is mainly an application for end users and not a library, so the implications of "breaking changes" are a lot less severe anyway. In the parts where it does provide APIs (e.g. to websites or add-ons), the compatibility policies are a lot stricter than semver would require.


python also has a pretty big standard library. I don't have much specific experience before python 2.5, but I do know that code written for python 2.5 (released in 2006) is just about 100% compatible with python 2.7 which still has support and patch releases. Backwards compatibility was broken once in the last decade, for 3.x.

Or consider gtk+. 2.x was backwards ABI (binary!) compatible for over a decade. Compatibility was broken once in the last decade, for 3.x (for applications. for themes and windowing environments is a different story unfortunately.) So programs written for 2.10 or so still work today with (still continuing) recent releases of 2.x.


Joyent were incompetent stewards of the node project, so it was forked into "iojs". The iojs project adopted the semantic versioning system, where a new major version number indicates backwards-compatible changes (which mainly happen when they update the version of V8).

Eventually Joyent recognised they were on the losing side, and so they agreed to merge back together, under a new foundation which was set up with the help of the Linux Foundation.

Node.js v4 is the first release from the newly merged projects and they used v4 as iojs had already made it to v3.


*incompatible


that's not what he meant


> where a new major version number indicates backwards-compatible changes

lucio was probably referring to this part, where "incompatible" is indeed correct.


lol


Whoops! That's a bit of a glaring error, sorry about that.


NodeJS was forked into io.js, which continued ahead onto 2.0 and 3.0. Now the two projects are merging back and they are continuing with io.js version numbering rather than Node's, starting with version 4.0 today.


So many versions of this story as all the other comments under this one show.


"In parallel, we will be branching a new Stable line of releases every 6 months, one in October and one in April each year."

Nice, they're syncing up with the ubuntu release cycle. Should make updating servers a bit easier.


Presumably, this means Ubuntu will always be one release behind...


Guys, does V8 still deoptimize on ES6 features? For example, would using say const/lets in a function prevent V8 from optimizing it as a whole? That was the case some time ago when these features were still under a flag.


I was wondering this myself and I couldn't find a clear answer. If you find out let me know! :)


const/let in particular does not prevent optimization. You just need to use "use strict" in order to use them.
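For reference, a minimal example of the requirement mentioned here: under the V8 shipped with Node 4 (4.5), `let`/`const` are only accepted in strict mode.

```javascript
'use strict';
// Without the directive above, Node 4 would reject `let` and `const`
// with a SyntaxError; with it, they parse and run fine.
const greeting = 'hello';
let count = 0;
for (let i = 0; i < 3; i++) {
  count += 1; // `i` is block-scoped to the loop
}
console.log(greeting, count); // prints "hello 3"
```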


Can I ask for a link on that? Thanks!


The one readily available on search: https://github.com/petkaantonov/bluebird/wiki/Optimization-k...

Otherwise the fact that engines bailout on optimizing ES6 features was kind of common knowledge when engines started to implement them.


This is great news. I'm especially excited about the new ECMA6 features (arrows, etc.)


Right before the release they upgraded to the new V8 version 4.5, bringing native support for arrow functions. So exciting to start using them.


The new class syntax and arrow functions are huge, I'm in love.


Can't wait to start using ECMA6 natively


Yeah, this is rad, as I imagine there's some overhead to running the transpiled babel ES6 code compared to running the native ES6 features. At least some things may be faster - I remember lexical block scoping (let and fat arrow) causing perf issues in traceur.

Now I just need to be able to rely on these features being present in the browser so I can just write native ES6 everywhere and skip that build step entirely.


Not just performance overhead, but also the overhead of maintaining and configuring the extra bits of toolchain.


We've been using a concept of an "esnextguardian" as what our package.json main points to; it then tries to load the ES6+ version and, if that fails, falls back to the compiled ES5 version. It's been working quite well for all our different Bevry projects. More info: https://github.com/bevry/base#esnextguardian


For a second I thought John Hughes' arrows were being introduced. Seems it's just the "arrow" in the syntax.


ES6 support is nice, especially if there is no performance hits for using ES6 features. I slightly prefer Typescript, but the extra language features in ES6 really make JavaScript development more fun for me. Thanks to the newly re-combined Node team!


Good news for you: ES2016 will support optional type annotations, clearly inspired by TypeScript. The idea is we shouldn't need a damn transpiler just to get the features most people want out of a language...


Interesting, can you give a link? I tried to find it, it's not here https://github.com/tc39/ecma262 Only mention is "Typed objects" but that is not TS-like annotation.


SoundScript is on track then? I thought it was just proposed (i.e. highly experimental).


When node supports `import` how you'll import node modules?

I hope it works like this

    import {readFile} from 'fs';
    import {clone} from 'lodash';
rather than explicitly providing "correct" URL to the modules.

EDIT: edited the syntax.


You'll be able to achieve that behaviour when node also supports destructuring.

    import { readFile, writeFile } from 'fs';
    import { clone } from 'lodash';


Not really. The destructuring syntax used in the imports is not the same as the standard array and object destructuring.

Section 15.2.1.16 of the current stable ES6 specification specifies how the import syntax is resolved, and it doesn't use destructuring.

P.S. Sorry too lazy to get a clickable link from the spec :P see http://awal.js.org/especser/#15.2.1.16 if you really wanna read it


As I understand it, that still involves importing the whole file, rather than just the part you need. If that is an issue, lodash is parted out by method, so you can do this instead:

  import clone from 'lodash/lang/clone'


Just a nit, but that isn't destructuring. Named import syntax has several differences, since name aliasing uses a different format and you cannot nest or provide default values.


oops I forgot the braces! ok then, it's cool. I'm happy it does resolving to node_modules and all that automatically.


It's just syntactic sugar for `require`, mostly. The import mechanism should behave the same.


Adding support for `import` will be a huge breaking change unless all existing packages have to opt-into it somehow. I don't think anybody has a good plan for this yet, apart from treating it as syntactic sugar for CommonJS.


The only problem are "default exports", i.e. overriding `module.exports` directly. In Node the exported names are just properties of the `module.exports` object but in ES6 the named exports are entirely separate from the default export. IIRC module.exports was not actually part of CommonJS although nowadays most people just use that word to mean "whatever Node does".
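A small sketch of the mismatch (the interop shape shown here is the common transpiler convention, e.g. Babel's, not something Node itself defines): a transpiled ES6 module carries its default export as a `default` property alongside the named ones.

```javascript
// What a transpiled ES6 module roughly looks like to a CommonJS consumer:
// the default export is just another property, next to the named exports.
var esStyleModule = {
  default: function add(a, b) { return a + b; }, // `export default`
  square: function (x) { return x * x; }         // `export function square`
};

// A plain require() caller must reach for .default explicitly:
var add = esStyleModule.default;
console.log(add(2, 3));               // 5
console.log(esStyleModule.square(4)); // 16
```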


You can already use that syntax with Babel.


I wonder how that would work asynchronously. Or when the file is used from both the browser and node.


ES6 imports are declarative, so dependencies are resolved and executed before the body of the module is executed. That means you don't conceptually have to think about whether your dependencies are loading synchronously or asynchronously.


If you're building isomorphic code for the browser using webpack, you can quite happily chunk certain modules into separate bundles which will be loaded asynchronously when needed. It's pretty awesome.


This is crazy fast shipment..almost scary that you have to start keeping up with all the changes..

For anyone who wants to upgrade either from node 0.12 or io.js 3.3.0..all you have to do is re-install it with the GUI installer and you're good to go..for your personal computer at least!


Servers don't use a GUI installer :P


Server core has been an option for maybe 7 years now though.


Use your package manager to update then...


    $ cat /etc/issue
    Ubuntu 14.04.3 LTS \n \l

    $ apt-cache showpkg nodejs | head
    Package: nodejs
    Versions: 
    0.10.25~dfsg2-2ubuntu1 (/var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_trusty_universe_binary-amd64_Packages) (/var/lib/dpkg/status)
My package manager knows nothing of this mythical "bleeding edge" of which you speak...


Probably your best bet at this point is to use the NodeSource repos to keep up to date with the latest (stable) versions: https://github.com/nodesource/distributions


`nvm install v4.0.0` and you're ready to go.


To preserve your npm packages, try:

  nvm install node --reinstall-packages-from=node


Oh that's super helpful I didn't rtfm enough, thanks for the tip.


If you don't mind fetching package locations over HTTP

https://github.com/creationix/nvm/issues/614


nvm maintainer here: the only thing that's done over HTTP is the initial redirect on http://nvm.sh, which will soon be HTTPS. This URL is only used if you type it in (because it's easy to remember).

Everything else is done over Github's super-secure HTTPS - so there's really nothing to be paranoid about.


Ah, thanks for the update


nvm install 4.0 works, do we really need the "v"?


nvm maintainer here: you do not need the `v`, it will always be automatically added if you've provided a version number that doesn't already match to an existing alias.


Since it's the first and only release so far in 4.x.y, I got away with `nvm install 4`. ;)


Using `stable` is also another way.

  > nvm install stable
  v4.0.0 is already installed.
  Now using node v4.0.0 (npm v2.14.2)


nvm maintainer here: this is true - but please use `node`, since the concept of "stable" and "unstable" has died with the advent of node v4.0 using semver.

All node versions are now stable, and version numbers now communicate breakage, not stability. As it should be.


...all you have to do is re-install it with the GUI installer and you're good to go

And fix all the incompatible changes in your own code.


Unless of course..you are from iojs


It's backwards compatible and works fine from what I see


I thought changing the first number in SEMVER means breaking changes?


Mostly what is breaking in Node.js is the underlying v8 APIs. You're generally only broken if you've got a C++ module using these v8 APIs.


Lots of native modules are going to have issues until they are updated.


This is not true. I had problems with webpack and flux packages on old projects. io.js and Node.js 4.x are not always compatible with old 0.12.x Node.js!


Node 4 will become a LTS release in October.


No, the first LTS release is planned for October.


Excited to see this happen so soon. I was expecting it to take a bit longer.

I see it's still shipping npm v2. Any news on when npm v3 will be "production-ready" and the default version?


npm@3 "will be leaving beta very soon!" according to the release notes for 3.3.2.


Does anyone have a good source of information about the major changes between Node and IO? I haven't kept up with the community recently and I'm curious as to what the delta really is and what merging IO back into Node will mean practically.


The largest is a pretty big update to V8, it was one of the biggest features of iojs.


When I first saw the post about the two combining, never did I imagine it happening this fast. Congrats!


I might stay on 0.12.7 for a few months to let everyone work out the issues first.


That's probably the smart approach, in general, other people make awesome guinea pigs.


Exciting to see node moving so quickly


This marks a big day for Node - one which I and many others have been patiently waiting for, and one which still hasn't quite hit me yet :)

Big thanks to all those who made this possible!!


so now which apt repository to I use for node going forward?


Don't. I don't think Chris Lea is updating his anymore.

Wget the tarball and tar -C /usr/local --strip-components 1 -xvf <tarball>


Not so, Chris Lea's PPAs are now part of Nodesource:

https://github.com/nodesource/distributions/#installation-in...


Oh wow! Didn't know about this. Awesome.


I wonder how fast AWS will adopt it on Lambda?


Hoping all the main distros include this release as soon as possible in their main repos.


I hope not. Many libs are still not compatible with that.


the iojs site mentioned nothing about the merge when I checked yesterday, and neither does the nodejs site, which is a bit odd.

the new version number follows iojs's scheme instead of nodejs's existing one, which is interesting too.

I just began a php device-configuration-management project and was strongly persuaded by a senior php developer that I should use nodejs instead, as he thinks nodejs _is_ the future and many big guns are using it for real deployment (Netflix, LinkedIn, PayPal...), so this is just in time to try the fresh nodejs release for the new project.


> the new version number is using iojs instead of nodejs's existing version scheme, which is interesting too.

Well, it's a follow-on to both pre-merge iojs and pre-merge nodejs, and has breaking changes relative to both, so the most semver-consistent version number is the first major version greater than the greater of pre-merge iojs's last version number and pre-merge nodejs's last version number -- which is exactly what they chose.


Finally a stable release. Thank You Node Foundation.


Just curious - what makes it "stable"? We've tried 0.12, then went back to 0.10 for stability reasons.


No software is bug-free, but AFAIK Node v4 was very thoroughly tested on a vast array of platforms and configurations, which provides a certain degree of confidence when talking about stability.

At any rate, if you ran into issues with 0.12, the best you can do is give 4.0 a go and report issues!


If you're concerned with stability, the best thing you can do is wait for the LTS release to come off the 4 branch.

A quick glance at outstanding issues will show a number of integration bugs are still outstanding, and the new v8 was landed only a few days previously - incompatibilities with new versions of v8 can take a while to surface in node.

It's important to remember that semver makes no promises about stability. While we're used to "N.0" meaning "stable and ready for upgrade," that's a non-semver idea that is explicitly disregarded in the semver release model. Node moving from 3.* to 4.0.0 only indicates that there are breaking changes(1).

If you have concerns about stability, you should take signals on that from the node LTS group, and pick LTS releases.

1 - whether or not this is good, bad, or merely a different permutation of the things that are turning your hair grey, I leave as an exercise to the reader.


Just curious..where are you tracking these outstanding issues?

In GitHub issues (for nodejs/node), I can see the 5.0.0 milestone has 5 open/1 closed issue; but my understanding is that the LTS release will be cut from 4.x; and 5.x is for rapid iteration post-LTS (i.e. changes that would have previously gone to io.js).

I recall reading that the 4.x LTS release is planned for a couple of weeks after 4.0.0 (which should be around the end of September, or early October).

I'm interested in tracking progress in the lead up to LTS, as this is when the node Homebrew formula is expected to bump from 0.12.x -> 4.x.


I wonder whether it might make sense to have a fourth number in semver: A.B.C.D.

- A = marketing number

- B = breaking-change number

- C = non-breaking-change number

- D = bugfix number


What wasn't stable about 0.12? We've been running it in production since around April or May without any issues.


I think it means that the API is stable, it's not related to how well the software works.


Node's API stability is a wonderful and many-layered thing. While more and more of the API is moving towards stability in the colloquial "is this going to change between versions?" sense, it's still a mixture.

See this section of the node docs for a description of the different classifications of 'stability' within the API: https://nodejs.org/api/documentation.html#documentation_stab...


Those are classifications, but are they definitions? It's still not clear to me that "stability" has much to do with the software performing in a consistent manner. Either way it's a careless choice of wording; I've never encountered anybody who saw the word "unstable" attached to a node.js release and understood that it might refer to the API (see surrounding comments).


The ARM support is great too. Great work from everyone involved; io.js was short-lived but will have a lasting impact for the better.


All I can say is that after reading this thread, I'm glad that I'm not a Node developer.


You are missing out. Node is dope!


I remember when many people were complaining about PHP - lack of progress, inconsistency etc.

Now look what's happened to Node just in 6 years. And this is just the beginning.


I have no idea what you're trying to say.

PHP stalled in pre-6.0 land and then jumped straight to 7.0 because the 6.0 release was just not going to happen. The closest thing in the JS world I can think of is ES4 (which failed, resulting in a jump to ES5 and ActionScript diverging further).

Node stalled in pre-1.0 land with what was effectively a feature freeze before io.js split off and jumped to 1.0. io.js then went on a regular release schedule strictly following semver, leading to 2.0 and then 3.0. These aren't backwards incompatible in the sense that PHP 5 is to 4 or Python 3 is to 2 -- most code will likely still work; they mostly propagated breaking changes caused by updates to the underlying V8 engine, breaking some native extensions. The "jump" from 0.12 to 4.0 for Node is because instead of merging io.js back into Node, Node 0.12 was merged into io.js and io.js became the new Node 4.


Anyone know when Node will get IndexedDB?


That's likely out of scope for node.js. I'd recommend checking out leveldb: https://www.npmjs.com/package/level


Why out of scope? It's a DB API that has no browser/UI dependencies.

I see that the LevelDB implementation you pointed to can use a backend that uses IndexedDB. So theoretically this LevelDB API could provide an API that works across both browser and server.

But argh. IndexedDB is a standardized API. It has a usable open source implementation in WebKit. Why not just go with that?? Why create a different API that does the same thing? I've been building an entire app around IndexedDB, but now I have to port it to a different API to run on a server? Why?


If you'd used leveldb from the beginning you wouldn't have this problem as it will happily work on the server and in the browser https://www.npmjs.com/package/level-js


Yes, I see that. But if IndexedDB were supported on Node I also would not have this problem. So the question is, why can't node just support IndexedDB (which is formally specified in a standards document) instead of inventing an extremely similar but incompatible API?


Because IndexedDB is a W3C spec[0] intended for web browsers and LevelDB is a third-party npm module[1].

Node is just a JS environment. Implementing IndexedDB is as much out of scope as implementing XHR[2] or the File API[3]. In fact it provides the building blocks for developers to implement any of these on top of Node should they need to (like node-fetch[4] implementing the Fetch API[5] for isomorphic apps).

[0]: http://www.w3.org/TR/IndexedDB/

[1]: http://leveldb.org/

[2]: https://xhr.spec.whatwg.org/

[3]: http://www.w3.org/TR/FileAPI/

[4]: https://www.npmjs.com/package/node-fetch

[5]: https://fetch.spec.whatwg.org/


This just begs the question. The question is, what about IndexedDB, XHR, the File API, etc. make them unsuitable for pure-JS environments?

Because there is nothing about the problem domains (indexed key/value store, asynchronous web requests, filesystem I/O) that is specific to web browsers.

If a problem domain is common across browsers and pure-JS environments, then it should follow that there can be common APIs. If some part of the API is necessarily specific to one or the other, then ideally these differences should be localized to small parts of the API.
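One way to localize those differences is a thin facade over whatever backend each environment provides. This is a hypothetical sketch of the idea, not an existing library — `createStore` and its backend contract are made up for illustration:

```javascript
// Hypothetical key/value facade: the same API in every environment,
// with the environment-specific part confined to the backend passed in.
function createStore(backend) {
  return {
    get: function (key) { return backend.get(key); },
    set: function (key, value) { backend.set(key, value); }
  };
}

// In Node, a plain Map suffices as a backend; in a browser you might
// instead pass an object wrapping localStorage or IndexedDB.
var store = createStore(new Map());
store.set('greeting', 'hello');
console.log(store.get('greeting')); // hello
```

Application code written against the facade never touches the backend directly, which is roughly what the `level`/`level-js` pairing mentioned upthread achieves.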


That's an intuitive assumption but it's naive (i.e. it lacks understanding of the actual domain concerns).

JS in the browser needs to be sandboxed by default and has to handle concerns like cross-origin policies and interactively seeking user permissions. It also has some fairly browser-specific singletons (e.g. a shared global cookie storage).

The equivalent built-in node APIs are much more low-level, allowing developers to use abstractions that are useful in their problem domain.

Having an IndexedDB implementation in node core would be an incredibly pointless effort (most apps out there won't use it) and bring with it several complications (e.g. pluggable storage backends and concurrency conflicts if you want multiple node processes to share the same database). Plus it would mean the Node Foundation would have to get involved in the standardization process to make its concerns heard and likely introduces concerns that are irrelevant to everyone else (i.e. browser vendors).

Don't forget that Node is not a web framework. It's a JS runtime environment. It is primarily used for things that talk over the web or that generate content for the web, but it's not at all unreasonable to implement other things in it (e.g. mail servers). The web specs carry a lot of overhead that is simply unnecessary for most node applications even if it is perfectly necessary in browsers.

The only spec I can think of that I'd like to see in node is the Fetch API and for that we have node-fetch, which just wraps node's low-level http module.

What I'm trying to say is that node doesn't need these high-level APIs because it can give you the low-level APIs to implement them with. Browsers can't do this, so they need to work at an entirely different layer of abstraction. Plus node allows you to easily include native extensions whereas in the browser you can't have that (except for NaCl).


IndexedDB is standardized for browsers, and it's not at all designed for use on a backend.


What about IndexedDB's design is incompatible with running on a backend?


LevelDB is the db behind IndexedDB.



I agree with you that it'd be nice, but I don't think there are any plans for it.

I wrote https://github.com/dumbmatter/fakeIndexedDB which does run in Node (albeit really slowly and only in memory). It wouldn't be that much work to make it run on a real DB backend (LevelDB like Chrome, SQLite like Firefox, etc), at which point it would be everything you want. PRs are welcome :)


So, why is the link 404ing right now?...


Node on Docker Hub needs a new tag.

https://hub.docker.com/_/node/


Get an Alpine-based one; it's 10x smaller. I made one with npm v3 and node v4.0.0 https://hub.docker.com/r/antouank/alp-node/


Thanks to docker's image layering, using a non-Ubuntu image may actually be "bigger" if you already use other Ubuntu images.


Same with Homebrew, please.


> ARMv6 32-bit Binary: Coming soon

Is there more information on what's holding back this build?


The fact that the little Raspberry Pi building it is still...well, building it! :)

https://ci.nodejs.org/job/iojs+release/168/nodes=pi1-raspbia...

It does look like it's very nearly done though, given that it's tar'ing stuff up at the moment.

The decision was made not to wait for it here - https://github.com/nodejs/node/issues/2522


Should have used a high-powered Scaleway server as advertised on HN a few days ago :-P


just finished building :)


yay


So excited!


0.12.7 -> 4.0.0

Best versioning convention ever :)


Actually it went:

0.12.7 -> 1.0.0 -> 1.0.1 -> 1.0.2 -> 1.0.3 -> 1.0.4 -> 1.1.0 -> 1.2.0 -> 1.3.0 -> 1.4.1 -> 1.4.2 -> 1.4.3 -> 1.5.0 -> 1.5.1 -> 1.6.0 -> 1.6.1 -> 1.6.2 -> 1.6.3 -> 1.6.4 -> 1.7.1 -> 1.8.1 -> 1.8.2 -> 1.8.3 -> 1.8.4 -> 2.0.0 -> 2.0.1 -> 2.0.2 -> 2.1.0 -> 2.2.0 -> 2.2.1 -> 2.3.0 -> 2.3.1 -> 2.3.2 -> 2.3.3 -> 2.3.4 -> 2.4.0 -> 2.5.0 -> 3.0.0 -> 3.1.0 -> 3.2.0 -> 3.3.0 -> 4.0.0

You just weren't paying attention.


If you include io.js, sure. Node did technically jump considerably.


Technically Node 4 is io.js 4 after Node 0.12 was merged into io.js 3.

So it's really 0.11.x -> 1.x -> 2.x -> 3.x -> 4.0.0. Except that 1.x, 2.x and 3.x weren't called "Node" at the time because Joyent owns the trademark and went on to release their own 0.x release(s) until the merge happened.


just follow the timeline of io.js



