This argument comes up every time somebody suggests an unpopular tool, and I think it's inherently flawed. The core problem is that popularity does not imply--and does not even necessarily correlate with--quality! Just because something is in some real sense better does not mean people will adopt it.

Most people are not willing to learn something fundamentally new. They might be content going from something they know to something very similar, say Java to Python, but they will resist anything truly novel and different. And the others? They're mostly the ones already advocating Haskell or Lisp or Forth or what have you!

Also, people have some really odd reservations when switching to a new technology. They are often not willing to take any visible steps back: the new might be better in a whole bunch of non-trivial ways, but if it's obviously worse on some easily identifiable property, people will avoid it. A new language might have a long list of advantages, but if it has an inefficient default string type, or slightly wonky syntax, or a sub-optimal packaging tool or any other superficial but obvious shortcoming, people aren't willing to make the switch.

Another problem is that there are different kinds of productivity. There are "dense" productivity changes: if you have to write your own JSON library or deal with a broken build system, you'll be spending a contiguous block of time on it. You'll have to devote maybe a whole day or even a week to getting around this problem. There is no way to miss this. On the other hand, if a language improves productivity in a "sparse" sort of way--say you spend 20% less time writing code and 30% less on debugging and maintenance--you won't notice quite as easily. And yet, over any reasonable stretch of time using the language, you'll come out far ahead even if you have to sink in days working around problems and missing libraries.

A particular--and particularly important--example of this is in learning. As I said above, one of the main reasons people resist new technology is that they don't want to learn. They're too busy to spend a whole week or even a month picking up something new. And yet, learning time is essentially a constant expense. Productivity gains, on the other hand, are at least linear in how much you program. If a language makes your code simpler, the gains can even be super-linear (that is, you get more benefits as you write bigger programs). These will dominate any learning time as soon as you actually start using the new technology widely. And yet, since the amount of time spent learning is obvious and the ambient productivity gains aren't, people put a larger-than-warranted cost on the former.
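To put rough numbers on that (and every number here is a pure assumption I'm making up for illustration: a one-time 40-hour learning cost, 20 hours of coding a week, 25% of time saved overall), the break-even arithmetic looks something like this:

    // Toy break-even calculation; every number is an assumed input.
    const learningCostHours = 40;   // one-time, constant expense
    const weeklyCodingHours = 20;   // scales with how much you program
    const productivityGain = 0.25;  // fraction of coding time saved

    const hoursSavedPerWeek = weeklyCodingHours * productivityGain; // 5
    const breakEvenWeeks = learningCostHours / hoursSavedPerWeek;   // 8

    console.log(`Break-even after ${breakEvenWeeks} weeks of normal use.`);

After that point the constant learning cost is paid off, and the linear gains just keep accumulating.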

Incidentally, this does not apply only to programming languages. I've seen exactly the same sort of behavior in the adoption of any kind of new, non-trivial technology: Emacs, Vim, Git, Linux, etc.

In short, don't trust popularity. This probably makes me sound elitist (and, to be fair, I probably am), but it's just like music or literature: the popular stuff is usually not particularly good and the good stuff is usually not particularly popular.



Sorry, but this argument comes up every time somebody suggests that an unpopular tool is merely unpopular and not flawed.

Lisp is one of the oldest programming languages in existence; it's still taught in universities and has been for over 40 years. All that age and exposure, and it's still not popular.

I'm not sure why you keep mentioning new technologies when the article is about old technologies and your examples (Emacs, Vim, Linux) are old technologies! The only example of a new technology, Git, has seen a remarkably rapid rise and completely changed the version control landscape in just a few years.

A new language is not going to have a long list of advantages -- it's going to have a different list of trade-offs. Because any language feature that is objectively good with no downside or trade-off has already been implemented in some popular language somewhere.

And if you can prove that you're doing something really different and really better, then even if there's a steep learning curve, people will learn it. Git is the perfect example.


While I on the whole agree with the sentiment--there are a lot of delusional people here who think functional is better, even though it's constantly been the new fad and still hasn't caught on--this post is wrong, and was proved wrong in the last 4 years.

> Because any language feature that is objectively good with no downside or trade-off has already been implemented in some popular language somewhere.

But haven't C#, and now C++ and Java, all very recently and quite suddenly added lambdas and closures?

It took JavaScript, a hybrid language, to show language designers just how powerful those features can be. It took a practical application of the concepts in an almost-OOP setting for people to understand just why they're so useful.
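To make that concrete, here's the standard closure illustration--a function capturing and retaining mutable state--sketched in TypeScript, since that stays closest to the JavaScript idiom (the counter example is mine, not anything from the thread):

    // A closure: the returned function captures `count` and keeps it alive.
    function makeCounter(): () => number {
      let count = 0;
      return () => ++count;
    }

    const next = makeCounter();
    console.log(next()); // 1
    console.log(next()); // 2 -- state persists between calls

This is the pattern that JavaScript callbacks made ubiquitous, and it's essentially what the lambdas in C#, Java, and C++ now let you express directly.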


Not sure why you got downvoted there; I think you're mostly right, or at least have a valid opinion. Maybe it's the word "delusional" :)


They just wanted to be charitable and prove him correct; obviously only delusional people would downvote such a supreme comment!


Yeah, I didn't really mean it like that.

I think the entire time I've been on HN people have been saying everyone's just about to switch to functional programming. We've had a lot of advocacy for Haskell, Scala and F#, but no big switch. It's fairly obviously never going to happen at this point, but they still say it.

In the same time frame, MVC has transformed web programming, with Django, Rails, Symfony, and ASP.NET MVC all becoming the norm.

If functional programming really were that compelling, we'd have seen a similar switch by now. Instead what's happened is that all the main languages have adopted the best bits of functional programming and left the bits which make it hard to write large programs.
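For a sense of what "the best bits" look like once they land in a mainstream language, here's a toy sketch in TypeScript: first-class functions and a map/filter/reduce pipeline, with arbitrary made-up numbers:

    // Functional-style pipeline in an otherwise imperative language.
    const orders = [12.5, 99.0, 3.75, 42.0];

    const total = orders
      .filter(price => price > 10)     // keep orders over 10
      .map(price => price * 1.2)       // apply a 20% markup
      .reduce((sum, p) => sum + p, 0); // sum them up

    console.log(total.toFixed(2)); // 184.20

No monads or purity requirements in sight--just the convenient parts.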


Lisp is a huge success, not because people have built "commercially significant applications" in it, but because it has expanded how generations of programmers think about programming.

I'm not defending Lisp specifically (in fact, I don't like it very much). It's that the notion that a language can be "better" or "flawed" in an unqualified or objective way really irks me. The viewpoint that the only reason anyone would ever program anything is to create production software is, in my frank opinion, an intellectually stunted one.


> Because any language feature that is objectively good with no downside or trade-off has already been implemented in some popular language somewhere.

This almost implies that there is no opportunity for anything new in programming language research, which I find laughably depressing, especially if you take "some popular language somewhere" to mean the popular industrial languages. It would be rather sad to think that Java (or C++, or C#, or Ruby, or Python, ...) represents the pinnacle of programming language design, and we're stuck with it and its ilk from now on, forever, since there are no good language features left that are worth implementing.


It doesn't imply that. But I think the idea that programming language research always entails esoteric new, from-scratch languages with no tools is even more depressing. There has been a lot of research over the last 5 decades, and every year more of that research gets put to use. It wasn't that long ago that a cross-platform language that gets dynamically compiled to native code and supports efficient garbage collection would have been considered science fiction.

Progress is ongoing, but it's evolutionary, not revolutionary -- and that is a good thing.


I've been hearing exactly this argument since the early 2000s, when I was myself a smug Lisp weenie. The thing is, though, that software is by and large a highly competitive business, and any tool that offered the kinds of productivity gains people claim are possible from specific languages would enjoy fairly rapid adoption. Witness the rise of Ruby and Rails, for instance.

To quote John Carmack:

> To the eternal chagrin of language designers, there are plenty of externalities that can overwhelm the benefits of a language, and game development has more than most fields.


While I'll agree that popularity is not everything, and that often people are stuck with overall subpar tools because those tools do one thing particularly well (PHP and application deployment come immediately to mind), I don't think you can reasonably argue that programmers as a whole are so mired in institutional inertia that they can't move to things that are better for them within a relatively fast timeframe. The rapid adoption of, for example, C, C++, Perl, Java, Python, and Ruby makes it painfully obvious that that's not the case.


The difference between the popularity of Emacs, Vim, Git, or Linux and the popularity of a programming language is that it doesn't matter if anyone else on the planet uses Vim, Git, etc. -- they will work the same for me. But the difference in productivity from pre-made libraries between Ruby (52,220 gems cut since July 2009) and Scheme (http://planet.racket-lang.org/) is vast, especially when you consider that the difference in power between the two languages is not that great.


> it doesn't matter if anyone else on the planet uses Vim, Git, etc

How does that work? Unless you are the one creating and supporting Git, it isn't going to work if you are the only person using it. And even then, there would be no GitHub. And I don't know about you, but I won't use Vim if I lose my plugins (whose existence is directly correlated with how many people are using Vim).


Why wouldn't git work? The vast majority of projects I use it for are ones I work on by myself. As for Vim, the only plugins I use are syntax ones and Command-T, which I could live without.



