
> because we're always learning, and we can always do better

I think this is a very important point, but IMO it should be considered along with context. Languages are always used within a particular tech stack, targeting particular hardware, each with its own performance characteristics. For example, a language with garbage collection will need a different design from one that uses reference counting. And that's OK!

It's useful IMO to think of languages in terms of the use cases they're effective in. Some are good for resource-constrained environments, others are highly flexible, others are great at expressing mathematical concepts. Those are distinct use cases, and it shouldn't be surprising that the best languages for each are very different.

Where I think languages can run into trouble is when they try to be all things to all people. A universal tool sounds cool in theory; in practice it risks becoming mediocre at everything, or developing what are affectionately called footguns. These steepen the learning curve and can also cause problems in real-world deployments of software written in that language.



An analogy I like to use is prime vs zoom lenses on DSLR cameras.

A common observation by professionals is that when they have a zoom lens attached to their camera, the pictures tend to be at one extreme of the zoom range or the other. That is, they wanted "as much as possible" and just turned the zoom to its limit in that direction.

Which means that most of the zoom range is (almost) never utilised.

The big advantage of prime lenses is that by sacrificing the ability to zoom, the quality can be better. A 35-50mm zoom is never going to be as good as two separate 35mm and 50mm prime lenses.

So if you want to maximise quality, get primes.

But then your camera bag will be heavier, and your wallet lighter.

Also, you now cannot have any intermediate zoom range in those rare times that you do need it.

So there's always some trade-off being made.

Languages like C++ are like zoom lenses. They allow almost any programming paradigm, and various mixes too. You can have a procedural program with memory safety, with functional bits thrown in, and call out to unsafe C code if you want.

Languages like Haskell or JavaScript are like prime lenses, with a lot of decisions fixed at one extreme or the other.

I suspect that what's needed is the zoom-like flexibility of languages like C++, but with defined subsets that work more like a "prime" language. With physical lenses, this is impossible. You can't take the zoom bits out, leaving the prime behind. With software... I think it can be done.

This isn't that unusual an idea. Mainstream languages are already slowly converging on this concept. For example, C# has a project-level "unsafe" flag, which turns a set of language features on or off.
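Concretely, in C# that switch is the `AllowUnsafeBlocks` MSBuild property in the project file; without it, any `unsafe` block or pointer code is a compile error (a minimal fragment, project name hypothetical):

```xml
<!-- MyProject.csproj: opt the whole project into unsafe code -->
<PropertyGroup>
  <AllowUnsafeBlocks>true</AllowUnsafeBlocks>
</PropertyGroup>
```

Flip it off and the compiler enforces the "prime" subset of the language for the entire project.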



