Why C++ is not my favourite language (snell-pym.org.uk)
17 points by alaricsp on Aug 28, 2009 | hide | past | favorite | 52 comments


I guess he's not a systems programmer. What language did Slava Pestov turn to when he recently re-wrote Factor's VM?

C++ programmers: we do the dirty work so you don't have to.


Indeed. So many of these articles can be boiled down to:

"As long as we limit the comparison to things (our language) does well, and exclude things C/C++ do that (our language) can't do at all, (our language) totally 0wnz C/C++".


Hardly. I know many people who spent years learning C++ in the 90s only to avoid it now. The reason is that they had no love for the language. The C++ language design has serious flaws, and whenever someone brings them up there's the usual rolling of the eyes and the "yeah, we know" sort of look. The "if you criticise it, you must be dumb" attitude is encouraged by the C++ community too, and that was not borrowed from the C community, where the emphasis was on beautiful, tight code (like the K&R manual itself).

Let's face it: C++ is something like 30 years old this year. For a 30-year-old language to have so many flaws, and for newer languages to do a better job of integrating OO while still keeping the power, you have to ask whether the C++ community really knows all about technology and nothing about design. The whole project has been misled from start to finish. It strikes me that it took a couple of Ada programmers to write the STL.


C++ hits a certain sweet spot. In some cases - whether you like it or not - it's just the right tool for the job. Two examples I can think of are:

* VM implementations. I mentioned Factor, but HotSpot, Microsoft's CLR, SquirrelFish, TraceMonkey, V8 and Opera's Carakan are all high-performance VMs written in C++ (x). Note also that all major layout engines are written in C++: complex but fast, highly-tuned beasts.

* Production rendering. PRMan, MentalRay and pretty much every other renderer in day-to-day use on film and video post-production are written in C++.

People who write these systems care deeply about performance. They'll dip into assembly when they have to. C, while fast, won't cut it, because in these cases it doesn't give you enough abstraction to avoid making your code a mess (strong typing, operator overloading and templates in particular, but also virtual method dispatch). I'm sure if you asked them, they'd tell you they'd love to use something nicer and more modern. But it doesn't yet exist.

This is not to say that C++ is the right tool for writing web applications, or even the average desktop application, or even - increasingly these days - games. But somebody needs to write the stack underneath, and that stack needs to be fast.

(x) note Mono as an example of a VM that sticks with C. But it contains a lot of object-oriented shenanigans that could probably be better expressed in C++, and the core developers came from a strong C culture.


How about D? A friend of mine uses it as a better C++ for mathematical programming (optimization, graphs and so on).

Perhaps nobody would have invented C++ if D had come earlier.


Yes, D shows a great deal of promise. Maybe I'm short-sighted, but at the moment it's the only language I can see being a viable replacement for what C++ does well. D was invented by Walter Bright, after he had written a C++ compiler (and somehow kept his marbles). I think it was his take on a better C++. Quite a few other clever people have jumped on board to help with the design since.


Two things:

- Of course there are lots of good things written in C++; there are also lots of dreadful things. This is not a reflection on the language so much as on the talent of the developers involved. If nothing good had been written in the language in the last 30 years, it would be a real disaster.

- C continues to be used for very large projects; you only have to look at Linux. One of the problems with C++ is the amount of bad programming there is. Perhaps this is because of the poor examples given in the standard C++ programming books.

There is of course Objective-C. I wonder whether the first version of Doom was written in Objective-C: there is a high possibility, given its NeXTSTEP lineage.

I'm not sure if I'd go this far, but have you read: http://thread.gmane.org/gmane.comp.version-control.git/57643...


I think the Doom engine was written in C (and assembler), but the level-editing tools were in Objective-C. Again, the right tool for the right job - Objective-C's dynamic dispatch greatly helps user interface programming compared to bare C, whereas the Doom engine probably had no need for it.


Please never, never, never ever write C/C++.

C - simple and stable; C++ - huge and changing. C - portable and easily parseable; C++ - unpredictable and unparseable. C - good, C++ - bad.


I hear good stories about D from colleagues:

http://digitalmars.com/d/2.0/overview.html


D's great for at least two reasons:

1) it set out to be a no-compromise systems programming language

2) it's being designed by people with a deep and intimate knowledge of C++ (Walter Bright wrote a C++ compiler - no mean feat - and Andrei Alexandrescu wrote Modern C++ Design)

These guys know the good, the bad and the ugly of C++ much better than most. From the stuff I've seen so far (check out Alexandrescu's presentations on an iterator-free STL, or on adding functional purity, and also look at the work on making floating point more rigorous), this could finally be a worthy successor.


3) You can declare functions to be pure. A lot of optimizations become possible.


Actually, my reason for writing "C/C++" was simply because they're the two languages I see targeted by articles like these. It wasn't to lump them together as one; it was like writing "Ruby/Python".


So, you don't use a C++ compiler for your C code?

The old C infrastructure is really outdated already -- you can use C and a very limited subset of C++ (without OO), and be perfectly happy.


No, I'm using GCC.

How's it outdated? The Linux kernel is written in it.

As for a limited subset: people will push and push the boundaries until your code grows fangs and horns.


It seems like a language like OCaml could do quite well in the VM implementation space: it has very low-level imperative constructs that can be used to optimize, yet also has very good features that allow for generic programming (modules, functors, type inference).


That's really cool that this guy thinks dynamic languages have the potential to be faster than C++. Maybe he can write a second article when they are actually faster, with some properly done benchmarks or some valid technical examples.


More than 10 years ago, Jeffrey Mark Siskind posted a micro-benchmark [1] of his aggressively optimizing Stalin Scheme compiler versus the then-current GCC. Stalin produced code 21 times faster than GCC's.

I do not know why this fact remains so little-known.

[1] http://groups.google.com/group/comp.lang.lisp/msg/9801ba2edd...


Note the comparison is to C, not C++. The runtime improvement came from aggressive inlining.

I'm pretty sure that C++ code (which allows templates, function objects etc) can be written to do the same.

Of course, then we'll get into discussions about whether the C++ code is "idiomatic", requires "wizardry" etc.

The biggest win I see for JIT'd dynamic languages is their ability to optimize across source files. I wish C++ had the capacity to slurp in ALL of a project's C/C++ files and compile them in one pass.

In these days of 16G developer desktops, is this an unreasonable demand? :)


In fact C++ is usually slower than C because parsing and comprehending it is so damned hard that compiler writers are satisfied when they can get that far.


Common Lisp is already quite fast and dynamic. And for the same number of hacker-hours spent on optimizing, you probably get a larger speedup. Python+C can also be quite a fast combo: fast to develop and fast to execute. Forth is also quite fast and dynamic, if you grok it.

A lot of functional programming languages have also become quite fast in the last decade. While they are in a sense more fluid than C++, they are normally considered static (not dynamic) languages.

If you want to compare asymptotic speeds where programmer-hours invested goes to infinity, the lower level languages will probably always win. Like assembler, C or C++.

For benchmarks see: http://shootout.alioth.debian.org/


OCaml does quite well against C++ in the shootout. OCaml's functors and type inference are a big win over C++ templates. Yeah, the syntax takes some getting used to, but all in all the language seems to hang together much better than C++ (perhaps not saying much).


I agree. I like Haskell's syntax better. But their syntaxen are nearly isomorphic for the most common stuff.


I'm finding OCaml's approach to be a bit more practical even if it isn't as pure as Haskell. Once in a while it's nice to be able to drop into an imperative style or even do some OO and OCaml lets you do all of that.


Yes. Though Haskell allows imperative style (via Monads), too.


I can't help feeling that Common Lisp should be doing better, considering that it's a language where you can effectively tell the compiler "It's OK to store this variable in a register". It's currently doing about the same as Mono, which IIRC doesn't JIT, and worse than Java.

The king of dynamic language performance right now is LuaJIT, which crushes Perl, Python, and Ruby and performs admirably relative to Smalltalk and Scheme.


The quality of the individual programs plays a large role. E.g. GHC (Haskell) has made huge strides in the past thanks to better libraries and implementations for the benchmark, despite nearly the same compiler.

On a side note, a lot of the benchmarks had to be reformulated after lazy languages got fast. As far as I know, a benchmark at this site prescribes which algorithm you should use; Haskell (and e.g. Clean) just skipped a lot of the baggage because its results were never used.


> despite nearly the same compiler

As you don't say how far "in the past" maybe that's nonsense or maybe that's true.

> a lot of the benchmarks had to be reformulated after lazy languages got fast

That's nonsense.

Programs for one benchmark - binary-trees - had to be rewritten because "this is an adaptation of a benchmark for testing GC so we are interested in the whole tree being allocated before any nodes are GC'd" and with lazy evaluation GC gobbles up nodes before the whole tree's allocated -

http://shootout.alioth.debian.org/u32q/benchmark.php?test=bi...


Thanks for clearing that up. I wrote from my unreliable memory.


I love the shootout, by the way. Thanks for maintaining it.


Thanks to those who contribute programs.

Every week there are new programs, often from people who haven't contributed a program before.

One of these days someone will find a way to make effective use of all the cores for n-body, maybe.


I've found Python/C++ (via boost::python) to be a fantastic combo. There's also luabind, which does a similar Lua/C++ two-step.


That day may not be too far off... I hope ;-)


The canonical anti-C++ link for me has to be http://yosefk.com/c++fqa/ - pretty thorough and reasonably well argued.


That is indeed a most potent collection of detailed arguments!

Now, let's just wait for my colleague to start ribbing me for keeping our project written in C again... ;-)


From my perspective, most of the stuff about C++ complexity rings untrue. Most people I know seem to know enough about C++ that they can produce pretty good code.

Yes, there's a lot of depth to C++, but you derive a lot of benefit from others' wizardry even if you're not a magician yourself.

Maybe this guy didn't invest as much time on C++ as he did on his other languages.


Perhaps, or perhaps C++ programmers who defend the language haven't spent enough time in other languages.

If you are just plugging components together I can think of better languages.

In my experience, C++ demands more time to master than most languages - far longer than, say, C.


I think that's not the right criterion, though. Are most of the people you know capable of maintaining other people's C++ code?

The idea that you can write (!) sane code in C++ by using a simple/sane subset is of course true. But the problem is that in a language of this complexity, no two people are going to be able to agree on what that sane subset is. One person may love shared_ptr and use it everywhere, confusing her coworkers who don't get reference counting idioms, but love the STL...

By the time you manage to get your project's coding standards hammered out and enforced, and get it rigged up so it can talk to your external libraries that don't adhere to those standards, you might as well have given up and used C to begin with.


I agree. (Might I add that reference counting is evil (and slower than true garbage collection).)


FWIW, I'm working in C for my day job (and resisting calls to move up to C++), as my day job involves shuffling bytes around rather than particularly complex code; it's ~20KLOC. Then we build the tools to manage it in Python with a bit of shell scripting here and there. However, I code Scheme in my spare time, and would do the lot in that if I didn't then have to justify training new developers in Scheme! C's the right tradeoff for commercially developing a high-performance database kernel, IMHO.

I've done plenty of C++ in past jobs... which is why my day job is in C :-)


Is there an easy way to shuffle bytes and bits in Scheme? I was quite delighted when I found Data.Binary (http://code.haskell.org/binary/) for Haskell.


Yes... sort of.

SRFI-4 (http://srfi.schemers.org/srfi-4/srfi-4.html) provides a standard interface for dealing with 'homogeneous vectors' (e.g. arrays of unboxed integers or floats), which is great for shuffling bytes.

For bits, there's the usual bitwise-or and all that.

However, both could be improved somewhat; what the Factor folks have done with their struct arrays is IMHO superior.


I sympathize with the author's dislike for C++, and while you can't rule out that dynamic languages "could" at some point in the future be faster than C++, it's very unlikely to happen, since one of the requirements for this kind of speed is that a language be statically typed. Without type information, the compiler is left with very few options for optimizing the generated code.


one of the requirements for this kind of speed is for a language to be statically typed

No, one of the requirements for static optimization is that a language be statically typed. It does not follow that there are no ways to make dynamic languages fast, just that they mostly have to use different techniques.


You need to make languages fast along both dimensions, static and dynamic. A dynamically typed language can only be optimized along one dimension, which is why they usually trail behind statically typed languages in performance (since those are optimized along both axes).


I've not seen anything like the dynamic-dispatch optimisations the Self team came up with in the 1990s done in C++, nor problem-domain optimisations expressed as macros as is often done in Lisps, nor the aggressive constant-propagation of closures and their eventual inlining that Factor does.

Perhaps some of them could be applied, but a highly complicated base language with extensive mutation semantics (including pointer aliasing) probably means they'll be so limited in their applicable scope, and so difficult to implement, that it's not an attractive activity for C++ compiler writers...


Do you know Synthesis OS (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.29.4...)? The author made the OS specialize (and thus optimize) system calls during runtime (or something like that; it's been a while since I read the paper). The techniques might be applicable in higher-level languages than the assembler used there.


Yeah! Synthesis is quite excellent. I've been interested in exploring using the FORTH model of easily-accessible-runtime-compiler in that sort of context...


However it seems dead. Anything new on Synthesis (and related ideas) in the last two decades?


Other than JIT recompilation in VMs of various kinds (which is only indirectly related), no, not that I know of...


Don't you mean statically bound? I think quite a few C++ programmers get into the mindset that once the program has compiled, that's the end of the optimisation. It depends on the system being run, but if the system has a long runtime, ongoing optimisation may bring better performance over time. This is particularly true given how many bad C++ programmers there are (which is partly down to the language's learning curve).

Strict typing was mostly about trapping programming errors at compilation rather than at runtime. Yet despite all the effort, C++ has struggled to improve defect counts through typing (C++ systems often fail for far more obscure reasons, and the strict typing can cause hideous compile-time errors).


Ousterhout of Tcl fame has a very nice (and surprisingly old) paper about this. http://www.tcl.tk/doc/scripting.html



