I disagree with Linus Torvalds about C++ (2009) (johndcook.com)
72 points by piyush_soni on Dec 24, 2018 | 92 comments


I've been trying to recruit good Modern C++ devs for almost a year and it's very, very difficult. Even people with significant experience on their resumes usually fail to answer the most basic questions: what is RAII or const correctness? In what cases would you rather pass around naked pointers instead of smart pointers? And so on. I shudder at the thought of these people writing C++ code and putting it in production. In C++ there are only a few ways to do things right and lots of ways to do them wrong. There is a gotcha at virtually every turn of the language, often not caught by compilers even with all warnings enabled. To top this off, static analysis and linters remain extremely poor for C++.

I understand Linus's argument that in large collaborative C++ projects driven by volunteers, you can easily end up with a messed-up code base littered with stupid mistakes. C is slightly easier to understand and deal with - relatively. Also note that even fewer people actually know which C++ features you can use in bare-metal or kernel code.

As a fun side story, the Windows NT team initially had huge debates about using C++. Dave Cutler was firmly against using C++ to build an OS, if I remember correctly, but the graphics group decided to be hip and modern and do everything in C++. After long delays and frustration, the graphics group had to revert to C.


> After long delays and frustration graphics group had to revert to C.

It was 1992.

C++ is an old language; however, the first version of the C++ standard only appeared in 1998.

Before that happened, the quality of the compilers was mediocre. Early in my career (but much later than 1992) I used VC++ 5 and 6, and they weren't good. They were good enough C-with-classes compilers, but try slightly more complex C++ and they just crashed.

The standard library wasn’t there either, instead there were multiple incompatible STL implementations.


I feel that I’m more skilled in C++ because I developed a large project in C first. Whether or not C is the language one ultimately chooses, it makes one aware of the tradeoffs and benefits of various coding styles.


The main advantage of C is that it forces you to understand what goes on under the hood. If you ask many C++ programmers who didn't do much work in C how the variable var1 below gets allocated, they will be scratching their heads. As a former C programmer, this is the first thing I try to understand - who is allocating what, who will be freeing it, and how. I don't like anything to be a black box or to perform magic for me. But many quick, short-and-sweet courses on C++ completely skip this. There is in fact a book someone showed me which said that you don't need to worry about memory allocations in modern C++ - just use smart pointers and everything will be taken care of for you! Poor souls!!

  void f() {
    std::vector<int> var1;
  }


Any C++ programmer worth their salt should know that a vector is implemented using begin and end pointers. I believe it's more to do with the person than the language. The same person can miss the nuances of C as well.


I actually think this illustrates the success of C++ as (also) a high-level language: a relative C++ novice can produce reasonably correct code by sticking to a small, straightforward subset and following best practices.

I think a C novice seems much more likely to rely on undefined behavior and end up with an accidentally working program that leaks memory, fails to check the return value of malloc, fails to close files, etc.

That is, one can be sort of productive and "correct" in naive C++ - in C that's not really possible (well, one can be "productive", but correct is less likely).


On the one hand, knowing an implementation detail isn't (or shouldn't be) very important; implementations can also change. But on the other hand, it is helpful when you need to make a judgement about the efficiency of certain constructs. For example, I found it very instructive to learn about the actual implementation of the shared pointer.


Ahem... I think the typical implementation is a triplet of base pointer, allocated size, and initialized size.


I expected it would be, and that’s how I’ve implemented my vector variants, but I checked the source code for libc++ and libstdc++. Both use two pointers.


Interesting. I do remember inspecting _M_begin and end in the debugger.

I can't be bothered to check the implementation now, but the allocated size must be stored somewhere. My first thought of inferring the capacity as the smallest power of two greater than the used size doesn't work as the vector size can shrink without affecting capacity.


The used size is end - begin; the capacity is stored in addition. That’s why sizeof(std::vector<T>) is 24 on x86-64.


Parent said allocated, not implemented.


I like to ask candidates to implement (simple subset of) standard facilities as interview questions.


I love C, but these days the thought of doing any serious programming in it is scary, like a nightmare where you would find yourself naked deep in the woods with only a stick in your hand.


I respect great C developers, and a lot of great software is written in C. I can just write an equivalent or better program much more efficiently, with less effort, using C++ metaprogramming. I also think macros are a valuable tool for the language and wouldn't choose to remove them, because they allow for concise shorthand.


I don't think you need to learn C to become a good C++ programmer. But knowing it (or the low level subset of C++) will make you a better programmer.


Isn't this more a reflection of the candidate than of the language? Especially since the questions you mention are not "modern" C++ per se. I am in the same boat as you and have been looking around the market to fill some positions on my team. My experience is also similar to yours, but I attribute that mostly to the interviewee. Something like having ten one-year experiences rather than ten years of experience. It is important in our field to keep up and at least look under the hood to see how things work.


In defense of the interviewees, the "ten one-year experiences" issue is usually dictated by "architects" and middle managers chasing the new shiny tech stack. From the interviewee's perspective, it's self-preservation to have a knowledge base a mile wide and an inch deep, because most businesses just want to get something out the door.


I agree with your last point, but if you see it from the perspective of a project leader that has to pull in changes (like Torvalds) or hire qualified people, that still makes no difference.

If it's too hard to find good people or sift out bad code, even a technically good language is not helping you with your real task.


For languages like Python and C#, it's not uncommon to find candidates who can write syntactically correct code that produces the required output, but whose code would leave you in absolute horror, praying for the candidate.


> Even people with significant experience in their resumes usually fail to answer most basic questions like what is RAII or const correctness? In what cases would you rather pass around naked pointers instead of smart pointers?

I'm not claiming to be highly skilled with C++ (or any language for that matter), but I've spent several years as an undergraduate TA for the year-long intro to programming series which was taught in C++. I read the textbook over several times, helped create homeworks, exams and study material, and not once have I heard of RAII or naked/smart pointers. In fact, I've never heard the word 'correctness' in reference to the idea of const variables. Is my education faulty, or is it disingenuous to call these ideas 'most basic'?


If you have done all this within the past 3-4 years, then I'd say your educational institution has failed you. Today's C++ programmers should be very familiar with Modern C++. Any new C++ project should most definitely be using C++11 or newer. RAII is one of the most fundamental concepts that C++ students should be intimately familiar and proficient with. Const correctness is also one of the fundamental tools that I could not live without. You should write to your professor with the comments here.


I graduated 3 years ago, so that's close to right. I imagine my professors would say that the comment posters are out of touch with the contemporary University environment. And then the comment posters might say that the University is out of touch with contemporary industry needs from graduates. And I'd say you're both right, but here we are.

I graduated from one of the better engineering schools in my West-coast state.


Most universities are still teaching pre C++11, I believe. The terms you mentioned are much more relevant in modern C++, C++11 and onwards. Pre C++11 is more C-like, so I could understand them teaching it as it covers a breadth of C and C++, but new projects should be using C++11 onwards.


Honest question: what sort of C++ dev are you looking for? I'm curious how C++ is being used in industry these days. I'm re-learning C++ after 10+ years away. I'm tired of how squirrely Python is and how much extra work I have to do in C.


> what sort of C++ dev are you looking for? I'm curious how C++ is being used in industry these days.

Not the parent commenter either, but by far the most common answer is: the sort of C++ dev who is happy maintaining incredibly clunky codebases that are old enough to drink alcohol in the U.S. If you're in a well-defined niche sector like gaming, embedded, or low-latency programming, the code won't quite be old enough to drink, but it will still be really, really clunky, and nowhere near a "modern" standard like C++17.


Not the parent, but my answer would be: the sort that actually knows the language, appreciates and uses the whole range of its power (rather than thinking of it merely as "a better C"), and yet does not always write code as if it were Java. A good C++ programmer should be able to consciously switch between two mindsets - one where you use C++ as a very high-level language, similar to the way people use Python or Java, and one where you realize the need for a new abstraction and are able to implement it.


C++ is the dominant language in the space of game engines, simulations, code that runs on low-power devices, and the inner core of virtually all deep learning frameworks. We have been looking for devs who can work in all these domains. At least in our case, the code base is pretty new - no legacy maintenance stuff.


This may sound inappropriate (I'm not trying to attack anybody), but my opinion is that no programmer would stick with C++ if he learned C++ well enough and had any quantum of critical thinking. Many parts of C++ are just plain design mistakes that negatively influence programming and operation.

Maybe you can find some good C++ devs in the Caml, Lisp, Scala, and Rust communities, but these people probably would not turn back to C++ without a strong reason.


On the other hand, "there are languages everyone complains about, and there are languages nobody uses." See, for instance, https://stackify.com/popular-programming-languages-2018/.


Let's keep in mind that even though Torvalds said this from the kernel point of view, he agreed to port his userspace app Subsurface from C with GTK to C++ with Qt.

https://liveblue.wordpress.com/2013/11/28/subsurface-switche...

Besides, there's really not much point in comparing 1991's non-standardized "vector<bool> sounds like an awesome idea, but first we have to find a compiler which implements inheritance properly" C++ with 2018's "you can compile template metaprogramming and constexpr stuff on a Commodore 64 and Windows 10 and it works fine" C++ (https://www.youtube.com/watch?v=zBkNBP00wJE)


Exactly - it's that "system level" kernel point of view that's important. I write a lot of embedded/real-time/kernel code, and essentially the thing you have to avoid is "new", which is sort of the heart of the language ....

Why new? Because you don't have a heap in the kernel, or practically on a system with 20K of RAM. new is also bad in real-time code because underneath it is malloc and (like printf/stdio) all the mutexes that protect code messing with the heap structure .... and that way lies priority inversions and heisenbugs and other dark beasties ...

(for a lot of real time code new is great at the beginning of time, but not on the fly while the world is running)


The Linux kernel absolutely does have a heap, most memory the kernel deals with is kmalloced.

As for small-memory systems, even MISRA and the NASA guidelines say no memory allocation after initialization; having a bump-pointer heap is very nice. Global operator new overrides are really nice in embedded systems too: I had one for the RTOS I worked on that let you specify memory region, alignment, etc., and it worked by default for allocating any C++ object, for free. Obviously you have to avoid leaving behind code that news after initialization, or frees, but that's nothing new on small systems.


Yes, but it's explicit: you have to call special kernel routines and manage the memory yourself; it's not hidden in some library somewhere causing 500 new/deletes outside of your knowledge .... I've certainly met programmers in the past who had not internalized that new/malloc are a couple of orders of magnitude more expensive than other language primitives like +, and had to show them how to profile .... and discover that 90% of the time our large multithreaded app was spending was in malloc, inside their bespoke strings package, and that statically allocated strings on the stack and strncpy were perfectly good for 90% of what they were doing with strings and didn't spend any time at all in low-level heap mutexes.


I mean, the C standard library internally mallocs a bunch too, which is (one of many reasons) why the kernel has its own modified version of a subset of the C library. If it were written in C++, the same rules would apply. When I was the lead for a C++14 RTOS that was MISRA compliant, I had to write a custom collections library that's much more explicit. It's not that hard.


You can overload operator new to not use the heap. You can use allocators.


Overloading new (for the purpose of, e.g., adding a thread-local allocator) adds implicit global state that isn't always easy to manage.


Sure, but no more than the standard allocator. Explicit allocator objects are a superior solution most of the time anyway.


Good memory management is never easy on a system level, so if one is overloading new, they'd better be comfortable around the complexities involved.


If you decide to use new/new[], you are bound to use the same allocator everywhere; C++ does not give you a chance to customize your allocation.


Another potential problem is that "new" can throw an exception.


In practice they don't on a lot of systems. Last time I checked, glibc baked in Linux's overcommit-by-default nonsense and just throws up its hands, saying "malloc never fails".


That's not exactly correct. You can set ulimit and malloc will happily fail. And ulimit is a pretty standard tool.


ulimit on the resident set size hasn't worked since the Linux 2.4 days


There are 4 options:

1) allocation can't fail for whatever reason, so new won't ever throw.

2) allocation can fail but you do not care to handle it. Your application will abort on an exception which is the best scenario.

3) allocation can fail and you want to handle it non locally (for example in a constructor). Exceptions work well in this scenario.

4) allocation can fail and you want to handle it locally: new(nothrow) is your friend. Remember to check for '!= nullptr'.


Exceptions require special stack-unwinding logic, which may not be adequate for a kernel (?)


Sure (although there is work ongoing to fix that), but that's only relevant for point 3 and in that case 4 is an option.


> essentially the thing you have to avoid is "new" which is sort of the heart of the language ...

You can use placement-new syntax and ~Object() (direct call to the destructor), together with malloc/free, or any other memory-allocation primitive, just like in C. You don't have to use the inbuilt new operator.

Still, C++ should be considered a near-legacy language these days. Rust (even in its no_std subset) is progressing rather quickly.


> For example, once you’ve allocated and de-allocated C structs a few times, you realize it would be good to have functions to do this allocation and de-allocation. You basically end up re-inventing C++ constructors and destructors.

Conflation of resource allocation on the one hand and construction (a.k.a. initialization) on the other.

Good C programmers do not re-invent C++ constructors and destructors, because they don't think what C++ constructors do is a good idea to do.

While C++ allows separating allocation and initialization ("placement new"), that goes against the grain of the language and its ecosystem. You're still left with constructors new-ing nested things themselves, and you're still left with exceptions from constructors where a simple return value would be better.

> A typical C or C++ programmer simply will not write anything more efficient or more robust than the methods in these libraries if they decide to roll their own.

A typical C programmer will write something different in the first place. Something that doesn't take ages to compile. Something that doesn't do allocations behind their back. Something that doesn't spew out nigh unreadable error messages if they got a simple thing wrong. Something that doesn't encourage clever code that isn't doing anything useful. Something that doesn't require loads of boilerplate and duplication (thinking of const for example) for the simplest task.


Separation between allocation and construction is a fundamental part of the C++ language, performed by different functions.

I'm not sure why you think that placement new is against the grain of the language, it is what allows non-intrusive yet inline data structures in the first place.

Re constructors new-ing things: one still has the option of passing preallocated objects via constructor parameters, or customizing allocation via an allocator. The language itself doesn't have a preference.


Resource Allocation Is Initialization is one of the central themes in C++.

> one still has the option of passing the preallocated objects via constructor parameters, or customize allocation via an allocator

I'd rather just allocate_the_thing(); initialize_the_thing(); do_things_with_the_thing(); free_the_thing(); which makes it very easy to pull these farther away from each other, or to perform bulk operations over many "objects". In other words, I just want to do what needs to be done, when it needs to be done, and not have to build and use awkward semantically heavy trap doors (rvalue references?) to not do the systematic things that weren't a good idea in the first place.


The A in RAII is for acquisition and it is about ownership, not allocation.

You can do exactly those things in C++ (operator new or allocators, T::T(), T::~T(), operator delete). The language just gives the tools to distinguish a bunch of uninitialized memory from an actual T which holds its invariant (if any).


Ok acquisition it is. I don't even understand the difference. Wikipedia doesn't seem to make a distinction either.

Uninitialized memory is all I ever want. And I want to simply write to that memory when the time has come, not hide those writes in a T::T() somewhere in an isolated file with lots of braces and spaces and colons and only few things that actually happen. It's really hard to map out the codepath of a project with lots of constructors and inherited constructors 15 levels down the call stack. Same applies for running it in a debugger.


There is a good comment after the article: "The simplicity of C is more useful than the additional features of C++." -- (Sam Watkins)


> Conflation of resource allocation on the one hand and construction (a.k.a. initialization) on the other.

a.k.a. RAII (Resource Acquisition Is Initialization). Which is very often what you want anyway.


As others have said, it depends on your context. If you need to be able to maintain tight control over resource usage, for example, RAII creates challenges. It can also be a bit of a chore to think about if you're doing interop with other languages.

I've come to think of C++ as the tiktaalik of programming languages: It can live in two environments, and was an important evolutionary step in transitioning from one to the other. But, ultimately, it embodied so many design compromises that it was never going to be ideally suited to either of them.


Just because it has a name doesn’t make it a good idea necessarily. What one calls “initialization” someone else might call a “side effect”.

In particular, sometimes you don’t want a bunch of allocations and locks firing on every function call and exit. C makes this behaviour explicit and C++ makes it implicit.


I can understand where Linus is coming from. C++ does indeed have STL and Boost into which much thought has gone. But they do not feel like an integral part of the language or a natural extension of it. The verbose, sometimes awkwardly complex syntax that results is also a problem. It may work for those who can stand this. But for others it is not so pleasant.

When using C++ I feel like too much of my time is spent having to understand how the abstractions work under the covers. I feel that this is somewhat pointless. Shouldn't a high level language free you from worrying about the details?


C++ is only a "high-level language" in comparison to C or asm. It should basically never be your first choice for a CRUD app in 2018. The point of the standard library is not to give you a bunch of abstract containers that "just work", but rather to give you many highly optimized implementations of specific data structures to choose from.


> the point of the standard library is not to give you a bunch of abstract containers that "just work", but rather to give you many highly-optimized implementations of specific data structures to choose from.

Are you talking about C++ or high level languages in general?

Either way you can come at this from multiple directions. For instance I like the way Josh Bloch designed the collection classes of Java, where you have some key types (given as interfaces) and multiple implementations as well as adapters. The focus is on the semantics and not really the implementation. For the most part you try to write code so it can take advantage of as many implementations of the same type as possible.

Rather than focus on implementation detail I think a standard library has two jobs: the first is to serve as an example of idiomatic language use. The second is to make the language useful - meaning that it should be possible to do practical things with only the standard library. Today such a practical thing might be to perform an HTTP request without having to go on an easter egg hunt.

I think one reason Go is becoming so popular is that it isn't obsessed with "purity". Remember Scheme? Remember how useless it was out of the box? It couldn't actually do anything.


> Are you talking about C++ or high level languages in general?

C++ specifically. If I don't care about the difference between, for example, a list, a vector, and a deque, I am probably going to choose a higher-level language to work in. That said, you can write container-agnostic code with templates and range-based for and/or iterators if you really want.

I agree with your other points. In particular, it's annoying to have to reach for libcurl every time I want to do simple HTTP stuff, and I have fond memories of Scheme from CS 101, as beautiful and impractical as it is.


> that said, you can write container agnostic code with templates and range-based for and/or iterators if you really want.

You can, but people don't often do.


It's worth noting that C++ has changed a lot since 2009, and many scary things were deprecated. So far the biggest problem is the amount of legacy-written code at this point. Though in my opinion, languages like Rust fit better than C++ or even C for writing kernels and runtimes.


> a lot of substandard programmers use it

That’s no longer the case. Substandard programmers have moved towards safer GC languages.

> STL and Boost and other total and utter crap

For many parts, yes indeed, but both are optional. Boost is not even a standard library.

> typical C or C++ programmer simply will not write anything more efficient or more robust than the methods in these libraries if they decide to roll their own.

Some parts of the STL are just horrible. Look at the <iostream> header. Even with my experience, I would struggle to roll my own IO that's less efficient or less robust; I don't think it's humanly possible.

Other parts work OK but just too slow. For example, I avoid using standard collections except strings and vectors: https://github.com/Const-me/CollectionMicrobench

> to limit yourself to all the things that are basically available in C.

C++ has a problem: it doesn't have an ABI. When you're writing complex enough systems with it, you either invent one on top of it (see how MS did it in Windows with COM, IUnknown + HRESULT, used in DirectX, Media Foundation, .NET, UWP, and many other libraries & frameworks), or indeed limit yourself to things available in C, plus just a couple of extras like unique_ptr, string, and vector.

However, on the lower level, in the implementation of these components, C++ helps a lot. If you look at how MS implemented C runtime library routines like printf(), you'll find they used a lot of C++ inside, even templates to avoid the code duplication caused by char/wchar_t versions.


Ancient flamewar topics are insanely boring


Agreed, unless the purpose is to extract lessons learned. I can think of several where that has not happened yet.


One thing I always do when they happen is read through the comments anyway, sometimes people bring up some rather insightful things about a language, editor or whatever. Having heard from an emacs user why he prefers it over vim made me really appreciate emacs. I'll use either vim or emacs though, just don't care personally as long as I can: open a file, edit a file, and save a file. I also use full blown "bloated" IDEs.

Of course I ignore a lot of comments, but I try to go for the ones that seem intelligible.


What was it that he said about emacs?


It was some blog post I can't seem to find, but they explained all the different things they can do from within emacs - lots of things vim doesn't seem capable of, such as SSH'ing to a remote system to open files remotely, handling mail entirely within emacs, even using emacs as a web browser, which is interesting, and other things fascinating enough to me. There's also the built-in git functionality; some people use emacs exclusively to manage their use of git.

The emacs editor really is an operating system in its own right. I really fell in love with the idea of being able to just work all day with nothing but emacs, though I have not gotten to that point yet. Never having to leave my editor for anything sounds pretty useful if done right.


> I can think of several where that has not happened yet.

Tell me more! :-)


Monolith vs Microkernel.

I suspect in the long run that one will be settled in favor of the microkernel, but it will take a long, long time to get there. A missed chance if there ever was one, but I can see some ulterior motives to keep the kernel as complex and hard to contribute to as possible. If everybody could write drivers, then where would the glory be in that...


> If everybody could write drivers then where would be the glory in that...

Everyone can write drivers. It's one of the few things we can do in monolith-land. I guess I'm probably missing your point.


> I guess I'm probably missing your point.

Yes you are, but that's fine. I totally see why the difference between 'can' and 'actually does' is lost. The reason for that is simple: device driver writing on a monolithic kernel is something of a black art, even with all the tools we have at our disposal today. On a microkernel you'd just be writing any other user process, with access to a couple of ports and maybe a special hook to handle hw interrupts, but other than that you would not be able to tell the difference. Debugging would be almost as easy as debugging any other user process. You could use a whole slew of high-level languages to do your device driver writing. If the interpreters on the system took care of the lower levels of interacting with the kernel to deal with IO and/or interrupts, then you could write your device drivers in interpreted languages.

But, you'll have to wait a little longer to be able to do that. Or try to locate a copy of QnX...


> On a microkernel you'd just be writing any other user process, with access to a couple of ports and maybe a special hook to handle hw interrupts

This is a fiction. Hint: if a "user process" can wreak havoc on the system simply by misbehaving (in a way that may or may not be malicious), as drivers interacting with low-level hardware can, it's not a user process in practice, and should not be understood as one. The whole point of making that kernel vs. user space distinction in the first place is so that user programs can't wreak havoc on the system. There are plenty of things that can and should be done in userland (and oftentimes they are, see e.g. Plan9), but drivers are not among these unless you can always rely on something like IOMMU to make sure that the hardware cannot be made to misbehave in a way that affects security guarantees.


> This is a fiction.

No, it has existed roughly since the mid-80s. Source: I programmed a whole bunch of stuff, including very fast serial hardware drivers, on systems like that.

The user process cannot wreak havoc on the system, no matter how much it misbehaves in that context.

Hardware typically has pretty clearly spelled-out bounds and limitations; user processes operate with less rather than more privilege than a kernel-side thread would, and so have much less ability to wreck the system. As an extra bonus: a microkernel is small enough that formal verification is an option.


You sound like someone who uses vim instead of emacs. So many keys for each operation!


On the other hand, it's not as heavy as a complex operating system. It also handles latency on laggy connections much better.

P.S.: This comment is written just for fun. It's just a Nerf attack, a little play. Thanks.


If your argument for STL/Boost is "A typical C or C++ programmer simply will not write anything more efficient or more robust than the methods in these libraries if they decide to roll their own", it's a sign you might be missing the point :P


Linus may be fine with C, but I do find that using C for larger projects is cumbersome due to its lack of quality-of-life features. Making ADTs in C is a pain, because you can't define a type and keep creating object instances of it. The lack of namespaces makes naming things convoluted. Exception handling is non-existent. The type system is a joke: C functions happily accept arguments of the wrong type at compile time and then segfault at run time. The lack of array bounds checking places greater demands on the programmer. There is too much abuse of the bare-bones preprocessor macros. Multidimensional arrays are very complex to use.


C++ fell victim to the incremental approach to building a language. There never was a big picture. Ten years after the language had been conceived, it was realized that it was the STL that gave C++ its true meaning. C++ is the result of a very logical evolution (of C), which makes it, contrary to popular thinking, very difficult to argue against its features on purely logical grounds. And the evolutionary road it travels is paved with good intentions. But we all know where such a road leads, and I would have preferred intelligent design in the first place.


When speaking with people experienced in C++, I often got feedback along the lines of: "For a good C++ programmer, C++ is as good as any other language, maybe even better... it's just that good C++ programmers are so rare that they are irrelevant for any company."

Or in other words: while there are many reasonably good C++ programmers, few are really good, due to the complexity of C++.


Torvalds is not comparing top C programmers to average C++ programmers. What he meant was: a lot of people tinker with kernel code, and some of them choose to contribute back. Most of these contributors will be, at best, average programmers. So if they used C++ to give back to the kernel, it would be chaos.


Wasn't BeOS written in C++? I've heard people were impressed by the quality of its code.


The userspace was, the kernel was not. Haiku's kernel is almost entirely C++, though, and people are usually very impressed by our code quality :)


I am not a great developer by any means, but how can anyone claim that, syntax-wise, C++ is a good language?


While I'm nowhere near as talented a programmer as Torvalds or Cook, I disagree, perhaps with both.

We have tons of crap everywhere due to bad code monkeys, kernel included, and these days, given the computing power available, continuing to use low-level languages is nonsense.

I'm not a fan of Go or Rust, and I can't say how hard it would be to write a modern kernel in a functional language like a Lisp or Haskell (though in the past we did have LispM). However, IMO we need to use high-level languages to develop the same thing with less complexity and much more readability. Sooner or later any complex project becomes so complex that, even in the FOSS world, nobody can really know it well enough to master its evolution.


The problem is that such a high level language hides the complexity, making it much harder to reason about subtle side effects.


The ancient Alto workstations and LispM were built around high-level languages, and they were FAR superior to any contemporary competitor... So technically, IMO, that's not a real argument. Maybe a cost argument, but that's a really complex point to evaluate.


Lisp and Haskell mandate GC, which is a non-starter in any sort of low-level or latency-sensitive programming. Haskell uses laziness, which makes memory allocation very hard to reason about. Lisp uses dynamic typing, which means you have to dispatch on literally every single variable access... There are lots of these hidden overheads. You can write a kernel in a higher-level language than C (see Redox as an example), but it really can't be the usual sort of quick-and-dirty, application-programming-oriented "higher-level language".


Lisp is strongly typed and uses dynamic typing. Compiled Common Lisp code absolutely does not dispatch on every variable access.

And there of course have been and still are operating systems written in Lisp without a single line of C. Some of them even run on decidedly non-exotic hardware like x86_64 or arm64.


>> Well, I’m nowhere near as talented a programmer as Linus Torvalds, but I totally disagree with him.

I think that says it all. On top of that, Linus is in charge of the project he works on and is allowed to run it as he sees fit. Given all that, I decided to stop reading. OK, that, and I already have my own impression of C++, but this intro doesn't make me want to reconsider.


> On top of that, Linus is in charge of the project he works on and is allowed to run it as he sees fit.

The argument is not about whether Linus should be forced at gunpoint to adopt C++; it's about whether adopting C++ would be a good idea.

Deciding if something is good or bad is separate from deciding if it should be done or not. Nobody will stop you from e.g. putting pants on your head if that's what you really want, but we can have a discussion weighing the pros and cons of doing so, after which you can make a more informed decision if you choose (or not!) to listen to the feedback.



