
Decent programmers are also context dependent. For example, when ImageMagick's delegate functionality was envisioned (before 1999 - that's the earliest page I was able to quickly find that mentions the feature), IT security was not taken that seriously. Mitnick had just been arrested (1995), and multi-user systems were all guarded and watched closely by bastard operators from hell, like dragons sitting on a mountain of gold.

And PHP was seen as harmless fun.

And malware was thought of as boot sector viruses, and even though the first worm (the Morris worm, 1988) worked exactly like what people are harping about today (a buffer overrun), no one cared, because C++ was slow to compile and managed runtimes were evil.

And still no one cares. It takes some competence to stop using C, but it's easier to simply shout at those who point out the obvious.

And when someone brings out the good old Therac-25 everyone starts to fall silent. (Even if healthcare accreditation is very expensive, it's expensive exactly because the whole IT field is still full of unverifiable code.)



> For example when ImageMagick's delegate functionality was envisioned ... IT security was not taken that seriously

Yes it might have been easy to get away with writing the OP's faulty specification into code back then. After 25 years of people screaming about the dangers of buffer/integer overflows, it's very hard for me to think that a serious C/C++ programmer could look at that specification and not see the issue before blindly committing it to code today. I suppose it could happen.
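For concreteness, the classic shape of that bug is an unchecked width-times-height multiplication that wraps around and yields an undersized allocation. A minimal sketch of the guarded version (the helper name is mine, not from ImageMagick or any particular codebase):

```c
#include <stdint.h>

/* Hypothetical helper: multiply two 32-bit sizes, reporting overflow
   instead of silently wrapping. Returns 1 on success, 0 on overflow. */
static int checked_mul_u32(uint32_t a, uint32_t b, uint32_t *out)
{
    if (b != 0 && a > UINT32_MAX / b)
        return 0;           /* a * b would wrap around */
    *out = a * b;
    return 1;
}
```

A naive `malloc(width * height * 4)` with width = height = 65536 wraps to 0 under 32-bit arithmetic, hands back a tiny buffer, and the decoder happily writes past it.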

OpenSSL (which everyone else here is crying about as an example) is also over 20 years old.

> And still no one cares. It takes some competence to stop using C, but it's easier to simply shout at those who point out the obvious.

Well imagine trying to rewrite the Linux kernel today in a memory safe language, without losing any support for any of the current target architectures. Or NetBSD even more so for that matter. Nobody has convinced me it could be done, and as of yet, it hasn't been.

Throwing away C and C++ for good is fine - if you don't want to do much right now in the way of embedded programming, or working with sophisticated game engines (e.g. UE4), just for fun.

If you're not confident of the quality of the code, then maybe don't deliver it to all and sundry saying it's fit for purpose. And if you are confident, then check it again. That's all I'm saying.


> Nobody has convinced me it could be done, and as of yet, it hasn't been.

Agreed. And even if it could be done, it would of course come at an enormous cost to the current pace of driver and other updates.

Google is trying something with Fuchsia though.

And there are experiments similar to how Linux itself got started. (Redox OS comes to mind.)


> Google is trying something with Fuchsia though. And there are experiments similar to how Linux itself got started. (Redox OS comes to mind.)

Aware of both of these, but yeah, wake me up when either is close to being even a niche competitor - say, as complete and accessible as Haiku currently is.


What's Haiku's offering? It's BeOS resurrected. It's in C++. (Which - IMHO - has too much baggage, and still lacks sane module support.) In development since 2009 and still lacking SMP, hm.


Oh nothing on a technical level, I agree with your criticisms there. Just in the sense that it's available now. It has package management and you can install applications, compile applications, run applications etc. right now. Unlike Fuchsia and Redox AFAIK.

Most of the BSDs didn't have decent working SMP for a period longer than that either.

I think it's kind of sad that it takes a company the size of Google to create something like Fuchsia, which we can't be sure will ever see the light of day for most users, even given the vast pools of code available in Linux and BSD they could reference.

Hardware has grown exponentially in complexity since Linux was initially developed of course, which makes the job of writing a new OS much tougher. But you could probably find 20-30 different hobby OSes over on osdev.org that are more complete and accessible for the end user than Fuchsia or Redox right now, even if they are just more boring POSIX clones.


> Hardware has grown exponentially in complexity since Linux was initially developed of course, which makes the job of writing a new OS much tougher.

Though, interestingly, hardware has also gotten a lot smarter (most devices have some kind of microprocessor and RTOS on them), so in theory interfacing with it should be easier. (Serial buses, simple enumeration, simple addressing - we have a lot of good abstractions; no need to fiddle with registers, just use a PCI-E, DMA, or I2C library.)

But of course at the same time just supporting these smart standards is hard (because initialization of things is not trivial, even just getting DDR RAM to work requires tinkering with MSRs, and ACPI is its own special hell).


> In development since 2009 and still lacking SMP, hm.

Er ... what? Where are you getting this information? Haiku has been in development since 2002, and has SMP support since ... 2003/2004?

NUMA support is a little shaky, but I know users who have run Haiku on 32-core Ryzens with no issues. So we definitely have SMP support.


Wikipedia. And late night misreading. Sorry for causing confusion!

Now re-reading it, the article clearly states that development started in 2001, and that it has rudimentary SMP support.


Yes, I don't know why; the SMP support is far from "rudimentary". I don't think we ever had a giant lock, and our scheduler is pretty good too (runqueue-based, O(1) w.r.t. threads, O(log N) w.r.t. cores, and it fully utilizes ACPI topology information).
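To illustrate those complexity claims only (a toy sketch under my own assumptions, not Haiku's actual scheduler code), picking the least-loaded core can be made O(log N) in the core count by keeping one runqueue length per core in a binary min-heap, independent of how many threads exist:

```c
#include <stddef.h>

/* Toy sketch: one runqueue length per core, kept in a binary min-heap
   so the least-loaded core is found in O(log N) of the core count,
   no matter how many threads are runnable in total. */
#define MAX_CORES 64

struct core_entry { int core; int load; };

struct core_heap { struct core_entry e[MAX_CORES]; int n; };

static void heap_init(struct core_heap *h, int cores)
{
    h->n = cores;
    for (int i = 0; i < cores; i++) {
        h->e[i].core = i;
        h->e[i].load = 0;
    }
}

/* Restore the min-heap property downward from index i. */
static void sift_down(struct core_heap *h, int i)
{
    for (;;) {
        int l = 2 * i + 1, r = 2 * i + 2, m = i;
        if (l < h->n && h->e[l].load < h->e[m].load) m = l;
        if (r < h->n && h->e[r].load < h->e[m].load) m = r;
        if (m == i) break;
        struct core_entry t = h->e[i];
        h->e[i] = h->e[m];
        h->e[m] = t;
        i = m;
    }
}

/* Enqueue one runnable thread: the heap root is the least-loaded
   core. Bump its load and re-heapify - O(log N) in core count. */
static int schedule_thread(struct core_heap *h)
{
    int core = h->e[0].core;
    h->e[0].load++;
    sift_down(h, 0);
    return core;
}
```

Scheduling 8 threads onto 4 idle cores this way spreads them 2 per core, since each pick always lands on a current minimum.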



