GNU/Hurd in 2010 (gnu.org)
109 points by Tsiolkovsky on Feb 5, 2011 | hide | past | favorite | 34 comments


I think the future of microkernels is the L4 family: http://en.wikipedia.org/wiki/L4_kernel

L4 systems are much smaller and simpler than Mach: 7 system calls and a 12 KB footprint, compared to about 140 system calls and 330 KB for Mach, according to Wikipedia. L4 also has much better IPC performance.

It appears that some people attempted to port Hurd to L4 but gave up: http://www.gnu.org/software/hurd/history/port_to_l4.html

That page says: "Eventually, a straight-forward port of the original Hurd's design wasn't deemed feasible anymore by the developers, partly due to them not considering L4 suitable for implementing a general-purpose operating system on top of it, and because of deficiencies in the original Hurd's design, which they discovered along their way."

There is no reference or explanation of why they considered L4 unsuitable for a general-purpose operating system, which is too bad, because it would be extremely interesting to read.


[deleted]


Can you point to a specific message?


I always felt that GNU continued Hurd development out of spite that no one would call Linux GNU/Linux.


My understanding is that Richard Stallman hasn't been significantly involved with the Hurd project for a very long time - it's being driven by the developers, who are motivated by their passion for the design goals of the project.

Basically, as soon as Linux existed and creating a kernel ceased to be an immediate need for the GNU project, the Hurd stopped being an "industrial" project and became a research project. The small team of developers that remains is trying to make a fundamental technical advance in operating systems; that's always fraught with uncertainty, and there's nothing wrong with that.


No. It continued because Hurd is very elegant and, for some people, this matters a lot. I'd love to run GNU/Hurd as my main OS, but, right now, that's not feasible.

Right now, you can reasonably choose between a Unix workalike that's GPL'ed, a BSD flavour of Unix that's BSD'ed, the bastard child of VMS, and a very elegant OS that's only sold bundled with fairly ordinary x86 PCs inside very elegant exteriors.


That "very elegant OS" is actually itself the bastard child of BSD Unix and Mach, dipped in unicorn giggles and butterfly kisses.


Having a Unix layer on top of Mach is very convenient from an application development point of view. It's what made the Unix userland able to run on NeXT's OS, which we now call OS X.

I am not sure about the relationship between NeXT's OS and OSF/1, but OSF/1 shared the BSD-on-Mach nature with it.


I didn't know unicorns were involved. Count me in!


The original post of v0.01 Linux (by Linus Torvalds):

"I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones."

He's referencing the project that became GNU/Hurd.

What I'm saying is that they've been working on it since the inception of Linux, and it's still not done yet.

(Found that post while working on a Google Summer of Code project for Minix. I really, really want Hurd to come out because I'm pro-microkernel.)


Yes, but they (GNU) gave up putting a lot of resources into it when Linux became successful.

They're idealists and realists...


What do you think of Minix 3?


An academically flawless OS, yet no programmer's nirvana on the other end.


Did anyone ever analyze the 20-year Hurd history and write a book about it? I wonder what the main factors were that contributed to its (incredibly) slow development. Maybe there are lessons to learn here; there have to be. Edit: Typo, 20 years, not 27


They've been working on the important features first, such as gopherfs.


Once Linux became popular and usable, there was less demand for a FLOSS kernel.


2010: from the article "... The Hurd doesn't fully deliver on the everyday usability goal yet ..."

1992: from comp.os.linux, Theodore Ts'o "... I am aware of the benefits of a micro kernel approach. However, the fact remains that Linux is here, and GNU isn't --- and people have been working on Hurd for a lot longer than Linus has been working on Linux. ..." ~ http://groups.google.com/group/comp.os.minix/msg/31a5ea16e86...


Can't wait til this is released so I can play Duke Nukem Forever on it.


This joke has an expiration date of May 3rd this year. Best to use it while we still can.


> This joke has an expiration date of May 3rd this year.

Call me Thomas, but I'll believe it when I see it...


Possibly this joke gets even better after DNF comes out. If they can make even Duke Nukem Forever real, why can't they ever release Hurd?


Well played, Nate, indeed.


I certainly admire the perseverance.


A lot of things take more than 27 years, but it is quite impressive to keep working on something for so long when it is getting nowhere.

It is also pretty stupid.


It's interesting to look at the advantages they list, and see (1) which are also covered by Linux, (2) which are not covered by Linux but are of questionable usefulness, and (3) which are not covered by Linux and are useful--and then to see if these could be brought to Linux reasonably.

    It's free software, so anybody can use, modify, and
    redistribute it under the terms of the GNU General
    Public License (GPL).
Linux already has this covered.

    It's compatible as it provides a familiar
    programming and user environment. For all intents
    and purposes, the Hurd provides the same facilities
    as a modern Unix-like kernel. The Hurd uses the GNU
    C Library, whose development closely tracks
    standards such as ANSI/ISO, BSD, POSIX, Single Unix,
    SVID, and X/Open.
Linux has this covered.

    Unlike other popular kernel software, the Hurd has
    an object-oriented structure that allows it to
    evolve without compromising its design. This
    structure will help the Hurd undergo major redesign
    and modifications without having to be entirely
    rewritten.
Not covered by Linux. I wonder if this is actually useful, though. In my experience when something needs to undergo a major redesign in an operating system, it tends to be a whole subsystem that needs the redesign--so even if it were object oriented, you'd be redoing the design of all the objects for that subsystem. Sure, you would be able to keep, hopefully, the same interfaces to other subsystems--but you don't need object oriented design for that.

For example, in the filesystem area, I believe modern monolithic kernels have well-defined interfaces between the filesystem subsystem and the rest of the kernel, so major redesign of the filesystem subsystem could be done without affecting the rest of the kernel. Within the filesystem submodule, they have well-defined interfaces between the generic filesystem code and the code that implements specific filesystems, again allowing for major redesign within the filesystem subsystem or the code for specific filesystems without affecting too much other code.

    The Hurd is built in a very modular fashion. Other
    Unix-like kernels (Linux, for example) are also
    modular in that they allow loading (and unloading)
    some components as kernel modules, but the Hurd goes
    one step further in that most of the components that
    constitute the whole kernel are running as separate
    user-space processes and are thus using different
    address spaces that are isolated from each other.
    This is a multi-server design based on a
    microkernel. It is not possible that a faulty memory
    dereference inside the TCP/IP stack can bring down
    the whole kernel, and thus the whole system, which
    is a real problem in a monolithic Unix kernel
    architecture.

    One advantage of the Hurd's separation of
    kernel-like functionality into separate components
    (servers) is that these can be constructed using
    different programming languages -- a feature that is
    not easily possible in a monolithic kernel.
    Essentially, only an interface from the programming
    environment to the RPC mechanism is required. (We
    have a project proposal for this, if you're
    interested.)
Linux does not have this covered. This is where it gets interesting. After all, even on a monolithic system like Linux, people have pushed some code to user space. A good example is FUSE. That's a kernel module that provides a bridge between the kernel filesystem code and user space code for specific filesystems.

I don't see any reason this could not be done for other things now in the kernel, such as the TCP/IP stack.

This would introduce performance problems, due to overhead of communicating between the user mode components and the kernel components, or between different user mode components, but microkernels like Hurd also suffer from that overhead.

To put it more succinctly, if you want a microkernel you don't necessarily have to start with a microkernel like Hurd is doing. Why not start monolithic with Linux and add microkernel aspects? It would be an interesting project to make a microkernel via a series of mutations of Linux.

It might be possible that way to end up with a kernel where you can choose whether each subsystem is handled in the traditional monolithic way, or bridged out to user-mode code, or both. You can then pick where on the performance vs. security spectrum you need to be for each aspect of your system, tailoring it to your particular requirements. That would be pretty cool--sounds like a fun project.

    The Hurd is an attractive platform for learning how
    to become a kernel hacker or for implementing new
    ideas in kernel technology. Every part of the system
    is designed to be easily modified and extended.
Hurd has the advantage here. Even a microkernel evolved out of Linux as I described above would probably be uglier to learn from.

However, I'm not sure this is important enough. There are systems designed for learning operating systems hacking, such as Minix. A potential new kernel hacker could learn on one of those, and then switch to Linux pretty easily.

    It is possible to develop and test new Hurd kernel
    components without rebooting the machine. Running
    your own kernel components doesn't interfere with
    other users, and so no special system privileges are
    required. The mechanism for kernel extensions is
    secure by design: it is impossible to impose your
    changes upon other users unless they authorize them
    or you are the system administrator.
Advantage Hurd. I question the usefulness. This would have been important in the time sharing days. Now we're for the most part the only user on our system--there's no one our changes could be imposed on! Also, if a microkernel-like Linux kernel were developed as described earlier, it would gain some of this automatically.

TL;DR version: most Hurd advantages over Linux are in areas that aren't really important to most people, and the ones that do matter look like they could be added to Linux.


> For example, in the filesystem area

A good example, because GNU Hurd has a different view on filesystem implementation. The Debian GNU/Hurd project has a fairly nice explanation here: http://www.debian.org/ports/hurd/hurd-doc-translator

With GNU/Linux you're stuck with "classical" filesystems. GNU Hurd allows all sorts of fancy things, for example a file that is both a file and a directory (which is totally impossible in Linux, and Linus strongly opposes any attempts to change this unfortunate state of things).
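The Hurd mechanism behind this is the translator: a user-space server attached to a filesystem node. The Debian page linked above illustrates it with commands along these lines (illustrative sketch; exact translator arguments may differ):

```shell
# Attach the hostmux/ftpfs translators to the node "ftp:",
# so remote FTP sites appear as ordinary directories.
settrans -a ftp: /hurd/hostmux /hurd/ftpfs /

# Now browse a remote site with normal file tools.
ls ftp://ftp.gnu.org/

# Detach the translator again.
settrans -g ftp:
```

No special privileges are needed, since the translator runs as an ordinary process of the user who set it.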

And it matters a lot to typical everyday GNU/Linux users. They frequently struggle (I personally do) with GVFS/KIO kludges (and other non-kernel "filesystems"), which GNU/Linux systems must have because even FUSE filesystems generally can't adequately replace them.

Edit: However, this is not as flexible as Plan B's boxes (http://plan9.escet.urjc.es/ls/export/box.html).


Microkernels in general make for a great case study on why arbitrary flexibility may not provide user value. Most of the OS has no need for "alien," radically innovative subsystem development; improvements to things like scheduling are mostly incremental in nature. Filesystems are one of the big exceptions to this rule.


One of the most interesting aspects of Hurd is its security model. It's extremely safe and allows very fine-grained authorisations.


Note that Minix has a microkernel, too.

Look up the Tanenbaum vs. Linus debate on the subject of monolithic vs. microkernels.


The only reason GNU/Hurd is still a popular subject of discussion is its historical fame. There is no concrete reason anymore to expect this project to find adoption. It is not easy to write operating systems in the first place, and adoption is even harder today because the subject is very well explored and the solution has become a commodity.


I don't agree. With virtualisation systems such as Xen, it has become easier to use specific operating systems for specific tasks, either as hypervisor or VM.

If the GNU Hurd people can prove that, due to its modularity, it is really that more secure than current OS approaches, I can certainly see some uses.


Interesting. I'll try Hurd under Xen, soon.


I gave up on Hurd back in the late 1980s. I'm not kidding. Before it was even called Hurd. I had heard for several years that GNU was going to make their own kernel. Eventually I gave up and stopped hoping for a release.



Hurd/Linux and Xanadu/WWW are classic examples of worse is better, minimum viable products, and perfection being an enemy of goodness.

Imagine what kind of world we would be in today if we had had to wait for Hurd and Xanadu.

Shipping is always a feature, and sometimes perfection at the cost of depriving the world of goodness is just the vice of excessive pride dressed up in other clothes.



