Hacker News

I'm not an expert, but from my beginner's view, unikernels feel like a hack to make hypervisors work as an app orchestration platform. Maybe this is useful regardless :)


In the MS-DOS days, I used to play games on the PC, and arguably, these were all unikernels. Typically, the game had its own set of drivers and was basically running in ring 0; immediately after start it bypassed MS-DOS and did most of the HW stuff itself. This was usually good for the game, because it allowed it to squeeze every last bit of performance from the machine, being completely in control.

So that's definitely a possible use case.


What is a late '80s kid with a box of 5¼" floppies if not an app orchestration platform?


It's similar for console games up until the sixth generation; also, many earlier systems didn't even have a BIOS.

Unfortunately, today most graphics hardware relies on drivers for specific versions of specific operating systems.


> It's similar for console games up until the sixth generation; also, many earlier systems didn't even have a BIOS.

Hmm? Earlier systems and every console generation have had a BIOS; even the Atari 2600 has one.

Unless I am misunderstanding your comment?


Are you sure it was MSDOS and not BIOS, or to be even more specific, BIOS video services?

MS-DOS is about files -- and AFAIK most games did not re-implement that part, nor did they re-implement BIOS drive handling, which was sometimes machine-specific. I certainly remember playing games from a Novell NetWare shared drive, which would be impossible if they had re-implemented filesystem support.

And none of the games I know of took over the boot process as unikernels do. It was always: start DOS, configure the system with MS-DOS's CONFIG.SYS / AUTOEXEC.BAT, and only then start the game. Not very unikernel-y at all.

Now, BIOS video services were very common. BIOS "int 10" was great at setting the video mode, but it sucked for everything else -- even in text mode, we often read/wrote the framebuffer directly.

Not sure about the keyboard -- the hardware is fairly simple, so it would be easy to reprogram... but the BIOS also did a satisfactory job handling it, so it was OK as long as you kept interrupts enabled.

And as for mouse and sound card, those did not have any DOS drivers to begin with.


Some games from the '80s didn't even try to retain MS-DOS in memory. The only way to "exit" the game was to reboot.


Atari 8-bit games loaded from the boot sector so 'DOS' never even got loaded to begin with. There was still 'bios' ROM support for basic I/O (read sector, etc).


Kind of, but is that really a problem? Virtualization provides a layer of security and isolation that containers still don't have (and maybe never will). For a long time, just the fact that you could run multiple apps on a single server was a breakthrough, and for the most part people weren't all that concerned with performance.

Now that it's matured, people are back to optimizing for performance, and this is one of the ways to do that while maintaining the security that virtualization provides. One thing that seems to remain true is that the pendulum of flexibility <> performance is always swinging back and forth in this industry.


Could you elaborate on why virtualization provides a layer of security and isolation that containers still don't have (and maybe never will)?


1) VMs have hardware-backed isolation -- containers do not.

2) Containers share the guest kernel. To elaborate: many/most container users are already deployed on top of VMs to begin with -- even those in private clouds/private datacenters such as OpenStack will deploy on top, since there is so much more existing software to manage them at scale.

3) Platforms like k8s extend the attack surface beyond one server. If you break out of a container, you potentially have access to everything across the cluster (e.g., many servers), whereas if you break into a VM you just have the VM itself. While you might be inside a privileged network, and you might get lucky by finding some db creds or something inside a conf file, generally speaking you have more work ahead of you to own more hosts.

4) While there are VM escapes, they are incredibly rare compared to container breakouts. Put it this way: the entire public cloud is built on virtual machines. If VM escapes were as prevalent as container escapes, no one would be using AWS at all.


I agree; an argument for 4 is the fact that the hypervisor attack surface can be scaled up and down by adding/removing virtual devices. Only a small set stays permanent, like the 30+ hypercalls on Xen. Compared to a standard OS interface (Linux has 350+ syscalls), this is still very little. The Solo5 VMM project even tried out another extreme, reducing the hypercalls to fewer than 10, if I remember correctly.


It's also worth mentioning that a hypervisor's API, like Xen's, is much more stable; the Linux one is constantly growing.


Very true. And we haven't even spoken about heavily multiplexed system calls like `ioctl`.


> the entire public cloud is built on virtual machines

Some cloud providers will trust containers to isolate different customers' code running on a shared kernel, but it's not the norm. I think Heroku might be one such. There's at least one other provider too, but frustratingly I was unable to recall the name -- edit: found it, it was Joyent, who offer SmartOS Zones. [0]

[0] https://news.ycombinator.com/item?id=25838037


Hence everyone is now turning containers into micro-VMs, and thus the sales pitch for containers is kind of waning.


I fully agree with the statement "to make hypervisors work as an app orchestration platform", although IMO the real hack is using two full-blown general-purpose operating systems to run a single application, which is precisely what most people are doing when deploying to the various clouds today.

Unikernels have many advantages, and one is removing the complexity of managing the guest VM. People really haven't had the option of removing this complexity, since creating a new guest OS is time-consuming, expensive, and highly technical work. When you have a guest that knows it is virtualized and can concentrate on just that one workload, you can get a much better experience.


Agree that one of the major downsides of most unikernel projects so far has been how difficult/time consuming it was to create each unikernel -- oftentimes for each application. That's one of the explicit goals of Unikraft, to make it much easier, or seamless, to run a wide range of mainstream applications and languages.


VM machinery has been hardware-optimized for a decade. Also, hypervisors have a much reduced attack surface compared to a shared Linux kernel. They're better for long-running production systems than containers.


Not necessarily; there can also just be performance and security benefits that result from not having to include all the random stuff you need for a more general-purpose OS.


So much spin to unpack in that statement.

Aren't containers a hack to prevent issues in libc and shared libs from impacting application deployment?

Guest kernels are already "the application"; unikernels remove overhead; everything else is basically the same abstraction applied to itself.


"hypervisors as an app orchestration platform" sounds like a good description of Kubernetes too (if we ignore the difference between OS-level virtualization and full VMs).


Well, I'm pretty sure this wasn't their intended purpose :). To some extent, they come from the observation that the hypervisor already provides strong isolation, and so having things like multiple memory address spaces, syscalls, etc. just brings overhead. The second point is specialization: unlike a general-purpose OS like Linux, each unikernel image can be tailored, at compile time, to meet the needs of a single (or a few) target applications. As a result of all this, it is entirely possible to have unikernels that beat Linux in terms of boot times, memory consumption, and even throughput, as explained in the paper (they can even beat bare-metal Linux).



