Hacker News
CoreOS Vagrant Images (coreos.com)
125 points by NSMeta on Aug 2, 2013 | 43 comments


I don't want to sound overly negative, but attempting to support every useful combination of hardware and drivers, particularly in any kind of supported, certified configuration, ultimately leads back to the same result: just running a real OS.

You can look at CoreOS the same way as the Xen hypervisor and VMware ESX: the more flexible these became (new hardware drivers, supported configurations, etc.), the more they began to look like general-purpose operating systems. At which point, why not start with one? (KVM followed the same logic.)

I'd much rather see the guts of CoreOS available as e.g. a RHEL or Debian package. As it stands, throwing away 20 years of distribution experience just to avoid installing a few files on my server seems far from worth it.


You just described the actual draw of CoreOS. CoreOS is just Linux. Instead of going the Xen route you just described, it can capitalize on being just Linux by supporting everything Linux already supports. At this point, you can view CoreOS, which is just Linux, as the "hypervisor." Containers don't have to replicate key subsystems in every paravirtualized instance, because containers share the same kernel subsystems, such as the TCP/IP stack. You're also not throwing away any distribution experience. Want CentOS? Create a CentOS container with Docker. Want Ubuntu? Create an Ubuntu container with Docker. These are incredibly powerful ideas I'm just now being exposed to.


Try telling your SAN vendor that you're running "just Linux" and ask whether you can have a driver and support package for that, please, and let us know how you get on.

When you're done there, now try plugging your nice new machine into the corporate DC. Oh it seems it needs to support VLAN tagging. No problem, better just write some new code for that.

Time to migrate a bunch of performance sensitive services. Uh oh, no support for FusionIO PCI SSDs! Better write some more code.

Time to migrate your remote sites, only policy dictates certain services must be physically encrypted. No problem, better just cut-and-paste Debian's cryptsetup scripts and be done with it.

Oops, turns out we deployed 1000 machines with a duff BIOS setting. No problem, I'm sure the server vendor has a support package for CoreOS...

We could come up with examples until we've basically reinvented a modern Red Hat/Debian initramfs and boot environment.


There are certainly environments where enterprisey vendor testing and certification makes sense; mostly small or heterogeneous environments. But at large scale it's cheaper to do stuff yourself.

Also, CoreOS has more of a read-only stateless philosophy, so I think it'd still be interesting even if it ends up having to duplicate a lot of effort from mainstream distros.


That would have been a problem with every new distribution, including RHEL and Debian when they were brand new. In reality, your SAN should be able to accommodate a generic Linux kernel with any userland you can containerize.


Yes, and ideally Microsoft Windows should be open source and IBM OS/360 would have a build supporting Firefox running on my Nexus One, but that's not how the world operates...


Your sarcasm is tedious. If a SAN cannot support a generic Linux kernel, then it's the fault of the vendor for not allocating enough dev hours to the Linux kernel, and their product should not be chosen for a Linux environment.


I'm not sure how else to respond to "let's reinvent a complete Linux distribution because we've written a 20 line auto-updater".

This would have made sense to me a decade ago, but back then I just wouldn't have understood the amount of work involved. You simply cannot produce a fully generalized container OS without producing a fully generalized OS.

That aside, I actually like the central idea of a better approach to managing/updating a large group of host machines. I just don't think it warrants yet another bikeshedded support/security/certifiability nightmare (and let's face it, this is bikeshedding: there's absolutely no value in the core claim of "it's just Linux," but it gives unfettered license to reinvent the same old wheels all over yet again).


Just wanted to add something to this thread which has not been mentioned yet. Sure, most (if not all) SANs are supported via generic Linux kernel drivers for HBAs, something like a QLogic card, or an exported iSCSI LUN. The issue is that you spent tons of money on the SAN and a truckload of money on hardware and software to hook up to that SAN, so you want a vendor-supported OS to go along with these items. If you have performance or stability issues, the first thing a vendor will ask is what OS you are running, so they can direct you to the correct support group. The support group will want to debug the issue, so you generally have to run something like RHEL or Windows to get support. This might only be a mega-corp issue, though, since most startups don't have Oracle/Sybase, fibre channel SANs, etc. To sum this up: in the mega-corp world, if you are not running a vendor-supported OS, you likely cannot get support for your device.


How is CoreOS any different from another distribution in terms of hardware support? Isn't the point of CoreOS to BE the glue between software and hardware?


Alex from CoreOS here: This only supports VirtualBox at the moment, but we are actively working on adding VMware. Fill out the form on this page if you want us to spam you when we have the VMware image (or others): http://coreos.com/


Why no VMware, Xen or QEMU? We can help you if you have any issues with making this work, you know :)


Any plans for Parallels?


Not at the moment. That said, we will be publishing raw images that should be portable to just about anything.


Vagrant supports VMware, but not Parallels yet.


Oh? I just tried to get one with the instructions on their github page, and it said the box type was vbox. Are the vmware versions kept elsewhere?


He was talking about Vagrant (the tool CoreOS uses to distribute VM images)...

Also, I think you can just make a directory called "vmware" in the `~/.vagrant.d/boxes/coreos` folder and copy over the Vagrantfile, box.ovf and the metadata.json. Then you just edit the metadata to have a vmware provider and the box.ovf to reference "../virtualbox/<<theimage>>".

I however do not have a vagrant vmware license and cannot test this theory.
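For what it's worth, the theory can be sketched against a throwaway directory (the real box lives under `~/.vagrant.d/boxes/coreos`; all paths and file contents below are stand-ins, and this hasn't been verified against Vagrant's actual VMware provider):

```shell
# Stand-in for ~/.vagrant.d/boxes/coreos
BOXES=./demo-boxes/coreos
mkdir -p "$BOXES/virtualbox" "$BOXES/vmware"

# Stand-ins for the files the virtualbox box ships with
printf '{"provider": "virtualbox"}\n' > "$BOXES/virtualbox/metadata.json"
touch "$BOXES/virtualbox/box.ovf"

# Copy the artifacts over and retag the provider in the metadata
cp "$BOXES/virtualbox/box.ovf" "$BOXES/vmware/box.ovf"
sed 's/virtualbox/vmware/' "$BOXES/virtualbox/metadata.json" \
  > "$BOXES/vmware/metadata.json"

cat "$BOXES/vmware/metadata.json"   # {"provider": "vmware"}
```

Whether Vagrant's VMware plugin then accepts the retagged box is exactly the part that needs a license to test.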


Wow. I got voted down for this? You guys are great. :/


To anyone wanting to learn more about Vagrant via a screencast, I've created one @ http://sysadmincasts.com/episodes/4-vagrant


Vagrant for the uninformed: Headless interface to VirtualBox and VMware VMs on OS X. Instancing is made a breeze.


Very good and simple explanation though I'd add Linux and Windows in there too.


ChikkaChiChi, that may be the single best explanation for what Vagrant is. Most people answer: "It provisions VMs" which isn't as enlightening as your answer!


It isn't specific to OS X, is it?


It is not! Works on Linux, Mac and Windows.


Ahh, another Gentoo-based distro. It never ceases to amaze me how people troll it and yet it keeps popping up in every nook and cranny.

https://github.com/coreos/coreos-overlay/

Must have been a good idea that no one got.


AFAIK CoreOS is based on ChromeOS which uses some parts from Gentoo. That doesn't mean Gentoo was ever a good idea for general-purpose PCs or servers.


Nice burn on Gentoo. I've had the job of supporting Gentoo on a PaaS and have deployed it in many places happily.

Sadly, it's one of the dumbest arguments ever, one that will blacken the Linux horizon for decades: big-dick syndrome over which Linux distro you run.


Where can I find the page which guarantees me that any given version of any given package will be fully supported by a defined security team for security fixes for at least 2 years from release, preferably 3+?

That is my argument against using Gentoo. I have absolutely no idea how long any given piece of it will be supported. If you can point me to a resource which explains that, I might take another look.

I also have no interest in picking and choosing packages out of a bucket - I want a stable, well-defined OS that I can build on, and preferably one which is as close to what everyone else is using as possible so I can ask for help from people who understand my OS.

Yes, it might work out if you're running at scale, have specialised needs, and can dedicate resources to what essentially amounts to development of a forked distro. It doesn't work out for me, as I need to know that the system I'm building isn't going to be unsupported in a couple of months, and I need to know I can talk to someone who is running similar versions of everything I'm running if everything goes wrong.


Linux is a hotel. There are the guests (users) who expect service. Then there is the concierge (distro) who smooths off the rough edges for the guests. Then there are the back-room employees (contributors) who keep things running. And behind them, the various leadership roles, e.g., architects, evangelists, and people who invent distros.

In this scenario, you are a guest. It seems you're ringing the bell at the front desk, and the unanswered question is: how are you willing to pay?


And to drop out of the metaphor, and explain why it's broken:

I thought one of the big points of open-source was that if something exists, and it does what I want it to, I don't actually have to contribute (pay) anything further than what I want to. I still will, if it benefits me or if I'm interested, though.

As it turns out, there are a number of freely available distros maintained by others who see personal benefit in maintaining said distros, and some of those do have well-defined security and bugfix support infrastructure and teams. Therefore, those fill my needs; therefore, there is no logical reason for me to expend more effort than is necessary.


In the HN thread earlier this week, Gentoo was more accurately described as being part of the build toolchain. This makes a ton of sense to me, and it's something I thought about doing if I ever wanted to "make my own distro".


You guys really should change the logo to a cookie; every time I see the URL I think of Oreos.


I was thinking it should be a chewed-on apple core, but now I cannot unsee the Oreos... so I vote cookie logo too!

Cool project


How about a chewed oreo?


This reminds me of Bedrock Linux, which allows you to run multiple Linux distros on a single kernel instance. It does so using good old chroot, but has an implementation that is capable of breaking out of one chroot when entering a new one. This way the different environments become more integrated, in that a RHEL binary can call a Debian binary and so on, creating a sort of super-distro which is itself quite small and simple.


Simple question, but if there is no package management, how do you install basic programs? Download binaries for everything?


Simple answer: you don't. Longer answer: It depends. Do you want to use yum or apt-get to install programs? If you like CentOS, create a CentOS container with Docker and begin installing programs using yum! Ubuntu? Spool up an Ubuntu container and apt-get away! CoreOS is just the cradle upon which you can run the userland of any distribution you please.
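Concretely, it looks something like this (a hedged sketch: assumes the docker CLI and a running Docker daemon on the host; the image and package names are just examples):

```shell
# Drop into a stock CentOS userland and manage packages with yum
docker run -i -t centos /bin/bash
# ...then, inside the container:
#   yum install -y httpd

# Or the Ubuntu equivalent, with apt-get
docker run -i -t ubuntu /bin/bash
# ...then, inside the container:
#   apt-get update && apt-get install -y nginx
```

The host itself stays a read-only, package-manager-free image; all the yum/apt state lives inside the containers.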


So CoreOS is the simple host OS. Thank you for the clarification; I thought it was a minimal guest OS.


I believe it's intended to be a minimal OS for running containers like those made by Docker (http://www.docker.io/).


This is not related to the Vagrant<->CoreOS topic, but since the Vagrant box image format is a packaged OVF image, it shouldn't be too hard to port the CoreOS images to Vortex (https://github.com/websecurify/node-vortex).


What's the idea behind CoreOS? I found the site incredibly weak in information about what it is.


I've just begun looking into and playing with CoreOS. Basically, it's a minimal Linux image with Docker, service discovery using the `etcd` daemon they've written in Go, and several interesting properties. The most interesting is their use of the ChromiumOS build tools, allowing them to one day update servers the same way ChromeOS is updated: using the Omaha protocol to push raw binary diffs of the new root FS.
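The service-discovery piece looked roughly like this at the time (hedged sketch: assumes an `etcd` daemon listening on its then-default port 4001; the key path and address are made up for illustration):

```shell
# Announce a service under a key, with a TTL so stale entries expire on their own
curl -L http://127.0.0.1:4001/v1/keys/services/web -d value=10.0.0.5:80 -d ttl=60

# Any other machine in the cluster can then look it up
curl -L http://127.0.0.1:4001/v1/keys/services/web
```

Since it's plain HTTP against a replicated key/value store, anything in any language can participate, which is a big part of the appeal.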


Who put containers in my Vagrant!?



