
To what scale? A lot of tools out there help you schedule containers, but for how many machines and containers, and at what level of resource utilization? Perhaps I meant that it's _technically_ an orchestration platform, but it's not at the level of quality required for large-scale orchestration (i.e. Borg, Mesos).

And yes, as noted above I work for Mesosphere. YMMV.



I think everyone is clear on the fact that after 9 months Kubernetes is not yet as mature as the 4-5 year old Mesos. But people love the project and the direction it's headed in, to such a degree that partnerships are forming, startups are emerging, and many, many people are contributing to the project. Many of us who will adopt Kubernetes are not looking to scale workloads across a Twitter- or Google-scale environment. We might have a few hundred machines or fewer; some might go beyond that, but in all honesty I'd say for the time being people can get by with 100-machine clusters and manage multiple clusters.

Don't get me wrong, I really like Mesos. I think it's a fantastic piece of technology and I really want to see it do extremely well. I just think Kubernetes is something a little more accessible to the wider developer community. Plus we get to see it grow from infancy and help contribute to that growth. AND it's written in Go, which is a major plus.


It's a plus until it isn't :P


Maybe it's just me, but perhaps I like the company that actually wrote the scheduler powering the service you mentioned, has run it in production for over a decade and gathered petabytes of knowledge about how to refine it, has been working for several years on its replacement, and has woken up in the last couple of years and realized that while their infrastructure has been their competitive advantage to this point, there is value in the whole industry having access to their technology. Maybe I like their expertise, and I'm not ready to condemn Kubernetes and its quality just yet, because it's young and who knows where they'll go. Maybe they'll separate Omega from google3 and land it in Kubernetes. Then what?

Mesos and Mesosphere talk a lot about scale, with the implication that the technology you're modeling powers the entire Google fleet in one big harmonious system. It doesn't. There's a fairly hard upper limit on the system you mentioned in terms of scale, which is one of the motivations for Omega. The system you mentioned also fits into an ecosystem, from a DLM[0] to an RPC system to a build system and on up. Pieces of that are gradually being open-sourced too, like Bazel. Google is clearly becoming aware that making their infrastructure generally available will benefit the industry enormously, and regardless of my personal feelings on their business model and exploitation of user data, that is a good thing.

If you're telling me that Mesos can do better than all of that expertise, better than John Wilkes, better than all of that, to the point where you don't even consider Kubernetes on the same path as Mesos, I don't know what to tell you other than come back to reality. This seems to be a rut with Apache projects that model Google systems in some way: ZooKeeper, Mesos, Hadoop, Flume. All of them look like I described the systems in question over a very blurry fax and the open-source community did its best. That's not outright criticism, it's just the unique experience of having seen their Google counterparts; it's a good thing they exist, but there are absolutely opportunities to do better, Mesos included.

I get what you're trying to do with Mesos, but I strongly disagree with your characterization of Mesosphere as a datacenter operating system. It's disingenuous, and positioning Mesosphere as the only company that can do hyperscale in containers is a tough sell. Last time I used Mesos, in 2012, it could barely handle 100 machines, and I had to write shell scripts to even get a simple static application scheduled. I then threw 10,000 jobs at it, and the scheduler hung and took down the entire cluster. Clearly you have come a long way since then, but it's apparently easy for you to forget that when pointing at the age of Kubernetes.

[0]: You'll pardon me if the thought of Zookeeper doesn't excite me as the backbone of the entire system.


Mesos has come a long way since the early research. But so has all the competition.

I'm biased, of course. I work for Pivotal, we're the lead builders of Cloud Foundry. Our current generation of software can spin up hundreds to thousands of machines without too much fuss. Our next generation[0] is ... better.

And again, this is my bias showing, but I think our design for container workload scheduling is better than Mesos and Kubernetes.

We don't get much buzz on HN because we sell a complete PaaS to F500s. It's not very approachable on a personal level.

[0] http://lattice.cf/


The two things that matter are performance and reliability. Are you able to go into how lattice improves over either Mesos or k8s in these regards?


I was interested in the design decisions, but sure.

Lattice is a selection of Cloud Foundry components, principally the Diego runtime system, the Loggregator log-streaming system, and the Gorouter HTTP routing system.

If how software is written matters to you, then you might like the way we work. Cloud Foundry is Pivotal Labs' DNA, scaled up: 100% pairing, and every new line of code is test-driven.

We dogfood everything by running Pivotal Web Services as a public PaaS, which is always within a week or two of HEAD across the board.

Where we follow rather than lead is that Kubernetes is extracted from real operational experience and Mesos has a head start on implementation.

In terms of design, I like ours best. Tasks and long-running processes are distributed using an auction mechanism, which means that there's much less global resource-status chatter required to make the scheme work. Diego is "demand-driven", meaning activity occurs when new demands are made. Mesos is "supply-driven", meaning state-affecting activity occurs when resources become available. They both solve a critical problem by pushing intelligence out to the edges, but to different edges.
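The contrast above can be sketched in a few lines of Go. This is purely illustrative: `Cell`, `auctionPlace`, and `offerLoop` are invented names, not the actual Diego or Mesos APIs, and real bids and offers carry far richer resource descriptions than a single memory figure.

```go
package main

import "fmt"

// Cell is a hypothetical machine with spare capacity (illustrative only).
type Cell struct {
	Name      string
	FreeMemMB int
}

// auctionPlace sketches the demand-driven style: a new task triggers an
// auction, each cell that can fit the task bids, and the best bidder wins.
// No continuous global resource-state stream is needed between auctions.
func auctionPlace(taskMemMB int, cells []*Cell) *Cell {
	var winner *Cell
	for _, c := range cells {
		if c.FreeMemMB < taskMemMB {
			continue // cell can't bid for this task
		}
		// Bid = remaining free memory; the most-free cell wins here.
		if winner == nil || c.FreeMemMB > winner.FreeMemMB {
			winner = c
		}
	}
	if winner != nil {
		winner.FreeMemMB -= taskMemMB
	}
	return winner
}

// offerLoop sketches the supply-driven style: spare resources are offered
// to a framework scheduler, which accepts or declines each offer.
// Activity is triggered by available supply, not by incoming demand.
func offerLoop(cells []*Cell, accept func(offer *Cell) (taskMemMB int, ok bool)) {
	for _, c := range cells {
		if mem, ok := accept(c); ok && mem <= c.FreeMemMB {
			c.FreeMemMB -= mem
		}
	}
}

func main() {
	cells := []*Cell{{"cell-a", 512}, {"cell-b", 1024}}

	// Demand-driven: a 256 MB task arrives and triggers an auction.
	w := auctionPlace(256, cells)
	fmt.Println("auction winner:", w.Name)

	// Supply-driven: offers flow to the framework, which takes 128 MB each.
	offerLoop(cells, func(offer *Cell) (int, bool) { return 128, true })
	fmt.Println("remaining:", cells[0].FreeMemMB, cells[1].FreeMemMB)
}
```

The practical difference is where the chatter lives: the auction model only exchanges state when a task needs placing, while the offer model keeps a steady flow of resource offers moving even when nothing needs scheduling.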

I am much less familiar with Kubernetes, so I will avoid being any wronger than usual.



