__monadic's comments

Hi, this is Alexis from Weaveworks. Just to say: you can see how we support Kubeflow here - https://www.weave.works/blog/kubeflow-and-weave-cloud


Hi, Alexis Richardson here from Weaveworks. We have a Berlin office and would love to hire people who want to work in our main areas. See https://www.weave.works/company/hiring/


"how are the end user groups requirements put forward? what mechanisms is there to ensure developers work on the defined priorities?" --> projects are run by their leads, they are not told what to work on. In this sense, CNCF operates more like IETF/ASF but with (arguably) less intrusive governance.

The underlying idea here is that a well-run open source project gets plenty of strong direction from its actual users, who must be engaged with directly.

There is a still-forming End User Board designed to create a strong forum for some kinds of user-project discussion. But overall, CNCF will lean towards "voluntary" rather than "mandatory" requirements.


Hi, I'm Alexis Richardson, chair of the TOC for CNCF. The answer is YES, we allow competing implementations.


To elaborate for @hueving, @mugsie, et al.: consider that OpenStack is organised around Nova, the scheduler (and more) at the heart of any OpenStack deployment. If the CNCF were "like OpenStack", it could mandate that all projects be organised around Kubernetes, playing a role analogous to Nova. But we didn't want to be solely a "Kubernetes foundation". The market is early stage, and there are other valid approaches to orchestration, including Docker SwarmKit, HashiCorp Nomad, Mesos and DC/OS frameworks like Marathon, and others. So we need a different approach.

Of course there are people who want a KubeStack that is like OpenStack, for better or worse. That's fine too! We just don't want that to be the ONLY choice for customers.


Do you allow competing APIs for the same service? If not, how is that any different from OpenStack? If so, how do you address the issue of fragmentation across deployments?


Yes, we do allow competing APIs.

You said it yourself in another comment on here: "It's never blatant, it's always calls for seemingly good things like extra pluggable points to make sure we don't favor particular solutions. Then it's making sure that any decision is brought to a huge vote by a giant committee that spends weeks arguing about if it's something they should even decide on, etc."

This kind of premature generalisation by committee is what has pulled OpenStack down; a situation from which it is now apparently recovering. CNCF seeks to avoid this by encouraging projects towards interop, but not in mandated ways.


Cool.

Do you ask projects to support all the implementations, or just choose one?


Projects can do what they like. We believe that users, communities, market pressures, and so on will drive good outcomes here. For example, to date all projects have worked to interoperate of their own volition. No committees were formed to achieve this.


(weaveworks person here)

If you check out the PDF that fons posted above, http://rp.delaat.net/2015-2016/p50/report.pdf, you will see fairly extensive testing showing that Weave Net, flannel, and Docker networking have similar VXLAN performance for unencrypted traffic. In all cases it is good enough. Alas, the testers were unable to get Calico working.

The question is: when do you want top performance for encrypted traffic? Most of our users want encryption for the wide area or public cloud, and when they can't use a VPC. Our solution is pretty good for these cases. Obviously at some point we'll enable IPsec too.


Wide-area/public-cloud and non-VPC use cases are my use cases.

I really wish this weren't true, but your solution is not pretty good yet. For now, if you need encryption, it's useless.

Machines spend their lives handling packet overhead. Application performance suffers horrendously, and the application's scalability goes from excellent to terrible.

Weave looks really good if you give it easy tests involving big packets. But give it a workload involving many small packets (which, in today's microservices architectures, is not exactly uncommon) and it stops working.
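To make the small-packet point concrete, here's a back-of-envelope sketch (my own arithmetic with standard header sizes, not a Weave benchmark): a VXLAN-style overlay adds roughly 50 bytes of outer headers per packet, a fixed cost that dominates when payloads are small.

```python
# Fraction of wire bytes that is actual payload, for small vs large
# packets over a VXLAN-style overlay. Header sizes are the standard
# ones; the payload sizes are illustrative, not measured workloads.

VXLAN_OVERHEAD = 50  # outer Ethernet(14) + IP(20) + UDP(8) + VXLAN(8)
INNER_HEADERS = 54   # inner Ethernet(14) + IP(20) + TCP(20)

def goodput_fraction(payload_bytes: int) -> float:
    wire = payload_bytes + INNER_HEADERS + VXLAN_OVERHEAD
    return payload_bytes / wire

for payload in (64, 256, 1400):
    print(payload, round(goodput_fraction(payload), 2))
# 64-byte payloads spend most of the wire on headers; 1400-byte
# payloads barely notice the encapsulation.
```

Encryption adds its own per-packet framing on top of this, which is why the penalty is steepest exactly where this commenter is hurting: many small packets.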


What is a small packet here?



Nice work. VXLAN works well for the cloud providers that can support it (all of them afaik, except Azure).


+1 to @jbeda, but let me offer some comments to avoid possible confusion among potential Weave users.

Weave aims to deliver a completely portable network. In other words, if you create an application using Docker containers and a Weave network, it should be able to run anywhere. And all this should be 'magically simple'. Once you have decided where to run your app, you may wish to trade portability for performance gains, for example by using GCE networking (or Azure, or ...).

To date, Weave has achieved portability by sending some packets (inter-host) through user space. This has big benefits in terms of ease of use, e.g. when dealing with wide-area networks, multicast, and firewalls. But under load it performs worse than kernel-only models.

We now have Weave "fast data path" - http://blog.weave.works/2015/06/12/weave-fast-datapath/ ... This aims to deliver close to line-rate performance with portability and extreme ease of use.

There may of course be yet more optimisations that end users wish to investigate. If you are willing to sacrifice some portability, you could certainly make use of fast networks provided by a specific public cloud. We haven't yet seen a strong need to support this, but it is certainly a reasonable thing.
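For anyone weighing the user-space vs kernel path trade-off above, here is a rough illustrative calculation. The per-packet costs are assumed figures chosen for the arithmetic, not Weave measurements; the point is only that a fixed per-packet cost caps packet rate, and therefore caps throughput hardest for small packets.

```python
# Illustrative only: why per-packet processing cost limits small-packet
# throughput. The microsecond costs below are assumptions, not
# measurements of Weave's user-space or fast-data-path code.

def max_throughput_mbps(per_packet_us: float, packet_bytes: int) -> float:
    pps = 1_000_000 / per_packet_us      # packets/sec one core can process
    return pps * packet_bytes * 8 / 1e6  # megabits per second

# Hypothetical kernel-only path (~2 us/packet) vs a path with an extra
# user-space hop (~10 us/packet), both pushing 200-byte packets:
for cost_us in (2.0, 10.0):
    print(cost_us, max_throughput_mbps(cost_us, 200))
```

With large packets the same packet rates carry far more data, which is consistent with the earlier observation that big-packet benchmarks look fine while small-packet workloads suffer.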

(Disclaimer: I work at Weaveworks)

