
Any particular reason you're building your own container system instead of leveraging LXC or Docker?

For the massively parallel workloads you find in data science, it seems like you'd benefit a lot from the wealth of container orchestration tools around Docker (swarm, Rancher/Cattle, Kubernetes) in order to easily scale out your functions. Especially when many companies already have these set up for their more vanilla applications.

This is an example I've seen that can leverage a docker swarm for invoking functions, loosely modeled after AWS Lambda: https://github.com/alexellis/faas



Hi there - we've actually built a lot of our container ecosystem around existing Linux tools, including `systemd-nspawn` and `btrfs`, rather than creating the whole stack from scratch - and all of this is controlled from Haskell. We experimented with Docker, Kubernetes and others, but found that they made a lot of assumptions about what was running inside a container that didn't mesh with our compute model, so using lower-level primitives worked better for us.
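To make that concrete, here's a minimal sketch of what driving those primitives from Haskell might look like: a `btrfs` copy-on-write snapshot gives each container a cheap private root filesystem, and `systemd-nspawn` boots it. All the names here (`ContainerSpec`, `snapshotArgs`, `nspawnArgs`, the paths) are illustrative assumptions, not our actual codebase:

```haskell
import System.Process (callProcess)

-- Hypothetical description of one container; field names are illustrative.
data ContainerSpec = ContainerSpec
  { baseImage :: FilePath   -- btrfs subvolume holding the base root fs
  , workDir   :: FilePath   -- per-container snapshot destination
  , command   :: [String]   -- command to run inside the container
  }

-- Arguments for a copy-on-write snapshot of the base image.
snapshotArgs :: ContainerSpec -> [String]
snapshotArgs spec = ["subvolume", "snapshot", baseImage spec, workDir spec]

-- Arguments to boot the snapshot as an ephemeral container.
nspawnArgs :: ContainerSpec -> [String]
nspawnArgs spec =
  ["--directory=" ++ workDir spec, "--ephemeral"] ++ command spec

-- Snapshot the filesystem, then hand the result to systemd-nspawn.
-- (Requires root and a btrfs filesystem to actually run.)
runContainer :: ContainerSpec -> IO ()
runContainer spec = do
  callProcess "btrfs" (snapshotArgs spec)
  callProcess "systemd-nspawn" (nspawnArgs spec)

main :: IO ()
main = print (nspawnArgs (ContainerSpec "/var/lib/images/base"
                                        "/var/lib/ct/job1"
                                        ["/bin/worker"]))
```

Because the argument construction is pure, the Haskell layer can reason about and test container invocations without touching the host, which is part of the appeal of composing small primitives rather than adopting a monolithic runtime.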

We're really lucky also to have one of the main `rkt` developers joining us soon to work on the container side.



