
For quite a few companies, the whole "big data" dataset is small enough to be processed by a single beefy machine. In that case it's far more cost-effective to simply plug $1000 worth of extra RAM into a workstation than to spend extra engineer-hours doing it remotely.


Couldn't agree more on the data size. In most cases a beefy machine works. Would on-demand (cloud) make it simpler?

A beefy machine also works for training jobs. But we need to deploy the models too.


With the many container solutions available today it's incredibly easy to move from dev to prod. You don't need to pay for a prod environment to do your development just to avoid ever having to migrate.
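A minimal sketch of that workflow, assuming Docker (image and file names here are illustrative): the image you build and test on your workstation is byte-for-byte the same one you ship to prod, so there's nothing to migrate.

```dockerfile
# Hypothetical Dockerfile: the same image runs in dev and in prod.
FROM python:3.11-slim

WORKDIR /app

# Pin dependencies so dev and prod resolve identical versions.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# serve.py is a placeholder for whatever serves the trained model.
CMD ["python", "serve.py"]
```

Build and test locally with `docker build -t myorg/model-server .` and `docker run`, then push that exact image to a registry and run it in prod.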


Yes, it is incredibly easy, except when you upgrade to TensorFlow 1.6 and it fails with [a CUDA error][1]. After a couple of sleepless nights you realize NVIDIA has deleted the Docker image for CUDA version 70xx from Docker Hub, and you have to dig through their Git repo for the right commit and build everything yourself.

[1]: https://github.com/tensorflow/tensorflow/issues/17566



