
If you run in parallel across many machines you will reduce the time, but the cost stays the same (roughly, anyway).
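A rough back-of-the-envelope sketch of that point (all numbers below are hypothetical): with per-machine-hour billing, spreading the same total work across more machines shrinks the wall-clock time but leaves the bill unchanged.

    # Hypothetical per-hour pricing; only the shape of the arithmetic matters.
    PRICE_PER_MACHINE_HOUR = 0.50   # assumed cloud instance price, $/hour
    TOTAL_WORK = 1_000              # total compute the job needs, in machine-hours

    for machines in (1, 10, 100):
        wall_clock = TOTAL_WORK / machines                      # time shrinks
        cost = machines * wall_clock * PRICE_PER_MACHINE_HOUR   # cost does not
        print(f"{machines:>3} machines: {wall_clock:>7.1f} h wall clock, ${cost:,.2f}")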

The initial cost is what makes this insane, not the time, and the end result (GPUs) is indisputably the most cost-efficient method (in this case very likely by a factor of 50-100x).

A situation like this is not premature optimization. Developers sometimes need to understand that, in the scheme of things, their time is not worth very much compared to the cost of the infrastructure required to run their code. Throwing the corporate credit card at optimization issues is far too common in tech today.



On the flip side, we spent almost $100,000 in developer time one summer in order to keep a handful of customers from having to upgrade hardware.

We could have gifted each of them a $5,000 machine and spent less. Plus, when you remember that the point of spending on developers is to earn back many times that cost in sales, the opportunity cost of those three months of very senior development effort was massive.
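Rough numbers, just to show the scale (the customer count is an assumption, since the comment only says "a handful"; the $100,000 and $5,000 figures are from above):

    # Hypothetical back-of-the-envelope; customers = 10 is an assumed "handful".
    DEV_TIME_COST = 100_000    # ~3 months of senior developer time (from above)
    MACHINE_PRICE = 5_000      # cost of gifting one customer a new machine (from above)
    customers = 10             # assumed number of affected customers

    hardware_cost = customers * MACHINE_PRICE
    print(f"Gift hardware: ${hardware_cost:,}  vs  developer time: ${DEV_TIME_COST:,}")
    print(f"Difference: ${DEV_TIME_COST - hardware_cost:,}")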


I think it depends on what said developer is working on, and what kind of infrastructure is under discussion, no? Optimising our web app's backend has worth, so we do it, but only to a point; it's certainly not a major component of our workload. Our time is definitely worth more than we would save on our infrastructure, because our infrastructure is small to begin with.



