> massively over-provisioning just to support their peak loads during Thanksgiving and Christmas.

Not really true. As part of an annual capacity-planning exercise, each team was required to plan and scale for holiday peaks. Infrastructure is not "free," and each team had to optimize for good performance at an affordable cost.

> All that spare capacity became AWS.

Not true, never was. I regularly reviewed and provided feedback on the original narrative document for AWS. While I don't have a copy handy, I am absolutely certain that the document was focused on providing infrastructure services to developers.

> I bet their utilization rates aren't any different today... quite possibly worse.

I don't have access to those numbers, and I would have no permission to share them even if I did. Your mental model for utilization needs to take the EC2 Spot Market into account. Savvy users of EC2 have learned to optimize their large-scale compute jobs and their bidding strategy to gain access to what would otherwise be (to your point) underutilized capacity.

The recent "Gojira" run by Cycle Computing (details at http://www.marketwired.com/press-release/cycle-computing-sof...) is a great example of how the Spot Market can be used to great advantage by clever developers.
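
For anyone curious what that bidding looks like in practice, here's a rough sketch using boto3 (the AWS SDK for Python). The bid price, instance count, AMI, and instance type are all placeholders I made up for illustration, not anything from the Gojira run:

    import boto3

    # Rough sketch of a Spot bid; every value here is a placeholder.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.request_spot_instances(
        SpotPrice="0.05",    # max $/instance-hour you are willing to pay
        InstanceCount=100,   # batch jobs scale out across many instances
        Type="one-time",     # release the capacity when the job finishes
        LaunchSpecification={
            "ImageId": "ami-0123456789abcdef0",  # hypothetical AMI
            "InstanceType": "c4.8xlarge",        # hypothetical instance type
        },
    )

    # Each request starts out 'open' and is fulfilled only while the
    # spot price stays at or below the bid.
    for req in response["SpotInstanceRequests"]:
        print(req["SpotInstanceRequestId"], req["State"])

The point is that the bid only gets filled when spare capacity exists, which is exactly the mechanism that soaks up otherwise idle machines.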



I am sure they don't release numbers on how many of these "clever users" they have, either.

The way I think about it is: can the utilization rate grow at the same rate as (or faster than) the machine count in their data centers, and for how long?

With all the levels of virtualization available and their market leadership today, I accept that the curve can look quite magical. But for how long? It seems quite a shaky curve to be betting 5 million machines on.
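
To put toy numbers on that question: utilization only holds steady if demand, in utilized machine-hours, compounds at least as fast as the fleet. These growth rates are invented for illustration, not actual AWS figures:

    # Toy model: utilization holds only if utilized machine-hours grow
    # at least as fast as the fleet. All numbers below are invented.
    fleet_growth = 0.30    # hypothetical: machines added at 30%/year
    demand_growth = 0.25   # hypothetical: utilized hours grow 25%/year

    fleet, demand = 1.0, 0.60   # start at an assumed 60% utilization

    for year in range(1, 6):
        fleet *= 1 + fleet_growth
        demand *= 1 + demand_growth
        print(f"year {year}: utilization = {demand / fleet:.1%}")

    # Demand lagging the fleet by 5 points compounds to roughly a
    # 0.96x multiplier on utilization every year (1.25 / 1.30).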



