
I sort of always assumed OpenAI was constantly training the next model.

I wonder what percent of compute goes towards training vs. inference. If it’s a meaningful percent, you could possibly dial down training to make room for high inference load (if both use the same hardware).
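To make the "dial down training" idea concrete, here's a toy sketch of a shared-pool allocator that preempts training capacity when inference demand spikes. All the numbers and the policy (a simple training floor) are hypothetical, not anything OpenAI has described:

    # Toy sketch: inference and training share one GPU pool; inference gets
    # priority, training absorbs the slack down to a floor (hypothetical numbers).
    TOTAL_GPUS = 10_000      # size of the shared pool (made up)
    TRAINING_FLOOR = 1_000   # keep some training alive so runs aren't wasted

    def allocate(inference_demand_gpus: int) -> dict:
        """Give inference what it asks for, capped so training keeps its floor."""
        inference = min(inference_demand_gpus, TOTAL_GPUS - TRAINING_FLOOR)
        training = TOTAL_GPUS - inference
        return {"inference": inference, "training": training}

    print(allocate(3_000))   # normal load  -> {'inference': 3000, 'training': 7000}
    print(allocate(9_500))   # demand spike -> {'inference': 9000, 'training': 1000}

In practice checkpointing and restarting large training jobs isn't free, so the training share probably can't swing this freely, but if training is a meaningful fraction of the fleet, even a coarse version of this buys headroom.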

I also wouldn’t be surprised if they’re overspending and throwing money at it to maximize the user experience. They’re still a high growth company that doesn’t want anything to slow it down. They’re not in “optimize everything for profit margin” mode yet.



Agreed. I wasn't necessarily thinking of cost optimization, simply capacity: whether because they're using a bunch of compute themselves (like you're saying, for training, for example) or otherwise.



