
Skimmed, some notes for a more 'bear' case:

* value seems highly concentrated in a sliver of tasks - the top ten tasks account for 32% of it, suggesting a fat long tail where it may be less useful/relevant.

* productivity drops to a more modest 1-1.2% gain once you account for humans correcting AI failures. 1% is still plenty good, especially given the historical malaise of only ~2% growth, but it's not industrial-revolution good.

* reliability wall - a 70% success rate is still problematic, and it drops toward 50% once task duration hits 2+ hours, or roughly "15 years" of schooling in complexity terms, for API use. For web-based multi-turn it's a bit better, but I'd imagine that's at least partly due to task-selection bias (rough compounding sketch below).
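
To make that wall concrete: if you treat a long task as a chain of steps that each succeed independently with some probability p, end-to-end reliability decays exponentially with task length. Rough back-of-envelope sketch in Python (my illustrative numbers, not the paper's):

    # End-to-end success for a task that needs n independent steps,
    # each succeeding with probability p, is p ** n.
    for p in (0.95, 0.98, 0.99):
        for n in (10, 50, 100):
            print(f"p={p}, n={n:3d} steps -> {p ** n:.0%} task-level success")
    # Even p=0.99 per step gives only ~61% at 50 steps and ~37% at 100.

Whether 2+ hour tasks really decompose into that many independent steps is my assumption, but it shows why small per-step error rates snowball.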

I've found that architecting around that reliability wall is where the margins fall apart. You end up chaining verification steps and retries to get a usable result, which multiplies inference costs until the business case just doesn't work for a bootstrapped product.
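
For what it's worth, the cost blow-up is easy to see in the usual generate/verify/retry loop. Minimal sketch, with call_model() and verify() as made-up stand-ins (not any particular API) and flat per-call costs as an assumption:

    import random

    def call_model(prompt):
        # hypothetical generation call; returns (output, cost in arbitrary units)
        return "draft answer", 1.0

    def verify(output):
        # hypothetical checker; assume a ~70% pass rate and the same unit cost
        return random.random() < 0.7, 1.0

    def reliable_answer(prompt, max_retries=3):
        total_cost = 0.0
        for _ in range(max_retries):
            output, gen_cost = call_model(prompt)
            ok, check_cost = verify(output)
            total_cost += gen_cost + check_cost
            if ok:
                return output, total_cost
        return None, total_cost  # still no guarantee after the retry budget

With a 70% pass rate and a verifier as expensive as the generator, expected cost per accepted answer is roughly 2 / 0.7 ≈ 2.9x a single unverified call, and that's before any multi-step chaining.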

"1% is still plenty good, especially given the historical malaise of only ~2% growth, but it's not industrial-revolution good."

You can't compare the speed of AI improvements to the speed of technical improvements during the industrial revolution. ChatGPT is 3 years old.


As long as people claim it's revolutionary, it's fair to compare it to other revolutions.

I mean, you can compare, but at the start of the industrial revolution the improvements were also super small.

The main difference is that back then people had no idea of the disruption it would cause, and of course there wasn't a huge investment industry built up around it.

The only question is whether investors' ROI will be positive (which depends on the timeline), not whether the technology is disruptive (or will be, say, 30 years from now), and I see people confusing the two here quite often.



