
> progressing them towards AGI

I don't see any reason to believe that LLMs, as useful as they can be, will ever lead to AGI.

Believing this is an eventuality is, frankly, a religious belief.



LLMs by themselves are not going to lead to AGI, agreed. However, there are solid reasons to believe that LLMs acting as orchestrators of other models could get us there. LLMs have the potential to be very good at turning natural-language problems into "games" that a model like MuZero can learn to solve in a superhuman way.
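A minimal sketch of that orchestration idea, purely illustrative: the LLM's only job is to compile a prose request into a formal "game" (states, actions, reward), which a separate planner then solves. Everything here is hypothetical: `GameSpec`, `llm_compile_to_game`, and `greedy_planner` are made-up names, the LLM step is hard-coded to a toy problem, and the greedy planner is a stand-in for where a MuZero-style agent would actually sit.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class GameSpec:
    """A formal 'game' derived from a natural-language problem."""
    initial_state: object
    legal_actions: Callable[[object], List[object]]
    transition: Callable[[object, object], object]
    reward: Callable[[object], float]
    is_terminal: Callable[[object], bool]


def llm_compile_to_game(problem: str) -> GameSpec:
    """Stub for the LLM step: turn prose into a GameSpec.

    A real system would have the LLM emit the state space, actions,
    and reward; here we hard-code a toy 'reach exactly 10' game.
    """
    return GameSpec(
        initial_state=0,
        legal_actions=lambda s: [1, 2],            # add 1 or add 2
        transition=lambda s, a: s + a,
        reward=lambda s: 1.0 if s == 10 else 0.0,  # reward only at exactly 10
        is_terminal=lambda s: s >= 10,
    )


def greedy_planner(game: GameSpec, max_steps: int = 20) -> List[object]:
    """Stand-in for the 'superhuman' solver (a MuZero-style agent would go here)."""
    state, plan = game.initial_state, []
    for _ in range(max_steps):
        if game.is_terminal(state):
            break
        # Pick the action whose immediate successor scores best under the reward.
        action = max(game.legal_actions(state),
                     key=lambda a: game.reward(game.transition(state, a)))
        plan.append(action)
        state = game.transition(state, action)
    return plan


if __name__ == "__main__":
    game = llm_compile_to_game("Reach exactly 10 starting from 0, adding 1 or 2 per step.")
    print(greedy_planner(game))  # prints a sequence of actions ending at state 10
```

The point of the split is the division of labour: the language model handles the fuzzy translation from human intent to a well-specified objective, and the search/RL component, which is where superhuman performance has actually been demonstrated, does the solving.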



