
> LLMs seem to understand language, therefore they've trained a model of the world.

This isn’t the claim, obviously. LLMs seem to understand a lot more than just language. If you’ve worked with one for hundreds of hours, actually exercising frontier capabilities, I don’t see how you could think otherwise.



> This isn’t the claim, obviously.

This is precisely the claim that leads a lot of people to believe that all you need to reach AGI is more compute.


What I mean here is that this is certainly not what Dwarkesh would claim. It’s a ludicrous strawman position.

Dwarkesh is AGI-pilled and would base his assumption of a world model on much more impressive feats than mere language understanding.


Watching the video, it seems Dwarkesh doesn't really have a clue what he's confidently talking about, yet he runs fast with his personal half-baked ideas, to the point where it gets both confusing and cringe when Karpathy apparently manages to make sense of it and yes-ands the word salad. Karpathy is supposedly there to clear up misunderstandings, yet he lets all the nonsense Dwarkesh puts before him slide.

"ludicrous" sure but I wouldn't be so certain about "strawman" or that Dwarkesh has a consistent view.



