
This is a deep topic that I don’t feel fully qualified to speak precisely about, but the key point is that knowledge isn’t about mapping symbols to other symbols, as language models do; it’s about mapping symbols/mental representations to reality.

I think knowledge is in fact a good word for ‘sensical’ language, but the point people like me are trying to make when we talk like this is that language models can’t determine which language is ‘sensical’.

The amazing thing about these newer models is that they seem to get remarkably close to properly understanding real-world concepts just through symbol mapping over massive amounts of data, but that’s highly dependent on having been fed sensical data where that mapping of symbols to reality was already done.

I’m trying to argue that knowledge only really exists in the data feeding step, when we map symbols to reality.

What the language models do is create a map of symbols. But they mix things that are knowledge with things that are not, and they cannot determine which parts of their output and their mapping constitute valid knowledge.

I think this is mostly due to the nearly endless amount of embodied physical and historical context that goes into our own sense-making. A surprisingly large amount of it seems to be embedded in language, but a lot more is not.

EDIT: I rambled, sorry; the direct answer to your question about what I’d call the property that makes these language models useful is “a relatively high degree of correspondence with reality”. What I consider “knowledge” is obtained directly by something that can interact with reality and map it to some representation.
