> I'm writing a response to you right now, and I "know" a certain amount of information about the world.
How do you know? And more importantly, how do you prove it to others? The only way to prove it is to say: "OK, you are human, I am human, each of us knows this is true for ourselves, let's be nice and assume it's true for each other as well".
> But that doesn't really work for LLMs, because there's no knowledge at all.
How do you know? I'm familiar with the argument that the LLM "is just" predicting token probabilities, but surely, if the LLM can complete the sentence "The Harry Potter book series was written by ", that knowledge is encoded in its sea of parameters and probabilities, right?
Asserting that it does not know things is pretty absurd. You're conflating "knowledge" with the "feeling" of knowing things, or the ability to introspect one's knowledge and thoughts.
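To make that concrete, here's a minimal sketch of what "the knowledge is in the parameters" looks like in practice. It assumes the Hugging Face `transformers` library and the small public `gpt2` checkpoint (neither is mentioned above; they just make the point reproducible): we hand a bare next-token predictor the factual prompt and inspect the probabilities it assigns.

```python
# A minimal sketch, assuming the `transformers` library and the "gpt2" checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The Harry Potter book series was written by"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch=1, seq_len, vocab_size)

# Probability distribution over the token that would follow the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tokenizer.decode(token_id)!r}: {prob:.3f}")
# If " J" (the start of " J.K. Rowling") dominates this list -- and for this
# prompt it typically does, even in a 2019-era model -- then the fact is,
# in whatever sense you like, stored in the weights.
```

Nothing here introspects, feels, or reports its own knowledge; the fact–association is recoverable from the weights alone.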
> As a thought experiment, imagine that you're given a book with every possible or likely sequence of coloured circles, triangles, and squares.
I'd argue thought experiments are pretty useless here. The smaller models are qualitatively different from the larger ones, at least from a functional perspective. A GPT with hundreds of parameters may be very similar to the one you're describing in your thought experiment, but it's well known that GPT models with billions of parameters have emergent properties that make them exhibit much more human-like behavior.
Does your thought experiment scale to hundreds of thousands of tokens and billions of parameters?
Also, as with the Chinese Room argument, the problem is that you're asserting that the computer, the GPU, the bare metal doesn't understand anything. But our brain cells don't understand anything either. It's _humans_ that are intelligent, it's _humans_ that feel and know things. Your thought experiment has the human _emulate_ the bare-metal layer, but nobody said that layer was intelligent in the first place. Intelligence is a property of the _whole system_ (whether humans or GPT), and apparently once you get enough "neurons", the behavior emerges.

The fact that you can reductively break GPT down and show that each individual component is not intelligent does not imply the whole system is not intelligent -- you can similarly break the brain down into neurons, cells, even atoms, and none of those are intelligent at all. We don't even know where our own intelligence resides; it's one of the greatest mysteries.
Imagine trying to convince an alien species that humans are actually intelligent and sentient. An alien opens a human brain and looks inside: "Yeah, I know these. Cells. They're just little biological machines optimized for reproduction. You say humans are intelligent? But your brains are just cleverly organized cells that handle electrical signals. I don't see anything intelligent about that. Unlike us: we have silicon-based biology, which is _obviously_ intelligent."
> You can figure out if someone knows what they’re talking about or not by asking them questions about a subject. A bullshitter will come up with plausible answers; an honest person will say they don’t know.

> ChatGPT isn’t even a bullshitter when it hallucinates – it simply does not know when to stop. It has no conceptual model that guides its output. It parrots words but does not know things.
(Unless you're intentionally going on a tangent --)
The discussion is whether LLMs have "knowledge, understanding, and reasoning ability" like humans do.
Your reply suggests that a bullshitter has the same cognitive abilities as an LLM, which would seem to validate that LLMs are on par with at least some humans. The claim that "it simply does not know when to stop" is wrong: it does stop, of course, either by emitting an end-of-sequence token or by hitting a hard token limit (see the sketch below) -- human bullshitters have neither. The claim that "It has no conceptual model that guides its output." is just an assertion. "It parrots words but does not know things." is begging the question.
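For the record, here's a minimal, self-contained sketch of the stopping mechanism (again assuming the `transformers` library and the public `gpt2` checkpoint, which the parent comment never mentions): decoding halts either when the model itself produces its end-of-sequence token or when the `max_new_tokens` budget runs out.

```python
# A minimal sketch, assuming `transformers` and the "gpt2" checkpoint.
# Generation stops when the model emits its end-of-sequence token or when
# the max_new_tokens budget is exhausted -- the system does "know when to stop".
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The Harry Potter book series was written by", return_tensors="pt")
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=20,                    # hard token limit
        eos_token_id=tokenizer.eos_token_id,  # model-chosen stopping point
        pad_token_id=tokenizer.eos_token_id,  # gpt2 has no pad token; silence the warning
        do_sample=False,                      # greedy decoding, for reproducibility
    )
print(tokenizer.decode(output[0]))
```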
Lots of assertions with nothing to back them up. Thanks for your opinion, I guess?
You sound like that alien.