What we need is reasoning as in "drawing logical conclusions based on logic". LLMs do reasoning by recursively adding more words to the context window. That's not logical reasoning.
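To make "recursively adding more words" concrete, here is a minimal sketch of a greedy decoding loop using Hugging Face transformers (the "gpt2" model, the prompt, and the 20-token limit are just illustrative choices; real systems typically sample rather than always taking the argmax):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # The model's only "state" is the growing token sequence itself.
    ids = tok("The capital of France is", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(20):
            logits = model(ids).logits        # score every possible next token
            next_id = logits[0, -1].argmax()  # greedy: pick the most likely one
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # append it to the context
    print(tok.decode(ids[0]))

Each step just appends one more token and feeds the whole context back in; whether that loop counts as "logical reasoning" is the question.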
It's debatable whether humans actually do that "drawing logical conclusions based on logic". Look at politics and what people vote for. They seem to do something closer to pattern matching.
Humans are far from logical. We make decisions within the context of our existence. This includes emotions, friends, family, goals, dreams, fears, feelings, mood, etc.
It's one of the challenges with anthropomorphising LLMs: reasoning/logic for bots is not the same as it is for humans.
And yet, when we make bad calls or do illogical things because of hormones, emotions, energy levels, etc., we still call it reasoning.
But we don't afford LLMs the same leniency. If they flip some bits and the logic doesn't add up, we're quick to point out that "it's not reasoning at all".
I agree that some people use intuition or pattern matching to make decisions. But humans are also able to use logical reasoning to come to conclusions.
Whether an LLM is reasoning or not is a separate question from whether it works by generating text.
By the standard in the parent post, humans certainly do not "reason". But that is then just choosing a very high bar for "reasoning" that neither humans nor AI meets...what is the point then?
It is a bit like saying: "Humans don't reason, they just let neurons fire off one another, and think the next thought that enters their mind"
Yes, LLMs need to spew out text to move their state forward. As a human I actually sometimes need to do that too: Talk to myself in my head to make progress. And when things get just a tiny bit complicated I need to offload my brain using pen and paper.
Most arguments used to show that LLMs do not "reason" can be used to show that humans do not reason either.
To show that LLMs do not reason, you have to point to something other than how they work.
If LLMs were actually able to think/reason, and you acknowledge that they've been trained on as much data as anyone could get their hands on, such that they've been "taught" infinitely more than any ten humans could learn in a lifetime, I would ask: why haven't they solved any novel, unsolved problems?
When coding, they are solving "novel, unsolved problems" related to the coding tasks they are set up with.
So I will assume you mean within maths, science etc? Basically things they can't solve today.
Well 99.9% of humans cannot solve novel, unsolved problems in those fields.
LLMs cannot learn; there is just the initial weight-estimation process. And that process currently does not make them good enough at novel math/theoretical physics problems.
That does not mean they do not "reason" in the same way that those 99.9% of humans still "reason".
But they definitely do not learn the way humans do.
(Anyway, if LLMs could somehow get a 1000x larger context window and converse with themselves for a full year, it does not seem out of the question that they could come up with novel research?)