I am convinced GPT has crossed the big hump towards AGI. Before the current round of large language models it wasn't clear that any approach under consideration could approximate the human ability to generate a coherent flow of ideas. ChatGPT has this.
Yes, there is a nontrivial step remaining: merging the purely pattern-matching, inductive reasoning of GPT with a world-modelling, deductive reasoning framework. But my assessment is that the remaining steps are inevitable. The challenge that a few years ago seemed potentially insurmountable is already behind us. It is downhill from here.
Human-level intelligence by 2029. Not a digital version of a conscious, autonomous being, and not necessarily superhuman in many ways, but at or above human-level ability for most tasks, including things like programming. That is about six years out.
By 2045 we are looking at systems 1-6 orders of magnitude smarter than humans, to the point where humans without tight integration with AI will have no chance of keeping up or knowing what is going on, and developments in general will be beyond our current intelligence's ability to predict or fully comprehend.
2038. I've banked on 2038 being the real oh-shit tipping point, just because that would be pretty poetic: 03:14:07 UTC on 19 January 2038 is the moment a signed 32-bit Unix time counter overflows. I'm taking over/unders on that exact second.
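A quick Python sketch of why that exact second, using plain datetime arithmetic rather than an actual 32-bit counter (a minimal illustration, not how any real system stores time):

    # The Year 2038 problem: a signed 32-bit time_t counts seconds since
    # the Unix epoch and maxes out at 2**31 - 1, then wraps to -2**31.
    from datetime import datetime, timedelta, timezone

    EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
    INT32_MAX = 2**31 - 1  # largest value a signed 32-bit time_t can hold

    print(EPOCH + timedelta(seconds=INT32_MAX))  # 2038-01-19 03:14:07+00:00

    # One second later the counter wraps around, landing back in 1901 on
    # any system still keeping time_t in 32 bits.
    print(EPOCH + timedelta(seconds=-2**31))     # 1901-12-13 20:45:52+00:00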
Can you do that, though? Can you tie the pattern-matching to the world-modeling deductive reasoning framework, in a way that preserves the logic of one and the "coherent flow" of the other?
(For the record, I would say that GPT has the illusion of coherence over short timeframes. But it can go long enough to fake out a human who isn't willing to invest in a long conversation.)
I think that inductive, intuitive pattern-based reasoning is the big "hardware" feature of human brains. With GPT there is proof that AI can match the success of that kind of hardware. Human deductive reasoning is much more of a learned behavior that we do laboriously, more like calculating mathematical sums or evaluating chess moves. And calculation and superhuman chess evaluation are, of course, already solved by AI.
> I think that inductive, intuitive pattern-based reasoning is the big "hardware" feature of human brains.
Plausible, I think.
> With GPT there is proof that AI can match the success of that kind of hardware.
I might go with "approach" rather than "match". So far, GPT hasn't shown enough to claim "match".
> Human deductive reasoning is much more of a learned behavior that we do laboriously, more like calculating mathematical sums or evaluating chess moves.
Here I think you underestimate humans a bit. Deductive reasoning isn't just this foreign mode that we enter, at least not for everybody. It's more like a knob that we can turn - using more deductive reasoning for this problem, more pattern matching for that one. And as we learn more about an area, we move more things from "deductive" to "pattern matching". (In fact, that's a big part of what we call "learning".)
At least initially, AI could have GPT and deductive logic, but talking over a really narrow pipe. That's more than we have now, but less than humans do, and less in ways that really matter (or so I suspect). It's going to take a major step to go from that to something closer to human thinking.
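One way to make that "narrow pipe" concrete: the only thing crossing from the pattern-matcher to the deductive side is a short formal string, and the only thing coming back is sat/unsat plus a model. A toy sketch using the z3-solver package, where the GPT step is stubbed out and the pipe is the constraints string (this framing is my assumption, not anyone's actual architecture):

    # Toy "narrow pipe" between a pattern-matcher and a deductive engine.
    # Requires the z3-solver package; the LLM step is a hard-coded stub.
    from z3 import Int, Solver, sat

    def llm_formalize(question: str) -> str:
        # Stand-in for GPT translating natural language into constraints,
        # e.g. "Alice is 3 years older than Bob; together they are 31."
        return "alice == bob + 3, alice + bob == 31"

    def deduce(constraints: str):
        alice, bob = Int("alice"), Int("bob")
        s = Solver()
        # eval() is sketch-only shorthand for parsing the formal string
        s.add(*eval(f"({constraints},)", {"alice": alice, "bob": bob}))
        if s.check() == sat:
            m = s.model()
            return {"alice": m[alice], "bob": m[bob]}
        return None  # unsat: the constraints contradict each other

    print(deduce(llm_formalize("How old are Alice and Bob?")))
    # -> {'alice': 17, 'bob': 14}

Everything the two sides might want to share beyond that one string (context, partial beliefs, uncertainty) is lost in the pipe, which is the "less in ways that really matter" part.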
So let me understand this. You're saying that the primary feature of human brains is deductive reasoning, and that deductive reasoning is equivalent to calculating chess moves. Since ANNs are a lot better than humans at calculating chess moves, they must be analogous to human cognition? Can we infer from this that ANNs are already cognitively superior to humans?
I feel like something is missing in this equation. :)
It's one of those things that is poorly defined, and therefore conveniently always a bit further out, because we can keep moving the goalposts. But we're at least being forced to move them now that we have ChatGPT.
If you look at it from a "this is way better than what science fiction predicted" angle, we're getting there. HAL 9000 is basically science fact now. That arguably happened about five or six years ago, when Microsoft had to pull the plug on its Tay bot, which (like HAL 9000) still had a few unfortunate glitches but wasn't a bad effort. Arthur C. Clarke was only off by about a quarter century.
Arguably, ChatGPT is a bit better conversationally than HAL 9000. I wouldn't necessarily let it control my spaceship yet. But then, spaceships don't require a lot of AI and are mostly flown by computers at this point. We've explored most of the solar system without a single human going much further than the Moon. The main issue is perhaps that it is confidently wrong some of the time. Which is true of humans as well, of course.
ChatGPT wired together with state-of-the-art speech recognition and synthesis would be pretty doable at this point. It would be expensive to run, though. But it would sound better than today's assistants and be able to hold coherent conversations on all sorts of topics. I find it interesting that OpenAI is choosing not to demonstrate that. It must be very tempting, since they also have decent solutions for speech synthesis and recognition. That would instantly make things like Siri, Alexa, and "Hey Google" historical curiosities.
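The glue code is almost trivial now. A minimal sketch, assuming the openai Python package (pre-1.0 API) with Whisper for recognition, and pyttsx3 standing in as a local speech synthesizer; the model names and single-turn structure are my assumptions:

    # Sketch of one turn of a voice assistant: speech -> ChatGPT -> speech.
    import openai
    import pyttsx3

    openai.api_key = "sk-..."  # your API key
    tts = pyttsx3.init()       # local text-to-speech engine

    def voice_turn(wav_path: str) -> str:
        # 1. Speech recognition via Whisper
        with open(wav_path, "rb") as f:
            heard = openai.Audio.transcribe("whisper-1", f)["text"]

        # 2. Generate a reply with the chat model
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "You are a helpful voice assistant."},
                {"role": "user", "content": heard},
            ],
        )
        reply = resp["choices"][0]["message"]["content"]

        # 3. Speak the reply aloud
        tts.say(reply)
        tts.runAndWait()
        return reply

The "expensive to run" part is per-turn cost and latency: every exchange is two API round trips plus synthesis.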
> What do you think cognition is, if not a coherent flow of ideas?
With ChatGPT, the flow of ideas is coherent to the reader (the user typing into the system). The "AI" itself has no understanding of the ideas and does not "know" whether they are coherent or not. That is why it is far from AGI: it is not a general intelligence, it is a general pattern matcher.
Being a little bit of a devil's advocate, how do you know you have an understanding of the ideas? How do you know the AI does not? Both of these questions are tough to answer. (I'll put a relevant ChatGPT essay in a subsequent response digging into the philosophy a bit.)
I do think you are on the right track with your criticism however. I agree GPT is not a general intelligence, it is a general pattern matcher. By design it produces flows of text similar to what it was trained on, without regard for how well these correspond to the actual world or even a hypothetical but rule-governed world.
The map is not the territory; GPT deals exclusively with the "map" side.
Easy: GPT-3 doesn't provide a coherent world view. Just ask it about the things it tells you, and it will quickly start directly contradicting itself, saying "A is true" and then "A is not true". Humans don't do that so blatantly, unless they have Alzheimer's or something similar, which makes it hard to tell whether the human is still intelligent or not.
GPT-3 is like looking up random parts of the internet with your queries, where changing them a little leads you to a site that tells a completely different story. That is the most reasonable explanation for why it behaves as above.
[me]
Write a five paragraph essay on Daniel Dennett's responses to Searle's Chinese room thought experiment.
[GPT (text-davinci-003)]
In his responses to John Searle's Chinese Room thought experiment, philosopher Daniel Dennett offers a defense of strong artificial intelligence (AI). In his argument, Dennett maintains that Searle's experiment does not adequately account for how human beings use language, and that strong AI can potentially be achieved if the right conditions are met.
Dennett's first response to the Chinese Room argument is that language use is not merely an informational process, as Searle suggests. Rather, Dennett argues that language use involves a complex system of meaning and interpretation, and that meaningful language use cannot be reduced to a strictly algorithmic process. For example, Dennett points out that two people may use the same words but interpret them differently, depending on various contextual factors. Thus, the Chinese Room thought experiment fails to account for the complexity of language and its use.
Dennett also challenges the assumption that a computer can never understand language as well as a human being. To this end, he argues that if a computer is given enough data, it can learn to process language in a similar way to humans. This could involve the computer being programmed with a large corpus of data, such as literature, so that it can learn to understand language in context.
Dennett's third point is that the Chinese Room argument neglects the fact that computers can be programmed to interact with their environment. In other words, a computer can be programmed to respond to various external stimuli, just as a human being would. This means that a computer could potentially interact with its environment in a meaningful way, just as a human being can.
Finally, Dennett argues that the Chinese Room argument assumes that a computer must understand language in the same way as a human being, which is an oversimplification of how language works. Rather, he suggests that a computer could potentially understand language in its own way, and be able to process information in a meaningful way.
In sum, Daniel Dennett's responses to Searle's Chinese Room thought experiment offer a defense of strong AI, arguing that computers can potentially be programmed to understand language in a meaningful way. Dennett's argument draws upon various theoretical and practical considerations, and provides an interesting counterpoint to the conventional view that computers can never truly understand language in the same way as a human being.