I don’t even know what AGI is, and neither does anyone else as far as I can tell. In the parts of the video I watched, he cites several missing pieces that all have to do with autonomy: continual automated updates of internal state, fully autonomous agentic behavior, etc.
I feel like GPT-3 was AGI, personally. It crossed some threshold that was both real and magical, and future improvements are relying on that basic set of features at their core. Can we confidently say this is not a form of general intelligence? Just because it’s more a Chinese Room than a fully autonomous robot? We can keep moving the goalposts indefinitely, but machine intelligence will never exactly match that of humans.
> It crossed some threshold that was both real and magical
Only compared to our experience at the time.
> and future improvements are relying on that basic set of features at their core
Language models are inherently limited, and it's possible - likely, IMO - that the next set of qualitative leaps in machine intelligence will come from a different set of ideas entirely.
That’s not a period, it’s a full stop. There is no debate to be had here.
IF an LLM makes some sort of breakthrough (and massive data collation allows for that to happen), it needs to be “re-trained” to absorb its own new invention.
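To make that concrete: at inference time an LLM’s weights are frozen, so “absorbing” a new result means an explicit, offline training step. Here’s a minimal sketch of what that step looks like, using PyTorch and Hugging Face transformers, with gpt2 as a hypothetical stand-in model and a made-up snippet of “new” text:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Hypothetical stand-in model; the same mechanics apply to any causal LM.
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    tokenizer = AutoTokenizer.from_pretrained("gpt2")

    # Text representing the model's own "new invention" to be absorbed.
    new_text = "Summary of the breakthrough produced in an earlier session."
    batch = tokenizer(new_text, return_tensors="pt")

    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

    model.train()
    outputs = model(**batch, labels=batch["input_ids"])  # standard causal-LM loss
    outputs.loss.backward()
    optimizer.step()  # weights only change here, never during inference

In practice this would be a full fine-tuning or continued-pretraining run over a large corpus, with all the cost and catastrophic-forgetting risk that implies, which is exactly why the loop isn’t automatic today.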
But we also have a large problem in our industry: hardware used to evolve in ways that made software more efficient. Not only is that no longer happening, but we’re making our software more complex, and to some degree less efficient, with every generation.
This is particularly problematic in the LLM space: every generation of “ML” on the LLM side seems to be getting less efficient with compute. (Note: this isn’t quite the case in all areas of ML; YOLO models running on embedded compute are kind of amazing.)
Compactness, efficiency and reproducibility are directions the industry needs to evolve in, if it ever hopes to be sustainable.
I think most people would consider AGI to mean intelligence roughly matching that of humans in all aspects. So in that sense there’s no way that GPT-3 was AGI. Of course you are free to use your own definition; I’m just reflecting what the typical view would be.
AGI is when a computer can accomplish every cognitive task a typical human can. Given tools to speak, hear, and manipulate a computer, an AGI could be dropped in as a remote employee and be successful.
I wouldn’t say that any specific skill (like literacy) is required to have intelligence. It’s more the capability to learn skills and build a model of the world and the people in it using abstract reasoning.
Otherwise we would have to say that pre-literacy societies lacked intelligence, which would be silly since they are the ones that invented writing in the first place!