People keep talking about AGI as if it's some mystical leap beyond human capability.
But let's be honest: software development at a modern startup is already the upper bound of applied intelligence. You're juggling shifting product specs, ambiguous user feedback, legacy code written by interns, and five competing JS frameworks, all while shipping on a Friday.
Models can now do that. They can reason about asynchronous state, refactor a codebase across thousands of lines, and actually explain the difference between useEffect and useLayoutEffect without resorting to superstition.
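And to be concrete, that difference isn't trivia: useEffect fires after the browser has painted, while useLayoutEffect runs synchronously after DOM mutations and before paint, which matters whenever a layout read feeds back into layout. A minimal sketch of the kind of thing a model can now explain (the Tooltip component and its props here are made up purely for illustration):

```tsx
import { useEffect, useLayoutEffect, useRef, useState } from "react";

// Illustrative only: position a tooltip above its target without a visible jump.
function Tooltip({ targetRect }: { targetRect: DOMRect }) {
  const ref = useRef<HTMLDivElement>(null);
  const [top, setTop] = useState(0);

  // useLayoutEffect runs synchronously after the DOM is updated but before the
  // browser paints, so the measured position is applied before anything is drawn.
  useLayoutEffect(() => {
    if (ref.current) {
      const { height } = ref.current.getBoundingClientRect();
      setTop(targetRect.top - height);
    }
  }, [targetRect]);

  // useEffect runs asynchronously after paint: fine for logging, subscriptions,
  // or fetches, but too late for layout reads that feed back into layout.
  useEffect(() => {
    console.log("tooltip painted at top =", top);
  }, [top]);

  return (
    <div ref={ref} style={{ position: "fixed", top, left: targetRect.left }}>
      I appear above the target without flicker
    </div>
  );
}

export default Tooltip;
```

Swap that useLayoutEffect for a useEffect and you get a one-frame flash of the tooltip in the wrong spot.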
If that's not general intelligence, what exactly are we waiting for - self-awareness?
Perhaps we're overestimating human intelligence and underestimating animal intelligence. It's also funny that current LLMs are themselves incapable of continual learning.
Lol, software development at a modern startup isn't even in the upper half of applied intelligence within software engineering, much less across global human activity and achievement. The "problems" most startups are solving are simple to the point of banality.
Computers being good and fast at automating calculations that people find difficult is not a new phenomenon. By that standard, we would have achieved general intelligence decades ago.