That definition is as I said: "something about which no conclusions can be drawn because the proposed definitions lack sufficient precision and completeness."
"Highly autonomous systems" and "most economically valuable work" aren't precise enough to be useful.
"Highly" implies that there is a continuum, so where does directed end and autonomy begin?
"Most economically valuable work"... each word in that has wiggle room, not to mention that any reasonable interpretation of it is a shifting goalpost as the work done by humans over history has shifted a great deal.
The point is that none of this is defined in a way so that people can agree that something has AGI/ASI/etc. or not. If people can't agree then there's no point in talking about it.
EDIT: interestingly, the OpenAI definition of AGI specifically implies that a subset of humans do not have general intelligence.
I think you can say if human engineers still exist, it's hard to claim we have AGI. If human engineers have been entirely replaced, then it's hard to claim we don't have AGI.
No, independently of OpenAI's definition. If we have AGI there's no reason we'd need to have humans working jobs that only involve typing stuff into a computer and going to meetings all day*. And if all those jobs are eliminated, I guess we'll have bigger problems than to debate whether we've achieved AGI or not.
* Which is a much larger class of jobs than just engineering. And also excludes field engineers and other types of engineers that need a physical body for interacting with customers, etc.**
** Though even then, you could in theory divvy up the engineering part and the customer interaction part of the job, where the human that's doing the interaction part is primarily a proxy to the engineering agent that's in his earbud.
> there's no reason we'd need to have humans working jobs that only involve typing stuff into a computer and going to meetings all day
I'm not sure I understand, and want to check. That really applies to a lot of jobs. That's all admins, accountants, programmers, probably includes lawyers, and probably includes all C-suite execs. It's harder for me to think of jobs that don't fit under this umbrella. I can think of some, of course[0], but this is a crazy amount of replacement with a wide set of skills.
But I also think that's a bad line to draw. Many of those jobs include a lot more than just typing into a computer. By your criteria we'd also be replacing most scientists, as so many aren't doing physical experiments and instead use the computer to read the work of peers and develop new models. But also, is that definition intended to exclude jobs where the computer just isn't the most convenient interface? If so, we should be including even more jobs, since we can then build the connection for that interface.
I think we need a much more refined definition. I don't like the broad-strokes "does it use a computer" criterion. Nor do I like skills-based definitions: they're much easier to measure but easily hackable. I think we should try to define it more by our actual understanding of what intelligence is. While we don't have a precise definition, we have some pretty good answers already. I know people act like the lack of an exact definition is the same as having no definition, but that's a crazy framing. If we had that requirement we wouldn't have any definitions, since we know nothing with infinite precision. Even physics is just an approximation, but it's about convergence to the truth [1].
[side note] the conventional way to do references or notes here is with brackets, like I did, so you don't have to escape your asterisks. *Also*, if you lead a paragraph with two spaces you get verbatim text.
[0] farmer, construction worker, plumber, machinist, welder, teacher, doctor, etc.
Actually it occurs to me that even if we did have AGI, or even ASI (heck, even more so with ASI), we'd still need desk jobs to maintain the guardrails.
Intelligence is one thing: being able to figure out how to get a task done, say. But understanding that no, I don't want you to exploit a backdoor or blackmail my teammate or launch a warhead even though that might expedite the task. Or why some task is more important than another. Or that solving the P=NP problem is more fulfilling than computing the trillionth digit of pi. That's perhaps a different thing entirely, completely disjoint from intelligence.
And by that definition, maybe we are in the neighborhood of AGI already. These things can already accomplish many challenging tasks more reliably than most humans. But the lack of wisdom, emotion, human alignment, or whatever we want to call it, leads them to accomplish the wrong tasks, accomplish them in the wrong way, or overlook obvious implicit requirements, and that may cause people to view them as unintelligent, even if intelligence is not the issue.
And that may be an unsolvable problem, because AI simply isn't a living being, much less human. It doesn't have goals or ambitions or want a better future for its children. But that doesn't mean we can never achieve AGI.
Oh, and to your first question: yes, it's a huge number of jobs, maybe half of all jobs in developed nations. And why not? If you can get AI to do the work of the scientist for a tenth of the price, just give it a general role description and a budget and let it rip, with the expectation that it'll identify the most promising experiments, process the results, decide what could use further investigation, look for market trends, and grow the operation accordingly; that's all you need from a human scientist too. Plausibly the same for executives and other roles. Of course, maybe sometimes the role needs a human face for press conferences or whatever, and I don't know how AI would handle that, but especially for jobs that are entirely internal-facing, it seems like there's no particular need for a human. Except that maybe, given the above, you still need a human at the helm.
> we'd still need desk jobs to maintain the guardrails.
Agreed. I don't get why people think it's a good idea not to. I'd wager even the AGI would agree. The reason is quite simple: different perspectives help. For mission-critical things it really makes sense to have multiple entities verifying one another. For nuclear launches there's a chain of responsibility, and famously those launching have two distinct keys that must be activated simultaneously. What people don't realize is that there's a chain of people who each act independently during this process; it isn't just the president deciding to nuke a location and everyone else carrying out the commands mindlessly. But in far lower-stakes settings... we have code review. Or, as the common saying goes in physical engineering and among many tradesmen, "measure twice, cut once".
It would be absolutely bonkers to just hand over absolute control of any system to a machine before substantial verification. These vetting processes are in place for a reason. They can be annoying because they slow things down, but they're there because they speed things up in the long run. Their existence tends to make things less sloppy, so they're needed less often. And they catch mistakes that, had they slipped through, would slow down processes far more than all the QA annoyances and slowdowns combined.
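To make the "two keys" idea concrete, here's a minimal sketch (the names and structure are made up for illustration, not taken from any real launch or review system) of gating a high-stakes action behind two independent sign-offs:

  # Minimal sketch of the "two keys" / independent-verification idea.
  # All names here are hypothetical; an illustration, not a real system.
  from dataclasses import dataclass, field

  @dataclass
  class GuardedAction:
      description: str
      required_approvals: int = 2          # the "two keys"
      approvers: set = field(default_factory=set)

      def approve(self, reviewer: str) -> None:
          # Independence is (crudely) enforced by requiring distinct identities.
          self.approvers.add(reviewer)

      def execute(self) -> str:
          if len(self.approvers) < self.required_approvals:
              raise PermissionError(
                  f"need {self.required_approvals} independent approvals, "
                  f"have {len(self.approvers)}"
              )
          return f"executed: {self.description}"

  action = GuardedAction("let the agent push to production")
  action.approve("human reviewer A")
  action.approve("human reviewer B")      # a second, independent entity
  print(action.execute())

The only point of the sketch is the shape of the protocol: no single entity, human or AI, can fire the action alone.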
> And why not? If you can get AI to do the work of the scientist for a tenth of the price
And what are the assumptions being made here? Equal quality of work? This is part of what my question was getting at. Price is an incredibly naive metric. We use it because we need something, but a grave mistake is to interpret a metric as more meaningful than it actually is. Goodhart's Law? Or just look at any bureaucracy. I think we need to be more refined than "price". It's going to be god-awfully hard to even define what "equal quality" means. But it seems like you're recognizing that, given your other statements.
And "maintaining guardrails" may be far more grandiose than it sounds. It's like if we have this energy source that could destroy the planet, but the closer you get to it without going past some threshold, the energy you get from it is proportional to the inverse of how close you are to it. There's some wiggle room and you can poke and prod and recover if it starts to go ballistic, but your goal is to extract as much energy (or wealth or whatever) out of it as possible. Every company in the world, every engineer on the planet would be pushing to extract just a little bit more without going beyond the limit.
AI could go the same way. It's a creation engine like nothing that's ever been seen before, but it can also become a destruction engine in ways that we could never understand or hope to counter, and left unchecked, the odds of that soar to near certainty. So the first job is to place dummy guardrails around it. That's where we are now. But soon that becomes too restrictive. What can we loosen? How do we know? How can we recover if we're wrong? We're not quite there yet, but we're not not there either.
Of course eventually somebody is going to trigger it and it's going to go ballistic. Our only hope is that it happens at exactly the right time where AGI can cause enough damage for people to notice, but not enough to be irrecoverable. Maybe we should rename this whole AGI thing to Project Icarus.
The only reason an AGI couldn't do those jobs is the lack of a suitable interface to the physical world. It would take a trivial amount of effort for such interfaces to be designed and built by the AGI. Humans could be cut from the loop after an initial production run made up of just the subset of physical interface devices needed to build more advanced ones.
It's a definition based on practical results. That's a good definition, because it doesn't require that we already know the exact implementation. It doesn't rely on guessing; it's testable in a literal "put your money where your mouth is" way.
If it can do things as good as or better than humans, then either the AI has a type of general intelligence or the human does not.
Defining capabilities based on outcome rather than implementation should be very familiar to an engineer, of any kind, because that's how every unsolved implementation must start.
> If it can do things as good as or better than humans, then either the AI has a type of general intelligence or the human does not.
I don't buy that.
By your definition every machine has a type of general intelligence. Not just a bog standard calculator, but also my broom. It doesn't matter if you slap "smart" on the side, I'm not going to call my washing machine "intelligent". Especially considering it's over a decade old.
I don't think these definitions make anything any clearer. If anything, they make them less. They equate humans to mindless automata. They create AGI by sly definition and let the proposer declare success arbitrarily.
Sorry, I assumed the comment was clear, with your comment above. Here's what I meant:
> By your definition every machine has a type of general intelligence. Not just a bog standard calculator, but also my broom.
I really don't know of any human that can outperform a standard calculator at calculations. I'm sure there are humans that can beat them in some cases, but clearly the calculator is a better generalized numeric calculation machine, a task that used to represent a significant amount of economic activity. I assumed this was rather common knowledge, given that it features in multiple hit motion pictures[0].
> General: 1. affecting or concerning all or most people, places, or things; widespread.
To me, a general intelligence is one that is not just specific: it's affecting or concerning all or most areas of intelligence.
A calculator is a very specific, very inflexible, type of intelligence, so it's not, by definition, general. And, I'm not talking about the indirect applications of a calculator or a specific intelligence.
If you want to argue that we don't need the concept of AGI, because something like specific experts could be enough to drastically change the economy, then sure! That would be true. But I think that's a slightly different, complementary, conversation. Even then, say we have all these experts, then a system to intelligently dispatch problems to them... maybe that's a specific implementation of AGI that would work. I think how much less dependent the economy becomes on human intelligence, and how much more dependent it becomes on non-human decision makers, is a reasonable measure. This seems controversial, which I can't really understand. I'm in hardware engineering, so maybe I have a different perspective, but goals based on outcome are the only ones that actually matter, especially if nobody has done it before.
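As a rough illustration of that "system to intelligently dispatch problems to them" idea, here's a minimal sketch; the experts and domains are hypothetical placeholders, not a proposal for how such a system would actually be built:

  # Minimal sketch of a dispatcher routing problems to narrow experts.
  # The experts and domains are made-up placeholders, not a real design.
  from typing import Callable, Dict

  experts: Dict[str, Callable[[str], str]] = {
      "arithmetic": lambda task: f"calculator-style expert handles: {task}",
      "layout":     lambda task: f"place-and-route expert handles: {task}",
      "prose":      lambda task: f"language-model expert handles: {task}",
  }

  def dispatch(domain: str, task: str) -> str:
      # Decide which specific expert a problem belongs to,
      # or escalate when none fits.
      expert = experts.get(domain)
      if expert is None:
          return f"no expert for '{domain}'; escalate to a human"
      return expert(task)

  print(dispatch("arithmetic", "sum this quarter's invoices"))
  print(dispatch("plumbing", "fix the leak under the sink"))

Nothing about this is meant as a real design; it just shows the shape of "specific experts plus a dispatcher".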
> To me, a general intelligence is one that is not just specific
Which is why a calculator is a great example.
> A calculator is a very specific, very inflexible, type of intelligence, so it's not, by definition, general
Depends what kind of calculator and what you mean. I think they are far more flexible than you give them credit for.
> I'm in hardware engineering, so maybe I have a different perspective, but goals based on outcome are the only ones that actually matter, especially if nobody has done it before.
Well, if we're going to talk about theoretical things, why dismiss the people who do theory? There are a lot of misunderstandings when it comes to the people who create the foundations everyone is standing on.
> And, I'm not talking about the indirect applications of a calculator or a specific intelligence.
This was an attempt to prevent this exact chain of response.
A calculator can only be used indirectly to solve a practical problem. A more general intelligence is required to know a calculator is needed, and how to break down the problem in a way that a calculator can be used effectively.
For example, you can't solve any real world problem with a calculator, beyond holding some papers down, or maybe keeping a door open. But, an engineer (or other general intelligence) with a calculator can solve real world problems with it. Tools vs tool users. The user is the general bit, not the specific tool that's useless on its own!
I think we've reached the limits of communication. Cheers!
I can definitely write programs to solve real world problems. I think you're just so set on your answer that you're not recognizing the flexibility that exists. Your argument isn't so different from all the ones I hear that argue that science is useless and that it's engineers who do everything. It has the exact same form and smell albeit with different words. But as generalists we understand abstraction, right?
Yes: you, a general intelligence, write a specific expert for a specific problem.
> If you want to argue that we don't need the concept of AGI, because something like specific experts could be enough to drastically change the economy, then sure! That would be true. But I think that's a slightly different, complementary, conversation. Even then, say we have all these experts, then a system to intelligently dispatch problems to them... maybe that's a specific implementation of AGI that would work.
This was meant to prevent exactly this comment chain.
I think we'll have to give up at this point, with whatever smells you may be attempting to communicate with me. Cheers!
I'm pretty sure they were asking for a pinned date for definitions of "economically valuable" and "most (of total economic value)", specifically because, as previous comments noted, the definition and quantity of "economic value" vary over time. If AI hype is to be believed, and if we assume AGI has a slow takeoff, the economy will look very different in 2030, significantly shifting the goalposts for AGI relative to the same definition as of 2026.
Well, if humans can do economically valuable mental work that the AI can't, then it's not AGI, don't you think? An AGI could learn that new job too and replace the human, so as long as we still have economically valuable mental work that only humans can do, we haven't reached AGI.
This is a strange binary I don't understand. There are humans that can't do the work of some humans. Intelligence is, clearly, a spectrum. I don't see why a general intelligence would need to have capabilities far beyond a human, when just replacing somewhat lacking humans could upend large portions of the economy. Again, "it's not AGI" arguments will eventually require that some humans aren't considered intelligent, which is the point in time that we'll all be able to agree "ok, this is AGI".
As catlifeonmars noted, what's valuable changes over time.
But beyond that, part of the nature of that change over time is that things tend to be valuable because they're scarce.
So the definition from upthread becomes roughly "highly autonomous systems that outperform humans at [useful things where the ability to do those things is scarce]", or alternatively "highly autonomous systems that outperform humans at [useful things that can't be automated]".
Which only makes sense if the reflexive part that I'm substituting in brackets (reflexive because it depends on the thing being observed) is pinned to a specific as-of date. If it floats, i.e. references whatever date the definition is being evaluated on, the definition is nonsensical.
I'd argue it's so vague it's already nonsensical. Can we not declare Google (search) AGI? It sure does a hell of a lot of stuff better than any human I know. Same with the calculator in my desk drawer. Even my broom does a far better job sweeping than I do; my hands just aren't made for sweeping.
But to extend your point, I think we really need to be explicit about the assumptions being made. Everyone loves to say intelligence is easy to define, but if it were, we'd have a definition. Either "you" have figured it out and it's so simple that "we" are all too dumb and it needs better explaining for our poor simple minds, or there are a lot of details that make it hard to pin down, and that's why there's no definition yet. Kinda like how there's no formal definition of life.
I think you're conflating "knowledge" with "intelligence". And, "agency" seems to be a missing concept, which is the only way for something intelligent to apply its knowledge to achieve something practical, on its own.
Google search can't achieve anything practical, because it has no agency. It has no agency partly because it doesn't have the required intelligence to do anything on its own, other than display results for something else, that does have agency, to use.
The applicable definitions, from the dictionary:
Knowledge: facts, information, and skills acquired through experience or education; the theoretical or practical understanding of a subject.
Intelligence: the ability to acquire and apply knowledge and skills.
Agency: the ability to make decisions and act independently.
Sorry, what do current LLM architectures have to do with this? It should be extremely clear to you that current LLMs don't fit this definition. If they did, we wouldn't be having this conversation!
Nobody is talking about current LLMs. And we don't know if gradient descent will be involved, since the systems don't exist yet. Maybe it will all be some runtime gradient descent. If there's an optimization problem involved in any part of the process, and numbers are involved, there's a near guarantee it will include gradient descent.
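For what it's worth, here's the textbook loop that "gradient descent" refers to, as a toy one-dimensional sketch; it's only there to pin down the term and says nothing about how any future system would use it:

  # Toy sketch of plain gradient descent on a one-dimensional loss,
  # just to make the term concrete; not tied to LLMs or any particular system.
  def gradient_descent(grad, x0: float, lr: float = 0.1, steps: int = 100) -> float:
      x = x0
      for _ in range(steps):
          x -= lr * grad(x)   # step against the gradient of the loss
      return x

  # Example: minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
  print(gradient_descent(lambda x: 2 * (x - 3), x0=0.0))  # converges near 3.0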
"Highly autonomous systems" and "most economically valuable work" aren't precise enough to be useful.
"Highly" implies that there is a continuum, so where does directed end and autonomy begin?
"Most economically valuable work"... each word in that has wiggle room, not to mention that any reasonable interpretation of it is a shifting goalpost as the work done by humans over history has shifted a great deal.
The point is that none of this is defined in a way so that people can agree that something has AGI/ASI/etc. or not. If people can't agree then there's no point in talking about it.
EDIT: interestingly, the OpenAI definition of AGI specifically means that a subset of humans do not have AGI.