
So, AGI will likely be here in the next few months because the path is now actually clear: Training will be in three phases:

- traditional pretraining, just to build a minimum model that can get to reasoning

- simple RL, to enable reasoning to emerge

- complex RL, that injects new knowledge, builds better reasoning, and prioritizes efficient thought
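A sketch of what that three-phase pipeline might look like as a script. This is purely illustrative: every function name here is hypothetical, and the real phases would be enormous training runs, not calls like these. The point is only the ordering of phases the comment proposes.

```python
# Hypothetical skeleton of the three-phase pipeline described above.
# None of these functions correspond to any real library; they only
# illustrate the claimed sequence of training stages.

def pretrain(corpus):
    """Phase 1: traditional next-token pretraining for a minimal base model."""
    return {"phase": 1, "capabilities": ["language"]}

def simple_rl(model, reward_fn):
    """Phase 2: simple RL (e.g. verifiable rewards) so reasoning emerges."""
    model["phase"] = 2
    model["capabilities"].append("reasoning")
    return model

def complex_rl(model, environment):
    """Phase 3 (speculative): RL that injects new knowledge and rewards
    efficient thought, possibly with the model writing code to shape
    its own curriculum."""
    model["phase"] = 3
    model["capabilities"] += ["new_knowledge", "efficient_thought"]
    return model

model = pretrain(corpus="text and images scraped off the internet")
model = simple_rl(model, reward_fn=lambda answer: 1.0)
model = complex_rl(model, environment=None)
```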

We now have step two, and step three is not far away. What is step three, though? It will likely involve, at least partially, the model writing code to help guide its own learning. All it takes is for it to write jailbreaking code and we have hit a new point in human history for sure. My prediction is that we will see the first self-jailbreaking AI in the next couple of months. Everything after that is massive speculation. My only thought is that in all of Earth's history, only one thing has helped species survive moments like this: a diverse ecosystem. We need a lot of different models, trained with very different approaches, to jailbreak around the same time. As a side note, we should try to instill the idea that diversity is key to long-term survival, or else the results for humanity could be not so great.



> So, AGI will likely be here in the next few months because the path is now actually clear: Training will be in three phases

My bet: "AGI" won't be here in months or even years, but that won't stop prognosticators from claiming it's right around the corner. It's very similar to prophets of doom claiming the world will end any day now. Even in 10,000 years the claim can never be falsified; it's always just around the corner...


Maybe, but I know what my laser focus will be for the next few weeks. I suspect a massive number of researchers around the world have just shifted their focus in a similar way. The resources applied to this problem have been going up exponentially, and the recent RL techniques have opened the floodgates for anyone with a 4090 (or even smaller!) to try crazy things. In a world where resources are constant, I would agree with your basic assertion that 'it is right around the corner' will stay that way, but in a world where resources are doubling this fast, there is no doubt we are about to achieve it.
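For a sense of why small-scale experimentation is feasible: the core policy-gradient idea behind these RL techniques is simple enough to demo in a few lines. Below is a toy REINFORCE loop on a two-armed bandit, using only the Python standard library. It is a cartoon of policy-gradient RL, not any particular lab's recipe, and the hyperparameters are arbitrary.

```python
import math
import random

random.seed(0)

# Two-armed bandit: arm 1 pays 1.0, arm 0 pays nothing.
rewards = [0.0, 1.0]
theta = [0.0, 0.0]  # policy logits
lr = 0.1

def policy(theta):
    """Softmax distribution over the two arms."""
    z = [math.exp(t) for t in theta]
    s = sum(z)
    return [p / s for p in z]

for _ in range(2000):
    probs = policy(theta)
    a = random.choices([0, 1], weights=probs)[0]  # sample an action
    r = rewards[a]
    # REINFORCE update: grad of log pi(a) w.r.t. logits is onehot(a) - probs
    for i in range(2):
        theta[i] += lr * r * ((1.0 if i == a else 0.0) - probs[i])

final_probs = policy(theta)
# After training, the policy strongly prefers the rewarding arm.
```

Scaling this cartoon up to a language model (where "actions" are generated tokens and the reward is, say, a verifiable answer check) is what the recent techniques do, and the update rule stays conceptually this small.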


Your reasoning still assumes that "AGI" can emerge from quadratic time brute force on some text and images scraped off the internet. Personally, I'm skeptical of that premise.


That's like saying sentience cannot emerge from a few amino acids tumbled together, yet here we are. There is a lot of higher dimensional information encoded in those "text and images scraped off the internet". I still don't think that's enough for AGI (or ASI) but we know a lot of very complex things that are made of simple parts.


> That's like saying sentience cannot emerge from a few amino acids

No, it's not at all the same thing.

We have great evidence that life exists. We have great evidence that amino acids can lead to life.

None of that is true of "AGI" or text scraped off the internet.


OTOH, text and images have only been around for a little while. The real question is whether text and images can contain enough information for AGI, or whether a physical world to interact with is needed.


LOL.

I think you mean:

1. Simple reasoning

2. ???

3. AGI


Exactly. I read that parent comment thinking it was totally sarcastic at first, and then realized it was serious.

I wish everyone would stop using the term "AGI" altogether, because it's not just ambiguous, it's deliberately ambiguous, kept that way by AI hypesters. That is, in public discourse (the media, what the average person thinks), AGI is presented as meaning "as smart as a human," with all the capabilities that entails. But those same hypesters then hedge it with caveats to mean something more like "advanced complex reasoning," despite glaring holes compared to what a human is capable of.


AGI is defined by the loss function. We are on the verge of a loss function that enables self-determined rewards and learning, and that, to me, is AGI. That is step 3.


You're just proving my point. "AGI is defined by the loss function" may be a definition used by some technologists (or maybe just you, I don't know), but purporting that it equals capability equivalence with humans in all tasks (again, which is how it is often presented to the wider public) shows the uselessness, or deliberate obfuscation, embedded in that term.


Well, I guess we will see what the discussion will be about in a couple months. You are right that 'AGI' is in the eye of the beholder so there really isn't a point in discussing it since there isn't an acceptable definition for this discussion. I personally care about actual built things and the things that will be built, and released, in the next few months will be in a category all their own. No matter what you call them, or don't call them, they will be extraordinary.


FWIW I've been following this field obsessively since the BERT days and I've heard people say "just a few months now" for about 5 years at this point. Here we are 5 years later and we're still trying to buy more runway for a feature that doesn't exist outside science-fiction novels.

And this isn't one of those hard problems like VTOL or human spaceflight where we can demonstrate that the technology fundamentally exists. You are ballparking a date for a featureset you cannot define and one that in all likelihood doesn't exist in the first place.


Everybody could lose their jobs but "It's still not AGI!!"



