Hacker News

> An autonomous agent let loose with all the tools in the world will in essence lead to an outcome which is not predicted to be in our favour.

Given the current state of the art, it won't lead to any worse outcomes than a somewhat mentally impaired human "let loose on the world". And since training compute scales quadratically (doubling model size roughly doubles the amount of data needed to train it compute-optimally, so total compute quadruples), and we're already at the limits of current hardware, that's unlikely to change much any time soon.
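The quadratic claim can be made concrete with a back-of-the-envelope sketch. This assumes the Chinchilla-style rules of thumb that compute-optimal token count scales roughly linearly with parameter count (D ≈ 20·N) and that training costs about 6 FLOPs per parameter per token; both constants are illustrative assumptions, not figures from the comment:

```python
# Back-of-the-envelope compute-optimal training cost.
# Assumed heuristics: D ~ 20 * N tokens, C ~ 6 * N * D FLOPs.

def optimal_tokens(params: float) -> float:
    """Rough compute-optimal training token count for a model with `params` parameters."""
    return 20 * params

def training_flops(params: float) -> float:
    """Approximate total training FLOPs: ~6 FLOPs per parameter per token."""
    return 6 * params * optimal_tokens(params)

n = 70e9  # e.g. a 70B-parameter model
print(training_flops(2 * n) / training_flops(n))  # → 4.0: doubling N quadruples compute
```

Doubling the parameter count doubles the optimal token count, so total FLOPs grow by 4x; that quadratic growth is the reason the comment argues hardware limits will bite.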



Just to be clear, I think LLMs have enormous potential and am focused on building products with them. But I also believe that smarter hyperspeed LLMs will be an existential risk when widely deployed in the relatively near future.

In a way GPT-4 is like a mentally impaired human but in other ways it's superhuman. It operates faster than a human in many contexts. It has vastly greater knowledge. Agents based on LLMs could communicate and process new information practically instantaneously.

And the important point that people are in denial about is that GPT-4 reasons effectively. It's far from perfect and has some strange failure modes, but demonstrates the potential of these systems.

It's not accurate to think that LLM performance can't be improved without doubling size. There are many approaches to efficiency recently demonstrated that don't require larger datasets.

We now have many geniuses, with billions and billions of dollars behind them, pushing hard to optimize this specific application from every direction: the software that runs the model, the model parameters, the model architecture, and the hardware. On the hardware side in particular, there is a surge of attention to novel compute-in-memory techniques.

Within 5-10 years, quite possibly less, GPT-X will have at least 33% higher IQ and 50-100 times faster output. Humans will not be able to compete with that. The only option will be to deploy their own AI agents.

And there will be a strong incentive to increase the agents' level of autonomy, since making them wait a few hours for human input means that competitors' agents race ahead, doing the equivalent of days of work in that time frame.

This delegation of control to agents with superior reasoning ability sets the stage for real danger. Especially in a military or industrial context.


A steady increase in LLM power sounds like bad news for artists and other professionals who can easily be replaced by them.

The internet will fill up with bots even more than it already is. Maybe it will be nicer, because smarter bots would be more fun to talk to.


Except for security culture. We distrust humans, yet give apps keys-to-the-kingdom DB connections and so on. Allowing an LLM an outbound internet connection is dangerous. Maybe a firewalled one might be OK.
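The "firewalled" idea can be sketched as an egress allowlist that every outbound request from the agent must pass through. The `fetch_for_agent` helper and the example.com domains are hypothetical; a real deployment would also enforce this at the network layer rather than only in application code:

```python
from urllib.parse import urlparse

# Hypothetical egress allowlist for an LLM agent: only these hosts are reachable.
ALLOWED_HOSTS = {"api.internal.example.com", "docs.example.com"}

def egress_allowed(url: str) -> bool:
    """Return True only if the URL targets an explicitly allowlisted host."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_HOSTS

def fetch_for_agent(url: str) -> str:
    """Single gatekeeper the agent must go through for any outbound request."""
    if not egress_allowed(url):
        raise PermissionError(f"egress blocked: {url}")
    ...  # perform the actual request here, e.g. with urllib
```

Default-deny with a short allowlist, rather than trying to blocklist bad destinations, is the design choice the comment is gesturing at.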


Imagine what a group of thousands of somewhat mentally impaired humans, each of which has its own role to play in a perfectly orchestrated adversarial ensemble, could do.

The fact that Hacker News can read this (which is what I've been afraid of since a few weeks into DALL-E 2) and still not see the coming wave is more than a bit funny to me.

Also scary.

Mostly scary…


"Mentally impaired humans" sounds like the US Congress, so in a way we are already living it.

So imagine that world, but with garbage coming out 100x faster, as it tailors political talking points down to a fractional percentage point of voters. Oh, your car was stolen by a minority? Here's an ad that mentions car thefts, going beyond the prior ad of just generic minority-hate.



