It's clearly impossible to learn how to translate Linear A into modern English using only content written in pure Japanese that never references either.
Yet also, none of the algorithms before Transformers were able to first ingest the web, then answer a random natural language question in any domain — closest was Google etc. matching on indexed keywords.
> how are AIs going to evolve past human level unless they make their own data?
Who says they can't make their own data?
Both a priori (by development of "new" mathematical and logical tautological deductions), and a posteriori by devising, and observing the results of, various experiments.
I see this brought up consistently on the topic of AI take-off/X-risk.
How does an AI language model devise an experiment and observe the results? The language model is only trained on what’s already known; I’m extremely skeptical that this language-model technique can actually reason out a genuinely novel hypothesis.
An LLM is a series of weights sitting in the RAM of a GPU cluster; it’s really just a fancy prediction function. It doesn’t have the sort of biological imperatives (a result of being completely independent beings) or entropy that drive living systems.
Moreover, if we consider how it works for humans, people have to _think_ about problems. Do we even have a model, or even an idea, of what “thinking” is? Meanwhile, science is a looping process that mostly requires a physical element (testing/verification). So unless we make some radical breakthroughs in general-purpose robotics, and also overcome the thinking problem, I don’t see how AI can do some sort of tech breakout/runaway.
Starting with the end so we're on the same page about framing the situation:
> I don’t see how AI can do some sort of tech breakout/runaway.
I'm expecting (in the mode, but with a wide and shallow distribution) a roughly 10x increase in GDP growth, from increased automation etc., not a singularity/foom.
I think the main danger is bugs and misuse (both malicious and short-sighted).
-
> How does an AI language model devise an experiment and observe the results?
Same way as Helen Keller.
Same way scientists with normal senses do for data outside human sense organs, be that the LHC or nm/s^2 acceleration of binary stars or gravity waves (or the confusingly similarly named but very different gravitational waves).
> The language model is only trained on what’s already known; I’m extremely skeptical that this language-model technique can actually reason out a genuinely novel hypothesis.
Were you, or any other human, trained on things unknown?
If so, how?
> An LLM is a series of weights sitting in the RAM of a GPU cluster; it’s really just a fancy prediction function. It doesn’t have the sort of biological imperatives (a result of being completely independent beings) or entropy that drive living systems.
Why do you believe that biological imperatives are in any way important?
I can't see how any of a desire to eat, shag, fight, run away, or freeze up… helps with either the scientific method or pure maths.
Even the "special sauce" that humans have over other animals didn't lead to any of us doing the scientific method until very recently, and most of us still don't.
> Do we even have a model or even an idea about what “thinking” is?
AFAIK, only in terms of output, not qualia or anything like that.
Does it matter if the thing a submarine does is swimming, if it gets to the destination? LLMs, for all their mistakes and their… utterly inhuman minds and transhuman training experience… can do many things which would've been considered "implausible" even in a sci-fi setting a decade ago.
> So unless we make some radical breakthroughs in general purpose robotics
I don't think it needs to be general, as labs are increasingly automated even without general robotics.
> Do we even have a model or even an idea about what “thinking” is
At the least, it is a computable function (as we don’t know of any physical system that would be more general than that, though some religions might disagree). Which already puts human brains ahead of LLM systems: we are Turing-complete, while LLMs are not, at least in their naive application (though their output can be fed back into subsequent invocations, and in that way the combined system can be).
I googled whether or not universal function approximators, which neural nets are considered to be, are also considered Turing complete. The general consensus seems to be not quite, since they are continuous and can’t do discrete operations in the same way.
But also, that isn’t quite the whole story, since they can be arbitrarily precise in their approximation. Here[0] is a white paper addressing this issue, which concludes that attention networks are Turing complete.
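The "fed back into subsequent invocations" argument above can be sketched concretely. Below, a fixed, non-looping transition function (a toy stand-in for a single model invocation — this is an illustrative analogy, not a real LLM call) is made to do unbounded computation purely by an outer loop re-invoking it on its own output. The specific machine (a binary incrementer) is my own example, not from the source.

```python
# Sketch of the "outer loop" argument: a fixed step function with no
# internal loop becomes capable of unbounded computation when its output
# is fed back in as the next input. The step function here is a toy
# Turing-machine transition table, standing in for one model invocation.

def step(state, tape, head):
    """One 'invocation': maps (state, tape, head) to the next triple.
    This particular machine increments a binary number on the tape."""
    symbol = tape.get(head, '0')
    if state == 'carry':
        if symbol == '1':
            tape[head] = '0'          # 1 + carry -> 0, keep carrying left
            return 'carry', tape, head - 1
        tape[head] = '1'              # 0 + carry -> 1, done
        return 'halt', tape, head
    return 'halt', tape, head

def run(tape_str):
    """The outer loop: keep re-invoking the fixed function on its own
    output until it halts. The loop, not the function, supplies the
    unbounded computation."""
    tape = {i: c for i, c in enumerate(tape_str)}
    state, head = 'carry', len(tape_str) - 1
    while state != 'halt':
        state, tape, head = step(state, tape, head)
    lo, hi = min(tape), max(tape)
    return ''.join(tape.get(i, '0') for i in range(lo, hi + 1))
```

For instance, `run('1011')` yields `'1100'` (11 + 1 = 12 in binary): each `step` call is stateless and bounded, and the looping harness around it is what the naive single-invocation view of an LLM lacks.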
Is it provably not Turing complete? That property pops up everywhere even when not intended, like Magic: The Gathering card interactions.
Technically you may not want to call it Turing complete given the limited context window, but I'd say that's like insisting a Commodore 64 isn't Turing complete for the same reason.
Likewise the default settings may be a bit too random to be a Turing machine, but that criticism would also apply to a human.
ChatGPT does have a loop; that's why it produces more than one token.
In this context, it's relevant that requiring the possibility of running "forever" would also exclude the humans to which it is being compared: even if we spend all day thinking in words at 160 wpm and ~0.75 words per token, we fall asleep around every 200k tokens, and some models (not from OpenAI) exceed that in their input windows.
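The arithmetic behind that 200k figure checks out, assuming a roughly 16-hour waking day (the wake-cycle length is my assumption; the other numbers are from the comment above):

```python
# Back-of-the-envelope check of the numbers above: 160 words per minute
# of inner speech, at ~0.75 words per token, over a ~16-hour waking day.

words_per_minute = 160
words_per_token = 0.75
waking_hours = 16                      # assumption: one wake cycle

tokens_per_minute = words_per_minute / words_per_token      # ~213
tokens_per_wake_cycle = tokens_per_minute * 60 * waking_hours

print(round(tokens_per_minute))        # 213
print(round(tokens_per_wake_cycle))    # 204800
```

So a full waking day of verbal thought comes to roughly 205k tokens, consistent with the "around every 200k tokens" claim.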
Yet I can solve many sudoku problems in a single wake cycle.
Also, its output is text, and it can’t revise an earlier part of that output, only append to it. When “thinking” about what to say next, it can’t loop back over what it has already written, only decide whether to append some more text. Its looping is strictly within a “static context”.
It's not just a series of weights. It is an unchanging series of weights. This isn't necessarily artificial intelligence. It is the intelligence of the dead.
> Yet also, none of the algorithms before Transformers were able to first ingest the web, then answer a random natural language question in any domain — closest was Google etc. matching on indexed keywords.
Wrong: recurrent models were able to do this, just not as well.
It's both.
> It's clearly impossible to learn how to translate Linear A into modern English using only content written in pure Japanese that never references either.
> Yet also, none of the algorithms before Transformers were able to first ingest the web, then answer a random natural language question in any domain — closest was Google etc. matching on indexed keywords.
>> how are AIs going to evolve past human level unless they make their own data?
> Who says they can't make their own data?
> Both a priori (by development of "new" mathematical and logical tautological deductions), and a posteriori by devising, and observing the results of, various experiments.
Same as us, really.