Why would we expect them to become super human? I would expect AIs to be able to use the latest technology and weapons, and also to develop new and better technology and weapons, and to exclude other intelligences from using said technology and weapons, but note that this is already true for humans.
I don’t really buy that. Humans also improve their own abilities using technology, and I don’t see any reason to expect that technological advancements made by AIs won’t be available to humans as well. Yes, an AI group that is hostile to a human group may want to develop technology and keep it to themselves, but again, that’s already the case with different human groups (and tends to apply most prominently to our most destructive technologies).
There's an important difference: AI will be constructed using our engineering methods, not nature's.
The way humans improve their mental abilities is quite inefficient. You can boil it down to three main methods:
- Altering the chemical balance of our bodies. Exercise, diet, drugs. In its precision and scope of effect, it's not that different from beating a machine with a hammer until it improves. There's only so much you can do this way, because the brain is a highly optimized system, and part of another highly optimized system, the body. Change any parameter at random, and you're likely to make things worse[0].
- Learning. I.e. dumping in information and doing repetitive rain dances until the brain picks up on the pattern we're trying to internalize.
- Outsourcing. Building external tools for thought. This is speaking, writing, language, notations, abstractions; this is TODO lists and schedules and spreadsheets; it's also listening and reading and society - because our biggest "second brain" is other people. That last trick is what let us dominate this planet.
Now take an AI constructed in silico. If it reaches close to human-level cognition, it can already do learning and outsourcing (sans society, initially). But what it can also do is:
- Precision hardware improvement. If it's running on anything that came out of a human factory, that hardware can be redesigned and improved directly, at the component level. Unlike with the human brain, there are people (or later, AIs) who understand how the substrate works. The factory itself can be improved too, to produce even better hardware.
- Precision software improvement. Even if the AI was made accidentally, from some completely opaque ML model, by definition we know much more about even the blackest of our algorithmic boxes than we know about our brains. Core algorithms can be optimized and improved. More software constructs can be added at the I/O boundaries.
Imagine how much more effective you'd be if, on top of all that you are and do, you could put your TODO list, calendar and scientific calculator in your head, as well as store every book you've read verbatim, in a searchable format. Humans can't do that; we have to keep these things external and RPC to them through our eyes and hands. An IQ 100 human-level AI could easily run these things inside itself, or on a co-processor with a direct interface to itself - the equivalent of gaining new senses. Going by human standards, this could easily boost its apparent IQ to 200.
And then it could do it again, and again, and again, compounding its capabilities at every step. That's the "sudden takeoff" people are worried about.
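The compounding step can be made concrete with a toy model; the per-cycle gain factor and cycle count below are purely illustrative assumptions, not predictions:

```python
# Toy model of compounding self-improvement: each cycle, the AI uses its
# current capability to improve itself by a fixed factor. The factor 1.5
# and the 10 cycles are arbitrary, chosen only to show the dynamic.
capability = 1.0
gain_per_cycle = 1.5
for cycle in range(10):
    capability *= gain_per_cycle
print(round(capability, 1))  # prints 57.7
```

Even a modest constant per-cycle gain yields exponential growth, which is the shape of the worry: each improvement makes the next one easier.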
> I don’t see any reason to expect that technological advancements made by AIs won’t be available to humans as well
They may be available, but they won't be as useful to us. We'll always be second-class citizens (until we figure out BCI), because the AI will be able to plug the technology directly into itself, while we'll have to interface with it through our senses and bodies. It's the difference between a process running a subprocess on the same machine with local IPC, vs. running it over a network on a machine on the other side of the planet, via a very low-bandwidth API. Performance differs by many orders of magnitude.
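A back-of-the-envelope comparison shows the scale of that gap; the latencies below are rough order-of-magnitude assumptions, not benchmarks:

```python
# Assumed latencies: an in-process call over local IPC vs. a network round
# trip to a machine on the other side of the planet. Both figures are
# illustrative orders of magnitude, not measured values.
local_call_ns = 100                 # ~100 ns for a local call
cross_planet_rtt_ns = 300_000_000   # ~300 ms network round trip
ratio = cross_planet_rtt_ns / local_call_ns
print(f"{ratio:,.0f}x slower per exchange")  # prints 3,000,000x slower per exchange
```

And that's only latency per exchange; bandwidth through human senses and hands is far narrower still than a direct interface.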
You seem to be assuming that we will create general AI significantly before we (or the AI itself) invent general-purpose human brain-computer interfaces. While that's possible, it seems unlikely to me.
I would say we have been dealing with goal alignment problems with humans for most of human history.