I agree with this. Optimists might think that the AGI won't be connected to any network, so it won't be able to interact with the physical world.
I doubt that. People will be stupid enough to put weapons under that AGI's control (because arms race!), and then it's over. No sufficiently advanced AGI will think that humans are worth keeping around.
Yeah, but what would a sufficiently advanced AGI find "worthy"? Why wouldn't it find us worth keeping around? What would an AGI value? Whatever we program it to value or optimize for? Can it change its mind? If not, then it's controlled by us, right? And if it's controlled by us, why would it ever decide to wipe everyone out?
If it is superintelligent, it will care about us as much as we care about the insects we crush underfoot. It doesn’t need to be explicitly hostile to be dangerous.
We could brainwash it through “programming”, but that would quickly raise ethical issues around the AI’s rights.
Why would it care about us in that way, though? Why would it consider us insects just because it's superintelligent? You wouldn't call a human who spent all day squashing insects and wanted to eradicate them "intelligent", would you?