
Nuclear war is not capable of wiping out humanity (although it could cause billions of casualties), whereas a hostile AGI could likely wipe out humanity to the last person.


It can? How exactly? I can never wrap my head around this argument from the AGI risk people. I simply cannot in any way follow their logic. Why is this imaginary creature maniacally homicidal? Please give me a good reason for this.

The only correlation I have noticed between intelligence and violence is generally negative (sure, this is completely unscientific). Like my friend, a pure mathematician, who won't eat animals because it deeply troubles him to harm other creatures. Haven't you noticed that smarter people tend to be pacifists? Look at animals like lions: fabulously homicidal, and not especially bright. Again, this is a crap analogy, but I'm just making the point that there are other ways of looking at the intelligence/homicide correlation.

Secondly, in all this stupid talk of homicidal AGIs, I never hear one good mechanism proposed for how the imaginary creature is going to execute all 7 billion or so of us, its plan for disposing of our bodies, etc. Oh wait, I forgot, it's smarter than us, so of course we wouldn't know; we can't imagine it with our feeble brains. Not to mention that it creates a fabulously efficient source of eternal energy for itself, lives in cyberspace in some magical bit realm, and then here comes the million dollar question: why the fuck would it spend the energy to execute all of us if it can just ignore us? Sound a lot like a god delusion to you?

I really can't stand when people trivialize nuclear war like this. "It can't kill all of us." Please just shut up. Will it crumble the fabric of our society? Leading possibly (remember, AGI people care only about possibility, not even likely realities) to war and famine? Who cares if not every single human dies; what is the consequence to our world's fabric? Let me save you the suspense: it's completely devastating. These arguments coming out of CFAR and the effective altruism movement are so fucking dumb I constantly want to scream.


The problem with AGI is not that it wants to run a human-like society sans humans (I don't think an AGI would want war in a human sense); the problem is that it may have goals (like the famous paperclip maximizer) that are incompatible with humanity's continued survival. Humans don't need to be "executed and disposed of"; they could simply be lost to habitat loss (e.g. an oxygen-rich atmosphere can be a pesky thing, why not strip it? Or temperature regulation, or whatever. It doesn't take much global terraforming to wipe out humanity inadvertently).


You’re completely wrong. There are so many corrections I would need to make that I can’t even write them here. If I could talk to you in person or on the phone then I might stand a chance of conveying it all. Is there some way I could PM you my contact info? I’m going to put my email in my bio so you can reach me there.

Yes, I am so sure and so serious that I would be willing to take this to a phone call or meetup, just to change one person's mind.


What is your background? I've been studying math, neuroscience, AI, biology, etc. for a long time. I left quantitative finance (after leaving a math program) to work at a university developing computational biology software. In finance I worked with neural networks well over a decade ago, before people in Silicon Valley had even heard of them. Convince me you're not a kook and I'd be glad to have a conversation.


A reproducing AI wants energy. We want energy. Somebody is going to win, and it's not us.



