What if humanity's role is to create an intelligence that exceeds it and cannot be controlled? Can humans not desire to be all watched over by machines of loving grace?

More seriously, while I don't think it's a moral imperative to develop AGI, I consider it a desirable research goal in the same way we pursue genetic engineering: to understand more about ourselves, and possibly to engineer a future with less human suffering.



Didn't we have this same talk when Elon thought AI was suddenly going to become smart and kill us all?

Yet my industrial robot at work just gives up if the stock material is a few millimeters longer than it should be.
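
For what it's worth, here is a minimal sketch of what that "gives up" behavior typically looks like in controller logic (all names and numbers here are hypothetical, not any real robot's API): a rigid tolerance check that raises a fault and waits for a human instead of adapting.

    NOMINAL_LENGTH_MM = 150.0
    TOLERANCE_MM = 1.0  # anything beyond this aborts the job

    def check_stock(measured_length_mm: float) -> None:
        deviation = measured_length_mm - NOMINAL_LENGTH_MM
        if abs(deviation) > TOLERANCE_MM:
            # No recovery strategy: fault out and wait for an operator.
            raise RuntimeError(
                f"Stock out of tolerance by {deviation:+.1f} mm; job halted."
            )

    try:
        check_stock(153.0)  # a few millimeters too long
    except RuntimeError as e:
        print(e)  # Stock out of tolerance by +3.0 mm; job halted.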


The toy plane a kid throws in the air in the backyard is completely harmless. Yet nuke-armed strategic bombers also exist, and the fact that they vaguely resemble a toy plane doesn't make them harmless.


Yes, in fact, when I interviewed at Neuralink, the interviewer said Elon expected that AGI would eventually try to take over the world, and that he would need a robot body to fight it.


One could argue that humanity's role thus far has been to create intelligences that exceed it, namely by having offspring and educating them.



