
In the ideal case, an AGI makes solvable problems that would otherwise be impossible, or take a very long time, for us to solve on our own. There are a lot of problems left to solve, or at least a lot that people would like solved, and who knows what new ones will come. If AGI lets them be solved much sooner, then there's a strong motivation to build it.

Terminator is of course fiction, but an AGI being more agent-y than tool-y suggests we ought to be very careful to design it so that its interests align with (even if they don't perfectly match) our own. There are lots of reasons to be pessimistic about this at the moment: outright control is, as you say, laughably unlikely, and progress on the formal alignment problems that e.g. MIRI has been working on for many years has been slow relative to the recent, comparatively fast progress in non-general AI capabilities that may make AGI arrive sooner.



Even unaligned tool AGIs past a certain level become very, very dangerous: genies that give you what you asked for, not what you wanted.



