
I am happy to see discussion of this here, though I'm not in complete agreement with the article.

One interesting quote: "The question of 'Alignment' then is an Absurd question - as humanity cannot align internally due to our structural impediments"

I agree in the sense that I think "alignment" is a poor conceptualization of the problem of avoiding destructive AGIs. But I don't think the conclusion is that you don't have to worry about destructive AGIs. I mean, if an AGI could be constructed by present-day "training" - feeding it large swaths of data so it had the average qualities of a human but its behavior was unpredictable - then its chance of acting with awful malice would be quite high. Humans often act that way; social/societal pressures are the primary limits on it, and an entity of this sort might effectively not have those limits.



> But I don't think the conclusion here is you don't have to worry about destructive AGIs.

In fact, my argument is precisely that you should worry, because there is no possibility of anything but destructive or nerfed AGIs if we continue on the current data-generation path.



