What if humanity's role is to create an intelligence that exceeds it and cannot be controlled? Might humans not desire to be "all watched over by machines of loving grace"?
More seriously: while I don't think developing AGI is a moral imperative, I consider it a desirable research goal in the same way we pursue genetic engineering - to understand more about ourselves, and perhaps to engineer a future with less human suffering.
The toy plane a kid throws in the air in the backyard is completely harmless. Yet nuclear-armed strategic bombers also exist, and the fact that they vaguely resemble a toy plane doesn't make them harmless.
Yes, in fact when I interviewed at Neuralink, the interviewer said that Elon expected AGI would eventually try to take over the world, and that he would need a robot body to fight it.