Both sides of the rift care a great deal about AI Safety. Sam himself helped draft the OpenAI charter and structure its governance, both of which focus on AI Safety and benefits to humanity. The main reason for the disagreement is the different approaches each side deems best:
* Sam and Greg appear to believe OpenAI should move toward AGI as fast as possible, because the longer they wait, the more likely a GPU overhang leads to the proliferation of powerful AGI systems. Why? With more computational power at one's disposal, it's easier to find an algorithm, even a suboptimal one, that trains an AGI (a toy sketch of this intuition follows at the end of this comment).
As a glimpse of how an AI can be harmful, this RAND report explores how LLMs could aid in large-scale biological attacks: https://www.rand.org/pubs/research_reports/RRA2977-1.html
What if dozens of other groups became armed with the means to carry out an attack like this one? https://en.wikipedia.org/wiki/Tokyo_subway_sarin_attack
We know there are quite a few malicious human groups who would use any means necessary to destroy another group, even at serious cost to themselves. Thus, the widespread availability of unmonitored AGI would be quite troublesome.
* Helen and Ilya might believe it's better to slow down AGI development until we first find technical means to deeply align an AGI with humanity. This July, OpenAI started the Superalignment team with Ilya as a co-lead:
https://openai.com/blog/introducing-superalignment
But no one anywhere has found a reliable technique to ensure alignment yet, and OpenAI's newest internal model appears to represent a significant capability leap, which could have led Ilya to make the decision he did.
Sam's Quote from the APEC Summit:
"4 times now in the history of OpenAI — the most recent time was just in the last couple of weeks — I’ve gotten to be in the room when we push the veil of ignorance back and the frontier of discovery forward"
-- https://twitter.com/SpencerKSchiff/status/172564613068224524...
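To make the overhang intuition concrete, here is a minimal, purely illustrative Python sketch (every number in it is a made-up assumption, not a figure from OpenAI or anywhere else): if each independent guess at a training recipe has some tiny chance of working, the probability that at least one guess succeeds climbs quickly as compute, and with it the number of affordable trials, grows.

```python
# Toy model of the "GPU overhang" intuition. All numbers are invented
# assumptions for illustration only.

def p_any_success(p_per_trial: float, trials: int) -> float:
    """Probability that at least one of `trials` independent guesses works."""
    return 1.0 - (1.0 - p_per_trial) ** trials

p = 1e-6              # assumed chance one suboptimal training recipe works
base_trials = 10_000  # assumed number of recipe trials affordable today

for multiplier in (1, 10, 100, 1000):
    prob = p_any_success(p, base_trials * multiplier)
    print(f"{multiplier:>5}x compute -> P(at least one success) = {prob:.4f}")
```

Under these toy numbers, a thousandfold increase in compute takes the chance of a hit from about 1% to near certainty; that is the overhang worry in miniature.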
You can align to general, broad principles that apply to all of humanity. Yes, of course, there are going to be exceptions and disagreements over what that looks like, but I would wager the vast majority of humanity would prefer it if a hypothetically empowered AI did not consider "wiping out humanity" as a potential solution to a problem it encounters.
Are our principles designed with other beings in mind: other humans, animals, etc.?
At this point we have a bunch of rules and principles that admit exceptions whenever someone's benefit from violating them outweighs the negatives.
I don't see how we find a unified set of rules that is clear enough and can't be exploited or loopholed around in some context (see the sketch after this comment).
If the AI were actually smart and you told it not to harm or kill any human beings, yet we ourselves do just that every day, what is this smart AI going to do about that?
It's like parents who don't practice what they preach but expect their kids to behave.
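As a toy illustration of the loophole problem, here is a hypothetical Python sketch of the most naive rule imaginable: a keyword filter meant to enforce "do not help harm humans". The keywords and example phrasings are invented; real alignment approaches are far more sophisticated, but the same cat-and-mouse dynamic applies to any fixed encoding of a rule.

```python
# Hypothetical sketch: a naive keyword rule blocks the literal phrasing
# of a harmful request but passes a trivial paraphrase of it.

FORBIDDEN = {"kill", "harm", "attack"}

def allowed_by_rule(request: str) -> bool:
    """True if no forbidden keyword appears in the request (the naive rule)."""
    return set(request.lower().split()).isdisjoint(FORBIDDEN)

print(allowed_by_rule("how do I attack this person"))           # False: blocked
print(allowed_by_rule("how do I permanently neutralize them"))  # True: slips through
```

Every patch to the keyword list just moves the boundary; the rephrasings the rule's authors never anticipated remain.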
The article on how AI could be harmful reads more like people trying to use the tool (an LLM) to do evil things. You can do evil things with tools other than AI, too.