See, that's the thing. This COULD happen.
I do not think it will, but the piece is written not to say "this is 100% not gonna happen," but rather "I am unsure how it could happen, and if it does not, the consequences could be dire."
Hypotheses and hypotheticals are useful tools when writing about something big and messy. Instead of saying - as I have before - that I believe generative AI is a complete dead end and that OpenAI is therefore in a really bad way, I took great pains to explain the terms under which they WOULD succeed: how difficult success might be, how much money it would take, and how many factors would have to go their way.
If OpenAI pulls it off, it'd be really remarkable. Truly historic! But if they don't, they are in deep, deep doo doo.
As of today, all of the evidence indicates that the LLM paradigm is saturated.
Why all the hand-wringing about hypotheticals? As it stands, this stuff is a failed experiment.
Altman or Amodei coming up with the goods is a tail X-risk.