Elon called it when this was announced. Sam never had the money. Interesting to contrast those two characters.
I think Sam is somewhat sociopathic: a smooth salesman, not very technical, who will say whatever needs to be said to get what he wants.
Elon is on the spectrum, has bad social judgement, and is immature in a lot of ways, but he's very direct and means what he says when he says it, even if it's often unrealistic or misguided. He's extremely technical, and honestly I think he has better intentions; he just gets in his own way a lot.
Dario is an odd duck, but he seems stable and to have good intentions, and he's very technical (I think?).
Hassabis, wow, what a normal, likable guy, and extremely technical.
Zuckerberg seems to finally be entering the chat in terms of big AI models.
I think Altman scares me the most in terms of having control of this tech. Hassabis seems like the best one to control it, if I had to pick one.
I'm talking about intrinsic motivation, regardless of whether you agree or disagree with the politics. Altman wants power for the sake of power, and he's not technical. I think Musk's power-seeking is in service of technical goals.
They're also still posting on LinkedIn, Instagram, TikTok, Facebook, and YouTube (in addition to Bluesky and Mastodon). It's silly to suggest that anything outside of X is an echo chamber, or that one must communicate on a platform dominated by white supremacists to expose one's ideas to a diverse audience.
The New York Times has some great journalists and does important work, but it certainly has an editorial bias/agenda on most topics, even if it's often subtle. People who claim it's neutral on most topics just aren't seeing the bias, because it aligns with their own slant.
Just saying it's interesting to see which folks notice it and which don't on Hacker News.
Ridiculous take. Israel wants a secular Iran, not a failed state. Most Iranians don’t hate Israel. They hate the Islamic regime. But westerners just looove to support all Iranian proxies these days.
Israel does want a failed state: they want to balkanise Iran, letting their ISIS head-choppers, the Uyghur terrorists, etc. at it. See Syria and how that's going.
And every time it works, they still don't acknowledge it. Would he have blown up bridges and power plants? Quite possibly. Would he have dropped a nuke? Obviously not.
With Anthropic able to use this model internally (since February), is this the kickoff of ramping up the flywheel of recursive self-improvement of AI?
It seems like as long as there are still humans in the loop at most steps, exponential recursion isn’t possible.