There are different ways to use the tool. If you chat with the model, you want it to naturally pick the right tool based on vibes and context, so you don’t have to repeat yourself. If you are plugging a call to something like Claude Code into a larger, structured workflow, you want the tool selection to be deterministic.
I’ve posted this before, but here goes: we achieved AGI in either 2017 or 2022 (take your pick) with the transformer architecture and the achievement of scaled-up NLP in ChatGPT.
What is AGI? Artificial. General. Intelligence. Applying domain-independent intelligence to solve problems expressed in fully general natural language.
It’s more than a pedantic point though. What people expect from AGI is the transformative capabilities that emerge from removing the human from the ideation-creation loop. How do you do that? By systematizing the knowledge work process and providing deterministic structure to agentic processes.
Which is exactly what these developments are doing.
Are you really making the argument that human flight hasn’t been effectively achieved at this point?
I actually kind of love this comparison — it demonstrates the point that just like “human flight”, “true AGI” isn’t a single point in time, it’s a many-decade (multi-century?) process of refinement and evolution.
Scholars a millennium from now will still be debating when each of these was actually “truly” achieved.
I’ve never heard it described this way: AGI as similar to human flight. I think it’s subtle and clever - two of my favorite properties.
To me, we have both achieved and not achieved human flight. Can humans themselves fly? No. Can people fly in planes across continents? Yes.
But, does it really matter if it counts as “human flight” if we can get from point A to point B faster? You’re right - this is an argument that will last ages.
Here's the thing, I get it, and it's easy to argue for this and difficult to argue against it. BUT
It's not intelligent. It just is not. It's tremendously useful and I'd forgive someone for thinking the intelligence is real, but it's not.
Perhaps it's just a poor choice of words. What a LOT of people really mean would go along the lines more like Synthetic Intelligence.
That is, however difficult it might be to define, REAL intelligence that was made, not born.
Transformer and Diffusion models aren't intelligent, they're just very well trained statistical models. We actually (metaphorically) have a million monkeys at a million typewriters for a million years creating Shakespeare.
My efforts manipulating LLMs into doing what I want is pretty darn convincing that I'm cajoling a statistical model and not interacting with an intelligence.
A lot of people won't be convinced that there's a difference, it's hard to do when I'm saying it might not be possible to have a definition of "intelligence" that is satisfactory and testable.
“Intelligence” has technical meaning, as it must if we want to have any clarity in discussions about it. It basically boils down to being able to exploit structure in a problem or problem domain to efficiently solve problems. The “G” and AGI just means that it is unconstrained by problem domain, but the “intelligence” remains the same: problem solving.
Can ChatGPT solve problems? It is trivial to see that it can. Ask it to sort a list of numbers, or debug a piece of segfaulting code. You and I both know that it can do that, without being explicitly trained or modified to handle that problem, other than the prompt/context (which is itself natural language that can express any problem, hence the generality).
What you are sneaking into this discussion is the notion of human-equivalence. Is GPT smarter than you? Or smarter than some average human?
I don’t think the answer to this is as clear-cut. I’ve been using LLMs in my work daily for a year now, and I have seen incredible moments of brilliance as well as boneheaded failure. There are academic papers being released where AIs are credited with key insights. So they are definitely not limited to remixing their training set.
The problem with the “AI are just statistical predictors, not real intelligence” argument is what happens when you turn it around and analyze your own neurons. You will find that to the best of our models, you are also just a statistical prediction machine. Different architecture, but not fundamentally different in class from an LLM. And indeed, a lot of psychological mistakes and biases start making sense when you analyze them from the perspective of a human being like an LLM.
But again, you need to define “real intelligence” because no, it is not at all obvious what that phrase means when you use it. The technical definitions of intelligence that have been used in the past, have been met by LLMs and other AI architectures.
> You will find that to the best of our models, you are also just a statistical prediction machine.
I think there’s a set of people whose axioms include ‘I’m not a computer and I’m not statistical’ - if that’s your ground truth, you can’t be convinced without shattering your world view.
If you can't define intelligence in a way that distinguishes AIs from people (and doesn't just bake that conclusion baldly into the definition), consider whether your insistence that only one is REAL is a conclusion from reasoning or something else.
About a third of Zen and the Art of Motorcycle Maintenance is about exactly this disagreement except about the ability to come to a definition of a specific usage of the word "quality".
Let's put it this way: language written or spoken, art, music, whatever... a primary purpose of these things is to serve as a sort of serialization protocol for communicating thought states between minds. When I say I struggle to come to a definition, I mean I think these tools are inadequate to do it.
I have two assertions:
1) A definition in English isn't possible
2) Concepts can exist even when a particular language cannot express them
I'm not sure where else you can get a half TB of 800GB/s memory for < $10k. (Though that's the M3 Ultra, don't know about the M5). Is there something competitive in the nvidia ecosystem?
I wasn't aware that the M3 Ultra offered half a terabyte of unified memory, but an RTX 5090 has roughly double that bandwidth, and that's before we even get into the B200 (~8 TB/s).
You could get one M3 Ultra with 512 GB of unified RAM for the price of two RTX 5090s totaling 64 GB of VRAM, and that's not including the cost of a rig capable of running two RTX 5090s.
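To make that tradeoff concrete, here's a back-of-the-envelope sketch. The prices are my rough ballpark assumptions (not quotes), and the bandwidth figures are the published specs (~819 GB/s for the M3 Ultra, ~1.79 TB/s per RTX 5090): the Mac wins dramatically on memory per dollar, the GPUs on bandwidth per device.

```python
# Rough $/GB comparison for local LLM inference memory.
# Prices are ballpark street-price assumptions, not quotes;
# the 2x 5090 figure excludes the host rig, as noted above.

configs = {
    "M3 Ultra (512 GB unified)": {"usd": 9_500, "mem_gb": 512, "bw_gbps": 819},
    "2x RTX 5090 (64 GB VRAM)": {"usd": 5_000, "mem_gb": 64, "bw_gbps": 1_792},
}

cost_per_gb = {name: c["usd"] / c["mem_gb"] for name, c in configs.items()}
for name, usd_gb in cost_per_gb.items():
    print(f"{name}: ${usd_gb:.0f}/GB")
```

The upshot: the unified-memory box is roughly 4x cheaper per gigabyte, which is what matters when the model simply has to fit, while the discrete GPUs push far more bandwidth when it does fit.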
I don't think I can recommend the Mac Studio for AI inference until the M5 comes out. And even then, it remains to be seen how fast those GPUs are or if we even get an Ultra chip at all.
And also because Korean companies apparently fear US retribution if they start producing DDR4. I don't feel like "OpenAI bought half the supply" tells the entire story when the companies that used to produce DDR4 with the leftover machines no longer dare to do so. Prices probably wouldn't be spiking the way they are right now if the firms that used to produce the older RAM generations had actually continued doing so.
What do you mean by this? Everything I can find points towards other companies winding down DDR4 production because of China's cheaper DDR4 production lines.
> Budget brands normally buy older DRAM fabrication equipment from mega-producers like Samsung when Samsung upgrades their DRAM lines to the latest and greatest equipment. This allows the DRAM market to expand more than it would otherwise because it makes any upgrading of the fanciest production lines to still be additive change to the market. However, Korean memory firms have been terrified that reselling old equipment to China-adjacent OEMs might trigger U.S. retaliation…and so those machines have been sitting idle in warehouses since early spring. - https://www.mooreslawisdead.com/post/sam-altman-s-dirty-dram...
Everyone seems so focused on the claim that one company created "fake demand" just to ruin it for competitors, while supply is also seemingly being suppressed, and no one seems to be talking about that. To me that seems like a much bigger deal, because then the market can't even restore the supply...
The toilet paper thing didn't really happen though, because there is only so much shit people can produce at a time, and demand never actually increased if you scaled it to a two-week period. There was enough stock in warehouses that three days later every store was fully stocked again, and prices never increased.
Did we live through the same pandemic? At least where I live, there were shortages for weeks and scarcity for months.
There was a real manufacturing shift that had to happen in the transition from commercial toilet paper to residential, which is made by totally different machines. The problem was real. It’s just that someone, seeing that real problem, triggered a panic buy that resulted in cleared shelves and a misallocation of the actual supply, making everything worse for everyone.
In the present case, OpenAI just took 40% of the world’s supply off the market. That is massive, and will have implications for RAM availability for many industries. As a result, every other company immediately bought up as much supply as they could.
Cars during Covid is probably the closer comparison, actually. A combined supply-drop followed by demand-shock resulting in skyrocketing prices and empty inventory.
Perhaps a quantity below "a single company causes enough of a spike in global demand that it'll have demonstrable impact in nearly every single industry"
And usually trade regulators would be the entity to start being concerned.
I assume you're on a quest to assert a "let a completely unregulated free market roar" position, but do recognize that global supply issues of critical components have negative market effects, especially when it'll have some impact on nearly every industry except perhaps lawn care.
> I assume you're on a quest to assert a "let a completely unregulated free market roar" position
No. I’m genuinely curious, because I agree with you about how critical these components are. I ask because it doesn’t seem to me like the answers are immediately straightforward, and I wanted to hear serious replies to those questions.
How much is too much? It’s like porn: you know it when you see it.
Basically one company (or a cabal of companies) shouldn’t be allowed to exert enough market-moving pressure on inventories as to disrupt other industries depending on this supply.
Sam Altman masterfully negotiated a guaranteed supply of chips for OpenAI, and there is nothing wrong with that, by itself. But there are now a dozen other industries getting rekked as collateral damage, and that shouldn’t be something one man or one company can do.
> The toilet paper thing didn't really happen [...]
Yes, it did.
> [...] because there is so much shit people can produce at a time and the demand never increased if you scaled it to a 2 weeks period. There was enough stock in warehouses that 3 days later every store had a full stock again and the prices never increased.
No, there wasn't in lots of places. Demand for the kind of toilet paper that fits on home dispensers did increase (while demand for the big rolls used exclusively in institutional settings decreased, and shifting manufacturing between those two is not quick), and there were extended supply issues in many places. This was certainly true where I lived, but I would expect a lot of regional variance: supply chains are regional, the share of workers sent home (because remote work was practical or workplaces were shut down) varied regionally due to both policy and industry differences, and the share of workplaces using industrial-style TP versus TP compatible with home-style dispensers probably also varies considerably.
I literally had a friend mail us a crate of TP in our time of need. Thankfully they had access to industrial suppliers.
Though you're right that the prices didn't skyrocket, as that would have been considered price gouging during an emergency, which would have been PR suicide at a minimum if not actually illegal.