Isn't this a bad deal? Or is there an error in my math?
For $40, I'd get 20 tok/s * 2.6M seconds ≈ 52M tokens of DeepSeek v3.2 per month, and only if I ran it 24/7, which isn't realistic for most workloads.
On OpenRouter [1], $40 buys 105M tokens from the same model, which is more than 52M tokens, and I can freely choose when to use them.
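If anyone wants to redo the arithmetic, here's a quick sanity check in Python. The 105M figure is taken at face value from OpenRouter's listed pricing as quoted above, not re-derived here:

    # Tokens from a dedicated 20 tok/s endpoint running flat out for a month
    seconds_per_month = 30 * 24 * 3600           # 2,592,000 s -- the "2.6M" above
    tokens_flat_out   = 20 * seconds_per_month   # 51,840,000  -- the "52M" above

    # The 105M figure comes from OpenRouter's pricing page, not computed here
    openrouter_tokens = 105_000_000

    print(f"self-hosted, 24/7: {tokens_flat_out / 1e6:.1f}M tokens")    # ~51.8M
    print(f"OpenRouter, $40:   {openrouter_tokens / 1e6:.0f}M tokens")  # 105M
    print(f"ratio: {openrouter_tokens / tokens_flat_out:.1f}x")         # ~2.0x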
There’s something called the Gell-Mann amnesia effect, where people have exactly the experience you describe: they spot glaring errors in coverage of what they know first-hand, but then go back to assuming the publication's other stories are all reliable.
I used to love Private Eye and they have done great journalism that’s highly acclaimed, but the only thing they wrote that I really knew about (literally the office I was in) was outrageously wrong and would have been so easy to verify (ask literally anyone in the BBC building we were in to go to that floor, or take a tour or write an email). Can’t read it any more.
Here's Wikipedia's entry on the Gell-Mann Amnesia Effect, because I've found it a very useful concept to know. Despite my media experiences, I still keep falling for it. And I love that we're still referring to it as Gell-Mann Amnesia here:
> In a speech in 2002, Crichton coined the term "Gell-Mann amnesia effect" to describe the phenomenon of experts reading articles within their fields of expertise and finding them to be error-ridden and full of misunderstanding, but seemingly forgetting those experiences when reading articles in the same publications written on topics outside of their fields of expertise, which they believe to be credible. He explained that he had chosen the name ironically, because he had once discussed the effect with physicist Murray Gell-Mann, "and by dropping a famous name I imply greater importance to myself, and to the effect, than it would otherwise have".
The results absolutely are interesting - in fact they make a far stronger case for the willingness of many people to inflict violence than the original description suggested.
> While every obedient participant reliably pressed the shock lever, they regularly neglected or ruined the other steps required to justify the shock.
Procedural violations here include things like asking the question while the person in the other room was still screaming.
I'm glad AI curmudgeonry on HN has shifted from "it doesn't work, scam, they made the deployed model worse with 0 communication" to something more akin to "why does anyone use mac or windows, nix is peak personal computing"
I don’t know why people feel the need for such revisionism, but AI has been a field encompassing things far more basic than this for longer than most commenters have been alive.
> AI has been a field encompassing things far more basic than this for longer than most commenters have been alive.
When I was 13, having just started programming, I picked up a book on Artificial Intelligence from a "junk bin" at a book store. It must have been from the mid-80s if not older.
It had an entire chapter on syllogisms[1] and how to implement a program to spit them out based on user input. As I recall, it basically amounted to some string extraction, assuming the user followed a template, and string concatenation to generate the result. I distinctly recall not being impressed that such a trivial thing was part of a book on AI.
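For flavour, something in the spirit of that chapter probably looked roughly like this (a minimal Python sketch; the template and wording are my own guess, not the book's):

    # Toy "AI" in the 80s-book sense: plug user-supplied terms into the classic
    # Barbara syllogism template and concatenate the result. No reasoning involved.
    def syllogism(category: str, attribute: str, individual: str) -> str:
        major = f"All {category} are {attribute}."
        minor = f"{individual} is one of the {category}."
        conclusion = f"Therefore, {individual} is {attribute}."
        return "\n".join([major, minor, conclusion])

    print(syllogism("men", "mortal", "Socrates"))
    # All men are mortal.
    # Socrates is one of the men.
    # Therefore, Socrates is mortal.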
Most people do; most people don’t have wildly different setups, do they? I’d bet there’s a lot in common between how you write code and how your coworkers do.
The benefit of digital things is that they can be copied much more cheaply than physical things. There are perhaps migrations and upkeep to deal with, though.
On the technical side, perhaps the shared nature of this helps - if you can have something replicated so that you and several other members are all running replicas, there’s a much better chance it survives any one copy being lost.
On the non-technical side, take some photos and print them on good paper. Print out stories on paper.
That doesn’t cover video and perhaps other things but it’s simple and does actually work for lots and lots of stories and pictures. It’s also immediately doable right now without anything new.
No, the big thing with AGI was that it was general. The AI systems we made were extremely narrow: identifying things from a fixed set of classes, route planning, or something similarly specific. We couldn't just hand those systems a new kind of task, often not even an extremely similar one. We've been making superhuman narrow AI for many years, but for a long time even extremely basic and restricted worlds were still beyond what more general systems could do.
If LLMs are your first foray into what AI means and you were used to the term ML for everything else, I could see how you'd think that, but AI has for decades referred to even very simple systems.
If AGI doesn't mean human level, then what does? As you say, every application of A* is in some way "AI", so we had this idea of "AGI" for something "actually intelligent", but maybe I'm wrong and AGI never meant that. What term does mean that?