Hacker News | mlinsey's comments

You're observing that:

a) effective price-per-token is rising

b) there is insufficient compute to meet the demand.

And your conclusion is that the industry is circling the drain and due to collapse?


They are different observations, I think, though the phrasing confuses it:

a) cost per successful task is rising (e.g. the Claude Max allocation is functionally shrinking)

b) is there enough potential cost reduction in the pipeline to make up the gap?

c) if open models converge on a more efficient but slightly-less-capable point (which has effectively happened), what is the actual moat?


Yes, cost per successful task is rising - i.e., we are all effectively paying more for AI.

And yet - Anthropic is still struggling to have enough capacity to serve demand - they are virtually sold out.

And yes, there are almost-as-good open models, on par with the closed models from 6 months ago (at worst), that are just a single OpenRouter API call away - and yet Anthropic is still selling out. So people are paying for the premium product anyway, for whatever reason: maybe the last bit of intelligence is worth it, maybe they like the harnesses/products around the models, maybe it's a brand/enterprise-sales thing.

Put aside your feelings about the AI industry and imagine we are talking about thingamajigs. Prices for thingamajigs are going up. They are still selling out about as fast as (or faster than) the company selling them can build factories. There are more cost-effective competitors already in the market, but thingamajigs are selling out anyway.

Would you, looking at the thingamajig industry, conclude that the "jig is almost up"? That "the returns aren’t anywhere close to what investors expect" and that the impending IPO is all some desperate Hail Mary to save things before the collapse?


I agree, but also the model intelligence is quite spiky. There are areas of intelligence that I don't care at all about, except as proxies for general improvement (this includes knowledge-based benchmarks like Humanity's Last Exam, as well as proving math theorems, etc.). There are other areas of intelligence where I would gladly pay more, even 10X more, if it meant meaningful improvements: tool use, instruction following, judgement/"common sense", learning from experience, taste, etc. Some of these are seeing some progress; other limitations seem inherent to the current LLM + chain-of-thought reasoning paradigm.

Common sense isn’t a language pattern. I doubt this will ever work w/ LLMs.

The models that we are paying to generate tokens are already not really just LLMs, as anyone studying language models ten years ago (or someone who describes them as "next token predictors") would understand them. Doing a bunch of reinforcement learning so that a model performs better at ssh'ing into my server and debugging my app is already realllly stretching the definition of "language pattern".

I think when we do get AI that can perform as well as a human at functionally all tasks, they will be multi-paradigm systems; some components will not resemble anything in any commercial system today, but one component will be recognizably LLM-like, and act as an essential communication layer.


Different users do seem to be encountering problems or not based on their behavior, but for a rapidly-evolving tool with new and unclear footguns, I wouldn't characterize that as user error.

For example, I don't pull in tons of third-party skills, preferring to have a small list of ones I write and update myself, but it's not at all obvious to me that pulling in a big list of third-party skills (like I know a lot of people do with superpowers, gstack, etc...) would cause quota or cache miss issues, and if that's causing problems, I'd call that more of a UX footgun than user error. Same with the 1M context window being a heavily-touted feature that's apparently not something you want to actually take advantage of...


I'm pretty optimistic that not only will this clean up a lot of vulns in old code, but that applying this level of scrutiny will become a mandatory part of the vibecoding toolchain.

The biggest issue is legacy systems that are difficult to patch in practice.


I could see some of these corps now being able to issue more patches for old versions of software if they don't have to redirect their key devs onto prior code (which devs hate). As you say though, in practice it is hard to get those patches onto older devices.

I'm looking at you, Android phone makers with 18 months of updates.


Yeah but who pays the enormous cost?

Obviously the people responsible for the software. Would you rather Anthropic kept the vulns quiet?

Of course not, but there is infinitely more vulnerable software escaping Anthropic's scrutiny. And when AI-powered discovery becomes a necessity, that will lead to a concentration of power in these kinds of companies.

Bruce Schneier made a comprehensive analysis of the pros and cons and the forces at play for adversaries and defenders [1].

I think it's safe to predict yet more money previously directed to us techies will find its way to the Anthropics of this world.

[1] https://www.schneier.com/blog/archives/2026/04/cybersecurity...


I imagine that patching would improve somewhat as well, even as a separate endeavor. This is not to say that legacy systems could be completely rewritten.


Wait. Wasn't AI supposed to alleviate the burden of legacy code?!


If we have the source and it's easy to test, validate, and deploy an update, AI should make those systems easier to update.

I am thinking of situations where one of those isn't true - where testing a proposed update is expensive or complicated, or where systems are hard to physically push updates to (think embedded systems), etc.


Legacy code, not the running systems powered by legacy code


If you’re still an AI skeptic at this point, I don’t know what sort of advancement could convince you that this is happening.


I feel like every new iteration of ways to find good content online - webrings, blogrolls, user upvoting/downvoting, giving everyone their own microblog to share interesting links, ML that learns your preferences from your behavior - worked really well at first, but then eroded significantly once people figured out how to game it.

The economic incentive is overwhelming to corrupt these signals, either directly (link sharing schemes, upvote rings, bots to like your content) or indirectly (shaping your content itself to have the shape of what will be promoted, regardless of its quality).

What you almost want is to adopt any of these ideas and hope it catches on widely enough in your small niche to be useful, but not so much that it becomes an optimization target.


Smolnet might be the answer. There really isn't a feasible mechanism for monetizing it. At worst, you could have some text ad embedded. No images. Minimal semantic markup (links, lists, quotes, code, generic text) in the case of gemini/gemtext.
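
For reference, gemtext's entire semantic vocabulary is a handful of line types (the snippet below is an illustrative sample, per the Gemini protocol's gemtext format):

```gemtext
# A heading
## A sub-heading
A plain paragraph is just a line of text.
=> gemini://example.org/ A link (one link per line, label optional)
* An unordered list item
> A quoted line
```

Preformatted blocks are toggled with a line of three backticks; there is no inline formatting at all, which is a big part of why the format resists monetization.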


It's CNBC for Silicon Valley - a combination of good background noise, a broad survey of what people are talking about around the valley, and occasionally really great interviews.

They get a lot of guests to do interviews that they wouldn't do elsewhere, in part because they are unabashedly and unapologetically cheerleaders - pro-tech, pro-VC, pro-startup, pro-Big-Tech, etc. They don't grill you like an old-school journalist would about whatever the latest political controversy is, they ring a giant gong when their guest brings up a cool traction or fundraising number.

I would never use it as my only source of news for what's going on in tech, but with a lot of other tech journalism covering the downsides or problems with the industry, there is definitely a niche for them.


Just based on the number of very prominent guests they get to do interviews, they clearly have a lot of viewers in influential tech/vc circles, even if their total audience size isn’t huge.


That's true, but a lot of these people are also competitors. I can't imagine it'll be attractive going to the OpenAI media channel to talk about Gemini or Grok.


An AI company owning a major tech podcast?

Wow, what’s next?

Ecommerce giants owning major newspapers? An aerospace company owning a microblogging platform? Startup accelerators owning tech news aggregators?


If the vast majority of CEOs in this industry are to be believed, any company that achieves "AGI" will be undefeatable, their model improvements and research findings impossible to catch up to. Why risk that being Anthropic, Moonshot or any other competitor to OpenAI by spending your money on this?

The few months/years before "Everyone dies", wouldn't OpenAI want to be the "Anyone" that "build it" and is in control during that time? Unless, of course, OpenAI does not actually believe in that being a possibility, as suspected when they were working on social media...


I admit I'm surprised by the move, from a company that reportedly just talked about how they need to focus more on fewer, more strategic products.

But I also see the potential value. This is an entertaining and highly influential podcast; a lot of top VCs and founders watch it, and it definitely punches well above its audience KPIs in strategic value. I've seen many interviews or op-eds on the platform pretty clearly shape the startup discourse on X.

I also think it should run mostly autonomously, it'll only be as much of a distraction for OpenAI execs as they want it to be.

OpenAI just raised $122 billion (including future commitments), so whatever the purchase price was (we have no idea) is not going to be even a rounding error on their financial resources or their ability to pay their datacenter bills.


This is some insane delusion.

Focus on building a great product and you win. All this other stuff is noise.


States should remove the "purpose" field from incorporation statutes; it's been antiquated for half a century.


Shouldn't OpenAI be focused on becoming profitable and surviving the next 2 years instead of buying podcast toys?


Robinhood did the exact same thing; it's more for marketing reach and distribution. Wouldn't be surprised if in a few years they let it go or spin it down - just paying for a funnel and some narrative control.


AI will eat all Media, all of it.


Wait a second...


Is TBPN really considered "major" (seeing as most of the comments I've seen are how no one's heard of them before) or are you just being sarcastic?


YouTube had an estimated $40 billion in ad revenue in 2025: https://techcrunch.com/2026/03/10/youtube-surpasses-disney-p...

And has roughly 2.7 billion monthly active users. This means the average YouTube user brings in around $1.23 per month. When you consider that CPMs can easily swing by 20X based on how wealthy the user demographic is, and willingness to pay a subscription is a strong signal for purchasing power, I would not be at all surprised if a YouTube Premium subscription was revenue-neutral for Google.
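
The per-user figure above is a quick back-of-the-envelope calculation; a sketch using the estimated (not exact) numbers from the comment:

```python
# Rough per-user revenue, using the estimates quoted above.
ad_revenue_usd = 40e9         # estimated 2025 YouTube ad revenue
monthly_active_users = 2.7e9  # estimated monthly active users

revenue_per_user_per_month = ad_revenue_usd / monthly_active_users / 12
print(f"${revenue_per_user_per_month:.2f}")  # prints "$1.23"
```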


Having a facial recognition match make you a suspect and cause the police to ask you some questions doesn't seem completely unreasonable to me. Investigations can certainly begin with weak forms of evidence (like an anonymous tip), you just require a higher standard of evidence for a search warrant, surveillance, or an arrest. A facial recognition match shouldn't be probable cause for an arrest warrant, but it still might be a useful starting point for a detective looking for actual evidence.


It is absolutely not reasonable to use low-quality photos to decide that someone halfway across the country, with no history of even leaving their local area, is "a suspect".


You wouldn't know they had no history of leaving their local area unless you interviewed them.


Why doesn't the investigator have to supply some sort of evidence that she has a history of leaving her local area, rather than putting the onus on the accused? This line of argument is halfway to "guilty until proven otherwise".


You and the GP that replied to me are way overstating what it means to be a "suspect". It just means the police are investigating you and consider it a possibility that you've committed the crime. On its own, it is not a sufficient status to search your home, subpoena your ISP, or arrest you - all of those things require a much higher burden of evidence, and often a third party's (a judge's) approval. People routinely become "suspects" on much flimsier evidence than an unreliable software match - if I call in an anonymous tip that I saw you acting suspicious near the crime scene, you will probably become a suspect.

If you'd like, you can replace the term "suspect" in my post with "person of interest", which colloquially implies a lot less suspicion but isn't practically any different in terms of how the police interact with you.

