Hacker News: cedws's comments

Is this a bug, security vulnerability, or just an oversight? It’s not clear to me.

As a precaution would it be a good idea to run that reset command for all apps?


This is an oversight in the UI. None of the systems are malfunctioning, it's just that there's no affordance in the UI for the implicit consent flow.

These are considered security UI bugs. They are a subcategory of security bugs, since they result in users lacking control or awareness over permissions. If this were a Chromium bug it would get a CVE.


It’s Apple’s performative “security” (showing popups and asking the user for all sorts of permissions) overlapping with some pragmatic choices about how files and folders work. For me the gap is in Settings & Privacy: 1) it should be clear that the app has been given permission, and 2) it should be harder to give permission once you’ve explicitly disabled it. 3) (nice to have) Apple should get rid of permissions that make you restart the app, because it’s 2026 lol.

Fuck the bureaucrats responsible for this. I’m so sick of being completely powerless to fight any of this, being forced to sit and watch. Writing to my MP changes nothing. Signing petitions does nothing. The Government doesn’t give a fuck. They’ve had so many golden opportunities to differentiate themselves from the Tories and all they’ve done is carry the torch.

I will vote for any party that promises to rewind this crap, I don’t care what other policies they have. Enough of the nannying.


In my experience asking OpenAI or Anthropic models to do anything FAANG doesn’t want you to do is usually rejected. For example reverse engineering an app, cracking your own device, etc…

I feel like a bit of an idiot because I didn’t know this either. I just assumed OR was another startup burning money to provide models at cost.

OpenRouter is a valuable service but I’ll probably try to run my own router going forward.
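The core of a self-hosted router like the one mentioned above can start as little more than a prefix-to-endpoint lookup over OpenAI-compatible providers. This is a minimal sketch, not a full OpenRouter replacement; the provider table, base URLs, and env-var names are illustrative assumptions:

```python
# Hypothetical sketch: route a model name to an OpenAI-compatible provider.
# The PROVIDERS table below is an assumption, not a real configuration;
# fill in whichever providers you actually use.

PROVIDERS = {
    # prefix        -> (base_url, env var holding the API key)
    "claude": ("https://api.anthropic.com/v1", "ANTHROPIC_API_KEY"),
    "gpt": ("https://api.openai.com/v1", "OPENAI_API_KEY"),
    "deepseek": ("https://api.deepseek.com/v1", "DEEPSEEK_API_KEY"),
}

def route(model: str) -> tuple[str, str]:
    """Return (base_url, api_key_env_var) for a given model name."""
    for prefix, target in PROVIDERS.items():
        if model.startswith(prefix):
            return target
    raise ValueError(f"no provider configured for model {model!r}")
```

A real router would sit behind an HTTP server and forward the request body to `base_url` with the right key attached, but the dispatch logic is essentially this lookup.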


I saw this coming. Anthropic wants to shift developers on to their platform where they’re in control. The fight for harness control has been terribly inconvenient for them.

To score a big IPO they need to be a platform, not just a token pipeline. Everything they’re doing signals they’re moving in this direction.


Well, that sucks. Replacing the harness with something task-specific has proven very powerful in my use cases.

One correction: Claude is very happy to let you use whatever you want for a harness ... as long as you're on a pay-as-you-go plan. So it's not blocked, it's just not allowed on the $20 per month plan.

First, harnesses can give access to company-internal tools (like the ticket queue). You could do this with MCP, but it's much harder and slower, and MCP kind of resists it (if you want a bot to solve a ticket, why not start with a complete overview of the ticket in the first request to your model? That can't easily be done with MCP).

Second, harnesses can direct the whole process. A trivial example is that you can improve performance in a very simple way: ask "are you sure?", showing the model what it intends to do, BEFORE doing it. Improves performance by 10%, right there. Give a model the chance to look at what it's doing and change its mind before committing. Then ask a human the same question, with a nice yes/no button. Try that with MCP.

Of course you quickly find a million places to change the process, and then you can go and meta-change the process. Like asking an AI what steps should be followed first, then doing those steps, most of which are AI invocations with parts of the ticket (say, examine the customer database, extract what's relevant to this problem, ...). Limiting context is very powerful, and not just because it gets you cheaper requests. Get an AI to make relevant context for a particular step before actually doing that step ...
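The "are you sure?" gate described above can be sketched as harness logic. This is a hedged illustration, not any real Claude or MCP API: `ToolCall`, `ask_model`, `ask_human`, and `run_tool` are all hypothetical names injected by the caller.

```python
# Hypothetical harness-side confirmation gate: echo the model's intended
# tool call back to the model for a self-review pass, then ask a human
# yes/no, and only then execute. All names here are illustrative.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ToolCall:
    name: str
    args: dict

def confirm_and_run(call: ToolCall,
                    ask_model: Callable[[str], str],
                    ask_human: Callable[[str], bool],
                    run_tool: Callable[[ToolCall], str]) -> Optional[str]:
    # Self-review: show the model its own intended action before committing.
    verdict = ask_model(
        f"You are about to call {call.name} with {call.args}. "
        "Answer OK if this looks right, or REVISE if something is wrong."
    )
    if verdict.strip().upper() != "OK":
        return None  # hand control back to the model to re-plan
    # Human gate: a plain yes/no before any side effects happen.
    if not ask_human(f"Run {call.name}({call.args})?"):
        return None
    return run_tool(call)
```

Because the review prompt only appends to the conversation, this kind of check can reuse the provider's context cache rather than rebuilding it, which is why it is cheaper than it first appears.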


> A trivial example is that you can improve performance in a very simple way: ask "are you sure?" showing the model what it intends to do, BEFORE doing it. Improves performance by 10%

Put it into the "are you sure" loop and you'll see the model will just keep oscillating for eternity. If you ask the model to question the output, it will take it as an instruction that it must, even if the output is correct.


Not in my experience. I mean, it happens. But models can check if their own function calls are reasonable. And that doesn't require dropping the context cache, so it's a lot less expensive than you would probably initially think.

That fab will never be delivered. In five years you might see the manufacturing equivalent of a person dancing in spandex.

The only company Musk owns that has actually achieved something is SpaceX, so I believe you. He likes to hype things beyond what is actually possible.

SpaceX is an engineering masterpiece, given how they revolutionized the space industry.


Do you know about his other company and what they do?

I was looking for something kind of like this. We wanted to gate release PRs on approval from people in the company that don’t use GitHub.

That's exactly what it does! Are the business stakeholders who make the release decisions on Slack? Understanding this will help me assess whether Mo fits well with your team.

Yes they use Slack. Mo seems like a nice product but I think it would be difficult to push for, we try to minimise the number of software vendors we rely on so we prefer to run small things like this ourselves.

>Iran didn’t escalate against anyone except their aggressors

What about the missiles launched at Dubai?


Not only companies; they're going to be taking applications from individual researchers. No doubt it will be granted only to established researchers, effectively locking out graduates and those early in their careers. This is bad.

They are not unique in this; Apple and Tesla have similar programs. More nuance is warranted here: they are trying to balance the need to enable external research against the need to protect users from arbitrary third parties having special capabilities that could be used maliciously.

I understand that, but Anthropic is doing nothing to throw those grassroots researchers a lifejacket. This is the beginning of the end for independents, if it continues on this trajectory then Anthropic gets to decide who lives and who dies. Who says they should be allowed to decide that?

Why should unproven college students be given access to a cyber superweapon?

More than killer AI I'm afraid of Anthropic/OpenAI going into full rent-seeking mode so that everyone working in tech is forced to fork out loads of money just to stay competitive on the market. These companies can also choose to give exclusive access to hand picked individuals and cut everyone else off and there would be nothing to stop them.

This is already happening to some degree, GPT 5.3 Codex's security capabilities were given exclusively to those who were approved for a "Trusted Access" programme.


Describing providing a highly valuable service for money as `rent seeking` is pretty wild.

It could be, formally, if they have a monopoly.

However, I’m tempted to compare to GitHub: if I join a new company, I will ask to be added to their GitHub account without hesitation; I couldn’t possibly imagine they wouldn’t have one. What makes the cost of that subscription reasonable is not just GitHub’s fear of a crowd with pitchforks showing up at their office, but also the fact that a possible answer to my non-question might be “Oh, we actually use GitLab.”

If Anthropic is as good as they say, it seems fairly doable to use the service to build something comparable: poach a few disgruntled employees, and leverage the promise of undercutting a many-trillion-dollar company to become a many-billion-dollar one to get investors excited.

I’m sure the founders of Anthropic will have more money than they could possibly spend in ten lifetimes, but I can’t imagine there wouldn’t be some competition. Maybe this time it’s different, but I can’t see how.


> It could be, formally, if they have a monopoly.

You have two labs at the forefront (Anthropic/OpenAI), Google closely behind, and xAI/Meta/half a dozen Chinese companies all within 6-12 months. There is plenty of competition, and the price of equally intelligent tokens rapidly drops whenever a new intelligence level is achieved.

Unless the leading company uses a model to nefariously take over or neutralize another company, I don't really see a monopoly happening in the next 3 years.


Precisely.

I was focusing on a theoretical dynamic analysis of competition (would a monopoly make having a competitor easier or harder?), but you are right: practically, there are many players, and they are diverse enough in their values and interests to preclude collusion.

We could be wrong: each of those could give birth to as many Basilisks (not sure I have a better name for those conscious, invisible, omnipresent, self-serving monsters that so many people imagine will emerge) that coordinate and maintain collusion somehow, but classic economics (complementarity, competition, etc.) points at disruption and lowering costs.


> practically, there are many players, and they are diverse enough in their values and interests to preclude collusion.

Not only that, but open-weight and fully open-source models are also a thing, and not that far behind.


Why, you thought rented homes aren't valuable?

Rent seeking isn't about whether the product has value or not, but about what's extracted in exchange for that value, and whether competition, lack of monopoly, lack of lock-in, etc. keep it realistic.


My housing is pretty valuable. I pay rent. Which timeline are you in?

Actually you're saying similar things:

Rent-seeking of old was a ground rent, monies paid for the land without considering the building that was on it.

Residential rents today often carry implied warranties because of modern law, so your landlord is essentially selling you a service at a particular location.


thanks!


Yes, I know that; read your sibling post.

Two different "rent"s.

Not really; see your sibling post.

Well, don’t forget we still have competition. Were Anthropic to rent-seek, OpenAI would undercut them. Were OpenAI and Anthropic to collude, that would be illegal. And for Anthropic to capture the entire coding-agent market and THEN rent-seek: these days it’s never been easier to raise $1B and start a competing lab.

In practice this doesn't work, though; the Mastercard-Visa duopoly is an example. Two competing forces don't create aggressive enough competition to benefit the consumer. The only hope we have is the Chinese models, but it will always be too expensive to run the full models yourself.

New companies can enter this space. Google’s competing, though behind. Maybe Microsoft, Meta, Amazon, or Apple will come out with top notch models at some point.

There is no real barrier to a customer of Anthropic adopting a competing model in the future. All it takes is a big tech company deciding it’s worth it to train one.

On the other hand, Visa/Mastercard have a lot of lock-in due to consumers only wanting to get a card that’s accepted everywhere, and merchants not bothering to support a new type of card that no consumer has. There’s a major chicken and egg problem to overcome there.


> In practice this doesn't work though, the Mastercard-Visa duopoly is an example,

MC/Visa duopoly is an example of lock-in via network effects. Not sure that that applies to a product that isn't affected by how many other people are running it.


Chinese competition can always be banned. Example: Chinese electric car competition

Just in one particular country. That hurts their labs, but there are ~190 other countries in the world for Chinese companies to sell their products to, just like they do with their cars.

And businesses from those other countries would happily switch to Chinese models. From a security perspective, Chinese and US espionage are equally bad, so why care, if it all comes down to money and performance?


Also Chinese smartphones. Huawei was about 12-18 months from becoming the biggest smartphone manufacturer in the world a few years ago. If it had been allowed to sell its phones freely in the US, I'm fairly sure Apple would be closer to Nokia than to current-day Apple.

If Huawei had never been banned from using TSMC, they'd likely have a real Nvidia competitor and might have surpassed Apple in mobile chip design.

They actually beat Apple A series to become the first phone to use the TSMC N7 node.


I don't think it will matter too much in the long run, 8 of the top 10 smartphone manufacturers are Chinese, there's nothing the US government can really do.

That's what OP was saying, I think, noting that running them locally won't be a solution.

> More than killer AI I'm afraid of Anthropic/OpenAI going into full rent-seeking mode so that everyone working in tech is forced to fork out loads of money just to stay competitive on the market.

You should be more concerned about killer AI than rent seeking by OpenAI and Anthropic. AI evolving to the point of losing control is what scientists and researchers have predicted for years; they didn’t think it would happen this quickly but here we are.

This market is hyper competitive; the models from China and other labs are just a level or two below the frontier labs.


The thing is that the current models can ALREADY replicate most software-based products and services on the market. The open source models are not far behind. At a certain point I'm not sure it matters if the frontier models can do faster and better. I see how they're useful for really complex and cutting edge use cases, but that's not what most people are using them for.

But you are assuming that the magical wizards are the only ones who can create powerful AIs... mind you, these people were born just a few decades ago. Their knowledge will be transferred, and it will only take a few more decades until anyone can train powerful AIs... you can only sit on tech for so long before everyone knows how to do it.

It's not a matter of knowledge, it's a matter of resources. It takes billions of dollars of hardware to train a SOTA LLM and it's increasing all the time. You cannot possibly hope to compete as an independent or small startup.

> It takes billions of dollars of hardware to train a SOTA LLM and it's increasing all the time.

True, but it's also true that the returns from throwing money at the problem are diminishing. Unless one of those big players invents a new, proprietary paradigm, the gap between a SOTA model and an open model that runs on consumer hardware will narrow in the next 5 years.


Eventually these super expensive SXM data center GPUs will cost pennies on the dollar, and we’ll be able to snatch up H200s for our homelabs. Give it a decade.

Also eventually these WEIGHTS will leak. You can’t have the world’s most valuable data that can just be copied to a hard drive stay in the bottle forever, even if it’s worth a billion dollars. Somehow, some way, that genie’s going to get out, be it by some spiteful employee with nothing to lose, some state actor, or just a fuck up of epic proportions.


At the point where those GPUs cost pennies, they likely won't even be worth the electricity that goes into them; better models will run on laptops.

Presumably, the hardware to run this level of model will be democratized within the timeframe of the parent comment.


Unless, of course, the powerful manage to scare everyone about how the machines will kill us all and so AI technology needs to be properly controlled by the relevant authorities, and anyone making/using an unlicensed AI is arrested and jailed.

With Gemma-4 open and running on laptops and phones, I see the flip side. How many non-HN users or researchers even need Opus 4.6e-level performance? OpenAI, Anthropic and Google may end up “rent seeking” from large corporations, like the Oracles and IBMs.

Everyone, once AI diffuses enough. You’ll be unhireable if you don’t use AI in a year or two.

You know, they have competitors?
