> If it was actually productive, then the revenue would increase and affordability wouldn't be a question.
Revenue has increased. Have you seen Meta's latest earnings? +33% revenue - in this economy.
Affordability is not a question. There is a reason companies like Meta have no issue with their engineers spending $1k/day on tokens. It's just not that much compared to how much they make per employee.
It sounds like there's a pretty falsifiable claim here: is the revenue increase attributable to a tax thing? If so, then it's clearly not attributable to code.
I agree that the macro picture would speak for itself. Can you point to any macro-level detail that cleanly shows benefits from increased productivity from LLMs?
I agree that you can't draw any conclusions about AI, but their revenue increased by 33%. That's just straight top-line revenue, before any taxes or costs are applied.
I completely agree with you. I pointed out, replying to the same person, that in the same report their ad impressions were up 20% and the price per ad was up 12%, which accounts for a huge chunk of that revenue increase.
All I was saying here was that tax breaks wouldn't impact revenue since revenue is reported before taxes, operating costs and anything else.
That means absolutely nothing in the context of this conversation. It says right in their release that ad impressions are up almost 20% and cost per ad is up 12%. Those two metrics alone account for most of the increase in their revenue. Absolutely no conclusion can be drawn regarding the impact of AI on those numbers one way or the other.
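The arithmetic checks out, too. Treating those release figures as rough averages (a back-of-envelope sketch, not Meta's actual segment accounting), the two metrics compound to roughly the whole reported increase:

```python
# Rough compounding of the two metrics from the release:
# ad revenue ≈ impressions × price per ad, so the growth factors multiply.
impressions_growth = 1.20   # impressions up ~20%
price_growth = 1.12         # price per ad up ~12%

revenue_growth = impressions_growth * price_growth - 1
print(round(revenue_growth, 3))  # 0.344, i.e. ~34% -- roughly the reported +33%
```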
It's not like they used AI to crank out some new revenue generating piece of software, or massively reduce operating costs. In fact their operating costs rose by 35%.
The problem with HN is that everyone here thinks like an engineer, not like a business owner.
$10k a month on tokens is just not that much when you're already making $2M per engineer. If their productivity has increased even 10% then the spend was well worth it.
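To make that concrete (a back-of-envelope sketch using the round numbers above, all of which are assumptions):

```python
# Token spend vs. productivity gain, per engineer per year.
# All figures are the hypothetical round numbers from the comment above.
token_spend = 10_000 * 12            # $10k/month on tokens -> $120k/year
revenue_per_engineer = 2_000_000     # $2M revenue per engineer
productivity_gain = 0.10             # assume a 10% uplift

gain_value = revenue_per_engineer * productivity_gain  # $200k of extra output
net = gain_value - token_spend                         # $80k surplus
```

Under those assumptions the spend pays for itself with room to spare, which is the business-owner math.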
Case in point, Meta made 33% more revenue this earnings report. Now you can nitpick and ask for attribution down to the dollar, but macro trends speak for themselves.
Go look up a multi-year chart of their revenue and find the inflection point where the AI made it go up faster (there isn't). In fact revenue growth used to be higher pre-2023.
They were also a lot smaller pre-2023; 33% growth for a company of their size is simply insane. It is entirely likely that the 33% simply wouldn't have happened without AI.
Non-Claude client access is not permitted in the terms and conditions, except via API key.
The correct server-side implementation of this condition would be for Anthropic to block usage by non-Claude apps that come in via Claude's authentication mechanism, and allow it via per-token API key billing.
Instead of a simple 403 error, which would block usage, they silently redirect to a different billing bucket, which is not ethical behaviour, especially since it is based on fuzzy heuristics.
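The routing being suggested is simple enough to sketch. A minimal illustration, where all names and the client-detection heuristic are hypothetical, not Anthropic's actual implementation:

```python
# Hypothetical server-side routing: hard-block unofficial clients on
# subscription auth, and bill API-key traffic per token.
OFFICIAL_CLIENTS = {"claude-app", "claude-code"}  # made-up identifiers

def route_request(auth_kind: str, client_id: str):
    """Return (decision, billing_bucket) for an incoming request."""
    if auth_kind == "api_key":
        # API-key traffic is explicitly permitted: bill per token.
        return ("allow", "per_token_billing")
    if auth_kind == "subscription" and client_id in OFFICIAL_CLIENTS:
        return ("allow", "plan_billing")
    # Unofficial client on subscription auth: explicit 403,
    # rather than silently rebilling to another bucket.
    return ("deny_403", None)
```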
Yeah, at the least it should alert the user that this is happening. Maybe the thinking was that alerting gives people a signal on how to get around the restrictions, but silently charging from a different bucket isn't the answer either.
I think part of the issue is they were letting people use the plan's API for random stuff, so people could do testing or small projects. Then the agents came along and exploded the cost, so they want to restrict those but still allow some other usage, which I don't think is tenable.
I'm sure there is some way they could enforce that all calls are coming from the Claude app or Claude Code. It might be hard to enforce 100%, with stuff running on a user's machine, but they could still make it quite difficult, where someone has to be intentionally trying to beat the system (like stealing encryption keys out of the Claude Code binary or something).
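One common shape for that kind of enforcement is HMAC request signing, with the caveat already noted above: any secret shipped in a client binary can ultimately be extracted by a determined user. A hypothetical sketch, not how Anthropic actually does it:

```python
import hmac
import hashlib

# Hypothetical shared secret baked into the official client binary.
# Extracting it from the binary is exactly the attack described above.
CLIENT_SECRET = b"baked-into-the-binary"

def sign(body: bytes) -> str:
    """Client side: sign the request body before sending."""
    return hmac.new(CLIENT_SECRET, body, hashlib.sha256).hexdigest()

def verify(body: bytes, signature: str) -> bool:
    """Server side: reject requests not signed by the official client."""
    return hmac.compare_digest(sign(body), signature)
```

This raises the bar from "point any client at the endpoint" to "reverse-engineer the binary", which matches the "quite difficult, but not 100%" framing.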
That's fine for procedural tasks, and I understand its value there. But these particular tasks I'm referring to occur on the front lines of research. You can't expect the prompts to be incredibly detailed, since those details are the whole challenge of the problem. I think there is value in having models that are capable of making really good preliminary insights to help guide the research.
I really wanted to get excited about Opus, but in my own real-world usage I wasn't getting much out of it before hitting my limits. Meanwhile, I can abuse Codex on 5.5 for hours, getting a whole lot of work done. Plus, open code and PI are much more fun and interesting harnesses to work from than Claude Code, imho.
I will say, however, that Claude's work and design are really great, up until I blow its limit.