Quote from the CEO of Anthropic in March 2025:
"I think we'll be there in three to six months where AI is writing 90% of the code and then in 12 months we may be in a world where AI is writing essentially all of the code"
According to the article, Claude Code is already being used extensively to develop Bun.
> Over the last several months, the GitHub username with the most merged PRs in Bun's repo is now a Claude Code bot. We have it set up in our internal Discord and we mostly use it to help fix bugs. It opens PRs with tests that fail in the earlier system-installed version of Bun before the fix and pass in the fixed debug build of Bun. It responds to review comments. It does the whole thing.
You do still need people to make all the decisions about how Bun is developed, and to use Claude Code.
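To make that quoted workflow concrete: a regression test of the kind described might look something like the sketch below. This is a hypothetical example using Bun's built-in `bun:test` runner; the specific bug is invented for illustration, but the shape (fails on the old binary, passes on the fixed build) is the point.

```ts
import { test, expect } from "bun:test";

// Hypothetical regression test: it should fail on the older system-installed
// Bun (where the bug lives) and pass on the fixed debug build.
test("TextDecoder round-trips multi-byte UTF-8 split across chunks", () => {
  const input = "héllo wörld";
  const bytes = new TextEncoder().encode(input);
  const decoder = new TextDecoder("utf-8");
  // Split inside the two-byte sequence for "é" to exercise streaming state.
  const head = decoder.decode(bytes.slice(0, 2), { stream: true });
  const tail = decoder.decode(bytes.slice(2));
  expect(head + tail).toBe(input);
});
```

Run it with `bun test`; running it once under the system `bun` and once under the freshly built debug binary is what turns it into the before/after check the quote describes.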
> You do still need people to make all the decisions about how Bun is developed, and to use Claude Code.
Yeah, but do you really need external hires to do that? Surely Anthropic has enough experienced JavaScript developers internally to decide how their JS toolchain should work.
Actually, this is thinking too small. There's no reason each developer shouldn't be able to customize their own developer tools however they want. No need for any one individual to control this; just have devs use AI to spin up their own npm-compatible package management tooling locally. A good day-one onboarding task!
"Wasting" is doing a lot of work in that sentence.
They're effectively bringing on a team that's been focused on building a runtime for years. The models they could throw at the problem can't be tapped on the shoulder, and there's no guarantee they'd do a better job at building something like Bun.
Let me refer you back to the GP, where the CEO of Anthropic says AI will be writing most code in 12 months. I think the parent comment you replied to was being somewhat facetious.
Same. I don’t understand how people aren’t getting this yet. I’m spending all day thinking, planning, and engineering while spending very little time typing code. My productivity is through the roof. All the code in my commits is of equal quality to what I would produce myself; why wouldn’t it be? Sure, one can just ask AI to do stuff and not review it or iterate, but why on earth would one do that? I’m starting to feel that anyone who’s not getting this positive experience simply isn’t good at development to begin with.
There's a real schism, isn't there? I don't even type anymore. I've got voice transcription using Whisper (which Claude built). I have like three or four Claude instances open in i3wm. I have head tracking so the mouse, and therefore focus, moves with my head (which Claude built). So I move my head from one to the other and speak prompts!
It's amazing!
My boss has dubbed it "programming at the speed of thought" which I'm sure he's picked up from somewhere. I've seen other people say that.
I think this wound up being close enough to true; it's just that it says less than what people assumed at the time.
It's basically the Jevons paradox for code. The price of a line of code (in human engineer-hours) has decreased a lot, so there is a bunch of code that is now economically justifiable which wouldn't have been written before. For example, I can prompt several ad-hoc benchmarking scripts in 1-2 minutes to troubleshoot an issue, each of which might have taken me 10-20 minutes to write myself, allowing me to investigate many performance angles. Not everything gets committed to source control.
Put another way, at least in my workflow and at my workplace, the volume of code has increased, and most of that increase comes from new code that would not have been written if not for AI; a smaller portion is code that I would have written before AI but now let the AI write so I can focus on harder tasks. Of course, penetration is uneven: AI helps more with tasks that are well represented in the training set (webapps, data science, Linux admin...) than with, e.g., issues arising from quirky internal architecture, Rust, etc.
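As a concrete example of the kind of throwaway script mentioned above, here's a minimal sketch; the two deduplication approaches being compared are made up for illustration:

```ts
// Disposable micro-benchmark of the sort described above: compare two ways
// of deduplicating an array, print timings, then throw the script away.
function dedupeWithSet(xs: number[]): number[] {
  return [...new Set(xs)];
}

function dedupeWithFilter(xs: number[]): number[] {
  return xs.filter((x, i) => xs.indexOf(x) === i);
}

const data = Array.from({ length: 10_000 }, () =>
  Math.floor(Math.random() * 1_000),
);

for (const [name, fn] of [
  ["Set", dedupeWithSet],
  ["filter+indexOf", dedupeWithFilter],
] as const) {
  const start = performance.now();
  for (let i = 0; i < 100; i++) fn(data);
  console.log(`${name}: ${(performance.now() - start).toFixed(1)} ms`);
}
```

Nothing here is worth committing, which is exactly the point: at a couple of minutes of prompting, this kind of one-off investigation becomes cheap enough to do routinely.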
At an individual level, I think it is for some people. Opus/Sonnet 4.5 can tackle pretty much any ticket I throw at it on a system I've worked on for nearly a decade. Struggles quite a bit with design, but I'm shit at that anyway.
It's much faster for me to just start with an agent, and I often don't have to write a line of code. YMMV.
Sonnet 3.7 wasn't quite at this level, but we're there now. You still have to know what you're doing, mind you, and there's a lot of ceremony in tweaking workflows, much like there has been for editors. It's not much different from instructing juniors.
Maybe he was correct in the extremely literal sense of AI producing more new lines of code than humans, because AI is no doubt very good at producing huge volumes of Stuff very quickly, but how much of that Stuff actually justifies its existence is another question entirely.
Why do people always stop this quote at the breath? The rest of it says that he still thinks they need tech employees.
> .... and in 12 months, we might be in a world where the AI is writing essentially all of the code. But the programmer still needs to specify what are the conditions of what you're doing. What is the overall design decision. How we collaborate with other code that has been written. How do we have some common sense with whether this is a secure design or an insecure design. So as long as there are these small pieces that a programmer has to do, then I think human productivity will actually be enhanced
(He then said it would continue improving, but this was not in the 12 month prediction.)
I actually like Claude Code, but that was always a risky thing to say (actually, I recall him saying their software is 90% AI-produced), considering their CLI tool is literally infested with bugs. (Or at least it was the last time I used it heavily. Maybe they've improved it since.)
Is this why everyone only seems to know the first half of Dario's quote? The guy in that video is commenting on a 40 second clip from twitter, not the original interview.
I'm curious what people think of quotes like these. Obviously it makes an explicit, falsifiable prediction. That prediction is false. There are so many reasons why someone could predict that it would be false. Is it just optimistic marketing speech, or do they really believe it themselves?
Everybody knows that marketing speech is optimistic. Which means if you give realistic estimates, then people are going to assume those are also optimistic.
What languages and frameworks? What is the domain space you're operating in? I use Cursor to help with some tasks, but mainly only use the autocomplete. It's great; no complaints. I just don't ever see being able to turn over anywhere close to 90% of the coding with the stuff we work on.
Hah. It can’t be “I need to spend more time to figure out how to use these tools better.” It is always “I’m just smarter than other people and have a higher standard.”
My stack is React/Express/Drizzle/Postgres/Node/Tailwind. It's built on Hetzner/AWS, which I terraformed with AI.
It's a private repo, and I won't make it open source just to prove it was written with AI, but I'd be happy to share the prompts. You can also visit the site, if you'd like: https://chipscompo.com/
The tools produce mediocre code, usually working in the most technical sense of the word, and most developers are pretty shit at writing code that doesn't suck (myself included).
I think it's safe to say that people singularly focused on the business value of software are going to produce acceptable slop with AI.
I don't remember saying I worked with Next.js, shadcn, Clerk (I don't even know what that one is), Vercel, or even JS/TS, so I'm not sure how you can be right, but I should know better than to feed the trolls.
I suspect you do not know how to use AI for writing code. No offence intended - it is a journey for everyone.
You have to be set up with the right agentic coding tool, agent rules, agent tools (MCP servers), dynamic context acquisition, and a workflow (working with the agent from a plan rather than simply prompting and hoping for the best).
But if you're lazy, don't put in the effort to understand what you're working with and how to approach it with an engineering mindset - you'll be left on the outside complaining and telling people how it's all hype.
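For a sense of scale, one of those agent tools (an MCP server) can be tiny. Here's a minimal sketch following the MCP TypeScript SDK's published quickstart; the `add` tool itself is just a placeholder:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A stdio MCP server exposing one placeholder tool the agent can call.
const server = new McpServer({ name: "demo-tools", version: "1.0.0" });

server.tool(
  "add",
  { a: z.number(), b: z.number() },
  async ({ a, b }) => ({
    content: [{ type: "text", text: String(a + b) }],
  }),
);

// Communicate with the agent over stdin/stdout.
await server.connect(new StdioServerTransport());
```

Registered in the agent's MCP config, the model can invoke `add` like any other tool; real servers expose things like ticket lookups or internal docs search instead.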
Always the same answer: it's the user's fault, never that the AI is being blown out of proportion. Tell me, where are all those great, amazing applications that were coded 95-100% by AI? Where are the great progress, the great new algorithms, the great new innovations hiding?
My stack is React/Express/Drizzle/Postgres/Node/Tailwind. It's built on Hetzner/AWS, which I terraformed with AI. Probably 90-95% of it is AI driven.
It's a private repo, and I won't make it open source just to prove it was written with AI, but I'd be happy to share the prompts. You can also visit the site, if you'd like: https://chipscompo.com/
"For now, I’ll go dogfood my shiny new vibe-coded black box of a programming language on the Advent of Code problem (and as many of the 2025 puzzles as I can), and see what rough edges I can find. I expect them to be equal parts “not implemented yet” and “unexpected interactions of new PL features with the old ones”.
If you’re willing to jump through some Python project dependency hoops, you can try to use FAWK too at your own risk, at Janiczek/fawk on GitHub."
That doesn't sound like some great success. It mostly compiles and doesn't explode. Also I wouldn't call a toy "innovation" or "revolution".
Thanks for this! I've been looking for a good guide to an LLM based workflow, but the modern style of YouTube coding videos really grates on me. I think I might even like this :D
This one is a bit old now, so a number of things have changed (I mostly use Claude Code now, dynamic context (Skills), etc.), but here's a brief TLDR I did early this year: https://www.youtube.com/watch?v=dDSLw-6vR4o
How much time do you think you saved versus writing it yourself, factoring in the time you spent setting up your AI tooling, writing prompts, context, etc.?
1. I didn't say it was the best example; I replied to a comment asking me to "Post a repo", so I posted a repo.
2. Straw man argument. I was asked for a repo, I posted a repo, and clearly you didn't look at the code, as it's not an "AI code generator".
1. I didn’t ask for a repo.
2. Still wasn’t me. Maybe an AI agent can help you check usernames?
3. Sorry, a plugin for an AI code generator, which is even worse of an example.