There is a whole world of expensive virtual sample instruments that can very convincingly replicate an orchestral performance in a DAW. See Spitfire Audio, EastWest, Cinesamples, etc.
The fetch API has been widely available in browsers for a decade now, and in Node since v18. A competent developer could easily whip up a more axios-like library with fetch in a day. You can do all the cool things like interceptors with fetch too.
Yet most developers I work with just use it reflexively. This seems like one of the biggest issues with the npm ecosystem - the complete lack of motivation to write even trivial things yourself.
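To make the interceptors claim concrete, here is a minimal sketch of axios-style request/response interceptors layered on plain fetch. The function names (`addRequestInterceptor`, `request`, etc.) are made up for illustration, not any real library's API:

```javascript
// Hypothetical minimal interceptor layer over fetch (not a real library).
const requestInterceptors = [];
const responseInterceptors = [];

function addRequestInterceptor(fn) { requestInterceptors.push(fn); }
function addResponseInterceptor(fn) { responseInterceptors.push(fn); }

async function request(url, options = {}) {
  // Each request interceptor may return modified options (or nothing).
  for (const fn of requestInterceptors) {
    options = (await fn(url, options)) ?? options;
  }
  let response = await fetch(url, options);
  // Each response interceptor may return a replacement response.
  for (const fn of responseInterceptors) {
    response = (await fn(response)) ?? response;
  }
  return response;
}
```

Usage would look like `addRequestInterceptor((url, opts) => ({ ...opts, headers: { ...opts.headers, Authorization: "Bearer …" } }))`, which is most of what people reach for axios interceptors to do.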
> A competent developer could whip up a more axios-like library with fetch in a day easily.
Then you would have created just an axios clone, AKA re-inventing the wheel. The issue isn't the library itself, but rather the fact that it's popular and provides a large enough attack surface.
You can actually just clone the axios package and use it as-is from your private repo, and you would not have been affected.
I think we're entering an era where "re-inventing the wheel" is actually a completely valid defensive posture. The cost is so low relative to the reduction in risk.
Axios really does a lot of other great things. I would argue that Fetch could’ve easily been Axios-lite. Axios handles errors better, has interceptors, parses JSON for you, etc.
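Those first two gaps are small to close yourself. A sketch of an "axios-lite" GET helper over fetch, which throws on HTTP error statuses and parses JSON (the name `getJSON` is hypothetical):

```javascript
// Sketch of an "axios-lite" helper over fetch (getJSON is a made-up name).
async function getJSON(url, options = {}) {
  const response = await fetch(url, options);
  // Plain fetch only rejects on network failure; unlike axios, HTTP
  // errors such as 404 or 500 resolve normally and must be checked.
  if (!response.ok) {
    throw new Error(`HTTP ${response.status} for ${url}`);
  }
  // axios parses the body for you; with fetch it's one extra await.
  return response.json();
}
```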
The multiple supply chain attacks against NPM packages would, of course, be solved if we simply stopped using third-party libraries.
I guess the point I’m making is that a lot of popular JavaScript libraries were created to address deficiencies in the core api that don’t exist anymore, but we keep using these libraries mostly because of entropy and familiarity.
True. In my case it’s also out of general tiredness and disinterest. A good newsletter that catches up on useful things in the ecosystem might help, otherwise I can’t be bothered anymore to keep up. 5 years ago that still seemed like a good way to spend my time. I wonder if other developers are just as jaded.
Ok, well, have AI write some table-stakes code for you in 10 minutes with 100% test coverage, providing exactly the "table stakes" you're missing without any bells and whistles.
I guess to me this doesn't seem like that big of a deal? I mean, if you have 100 million subscribers, do you really care much about a few million dollars in added cost? I thought the big players like YouTube had already moved to open-source codecs anyway.
What the people excited about the race to the bottom scenario don’t seem to understand is that it doesn’t mean low skill people will suddenly be more employable, it means fewer high skill people will be employable.
No one will be eager to employ “AI-natives” who don’t understand what the LLM is pumping out; they’ll just keep the seasoned engineers who can manage and tame the output properly. Similarly, no one is going to hire a bunch of prompt engineers to replace their accountants; they’ll hire fewer seasoned accountants who can confidently review LLM output.
And those that do have not yet understood what will happen when those seasoned workers retire and there are no juniors or mid-levels who can grow to replace them, because they have been replaced by AI.
> What the people excited about the race to the bottom scenario
I'm not excited about it. I just see it as a logical consequence if what people are predicting comes to pass, and I've thought about how I will deal with that.
They will still be turning out the same problematic code in a few years that they do now, because they aren’t intelligent and won’t be intelligent unless there is a fundamental paradigm shift in how an LLM works.
I use LLMs with best practices to program professionally in an enterprise every day, and even Opus 4.6 still consistently makes some of the dumbest architectural decisions, even with full context, complete access to the codebase and me asking very specific questions that should point it in the right direction.
I keep hearing that “they aren’t intelligent” and that they spit out “crap code”. That’s not been my experience. LLMs have prevented and also caught intricate concurrency issues that would have taken me a long time.
I just went “hmmm, nice” and went on. The problem there is that I didn’t get that sense of accomplishment I crave and I really didn’t learn anything. Those are “me” problems but I think programmers are collectively grappling with this.
They are not intelligent. Full stop. Very sophisticated next-word prediction is not intelligence. LLMs don’t comprehend or understand things. They don’t think or feel. That’s just not how they work.
That said, very sophisticated next word predictors can and sometimes do write good code. It’s amazing some of the things they get right and then can turn around and make the weirdest dumbest mistakes.
It’s a tool. Sometimes it’s the right tool, sometimes it’s not.
I think there’s a pretty good argument to be made that this is discriminatory. Certainly it’s not something I would tolerate as a consumer. I suspect there will be heavy pressure to regulate this practice out of existence if it catches on.
I have an unsubstantiated understanding that AI investment is disproportionately capital-intensive, and at least partially funded by self-fulfilling dreams of employment disappearing. Parts of the economy are already in recession, and they would not mind not being alone in it.
Management has made it very clear that we’re still responsible for the code we push even if the LLM wrote it. So there will be no blaming Claude when things fall apart.