There seems to be a bit of a trend of dev-adjacent open source companies without much of a monetization strategy being bought up by AI giants. Most prominently, Anthropic bought Bun, and OpenAI is buying Astral. So that may be the exit plan too.
Not sure what the business logic is. Maybe they are mostly acquihires. Or the companies just have so much money to throw around that they spray it everywhere. Whatever the reason, if the tools remain open source, the result for devs is probably better open source tools. At least until enshittification begins when the companies run out of funding, but hopefully the tools remain forkable.
The product is being provided to some of the most influential companies. That can definitely work to Anthropic's advantage. (Regardless, I suspect the hype is real.)
Imagine you were making purchasing decisions about which LLM-based coding tool to use.
If one of the possible vendors convinces you that they have a next-gen model so powerful it found 20+ year old bugs in a hardened operating system, that would undoubtedly influence your decision, even if you are only buying the current model.
Yeah someone should’ve told that to Donald (Knuth)
/s
For those who don’t know, Knuth implemented the typesetting system TeX just to make sure his book’s typesetting was correct.
You can pretty much only innovate when you reject the blackbox and decide to make a better one.
Otherwise you’re likely implementing something you could probably get off-the-shelf, which is ok, but also something that you could just… not implement.
You may be a bit overconfident about how clear you will be with your comments.
The “dipshit” doesn’t mess everything up for fun. They don’t understand the comments written by the previous “dipshit” and are thus unable to keep the comments up to date.
Oh really? I'm overconfident in my ability to write and read simple clear text notes?
Here's what I think. I think you guys heard the "self-documenting code" BS and ate it up, and now you're grasping at straws to defend your cargo cult position, inventing these "problems" to justify it.
If you're looking at some code and there's a comment saying something that doesn't make sense to you, maybe that's a clue that you're missing a puzzle piece and should take a step back, maybe talk to some people, to make sure you're not messing things up? Maybe, for a non-dipshit, that comment they don't understand could actually be helpful if they put some effort into it?
Also, just to be clear, I don't think this is a likely occurrence unless someone doesn't know squat about the codebase at all - my comments generally assume very little knowledge. That's their whole purpose: to inform someone (possibly me) coming in without the necessary background knowledge.
It just isn't feasible to include the why of everything in the code itself. And it sure as hell is better to include some info as comments than none at all. Otherwise a bug will often be indistinguishable from a feature.
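To make the point concrete, here's a hypothetical sketch of the kind of "why" comment being argued for (the function, the size cap, and the proxy constraint are all made up for illustration):

```python
# Hypothetical example: a "why" comment that makes intent recoverable.

def chunk_size(total_bytes: int) -> int:
    # Cap chunks at 8 MiB: larger chunks caused timeouts on the legacy
    # upload proxy (invented constraint, purely for illustration).
    # Without this comment, the cap would read as an arbitrary limit,
    # and a later dev couldn't tell whether it's a bug or a feature.
    return min(total_bytes, 8 * 1024 * 1024)

print(chunk_size(20_000_000))  # → 8388608 (capped)
print(chunk_size(100))         # → 100 (untouched)
```

The code alone says *what* the cap is; only the comment says *why* it exists, which is exactly the information that can't be made "self-documenting".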
And I don't think dipshits mess things up for fun. I think they just suck. They're lazy and stupid, as most developers are. If I'm there, I can use reviews etc. to help them suck less; if I'm not, they're free to wreck my codebase with reckless abandon and nothing I do will make any difference. I cannot safeguard my codebase against that, so there's no point in trying. The fact that this is your argument should make you stop and reconsider your position, because it's far-fetched as fuck.
I’ll also note that I’ve worked with developers who didn't like git blame because someone might misinterpret the results. I think some people want excuses for poor work, rather than just working as correctly as possible.
Keep in mind that thoughts similar to yours produce the same output from an LLM. You may be thinking “my thoughts are original” and I would agree, but we won’t be able to see the original parts when it runs through an LLM.
I realized that running one’s own writing through an LLM reduces the amount of information in it. Sort of like washing the nutrients out of a fruit.
When we write about something, inevitably, things about us leak into our writing. How we think about this thing, our value judgments about it, how much we thought about it, whether our perspective and thoughts on it are aged or fresh all come through, even if we don’t intend to. All of this information builds trust, helps the reader empathize and see our point of view.
When our writing passes through an LLM, most of this is simply lost. What comes out is an average expression of those thoughts, with all the sharp edges - the character, the essence - removed.
All writing is opinionated, and when it runs through an LLM, it comes out opinion-less. I noticed that I don’t care for opinion-less writing. Or people.
One exception is the official Python documentation. I recently read some of the new documentation and realized that it reads almost exactly as it did when I first read it in 2010. I couldn’t believe it. Low opinion, high information density. I know for a fact that it has opinions in parts, but they’re shockingly infrequent.
We are way lower. At least this comment allows uncertainty.
The AI doomer literature is entirely from an armchair, with 100% certainty about the outcome, high confidence predictions about its timing. It’s literally fiction.
git ≠ GitHub