> The giant plagiarism machines have already stolen everything. Copyright is dead. Licenses are washed away in clean rooms.
Isn't this what the free software movement wanted? Code available to all?
Yes, code is cheap now. That's the new reality. Your value lies elsewhere.
You can lament the loss of your usefulness as a horse-buggy mechanic, or you can adapt your knowledge and experience to those newfangled automobiles.
> Isn't this what the free software movement wanted? Code available to all?
But this is not that. The current situation is closer to "what's yours is mine and what's mine is mine".
I have been releasing my writings under a Creative Commons Attribution-ShareAlike license, which requires attribution and that anything built upon the material be distributed "under the same license as the original". And yet I have no access to OpenAI's built-upon material (I know for a fact they scrape my posts) while they get my data for free. This is so far legal, but it's probably not ethical and definitely not what the free software movement wanted.
> Isn't this what the free software movement wanted? Code available to all?
Available to all, yes. Not available to the giant corpos while the lone hobbyist still fears getting sued into oblivion. In fact, that's pretty much the opposite of what the free software movement wanted.
The other thing the free software movement wanted was for people to be able to fix bugs in the code they have to use, and AI is pulling us further and further away from that.
No, the free software movement wants the source code of the software you use to be available to you, so you can modify it if you wish. AI does not necessarily do that.
AI makes the entirety of the software engineering profession available to you. All you have to do is ask the right way, and you can build in days what once took months or years.
Decompiling and re-engineering proprietary code has never been easier. You almost don't even need the source code anymore. The object code can be examined by your LLM, and binary patches applied.
Closed source is no longer the moat it was, so keeping the source code to yourself is only going to hurt you as people pass you over for companies that realize this and strive to make it easier for your LLM to figure their systems out.
> Decompiling and re-engineering proprietary code has never been easier. You almost don't even need the source code anymore. The object code can be examined by your LLM, and binary patches applied.
Jesus Christ.
"The people who wanted everyone to have a home should be happy with the invention of the lockpick. You can just find a nice house and open the lock and move in. Ignore the lockpick company charging essentially whatver they want for lockpicks or how they got accesss to everyones keyfob, or the danger of someone breaking into your house"
That is basically your argument. Like, AI is a copyright theft machine, with companies owning the entire stack and being able to take it away at will, and committing crimes like decompiling source code instead of clean-room reverse engineering is not a selling point either...
The open source community wants people to upskill and become tech literate, free solutions that grow organically out of people who care, features the community needs and wants, and people having the freedom to modify that code to fit their own circumstances.
> That is basically your argument. Like, AI is a copyright theft machine, with companies owning the entire stack and being able to take it away at will, and committing crimes like decompiling source code instead of clean-room reverse engineering is not a selling point either...
Stop trying to make this into some abstract argument. It's not an argument anymore. It's already happened.
How one might choose to characterize the reality is irrelevant. A vast (and growing) amount of source code is more open, for better or worse. Granted, this is to the chagrin of subgroups that had been pushing different strategies.
> Stop trying to make this into some abstract argument.
As you mentioned, it's not an abstract argument. It's statements of fact.
> A vast (and growing) amount of source code is more open...
No, not at all.
1) If you honestly believe that major tech companies will permit both copyright- and license-washing of their most important proprietary code simply because someone ran it through an LLM, you're quite the fool. If someone "trained" an LLM on, say, both Windows 11 and ReactOS, and then used that to produce "ReactDoze" while being honest about how it was produced, Microsoft would permanently nail them to the wall.
2) The LLMs that were trained on the entirety of The Internet are very, very much not open. If "Open"AI and Anthropic were making available the input data, the programs and procedures used to process that data, and all the other software, input data, and procedures required to reproduce their work, then one could reasonably entertain the claim that the system produced was open.
This is looking at the current situation through the old lens.
That ship has sailed. The revolution is happening. We live in a new reality now, one where we're still trying to figure out what the rules should even be.
And there will be winners and losers, and copyright and patent law will be modified in an attempt to tame the chaos, with mixed results because of all of the powerful players on both ends.
You can live on the front of it for high risk/reward, or at the back for safety. But either way, you're going to exist in this new reality and you need to decide your risk appetite.
Your set of statements and their surrounding context reminds me very much of the mass grave scene in Kubrick's Vietnam War movie Full Metal Jacket: <https://www.youtube.com/watch?v=670Y3ehmU74>
> Stop trying to make this into some abstract argument. It's not an argument anymore. It's already happened.
Yes, and lockpicks also exist. Promoting the ability to break into homes when people are talking about the housing crisis is a crazy, short-sighted, and frankly embarrassing position to take.
And mischaracterising the people in the open source community as belonging to that ideology is insulting.
> A vast (and growing) amount of source code is more open
You are misusing the word open here, for accessible. Having an open house and breaking into someone's home are not the same thing, even if the door ends up open either way.
> Granted, this is to the chagrin of subgroups that had been pushing different strategies.
Taking unethical shortcuts that ultimately lead to an even worse outcome is not a cause of chagrin, it's a cause of deep and utter terror and embarrassment.
Wanting people to own their skills and tech stack and be informed, smart, and engaged is a goal that "just ask the robot you don't control to break into a corporate codebase and copy it" does not even remotely help advance.
This argument commits the same fallacy as the argument against piracy: copying is not stealing, because the original still remains. A lockpicked and squatted house means someone else does not have that house; it's a zero-sum game, which freely copyable information is not.
That only works if you assume that the exclusive value is in the object and not the labour.
The reproduction of the object is essentially free on the internet, but the labour to produce it isn't.
If I spent 3 years making my codebase and you copy-paste the git repo, yeah, your access to the information is not going to replace the original. But your labour cost is 0, and you can undercut the 3 years of expense, loans, or debt I incurred to produce it.
Btw, the FBI murdered Aaron Swartz for attempting to open access to research papers, while Mark Zuckerberg admitted to stealing those same papers through LibGen, showed off the results in Llama, and his stock price went up.
I think the piracy argument falls apart when the class warfare and two-tier justice system are openly weaponised against open access.
Labor doesn't have value inherently; it's about what is produced by said labor. These days even the labor to create something falls toward zero via LLMs, so I'm not sure the point is valid anymore.
Almost nothing does. Value is largely subjective. You deciding it's irrelevant to you is as inherently worthless as the Marxist ideal that labour is the measure of all value in society.
The non-subjective part is that there is a necessary amount of work/energy required to create things, and that the created things can be consumed/used by others.
LLMs do not reduce labor to 0: the energy to power the GPUs, the labor to create the GPUs, and the labor to train the models are all there, as well as all the labor to produce the original material the LLM is trained on. Even if the subjective experience of someone consuming the created thing is the same.
I can say with a pretty high confidence level that few people in the free software movement want the closed-off black boxes these companies are locking away.
They're not free in any sense of the word, from price to openness of the models. Would OpenAI cry if every bit of their models were wide open for us to use however we see fit? If so, then they're not free, again, by any definition of the word.
Because those "governors" need to first ensure that their grids and home electrical systems are equipped to handle a solar system pumping power into the house's electrical system.
You speak as though that were a bad thing. I'd rather not have people accidentally burning their houses down.
Once it's approved for an area, you go to your local shop, buy an approved PV system, and plug it in. No fuss, no worries, and your insurer must cover it.
I keep hearing this from time to time, and hey, if taking notes by hand helps you, go for it. More power to you.
But I'm not you. What works for you may not be a panacea. I work best with notes in a text editor, in markdown. I like to be able to move thoughts around, rearrange them, refine them. That also makes me remember them better. Handwritten notes are not conducive to that.
The ads are annoying, and I'm glad Microsoft will stop doing it.
One thing I do like, however, is how agents add themselves as co-authors in commit messages. Having a signal for which commits are by hand and which are by agent is very useful, both for you and in aggregate (to see how well you are wielding AI, and the quality of the code being generated).
Even when I edit the commit message, I still leave in the Claude co-author note.
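For reference, the trailer looks something like this (the subject line here is hypothetical; the exact model name varies by version):

    Fix race condition in session cleanup

    Co-Authored-By: Claude <noreply@anthropic.com>

Since it's an ordinary git trailer, you can later search history for agent-assisted commits with `git log --grep='Co-Authored-By: Claude'`.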
AI coding is a new skill that we're all still figuring out, so this will help us develop best practices for generating quality code.
I don't quite see the benefit of this, personally.
Whoever is submitting the code is still responsible for it, why would the reviewer care if you wrote it with your fingers or if an LLM wrote (parts of) it? The quality+understanding bar shouldn't change just because "oh idk claude wrote this part". You don't get extra leeway just because you saved your own time writing the code - that fact doesn't benefit me/the project in any way.
Likewise, leaving AI attribution in will probably have the opposite effect as well, where a perfectly good few lines of code get rejected because some reviewer saw it was Claude and assumed it was slop. Neither of these cases seems helpful to anyone (obviously it's not like AI can't write a single usable line of code).
The code is either good or it isn't, and you either understand it or you don't. Whether you or claude wrote it is immaterial.
You're quite right that the quality of the code is all that matters in a PR. My point is more historical.
AI is a very new tool, and as such the quality of the code it produces depends both on the quality of the tool, and how you've wielded it.
I want to be able to track how well I've been using the tool, to see what techniques produce better results, to see if I'm getting better. There's a lot more to AI coding than just the prompts, as we're quickly discovering.
The tools are still in their infancy, but it would likely be a series of metrics such as complexity, repetition, test coverage issues (such as tests that cover nothing meaningful), architectural issues that remain unfixed far beyond the point where it would have been more beneficial to refactor, superfluous instructions and comments, etc.
As a reviewer, I do care. Sure, people should be reviewing Claude-generated code, but they aren't scrutinizing it.
Claude-generated code is sufficient—it works, it's decent quality—but it still isn't the same as human written code. It's just minor things, like redundant comments that waste context down the road, tests that don't test what they claim to test, or React components that reimplement everything from scratch because Claude isn't aware of existing component libraries' documentation.
But more importantly, I expect humans to be able to stand by their code, and at times defend against my review. But today's agents continue to sycophantically treat review comments like prompts. I once jokingly commented on a line using a \u escape sequence to encode an em dash, how LLMs would do anything to sneak them in, and the LLM proceeded to replace all — with --. Plus, agents do not benefit from general coding advice in reviews.
Ultimately, at least with today's Claude, I would change my review style for a human vs an agent.
I agree with a lot of this, but that's kind of my point: if all these things (poor tests, non-DRY code, redundant comments, etc.) were true about a piece of purely human-written code, then I would reject it just the same, so what's the difference? Likewise, if Claude solely produced some really clean, concise, rigorously thought-through and tested piece of code with a human backer, then why wouldn't I take it?
As you allude to (and I agree), any non-trivial quantity of code, if SOLELY written by Claude, will probably be low quality, but this is apparent whether I know it's AI beforehand or not.
I am admittedly coming at this as much more of an AI-hater than many, but I still don't really get why I'd care about how-much or how-little you used AI as a standalone metric.
The people who are using AI "well" are the ones producing code where you'd never even guess it involved AI. I'm sure there are Linux kernel maintainers using Claude here and there; it's not like they expect to have their patches merged because "oh well, I just used Claude here, don't worry about that part".
(But also yes, of course I'm not going to talk to Claude about your PR, I will only talk to you, the human contributor, and if you don't know what's up with the PR then into the trash it goes!)
Knowing if an AI contributed is good data. The human is still responsible for the content of the PR.
While code is either good or it isn't, evaluating it is a bit of a subjective exercise. We like to think we are infallible code-evaluating machines. But the truth is, we make mistakes. And we also take shortcuts. So knowing who made the commit, and whether they used AI, can help us evaluate the code more effectively.
It’s not about who wrote it, but about who is submitting it. The LLM co-author indicates that the agent submitted it, which is a contraindication of there being a human taking responsibility for it.
That being said, it also matters who wrote it, because it’s more likely for LLMs to write code that looks like quality code but is wrong, than the same is for humans.
> Whoever is submitting the code is still responsible for it, why would the reviewer care if you wrote it with your fingers or if an LLM wrote (parts of) it?
Maybe one day we can say that, but currently, it matters a lot to a lot of people for many reasons.
> Likewise, leaving AI attribution in will probably have the opposite effect as well, where a perfectly good few lines of code get rejected because some reviewer saw it was Claude and assumed it was slop. Neither of these cases seems helpful to anyone (obviously it's not like AI can't write a single usable line of code).
That was my point here, it is a false signal in both directions.
According to you it’s all false. I don’t agree, and it certainly shouldn’t just be taken as a given.
For instance, I would want any AI-generated video showing real people to have a disclaimer. The same way TV ads disclose whether the people giving testimonials are actors. That is not only not false, but is actually a useful signal that helps prevent overly deceptive practices.
I don't see what the "deceptive practices" would be, though - you can just look at the code being submitted; there isn't really the same ground truth involved as with "did the thing in this video actually happen?" or "do these commercial people actually think this?"
If I have a block of human code and an identical block of LLM code, then what's the difference? Especially given that in reality it is trivial to obfuscate whether it's human or LLM (in fact, usually you have to go out of your way to identify it as such).
I am an AI hater but I'm just being realistic and practical here, I'm not sure how else to approach all this.
It tells you what average quality to expect, and to look out for beginner-level mistakes and straight-up lying accompanied by otherwise fine bits of code. Not sure why you wouldn't want that context.
> Whoever is submitting the code is still responsible for it, why would the reviewer care if you wrote it with your fingers or if an LLM wrote (parts of) it?
The problem is that submitters often do not feel responsible for it anymore. They will just feed review comments back to the LLM and let the LLM answer and make fixes.
This is disrespectful of the maintainers' time. If the submitter is just vibe/slop coding without any effort on their part, it's less work to do it myself directly using an LLM than having to instruct someone else's LLM through GitHub PR comments.
In this case it's better to just submit an issue and let me just implement it myself (with or without an LLM).
If the PR has a _co-authored by <LLM>_ signal, then I don't have to spend time giving detailed feedback under the assumption that I am helping another human.
Yes. I don't mind AI submissions to my hobby projects as long as there's a person behind it. Only fully automated slop I mind. Before AI I used to get all sorts of PRs from people changing a comment or a line of documentation just so they can get more green squares on their GitHub summary. Plus ça change....
A line at the bottom of PRs, reports, etc that says "authored with the help of Copilot" is fine.
So, philosophically speaking, I agree with this approach. But I did read that there was some speculation regarding the future legal implications of signalling that an AI wrote/cowrote a commit. I know Anthropic's been pretty clear that we own the generated code, but if a copyright lawsuit goes sideways (since these were all built with pirated data and licensed code) — does that open you or your company up to litigation risk in the future?
And selfishly: I'd rather not run into a scenario where my boss pulls up GitHub, sees Claude credited for hundreds of commits, and impulsively decides that perhaps Claude's doing the real work here and that we could downsize our dev team or replace it with cheaper, younger developers.
Let your employer's lawyers worry about that. If they say not to use LLMs, then you should abide by that or find a new job. But if they don't care, then why should you?
As for hobby projects, I strongly encourage you to not care. You aren't going to lawyer up to sue anybody, nor is anybody going to sue you, so YOLO. Do whatever satisfies you.
    $ yoloai apply bugfix
    Target: /home/ks/tmp/b64
    Commits to apply (1):
      9db260b33bcd Fix bit mask in base64 encoding
    Apply to /home/ks/tmp/b64? [y/N] y
    1 commit(s) applied to /home/ks/tmp/b64
Now the commit Claude made inside the sandbox has been applied to my workdir:
    $ git log
    commit 5b0fc3a237efe8bbc9a9e1a05f9ce45d37d38bfa (HEAD -> main)
    Author: Karl Stenerud <kstenerud@gmail.com>
    Date:   Mon Mar 30 05:28:21 2026 +0000

        Fix bit mask in base64 encoding

        Corrected the bit mask for the first character extraction from 0x3E
        to 0x3F to properly extract all 6 bits.

        Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

    commit 31e12b62b0c3179f3399521d7c4326a8f6130721 (tag: init)
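As an aside, the fix itself is easy to sanity-check: the first base64 output character encodes the top six bits of the first input byte, so the mask needs all six bits set (0x3F), while 0x3E silently zeroes the lowest bit. A tiny C illustration of the difference (not the project's actual code):

    #include <stdio.h>

    /* The standard base64 alphabet. */
    static const char b64_table[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

    int main(void) {
        unsigned char in0 = 0xFD;                  /* example input byte 0b11111101 */
        char right = b64_table[(in0 >> 2) & 0x3F]; /* index 63 -> '/'               */
        char wrong = b64_table[(in0 >> 2) & 0x3E]; /* index 62 -> '+', low bit lost */
        printf("correct: %c, buggy: %c\n", right, wrong);
        return 0;
    }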
The important thing here is that Claude was not able to reach anything on the network except its own API, and nothing it did ever touched my work dir until I was happy with the changes and applied them.
It also doesn't get access to my credentials, so it couldn't push even if it did have network access.
This is the problem yoloAI (see below comment) is built around. The merge step is `yoloai diff` / `yoloai apply`: the agent works against a copy of your project inside the container, you review the diff, you decide what lands.
jai's -D flag captures the right data; the missing piece is surfacing it ergonomically. yoloAI uses git for the diff/apply so it already feels natural to a dev.
One thing that's not fully solved yet: your point about .git/hooks and .venv being write vectors even within the project dir. They're filtered from the diff surface but the agent can still write them during the session. A read-only flag for those paths (what you're considering adding to jai) would be a cleaner fix.
I've already shipped this and use it myself every day. I'm the author of yoloAI (https://github.com/kstenerud/yoloai), which is built around exactly this model.
The agent runs inside a Docker container or containerd vm (or seatbelt container or Tart vm on mac), against a full copy of your project directory. When it's done, `yoloai diff` gives you a unified diff of everything it changed. `yoloai apply` lands it. `yoloai reset` throws it away so you can make the agent try again. The copy lives in the sandbox, so your working tree is untouched until you explicitly say so.
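A typical session, using those commands (the sandbox name is arbitrary):

    yoloai new mybugfix . -a   # sandbox seeded with a copy of the current dir
    # ...tell the agent to fix the broken thing...
    yoloai diff mybugfix       # review a unified diff of everything it changed
    yoloai apply mybugfix      # land it in your real working tree
    yoloai reset mybugfix      # or discard the attempt and let it try again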
The merge step turned out to be straightforward: just use git under the hood. The harder parts were: (a) making it fast enough that the copy doesn't add annoying startup overhead, (b) handling the .pyc/.venv/.git/hooks concern you raised (they're excluded from the diff surface by default), and (c) credential injection so the agent can actually reach its API without you mounting your whole home dir.
Leveraging existing tech is where it's at. Each does one thing and does it well. Network isolation is done via iptables in Docker, for example.
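For the curious, the shape of that isolation is roughly this (a minimal sketch of the idea, not yoloAI's actual rules; it assumes rules applied inside the container's network namespace, with the API hostname resolved at rule-insertion time):

    # Default-deny egress; allow only DNS and the agent's own API.
    iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
    iptables -A OUTPUT -p tcp -d api.anthropic.com --dport 443 -j ACCEPT
    iptables -A OUTPUT -j DROP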
Still early/beta but it's working. Happy to compare notes if you're building something similar.
Iran has been preparing for this war for 40 years. So has Israel. They will engage in a battle for supremacy over the Middle East. Both want the USA knocked out so that the Americans can't use their influence there anymore (both consider the USA a nuisance).
As soon as ground troops land in Iran, it's over for the USA. As it is, oil and goods shipping via the Persian Gulf and the Red Sea will be controlled by Iran for a very long time to come. All Iran has to do is withstand the pummeling, which it very likely will do. And they'll get plenty of support from China, since this plays into the South China Sea plan quite nicely as the USA moves carrier after carrier out of Asia.
It's relative. We're in a pretty bad spot relative to where we were before the attack, and so is the world economy.
The Iranian regime is doing much better so far, relative to where they should be after a joint military attack from the US/Israel and maybe even relative to where they were just a few months ago.
The previous Ayatollah was 86 and had multiple bouts of pancreatic cancer. He was on death's door, Iran was destabilizing with bouts of protest and repression, the regime itself suffered major military blows, and a potentially rocky and fractured transition was imminent.
Thanks to the war, the regime survived a transition and seems consolidated around the son of the former Ayatollah, whose entire family was killed by our strikes, and the US seems largely impotent as Iran chokes off a large portion of the world's oil supply and strikes at energy assets in the Middle East.
1: Protecting against bad things (prompt injections, overeager agents, etc)
2: Containing the blast radius (preventing agents from even reaching sensitive things)
The companies building the agents make a best-effort attempt against #1 (guardrails, permissions, etc), and nothing against #2. It's why I use https://github.com/kstenerud/yoloai for everything now.
- yoloai new mybugfix . -a # start a new sandbox using a copy of CWD as its workdir
- # tell the agent to fix the broken thing
- yoloai diff mybugfix # See a unified diff of what it did with its copy of the workdir
- yoloai apply mybugfix # apply specific git commits it made to the real workdir, or the whole diff - your choice
- yoloai destroy mybugfix
The diff/apply makes sure that the agent has NO write access to ANYTHING sensitive, INCLUDING your workdir. You decide what gets applied AFTER you review what crazy shit it did in its sandbox copy of your workdir.
Sorry. At this point it's just a meme: people give LLMs open access to the internet, literally all their passwords and tokens, and then they're actually surprised when something bad happens. "But I ran it in Docker!"
Even if Docker sandbox escapes didn't exist, it's just chef's kiss.