Around 70% of security vulnerabilities are about memory safety and only exist because software is written in C and C++. Because most vulnerabilities are in newly written code, Google has found that simply starting writing new code in Rust (rather than trying to rewrite existing codebases) quickly brings the number of found vulnerabilities down drastically.
It comes from all his reporters being teenagers in developing countries with older models, and people using SOTA models who know how to qualify a potential vulnerability having much bigger fish to fry than curl. curl is a meaningful target, but it's in nobody's top tier.
You can't just write Rust in a part of the codebase that's all C/C++. Tools for checking the newly written C/C++ code for issues will still be valuable for a very long time.
You actually can? A Rust-written function that exports a C ABI and calls C ABI functions interops just fine with C. Of course that's all unsafe (unless you're doing pure value-based programming and not calling any foreign code), so you don't get much of a safety gain at the single-function level.
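A minimal sketch of that interop (the function name `rs_add` is illustrative, not from any real codebase): a Rust function exported with the C ABI that C code can declare as `int32_t rs_add(int32_t, int32_t);` and link against directly. Calling back into C the other way, via an `extern "C"` block, is where `unsafe` appears and where the per-function safety gain ends.

```rust
// Exported with an unmangled name and the C calling convention, so a
// C translation unit can call it like any other C function.
#[no_mangle]
pub extern "C" fn rs_add(a: i32, b: i32) -> i32 {
    // The body is ordinary safe Rust; only the boundary uses the C ABI.
    // Wrapping arithmetic avoids the debug-mode overflow panic that a
    // C caller would not expect.
    a.wrapping_add(b)
}
```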
No, this is false. For Rust codebases that aren't doing high-performance data structures, C interop, or bare-metal stuff, it's typical to write no unsafe code at all. I'm not sure who told you otherwise, but they have no idea what they're talking about.
It's the classic "misunderstanding" that UB or buggy unsafe code could in theory corrupt any part of your running application (which is technically true), and interpreting this to mean that any codebase with at least one instance of UB / buggy unsafe code (which is ~100% of codebases) is safety-wise equivalent to a codebase with zero safety checks - as if all the safety checks were complete lies and therefore pointless time-wasters.
Which obviously isn't how it works in practice, just like how C doesn't delete all the files on your computer when your program contains any form of signed integer overflow, even though it technically could as that is totally allowed according to the language spec.
If you're talking about Rust codebases, I'm pretty sure that writing sound unsafe code is at least feasible. It's not easy, and it should be avoided if at all possible, but saying that 100% of those codebases are unsound is pessimistic.
One feasible approach is to use "storytelling" as described here: https://www.ralfj.de/blog/2026/03/13/inline-asm.html That's talking about inline assembly, but in principle any other unsafe feature could be similarly modeled.
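In everyday Rust, the most common concrete form of that modeling is a documented safety contract: a `# Safety` doc section stating the invariant, plus a `// SAFETY:` comment at each unsafe block explaining why the invariant holds. A small sketch (the function name is illustrative):

```rust
/// Returns the element at `idx` without a bounds check.
///
/// # Safety
/// The caller must guarantee that `idx < slice.len()`. This doc
/// section is the "story" that every unsafe caller has to continue.
pub unsafe fn at_unchecked(slice: &[i32], idx: usize) -> i32 {
    // SAFETY: the caller upholds `idx < slice.len()` per the contract
    // above, so `get_unchecked` cannot read out of bounds.
    unsafe { *slice.get_unchecked(idx) }
}
```

The point isn't that the comments make the code sound by themselves; it's that the invariant is written down where every reviewer and caller can check their side of the bargain.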
It's not impossible, it is just highly unlikely that you'll never write a single safety-related bug - especially in nontrivial applications and in mixed C-plus-Rust codebases. For every single bug-free codebase there will be thousands containing undiscovered subtle-but-usually-harmless bugs.
After all, if humans were able to routinely write bug-free code, why even worry about unsoundness and UB in C? Surely having developers write safe C code would be easier than trying to get a massive ecosystem to adopt a completely new and not exactly trivial programming language?
Rust is not really "completely new" for a good C/C++ coder, it just cleans up the syntax a bit (for easier machine-parsing) and focuses on enforcing the guidelines you need to write safe code. This actually explains much of its success. The fact that this also makes it a nice enough high-level language for the Python/Ruby/JavaScript etc. crowd is a bit of a happy accident, not something that's inherent to it.
Good developers only write unsafe rust when there is good reason to. There are a lot of bad developers that add unsafe anytime they don't understand a Rust error, and then don't take it out when that doesn't fix the problem (hopefully just a minority, but I've seen it).
The joke is, more or less, you can reduce everyone into two piles. But that's almost assuredly wrong.
It's very very hard to have what most people would call "autistic" levels of rationality in discourse in this world. But if you hold yourself to high standards, you quickly compute the logical argument OP is making (people who were excited were gullible marks etc. etc.) and realize it's wrong in several different ways (happy to explicate if unclear).
This is, of course, very easy if you were A) excited and B) didn't think it'd come to pass. Also observing that A does not imply B and vice versa is the minimally sufficient observation to rule out OP's comment being rational*
* n.b. "rational" means something akin to "not affected by a psychoactive disorder" in everyday discourse. In philosophy / logic class, it means the statements and conclusion are internally coherent. "The moon is made of cheese because it is yellow" is rational; "The moon is made of cheese because Teddy Roosevelt likes cheese" is irrational. "The moon is made of cheese because the Pope likes cheese" is rational with the implied premises "God controls all, and he loves the Pope."
In some jurisdictions (e.g. the UK) the law is already clear that you own the copyright. In the US it is almost certain that you will be the author. The reports of cases saying otherwise have been misreported - the courts found the AI could not own the copyright.
It's beyond obvious that a LLM cannot have copyright, any more than a cat or a rock can. The question is whether anyone has or if whatever content generated by a LLM simply does not constitute a work and is thus outside the entire copyright law. As far as I can see, it depends on the extent of the user's creative effort in controlling the LLM's output.
It may be obvious to you, but it has led to at least one protracted court case in the US: Thaler v. Perlmutter.
> The question is whether anyone has or if whatever content generated by a LLM simply does not constitute a work and is thus outside the entire copyright law.
It is going to vary with the jurisdiction's copyright law. In the UK the question of computer generated works is addressed by copyright law, and the answer is "the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken".
It's also not a simple case of LLM generated vs human authored. How much work did the human do? What creative input was there? How detailed were the prompts?
In jurisdictions where there are doubts about the question, I think code is a tricky one. If the argument is that prompts are just instructions to generate code, and therefore the code is not covered by copyright, then you could also argue that code is instructions to a compiler to generate code and the resulting binary is not covered by copyright.
The binary should be considered a "derivative work". Only the original copyright owner has the exclusive right to create or authorize derivative works. That means you are not allowed to compile code unless you have a license to do so. Right?
Yes, so is LLM generated code a derivative work of the prompts? Does it matter how detailed the prompts are? How much the code conforms to what is already written (e.g. writing tests)?
It looks like it will be decided on a case by case basis.
It will also differ between countries, so if you are distributing software internationally, that will be a constraint on treating the code as not copyrightable.
It is not "beyond obvious" that a cat cannot have copyright, given the lawsuit about a monkey holding copyright [1], and the way PETA tried to use that case as precedent to establish that any animal can hold copyright.
Thaler v. Perlmutter: The D.C. Circuit Court affirmed in March 2025 that the Copyright Act requires works to be authored "in the first instance by a human being," a ruling the Supreme Court left intact by declining to hear the case in 2026.
Authors and inventors, courts have ruled, means people. Only people. A monkey taking a selfie with your camera doesn't mean you own a copyright. An AI generating code with your computer is likewise, devoid of any copyright protection.
The ruling says that the LLM cannot be the author. It does not say that the human being using the LLM cannot be the author. The ruling was very clear that it did not address whether a human being was the copyright holder because Thaler waived that argument.
The position with a monkey using your camera is similar, and you may or may not hold the copyright depending on what you did - was it pure accident, or did you set things up? Opinions on the well known case are mixed: https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...
Where wildlife photographers deliberately set up a shot to be triggered automatically (e.g. by a bird flying through the focus) they do hold the copyright.
AI generated code has no copyright. And if it DID somehow have copyright, it wouldn't be yours. It would belong to the code it was "trained" on. The code it algorithmically copied. You're trying to have your cake, and eat it too. You could maybe claim your prompts are copyrighted, but that's not what leaked. The AI generated code leaked.
The linked document labeled "Part 2: Copyrightability", section V. "Conclusions" states the following:
> the Copyright Office concludes that existing legal doctrines are adequate and appropriate to resolve questions of copyrightability. Copyright law has long adapted to new technology and can enable case-by-case determinations as to whether AI-generated outputs reflect sufficient human contribution to warrant copyright protection. As described above, in many circumstances these outputs will be copyrightable in whole or in part—where AI is used as a tool, and where a human has been able to determine the expressive elements they contain. Prompts alone, however, at this stage are unlikely to satisfy those requirements.
So the TL;DR basically implies that pure slop, within the current guidelines outlined in the conclusions, is NOT copyrightable. However, for collaboration with an AI, copyrightability is determined on a case-by-case basis. I will preface this all with the standard IANAL, I could be wrong etc., but with the concluding language calling slop only "unlikely" to be copyrightable, it sounds less cut and dried than you imply.
That's typical of this site. I hand you a huge volume of evidence explaining why AI generated work cannot be copyrighted. You search for one scrap of text that seems to support your position even when it does not.
You have no idea how bad this leak is for Anthropic because with the copyright office, you have a DUTY TO DISCLOSE any AI generated work, and it is fully RETROACTIVE. And what is part of this leak? undercover.ts. https://archive.is/S1bKY Where Claude is specifically instructed to HIDE DISCLOSURE of AI generated work.
That's grounds for the copyright office and courts to reject ANY copyright they MIGHT have had a right to. It is one of the WORST things they could have done with regard to copyright.
I merely read the PDF articles you linked, then posted, verbatim, the primary relevant section I could find therein. Nowhere does it say that works involving humans in collaboration with AI can't be copyrighted. The conclusions linked merely state that copyright claims involving AI will be decided on a case by case basis. They MAY reject your claim, they may not. This is all new territory so it will get ironed out in time, however I don't think we've reached full legal consensus on the topic, even when limiting our scope to just US copyright law.
I'm interpreting your most recent reply to me as an implication that I'm taking the conclusions you yourself linked out of context. I'm trying to give the benefit of the doubt here, but the 3 linked PDF documents aren't "a mountain of evidence" supporting your argument. Maybe I missed something in one of those documents (very possible), but the conclusions are not how you imply.
Whether or not a specific git commit message correctly cites Claude usage may further muddy the waters more than IP lawyers are comfortable with at this time (and therefore add inherent risk to current and future copyright claims on said works), but those waters were far from crystal clear in the first place.
Again, IANAL, but from my limited layman perspective it does not appear the copyright office plans to, at this moment in time, concisely reject AI collaborated works from copyright.
Your most recent link (Finnegan) is from an IP lawyer consortium that says it's better to include attribution and disclosure of AI to avoid current and future claim rejections. Sounds like basic cover-your-ass lawyer speak, but I could be wrong.
Full disclosure: I primarily use AI (or rather agentic teams) as N sets of new eyeballs on the current problem at hand, to help debug or bounce ideas off of, so I don't really have much skin in this particular game involving direct code contributions spit out by LLMs. Those that have any risk aversion, should probably proceed with caution. I just find the upending of copyright (and many other) norms by GenAI morbidly fascinating.
Currently, the US copyright application process has an AI disclosure requirement for the determination of applicability of submitted works for protections under US copyright law.
The copyright office still holds that human authorship is a core tenet of copyrightability, however, whether or not a submission meets the "de minimis" amount of AI-generated material to uphold a copyright claim is still being decided and refined by the courts and at the moment the distinction appears to fall on whether the AI was used "as a tool" or as "an author itself", with the former covered in certain cases and the latter not.
The registration process makes it clear that failure to disclose that a submission was in large part authored by a contractor or AI can result in rejection of the copyright claim, now or retroactively on discovery.
You do not apply for copyright. In the US you can, optionally, register a copyright. You do not have to, but it can increase how much you get if you go to court.
I do not know whether any other country even has copyright registration.
Your main point that this is something the courts (or new legislation) will decide is, of course, correct. I am inclined to think this is only a problem for people who are vibe coding. The moment a human contributes to the code that bit is definitely covered by copyright, and unless you can clearly separate out human and AI contributed bits saying the AI written bits are not covered is not going to make a practical difference.
My (limited) understanding was that without formal registration you cannot file any infringement suits against any works protected by said copyright. Then what's the point of the copyright other than getting to use that fancy 'c' superscript?
That comment is spot on. Claude adding a co-author to a commit is documentation to put a clear line between code you wrote and code claude generated which does not qualify for copyright protection.
The damning thing about this leak is the inclusion of undercover.ts. That means Anthropic has now been caught red handed distributing a tool designed to circumvent copyright law.
Anthropic could at least make a compelling case for the copyright.
It becomes legally challenging with regard to ownership if I ever use work equipment for a personal project. If it later takes off they could very well try to claim ownership in its entirety simply because I ran a test once (yes, there's a whole Silicon Valley season about it).
I don't know if they'd win, but Anthropic absolutely would be able to claim the creation of that code was done on their hardware. Obviously we aren't employees of theirs, though we are customers that very likely never read what we agreed to in a signup flow.
Using work equipment for a personal project only matters because you signed a contract giving all of your IP to your employer for anything you did with (or sometimes without) your employer's equipment.
Anthropic's user agreement does not have a similar agreement.
My point was that they could make a compelling case though, not that they would win.
I don't know of any precedent where the code was literally generated on someone else's system. It's an open question whether that implies any legal right to the work, and I could pretty easily see a court accepting the case.
Who owns the copyright for something not written by anybody, you ask? Is it the man who pays to have it written, or the owner of the machine that does the writing? But it is neither. Nobody owns the copyright because nobody has written it.
I'd think the conclusion you should draw is not that "even the famous experiments were not valid, so nothing in psychology is" but rather "the validity of an experiment does not correlate with how famous it is".
A direct conclusion. The insight I'll draw from that is that academia gives voice to the results the current zeitgeist finds interesting and believable without properly verifying the evidence.
Famous experiments are not chosen by academia. They are chosen by non-academics. What you usually find is academics being much more reserved and more critical of these than journalists, bloggers, or random commenters on HN.
I guess my point is that I don't need to think for long before I find an example justifying why physics is a serious field.
What would be the equivalent of Newton's laws in psychology? Does such a thing exist? Or does the whole field just prove how complicated human beings are by being incapable of proving anything else (which in itself would be an interesting result, don't get me wrong)?
Physics is an exact, quantitative, natural science. Psychology is neither exact, quantitative (usually), nor a natural science. They are not comparable. But like many other fields of study that are not hard sciences, psychology can still be useful and valuable. (Note the "can". Given the replication crisis, how much of psychology actually is I cannot say.)
But do we have an example of something that is provably valuable? I am genuinely interested: after reading this article, I realised that there is nothing I can attribute to psychology off the top of my head.
Indeed, AFAIK neural networks have caused at least two AI winters before finally breaking through thanks to a few good new ideas and the fact that the needs of computer games incidentally led to the development of a big industry of specialized, programmable, high-performance dot product calculators.
Speaking of winters; there's a good article about Cyc, a successor to Automated Mathematician. Cyc was the last big project in symbolic AI: https://yuxi.ml/cyc
Makes sense, given that to birds, optimizing for weight is everything. But seeing that the ridiculously smart border collies have a comparatively low density of neurons, clearly there’s more to intelligence than that.
I don’t even know how you’d compare their intelligence, it’s so apples to oranges. Most birds build nests, so they have an advantage in tool use, and that’s what gets them ahead in some tests. On the other hand, has anyone tried to train corvids to herd other birds/animals? I bet BCs will have an advantage there :)
I'm not trying to compare them, just noting an interesting thing in the diagram in the article :) Wrt BC cognition, one notable feat is that some of them are known to have learned the words for hundreds of different objects.
I've not spent significant time with border collies, but I'd say that if I had to rank, multiple species of corvids are smarter than german shepherds (a breed I'm more familiar with).
Problem is, contracts mean different things to different people, and that leads to standard contracts support being a compromise that makes nobody happy. To some people, contracts are something checked at runtime in debug mode and ignored in release mode. To others, they’re something rigorous enough to be usable in formal verification. But the latter essentially requires a completely new C++ dialect for writing contract assertions that has no UB, no side effects, and so on. And that’s still not enough as long as C++ itself is completely underspecified.
This contracts feature was intended to be a minimum viable product that does a little for a few people but, more importantly, provides a framework that the people who want everything else can start building off of.
Was required back in the early 2000s already, but that’s not really what the article is about. It’s talking about derived work created by recreating another artist’s existing work in a different medium. Being able to provide WIP material is only evidence that the technical labor is yours, not that the artistic concept is original.