LLMs have zero metacognition. Don't be fooled - their output is stochastic inference and they have no self-awareness. The best you'll see is an improvised post-hoc rationalization story.
You can turn all these arguments around and prove the same is true for humans. Don't be fooled by dogmatic people who spread the idea that the human mind is the pinnacle of cognition in the universe. Best to leave that to religion.
> The best you'll see is an improvised post-hoc rationalization story.
Funny, because "post-hoc rationalization" is how many neuroscientists think humans operate.
That LLMs are stochastic inference engines is obvious by construction, but you skipped the step where you proved that human thoughts and self-awareness are not reducible to stochastic inference.
Yep. If you ask Claude to create a drop-in replacement for an open-source project that passes 100% of the test suite of the project, it will basically plagiarize the project wholesale, even if you changed some of the requirements.
Of the 45 delegates to the Continental Congress, only two (Benjamin Franklin and another) were known to be deists. One delegate's membership records couldn't be found. The other 42 were active, on-the-books members of their churches.[0]
Jefferson also was a deist, but he wasn't present at the constitutional convention of 1787 (though he earlier authored the Declaration of Independence).
[0] M. E. Bradford. Founding Fathers: Brief Lives of the Framers of the United States Constitution, second edition. University Press of Kansas, 1994.
These "AI rewrite" projects are beginning to grate on me.
Sure, if you have a complete test suite for a library or CLI tool, it is possible to prompt Claude Opus 4.6 such that it creates a 100% passing, "more performant", drop-in replacement. However, if the original package is in its training data, it's very likely to plagiarize the original source.
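For what it's worth, "passes 100% of the test suite" is a weaker certificate than it sounds. A toy sketch (the functions and suite here are invented stand-ins, not any real library): two implementations can agree on every case the suite exercises and still diverge the moment input leaves that set.

```python
def original_slug(text: str) -> str:
    # Stand-in for the original project: collapse whitespace, join with dashes.
    return "-".join(text.lower().split())

def rewrite_slug(text: str) -> str:
    # Stand-in for the AI-generated "drop-in replacement".
    return text.lower().replace(" ", "-")

# The entire "complete" test suite both versions must pass.
SUITE = ["Hello World", "Claude Opus", "drop in replacement"]

def is_drop_in(candidate, reference, cases):
    # A test suite can only certify behavior on the cases it contains.
    return all(candidate(c) == reference(c) for c in cases)

print(is_drop_in(rewrite_slug, original_slug, SUITE))  # True: a certified "drop-in"

# But the suite never exercised repeated spaces, where the two diverge:
print(original_slug("a  b"))  # a-b
print(rewrite_slug("a  b"))   # a--b
```

A 100% pass rate proves equivalence on the suite, not equivalence of the programs.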
Also, who actually wants to use or maintain a large project that no one understands and that has no documented history of thoughtful architectural decisions and the context behind them? No matter how tightly you structure AI work, probabilistic LLM logorrhea cannot reliably adopt high-level decisions and principles, apply them, or update them as new data arrives. If you think otherwise, you're believing an illusion - truly.
A large software project's source code and documentation are the empirical ground-truth encoding of a ton of decisions made by many individuals and teams -- decisions that need to be remembered, understood, and reconsidered in light of new information. AI has no ability to consider these types of decisions and their accompanying context, whether they are past, present, or future -- and is not really able to coherently communicate them in a way that can be trusted to be accurate.
That's why I can't and won't trust software written entirely by AI, beyond small one-off tools, until AI gains two fundamentally new capabilities:
(1) logical reasoning that can weigh tradeoffs and make accountable decisions in terms of ground-truth principles accurately applied to present circumstances, and
(2) the ability to update those ground-truth principles coherently and accurately based on new, experiential information -- this is real "learning".
> Sure, if you have a complete test suite for a library or CLI tool
And this is a huge "if". Having 100% test coverage does not mean you've accounted for every possible edge or corner case. Additionally, there's no guarantee that every bugfix implemented adequate test coverage to ensure the bug doesn't get reintroduced. Finally, there are plenty of poorly written tests out there that increase the test coverage without actually testing anything.
This is why any sort of big rewrite carries some level of risk. Tests certainly help mitigate this risk, but you can never be 100% sure that your big rewrite didn't introduce new problems. This is why code reviews are important, especially if the code was AI generated.
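A tiny illustration of the coverage point above (made-up example, nothing from any real project): a single test can execute every line of a function, so a coverage tool reports 100%, while the edge case that matters in production is never probed.

```python
def mean(xs):
    # One line; any single successful call gives it 100% line coverage.
    return sum(xs) / len(xs)

# This test executes every line -> coverage tools report 100%.
print(mean([2, 4]))  # 3.0

# Yet the suite never probes the empty-list edge case:
# mean([]) raises ZeroDivisionError, and a rewrite that silently
# returned 0 for [] would pass this suite just as well.
```

Coverage measures which lines ran, not which behaviors were specified.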
You raised very good points; however, what you typed undercuts the shell game (as to what "AI" companies are often really doing) and the partial pyramid scheme.
People seem not to realize that AI companies can plagiarize not only someone's original source code, but any source code that people connected to the service are feeding and uploading to it. The shell game is taking Tom's code (with a few changes) and feeding it to Bill (based on the prompts given). Both Tom and Bill are paying fees to the AI company, yet don't realize their code (along with many others') can be spit back at them.
You, the humans, are doing a lot of the work, and many don't realize it, because Tom doesn't realize someone else has worked, or is working, on something similar. The AI company is connecting Tom and Bill without either of them realizing it. If Bill types in the right prompt, the search feeds back that info. It's not the only thing going on or the only way things work, but it is part of it, and it's often not publicly acknowledged.
OpenAI definitely has used input tokens to further train its models, but Anthropic has emphatically stated they do no such thing. I have trusted them so far on that. Are you saying they're lying?
I'm not speaking to any explicit policy or promise to customers that a particular AI company might make, but rather to what is and can be happening that a lot of the public doesn't realize in general. A lot of what is attributed to AI can be the work of humans (including customers) who, in various cases, were or arguably are being ripped off. Speaking of which, there are lots of cases of companies claiming to use or have an AI product while actually just using humans for low pay (though that's not what I was referring to above).
In the Tom and Bill shell game example, where they are being used for their code and to correct code that is sold to other customers, it's not a "now" thing either. Tom, Bill, and the other customers don't have to be exchanging code in real time when that code is being uploaded, saved, and trained on by AI companies. Tom could have worked a month ago on code that was slurped up from Susan. Tom fixed many of the errors in Susan's code, which is now fed to Bill when he inputs the right prompts. Bill thinks the AI is the "genius", but he is unknowingly benefiting from Tom's and Susan's work, review, and corrections. Potentially more devastating to Bill: what he may mistakenly think is private or secret to him alone is fed to other customers for profit.
AI companies are also connecting people in that indirect, black-box way, where those people may not realize they are connected, being fed, and correcting each other's code. Yeah, some may not care where the code comes from or how, only that they can use it for their own purposes. Sure, that's not the whole story, and LLMs are doing some interesting and amazing things, but there is another part of the story that is not being widely acknowledged. It's similar to what has angered so many artists and authors, who feel aggrieved and taken advantage of, as in the many art, song, and book lawsuits.
> Sure, if you have a complete test suite for a library or CLI tool, it is possible to prompt Claude Opus 4.6 such that it creates a 100% passing, "more performant", drop-in replacement.
This was the "validation" used to determine how much progress had been made at a given point in time. Re training data concerns: this was done and shipped as open source (under GPLv2), so there's no abuse of open-source work here, imo.
Re the tradeoffs you highlight: these are absolutely true and fair. I don't expect or want anyone to use ziggit just because it's new. The places where there are performance gains (i.e. internally with `bun install`, or as a better WASM binary alternative) are places where I do have an interest or use myself.
_However_, if I could interest you in one thing: ziggit, when compiled as a release build on my ARM-based Mac, showed 4-10x faster performance than git's CLI for the core workflows I use in my git development.
I suppose "Project X has been used productively by Y developers for Z amount of time" is a decent-enough endorsement (in this case, ziggit used by you).
But after the massive one-off rewrite, what are the chances that (a) humans will want to do any personal effort on reading it, documenting it, understanding it, etc., or that (b) future work by either agents or humans is going to be consistently high-quality?
Beyond a certain level of complexity, "high-quality work" is not just about where a codebase is right now, it's where it's going and how much its maintainers can be trusted to keep it moving in the right direction - a trust that only people with a name, reputation, observable values/commitments, and track record can earn.
Perhaps there's a future where "add a new feature" means "add tests for that feature and re-implement the whole project from scratch in AI".
But that approach would create significant instability. You can't write tests that will cover every possible edge case. That's why good thinking & coding, not good tests, is the foundation of good software.
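To make the instability concrete with a made-up example (nothing here is from a real project): a from-scratch re-implementation can satisfy every written test and still break an unwritten invariant that callers quietly rely on. Here that invariant is sort stability, which no test asserts.

```python
records = [("b", 3), ("a", 2), ("b", 1)]

def original_by_key(rows):
    # Stable sort: ties keep their input order, and callers came to rely on it.
    return sorted(rows, key=lambda r: r[0])

def rewrite_by_key(rows):
    # Also returns rows in key order, so it passes a "keys are sorted" test...
    return sorted(rows, key=lambda r: (r[0], r[1]))

def keys_sorted(rows):
    # The only property the test suite actually checks.
    ks = [r[0] for r in rows]
    return ks == sorted(ks)

print(keys_sorted(original_by_key(records)))   # True
print(keys_sorted(rewrite_by_key(records)))    # True
print(original_by_key(records) == rewrite_by_key(records))  # False: tie order differs
```

Both versions pass the written test; only one preserves the behavior downstream code depends on.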
> the latest wifi drivers for my brand new wifi 7 motherboard were too flaky
A GL.iNet travel router in WiFi-to-ethernet bridge mode is an excellent stopgap until Linux support arrives. It also has the benefits of (a) being easy to take with you on trips for safer/easier internet use (use your home SSID, even auto-VPN your traffic if you want) and (b) letting you plug in other wired-only devices next to the computer.
You're right actually. Exa's MCP server is stateless, just a REST wrapper. A skill + CLI would do the same job with way less context cost. Someone already built that (https://github.com/tobalsan/exa).
Same here. It's not airtight, the agent could technically read the wrapper or env vars, but in practice it doesn't bother. Good enough for most setups.
TI-83 Basic was the first programming language I really felt I had mastered. For a while in my first college CS class I was writing code in TI-Basic and translating it to C++. Drugwars and Bowling were the two really impressive games written in TI-Basic.
But discovering z80 assembly was like magic. It was incredibly exciting to go to my dad's office at the university where he worked (where computers had 2 T1 internet lines) to download and try assembly games when they first burst on the scene (I was in 8th grade). Bill Nagel blew my mind with Turbo Breakout and Snake, and later AShell, Penguins, and grayscale Mario... but the best executed and most replayable games I think were Sqrxz and ZTetris on the TI-86 by Jimmy Mardell. Honorable mention to Galaxian and Falldown. I once downloaded the z80 assembly source for a game, printed it to about an inch of paper, and carried it around for weeks trying to understand it...
It was also really cool for some reason (and would often brick the calculator until you took the batteries out) to type random hex pairs into a program and execute it as assembly. Running "C063" as assembly on a TI-83 (the syntax was the random-looking Send(9PrgmA, where PrgmA is the program where you typed the hex code) would scroll tons of random text in an infinite loop.
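For the curious, a sketch of why those hex pairs did anything at all (plain Python for illustration, not the TI toolchain; the two-entry opcode table is just an excerpt, and I'm going from memory of the z80 opcode map): each byte pair is a z80 opcode, so "C063" is a tiny accidental machine-language program, and a conditional RET with whatever happens to be on the stack is a great recipe for scrolling garbage and battery pulls.

```python
# Two opcodes out of 256; 0xC0 is RET NZ and 0x63 is LD H,E on the z80.
Z80_OPCODES = {0xC0: "ret nz", 0x63: "ld h, e"}

def disassemble(hex_pairs: str):
    # Each byte of the typed-in hex string is decoded as one opcode;
    # anything outside our tiny table is shown as raw data.
    return [Z80_OPCODES.get(b, "db ??") for b in bytes.fromhex(hex_pairs)]

print(disassemble("C063"))  # ['ret nz', 'ld h, e']
```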
Does anyone remember the TI website wars? TI Files (later TI Philes) was "so much more awesome" than "the lowly weak ticalc.org"... but look which one is still around :-)
I'm amazed ticalc.org is still alive and kicking. So much nostalgia. Joltima was what convinced me to learn assembly. So far ahead of its time on the TI-86. Full featured RPG with turn-based combat on a graphing calc. Glad the history is still accessible online.