Any specific sections that stick out? Juxt has had really great articles in the past, even before LLMs, and I know for a fact they don't lack the expertise or knowledge to write for themselves if they wanted. While I haven't completely read this article yet, it'd surprise me if they just let LLMs write articles for them today.
That's just writing. I frequently write like that.
This insistence that certain stylistic patterns are "tell-tale" signs that an article was written by AI makes no sense, particularly when you consider that whatever stylistic tics an LLM may possess are a result of it being trained on human writing.
My hunch that this is substantially LLM-generated is based on more than that.
In my head it's like a Bayesian classifier: you look at all the sentences and judge whether each is more or less likely to be LLM- or human-generated. Then you add prior information, like the fact that the author did the research using Claude, which increases the likelihood that they also used Claude for writing.
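A minimal sketch of that mental model in Python, purely illustrative: the per-sentence scores and the prior are made-up numbers, not the output of any real detector.

    import math

    def p_llm(sentence_log_odds, prior=0.5):
        # Start from the prior belief, expressed in log-odds.
        log_odds = math.log(prior / (1 - prior))
        # Naive-Bayes style: each sentence contributes its own
        # log-likelihood ratio of LLM-written vs human-written.
        log_odds += sum(sentence_log_odds)
        # Map back from log-odds to a probability.
        return 1 / (1 + math.exp(-log_odds))

    # Hypothetical per-sentence scores: positive = "sounds like an LLM".
    scores = [0.4, -0.1, 0.7, 0.2]
    # Prior nudged above 0.5 because the research itself used Claude.
    print(p_llm(scores, prior=0.6))  # ~0.83

No single sentence is damning on its own; the judgement comes from the sum.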
Maybe your detector just isn't so sensitive (yet), or maybe I'm wrong, but I have pretty high confidence that at least 10% of the sentences were LLM-generated.
Yes, the stylistic patterns exist in human writing, but RLHF has increased their frequency. Also, LLM writing has a certain monotony that human writing often lacks. Which is not surprising: the machine generates more or less the most likely text in an algorithmic manner. Humans don't. They write a few sentences, then get a coffee, sleep, write a few more. That creates more variety than an LLM can.
Here's an alternative way of thinking about this...
Someone probably expended a lot of time and effort planning, thinking about, and writing an interesting article, and then you stroll by and casually accuse them of being a bone idle cheat, with no supporting evidence other than your "sensitive detector" and a bunch of hand-wavy nonsense that adds up to naught.
To start, this is more or less an advertising piece for their product. It's pretty clear that they want to sell you Allium. And that's fine! They are allowed! But even if it was written by a human, they were compensated for it. They didn't expend lots of effort and thinking; it's their job.
More importantly, it's an article about using Claude from a company about using Claude. I think on the balance it's very likely that they would use Claude to write their technical blog posts.
While I agree with the sentiment, using AI to write the final draft of the article isn’t cheating. People may not like it, but it’s more a stylistic preference.
Yeah I agree. Don't tell me you authored something when Claude did the majority of the writing. Use Claude if you want, but don't pretend you wrote the content when you didn't.
I also hate this style of plastic, pre-digested prose. It's soulless and uninteresting. Maybe I've just read too much AI slop. I associate this writing style with low-quality, uninteresting junk.
Yet another way the mere possibility of AI/LLM being involved diminishes the value of ALL text.
If there is constant vigilance on the part of the reader as to how it was created, meaning and value become secondary, a sure path to the death of reading as a joy.
I am reminded of the Simpsons episode in which Principal Skinner tries to pass off hamburgers from a nearby fast food restaurant as an old family recipe, 'steamed hams,' and his guest's probing into the kitchen mishaps is met with increasingly incredible explanations.
This is a myth. At least one study (Juzek/Ward) has shown that these stylistic patterns do not appear nearly as often in well-known datasets of English -- including datasets restricted to specific dialects of English. They don't even appear as often in text generated by raw language models. They only start showing up after the model has undergone RLHF. Think of the Fermi paradox: if there are all these people who write like AI, then where are they?
AI writing also tends to show these indicators over and over, consistently, over passages of text. It is very hard for humans, even if they are really familiar with AI writing, to be that consistent, and almost impossible for them to be that consistent for more than a sentence or two. Writing a long blog post by hand that is believably "AI-written" takes the amount of purposeful skill you'd need to forge an entire painting or ancient document.
The problem is that people either look for the wrong things, look for obsolete things ("delve" is dead and modern LLMs have killed it), or extrapolate things from indicators that are extremely narrow and specific.
In theory, it wouldn't be too hard to settle the question of whether he used ChatGPT to write it: get Olang to write a few paragraphs by hand, then have people judge (blindly) whether it's in the same style as the article, i.e., which one sounds more like ChatGPT.
The times I've written articles, they've gone through multiple rounds of review (by humans) with countless edits each time before being published, and I wonder if I'd pass that test in those cases. My initial drafts, with my scattered thoughts, are usually very different from the published end result, even without multiple reviewers and editors involved.
There is research showing the contrary that is far more convincing:
> Our experiments show that annotators who frequently use LLMs for writing tasks excel at detecting AI-generated text, even without any specialized training or feedback. In fact, the majority vote among five such “expert” annotators misclassifies only 1 of 300 articles, significantly outperforming most commercial and open-source detectors we evaluated even in the presence of evasion tactics like paraphrasing and humanization.
One thing you can try⸺admittedly it's not quite correct⸺is replacing them with a two-em dash. I've never seen an AI use one, and it looks pretty funky.
I have nothing against em dashes. As long as your writing is human, experienced readers will be able to tell it's human. Only less experienced ones will apply all-or-nothing rules. Em dashes just increase the likelihood that the text was LLM-generated. They aren't proof.
That nuance is lost on the majority of anti-AI folks who’ve learned they get positive social reactions by declaring essentially everything to be AI written and condemnable.
“An em dash… they’re a witch!”… “it’s not just X, it’s Y… they’re a witch!”
> anti-AI folks who’ve learned they get positive social reactions by declaring essentially everything to be AI written and condemnable.
That's a strawman alright; all the comments complaining about how they can't use their writing style without being ganged up on have positive karma from my angle, so I'm not sure the "positive social reactions" are really aligned with your imagination. Or does it only count when it aligns with your persecution complex?
> (The irony that I started with "it's not just" isn't lost on me)
But an LLM wouldn't write "It's not just X, it's the Y and Z". No disrespect to your writing intended, but adding that extra clause adds just the slightest bit of natural slack to the flow of the sentence, whereas everything LLMs generate comes out like marketing copy that's trying to be as punchy and cloying as possible at all times.
I'm starting to develop a physiological response when I recognize AI prose: an overwhelming frustration, as if I'm hearing nails on a chalkboard silently inside my head.
I feel ya... and I have to admit, in the past I tried it for one article on my own blog, thinking it might help me express myself. Though when I read that post now, I don't even like it myself; it's just not my tone.
So I decided not to use any LLM for blogging again, and even though it takes a lot more time without one (I'm not a very motivated writer), I prefer to release something that I did rather than some LLM stuff that I wouldn't read myself.
The AI writing detectors are very unreliable. This is important to mention because they can trigger in the opposite direction (reporting human written text as AI generated) which can result in false accusations.
It’s becoming a problem in schools as teachers start accusing students of cheating based on these detectors, or ignoring obvious signs of AI use because the detectors don’t trigger on it.
Not sure how I feel yet about the whole "LLMs learned from human texts, so now the people who helped write those human texts are suddenly accused of plagiarizing LLMs" thing, but so far it seems backwards, and like a low-quality criticism.
> The specification forces this question on every path through the IMU mode-switching code. A reviewer examining BADEND would see correct, complete cleanup for every resource BADEND was designed to handle.
> The specification approaches from the other direction: starting from LGYRO and asking whether any paths fail to clear it.
> *Tests verify the code as written; a behavioural specification asks what the code is for.*
However, this is a blog post about using Claude for XYZ, from an AI company whose tagline is
"AI-assisted engineering that unlocks your organization's potential"
Do you really think they spent the time required to actually write a good article by hand? My guess is that they are unlocking their own organization's potential by having Claude write the posts.
> Do you really think they spent the time required to actually write a good article by hand?
Given that I've been familiar with Juxt for a long time, used plenty of their Clojure libraries in the past, and hung out with people from Juxt even before LLMs were a thing, yes, I do think they could have spent the time required to both research and write articles like these. Again, I won't claim to know for sure how they wrote this specific article, but I'm familiar enough with Juxt to feel relatively confident they could write it.
Juxt is more of a consultancy shop than an "AI company"; not sure where you got that from. I guess their landing page isn't 100% clear about what they actually do, but they're at least prominent in the Clojure ecosystem and have been for a decade if not more.
Is it possible for a tool to know if something is AI written with high confidence at all? LLMs can be tuned/instructed to write in an infinite number of styles.
The WikiEDU project has some thoughts on this. They found Pangram good enough to detect LLM usage while teaching editors to make their first Wikipedia edits, at least enough to intervene and nudge the student. They didn’t use it punitively or expect authoritative results, however. https://wikiedu.org/blog/2026/01/29/generative-ai-and-wikipe...
They found that Pangram suffers from false positives in non-prose contexts like bibliographies, outlines, formatting, etc. The article does not touch on Pangram’s false negatives.
I personally think it’s an intractable problem, but I do feel Pangram gives some useful signal, albeit not reliably.
It's not a shallow dismissal; it's a dismissal for good reason. It's tangential to the topic, but not to HN overall. It's only curmudgeonly if you assume AI-written posts are the inevitable and good future (aka begging the question). I really don't know how it's "sneering", so I won't address that.
The fact that the whole thread has basically devolved into debates over whether the article is or isn't LLM-written proves well enough that it doesn't really matter one way or another.
It is a witch hunt with no evidence whatsoever, all based on intuition. It is a distraction from the main topic, a topic that enough people find interesting for it to stay on the front page. What was intellectually interesting has now become a bore fest of repeated back-and-forth. That’s disrespectful and inconsiderate. Write a new post about why you think AI writing is dangerous. I don’t mind that. I’d upvote it.
The site guidelines were written pre-AI and stop making sense when you add AI-generated content into the equation.
Consider that by submitting AI generated content for humans to read, the statement you're making is "I did not consider this worth my time to write, but I believe it's worth your time to read, because your time is worth less than mine". It's an inherently arrogant and unbalanced exchange.
> The site guidelines were written pre-AI and stop making sense when you add AI-generated content into the equation.
Note: the guidelines are a living document that contains references to current AI tools.
> Consider that by submitting AI generated content for humans to read, the statement you're making is "I did not consider this worth my time to write, but I believe it's worth your time to read, because your time is worth less than mine". It's an inherently arrogant and unbalanced exchange.
This is something worth saying about pure slop content. But the "charge" against the current item is that a reader got the feeling that an LLM was involved in the production of interesting content.
With enough eyeballs, all prose contains LLM tells.
We don't need to be told every time someone's personal AI detection algorithm flags. It's a cookie-banner comment: no new information for the reader, but a frustratingly predictable obstacle to scroll through.
I disagree. I like to read articles and explore Show HN posts, but in the past 6 months I’ve wasted a lot of time following HN links that looked interesting but turned out to be AI slop. Several Show HN posts lately have taken me to repos that were AI-generated plagiarisms of other projects, presented on HN as their own original ideas.
Seeing comments warning about the AI content of a link is helpful to let others know what they’re getting into when they click the link.
For this article the accusations are not about slop (which will waste your time) but about tell-tale signs of AI tone. The content is interesting, but you know someone has been doing heavy AI polishing, which gives articles a laborious tone and has a tendency to produce a lot of words around a smaller amount of content (in other words, you’re reading an AI expansion of someone’s smaller prompt, which contained the original info you’re interested in).
Being able to share this information is important when discussing links. I find it much more helpful than the comments that appear criticizing color schemes, font choices, or that the page doesn’t work with JavaScript disabled.
> you’re reading an AI expansion of someone’s smaller prompt, which contained the original info you’re interested in
This got me thinking: what if LLMs are used to do the opposite? To condense a long prompt into a short article? That takes more work but might make the outcome more enjoyable, as it packs more information into fewer words.
> This got me thinking: what if LLMs are used to do the opposite? To condense a long prompt into a short article? That takes more work but might make the outcome more enjoyable, as it packs more information into fewer words.
You're fighting an uphill battle against the model's inherent tendency to produce more and longer text. There's also the regression-to-the-mean problem: you get less (and more generic) information even though the text is shorter.
Yes. These HN guidelines already basically cover it:
> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
It's not a person's work. It reads like an LLM's work. If you can't be bothered to write an article yourself, it's incredibly arrogant to ask me to read it.
Speaking of the HN guidelines, they also say this:
> Don't post generated comments or AI-edited comments. HN is for conversation between humans.
> Yes. These HN guidelines already basically cover it:
>> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
>> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
HN has gotten to the point where it’s not even worth clicking the link, because of course it’s AI slop.
There is some real content in the haystack, but we almost need some kind of curator to find and display it rather than a vote system where most people vote on the title alone.
If you’re looking for a place that surfaces only human-written content regardless of whether it’s interesting, rather than interesting content regardless of how it was written, HN is not the place.
There might be a market for your alternative though. Should be easy enough to build with Claude Code.
If the content was interesting, the author would've written about it himself.
By asking AI to write the article for you, you're asserting that the subject matter is not interesting enough to be worth your time to write, so why would it be worth my time to read?
I know the author personally. He's hardly the type of person to publish AI slop. Read his other articles and watch his talks; this is very much Henry's literary style.