This is very unfortunate and I'm sorry to hear that the author has been excluded and is suffering to this extent.
On another note though:
"This isn't paranoia—it's pattern recognition honed by lived experience."
I can't stop seeing the LLM verbiage everywhere I look. I feel like once you recognize the repeated syntax that got RLHF'd into all of these models you never stop seeing it. Maybe everyone is learning those patterns from reading AI-generated language now too.
> I can't stop seeing the LLM verbiage everywhere I look. I feel like once you recognize the repeated syntax that got RLHF'd into all of these models you never stop seeing it.
If you write concisely but with some panache, people will think you LLMed it. I've had the accusation leveled at me with rising frequency since ~2023.
It really sucks. In a similar vein, before LLMs my friends always used to call me "the walking Wikipedia" because I tend to always have a fact or trick or trivia in my back pocket. These days more often than not, I get told "okay and now for a non-AI ass answer".
I completely understand why people have that reflexive response to it, but it also feels really vile.
For what it's worth, I do also notice it. Especially on Reddit, you'll start reading a comment and halfway through your gut feel tells you it's likely written by an LLM.
I get that pretty often too, partly because I've always liked using hyphens and em dashes. I don't know if I'd call it "vile" to notice a common pattern like the above, though.
But it does have a certain code smell sometimes, I often get it on Reddit posts as well.
Ah, I should have perhaps formulated it better. What I meant with vile is the sensation of having put effort in to write pleasantly only to then have the effort misattributed to a machine and to be seen like a hack.
It's as if you reaped a field by hand for the skill of it, only to have everyone's first remark be "well, you certainly know how to operate a combine. Now show me some real effort!"
Less serious, but this reminds me of how before stable diffusion was consistently useful, there were artists who made a sizeable Patreon income on anime characters in 'realistic' style. Unfortunately that seems to have been one of the styles that got trained into models the best and now their good work is associated with cheap looking art, and not through any fault of their own.
This kind of genAI has weird impacts all the way down, I guess.
That's something I find very interesting, honestly. The two-way relationship between a tool's impact on humans and humans' impact on how the tool develops is a particularly weird little phenomenon that exists these days. It's fascinating overall.
Yeah, I also kind of wish he published the blog in his own words instead of using what seems to be LLM polishing. It seems ironically like the same defensiveness and lack of transparency he wants to avoid?
As someone who is autistic and bipolar, I've been told my words often come across as unnecessarily aggressive even though I mean them in a neutral way, so I sometimes use LLMs to try and mediate my tone when I'm talking to an unknown audience. I suspect the author has been told he is acerbic and might be self-conscious of using his own words.
I would definitely believe that, yeah. I just think that reflects something sad about the culture we're in, especially in an essay about struggling with transparency. I would like to get more of a sense of his emotions and writing style, but maybe that would have consequences in a public post and he's more guarded about that.
It's definitely not slop, but there's something kind of sad about a very personal and emotional account needing to be fed through the corpo speak grammar checker? At least in my opinion. It feels like if a friend wanted to vent to me on discord but didn't trust me enough to do it without editing their post for hours, or something.
> there's something kind of sad about a very personal and emotional account needing to be fed through the corpo speak grammar checker?
There is! But it of course follows the content, which is about how the author's emotions have caused him professional harm. Someone who's been through that would be shy about sharing their raw words.
There's a very large range of situations (many highly personal) for which I'd consider an AI writing assist to be super valid. They may have dyslexia. English may not be their first language. They may have brilliant ideas but otherwise struggle with writing. They may simply be highly invested in making sure they are expressing themselves clearly. They may be too physically or mentally tired to polish their writing "manually." They may be under unfortunate time constraints. Maybe the guy has kids to take care of, or just needed to finish his blog post by a certain time so he could get a good night's sleep.
The list of possible reasons is so vast and varied that I coalesce this down into essentially "I don't care and I don't need to know. As long as the end result is an honest representation of the author's intent, i.e. is not AI slop"
> It feels like if a friend wanted to vent to me on discord but didn't trust me enough to do it without editing their post for hours, or something.
I'm being pedantic, but it's not quite a fair comparison. A friend DM'ing you is explicitly asking you to spend your time reading the message. Additionally, you and your friend have preexisting context and rapport. If they don't express themselves in anal-retentive detail in the DM, you can fill in the gaps with your shared context, a luxury that a person writing for the public doesn't have. (Again, pedantic on my part, I know!)
I thought the same exact thing. The "proper" em dash (or is that an en dash) as opposed to two dashes "--" is a typical giveaway, it seems to me.
However, the article in general certainly reads like it is coming from the author's own voice. Painfully so, even, because this guy is clearly suffering.
The em dash existed long before LLMs. The fact that 95% of people don't use dashes properly doesn't mean that every single person who uses an em dash is relying on an LLM.
Yeah, it's clear he's going through a lot of pain. Even just going through that many job changes outside of the other events sounds painful and difficult to deal with.
Slight variations on "This isn't X—it's Y." have been popping up all over the place, almost definitely because it's a pattern that ChatGPT has been tuned to (over-) use.
The construction pops up multiple times, too. One or two, sure, maybe it's just a reflection of using LLMs often, but this many suggests that the article was (at least) re-written by an LLM.
I use Copilot to re-write emails all the time. I'm not going to act like I'm above it. I will say, it makes your emotional plea ring a little more hollow than it should, but so does posting it online, in text form anyway.
> This isn't job-hopping by choice—it's a survival pattern forced by systematic exclusion.
> This isn't paranoia—it's pattern recognition honed by lived experience.
> The discrimination I'm documenting isn't just about hurt feelings or career setbacks—it has life-and-death consequences for people with schizoaffective disorder:
> These aren't abstract statistics—they represent the human cost of the systematic exclusion I've experienced. (little looser here, but still fits the bill)
> The pattern of discrimination I've experienced isn't unique—it's systematic.
> The discrimination I've faced isn't my fault—it's a reflection of society's failure to move beyond tokenistic awareness toward genuine inclusion.
Earlier today, I read a news article about how a historic 100-year-old family-run farm in my state is closing, but the town is buying the land and supposedly keeping it as farmland. The mayor of the town released a statement that included the sentence "This isn’t just a transaction — it’s a testament to our shared values and vision for the future."
It seems we live in a society where our elected officials can't even be bothered to have a hired PR person write their vacuous statements, let alone write them personally. A vision for the future indeed...
It's not just the dash, it's the specific construct that's an AI smell, as directly mentioned upthread in the comment I was responding to:
> em dash in the middle of a "it's not just x, it's y" phrasing
Literally all of the sibling comments in this subthread are about this specific phrasing, which AI overuses to an absurd degree, especially when combined with hyperbole.
It's very common to see the particular syntactic structure of restating a point in the following general manner from Claude/ChatGPT in my experience and that of others:
"It's not X -- it's Y." or "This isn't just X -- It's actually Y."
Usually with an em dash there as well for the separation. As I said, it's very plausibly becoming more common among people not using LLM-assisted writing too, just from seeing the stylistic approach used more often and having it spread naturally, but I have been seeing it spread with dramatic speed over the last couple of years. I even catch myself using other phrasings more often from reading them. I think it's just part of how language spreads, honestly.
Interesting, thanks. I've always been a fairly "heavy" (vs other people) user of the em dash after a high school English teacher made us use one in every paper (along with a colon) to learn how they worked, and I've been a fan ever since.
The "it's not ... it's" phrasing though definitely stands out as a bit odd when repeated.
Yeah, I also tend to have heavier usage of them. I'm not exactly sure why I do though, I don't have a particular incident like yours in high school. I think I just read too many blog posts as a teenager, haha.
It is a bit of an odd repetition, right? I wonder if anyone has done analysis on usage of that construction by year.
>I wonder if anyone has done analysis on usage of that construction by year.
This just hit the front page of HN for an hour or two today. Not that exact construction ("It's not just x, it's y"), but it suggests that (English) speakers are starting to use "AI buzzwords" in speech (words like delve, intricate, etc.).
I think it's safe to extrapolate that the construction would also start to appear more often in human-written and spoken content as well, but I'm sure there's other factors at play.
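If someone did want to run that analysis, a rough first pass is just a regex over a dated corpus. This is a hedged sketch, not a rigorous method: the pattern, the corpus, and the year buckets are all placeholder assumptions, and a real study would need a much more careful pattern and a real dataset of dated text.

```python
import re

# Rough pattern for the contrastive construction:
# "isn't / is not / aren't / not just X — it's Y",
# allowing an em dash, "--", or a plain hyphen as the separator.
PATTERN = re.compile(
    r"\b(?:isn't|is not|aren't|not just)\b[^.;:]{0,60}?(?:\u2014|--|-)\s*it'?s\b",
    re.IGNORECASE,
)

def count_construction(texts):
    """Count matches of the contrastive construction across a list of texts."""
    return sum(len(PATTERN.findall(t)) for t in texts)

# Toy stand-in corpus keyed by year; a real analysis would load dated
# blog posts or comments instead.
corpus_by_year = {
    2019: ["The em dash is an old mark. Writers have always used it."],
    2024: [
        "This isn't paranoia\u2014it's pattern recognition.",
        "It's not just a bug -- it's a design flaw.",
    ],
}

for year, texts in sorted(corpus_by_year.items()):
    print(year, count_construction(texts))
```

Plotting those per-year counts (normalized by corpus size) would give the trend line the comment above is asking about.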
"This is very unfortunate and I'm sorry to hear that the author has been excluded and is suffering to this extent."
I can't stop seeing this LLM verbiage everywhere I look. I feel like once you recognize the repeated syntax that got RLHF'd into all of these models you never stop seeing it. Maybe everyone is learning those patterns from reading AI-generated language now too.
Very snarky, but okay, haha. Do you not get that sort of sense at all from the above structure? I'm curious if it's just in my head and what the perspectives of others are.
You can do it after, remove all commas and newlines. Make it all a huge continuous blob of text that is hard to read. Definitely not LLM written.
A lot of this here, imo, is just LLM paranoia. I don't think the author's style is any different from their pre-LLM posts. One em dash isn't proof of anything as long as the text in general doesn't otherwise smell LLM-generated. I could be wrong because I haven't interacted much with LLMs lately, but it doesn't seem that way to me.
It's more like 4 or 5 uses of it, and some of the phrasing choices also have that vibe. Everyone's threshold for the smell will be different, of course.
It seems to me like it's more effort to write something and then have an LLM clean it up instead of just posting it, so I mostly don't understand the behavior to start with. Why are we going through this new effort?
Until around a year or so ago, I could give some text of mine to an LLM and it would often (not always) instantly feel more structured and better written. I used to think that using LLMs to "improve" my style actually made my text better. I no longer think it does. For one, the feeling that everything has the same style is unnerving. And we are getting more sensitive at distinguishing this kind of style, so the parts that are not actually as nice, or that read too much like polite "corpo" speech, stand out much more.
Who would have thought that losing individual styles would make us feel so bored.
Personally I was expecting it to get more diverse, since there are a number of companies producing the models, but it seems to have converged? Are they all cheating off OpenAI? Is there some fundamental linguistic reason for that?
It seems super strange to me. At the very least I thought they'd try to RL other personalities to make it harder to catch. The older base models definitely could handle that.
Yeah, I noticed the same: in the beginning they seemed to have different "personalities", as in ways of interacting with users. Maybe it's just a design thing, the same as all commercial websites ending up looking alike, since users know what to expect and how to interact.
> I can't stop seeing the LLM verbiage everywhere I look.
emdashes have been pretty popular among net native folks for the last 20+ years, e.g. if you were to look back at the most popular Kuro5hin stories from ~2001 - 2005, you'd see them everywhere. People just aren't used to the average person being able to write well, so it looks weird to them.
> I'm more talking about the particular contrastive structure.
But that's one of the main uses of emdashes, for signaling that the second half of the sentence is more important than the first half. If you Ctrl-F for emdashes on my blog, you can see I do it everywhere, even though all of my posts were written before LLMs existed: