This is a lovely bit of writing, and really points to the value of constraints.
Some of my favorite childhood memories were being at friends' houses, huddled over the computer, playing Space Quest or Zork. At one of my friends' houses, we were aware that Leisure Suit Larry was installed, and curious, but never played it because of the central location of the machine.
I think the shift we've seen in TV is something similar. When I was a kid, TV was viewed as an antisocial medium ("the boob tube"), but I have really fond memories of sitting with my family watching Quantum Leap or Growing Pains. Now that everyone has their own screen to watch TV on, it seems the studios don't even bother trying to make shows that appeal to an entire family.
We focus so much on the media (tv/internet/video games/books) when ascribing value, but, as this article indicates, the physical nature of the delivery (shared living room appliance vs portable individual screen) makes a huge difference.
Edit - This is not the first time I've observed this. Could somebody explain to me why comments pointing out that the discussed texts are AI generated are so frequently downvoted on Hacker News?
I don't downvote those comments, even though I have a serious problem with them.
These comments are little more than a witch hunt. It is clear from the language being used: "it's obvious", rather than providing evidence. When people do provide evidence, it is in terms of "tells" - in other words, bits of overused grammar that are common in LLM generated texts yet also exist in human written texts. That doesn't really prove their claims. Worse yet, it is next to impossible to defend oneself against such claims.
None of this means that I want to spend my days reading LLM generated articles. I believe they pollute the Internet with yet another source of hollow writing. (LLMs are not the only guilty party here. Plenty of flesh and blood humans do the same.) They also further necessitate the use of LLMs for what I think is the one legitimate use, which is research. (Before you attack LLMs for hallucinating, it is worth noting that many, if not most, of the articles written by people demonstrate the same.) Finally, if I am interested in the output of an LLM, I would rather do it myself. At least then I would know what I am getting is LLM generated, rather than a misrepresentation. Plus it is easier to dig deeper if something seems to be out of kilter, either through further prompting or requesting sources.
Yet all of my distaste for LLM generated articles does not outweigh my distaste for the witch hunt.
I think your comment was maybe downvoted for being so terse and dismissive.
But you're right that it is anything but a good piece of writing and it is genuinely strange to see people act otherwise.
> That kind of furniture organized more than just objects. It organized a relationship with technology. It suggested that the computer (and with it, the internet) was something used under particular conditions: seated, in that spot, for a certain amount of time. Something that was switched on and off, opened and closed.
It's making a nice point, and one that I'm sure most of the people here do find appealing; it's an idea that I relate to myself. But the words used to make that point are bordering on nonsense.
> I think your comment was maybe downvoted for being so terse and dismissive.
Yes, I know, but what motivated me to ask was that, from my observations, even less derisive comments raising the AI point are prone to being downvoted. The comment I linked to was 'just asking a question'. And I've seen others that were more pleasant, with no different results.
Usually the LLM generated texts they are reacting to aren't, IMO, worthwhile - like in this case. Idk, I'm surprised by how accepting of them others here seem to be (if measured by the points system).
Yes, I get you, I recognised that in my final paragraph. But it would be better called writing that "makes a lovely point" rather than "lovely writing" if _the prose isn't good_ and it reads like _slop_.
I downvote them because they are tangential to the content. They are like complaints about scroll bars and back button hijacking, or annoyances about the website's color scheme. Valid complaints, but contrary to the HN guideline:
> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
I don't like AI slop articles either, but I also don't like articles where the text is formatted in a tiny column in the middle of the browser. Neither are really useful to complain about here. By the end of 2026, 90% of the articles here are probably going to be AI slop, and it will be totally useless to complain about each and every one of them.
I want to respect the guidelines for the good of the community, but at this point it isn't serving the community well for there to not be backlash against the rising flood of AI-generated garbage.
It was truly bad enough when ~half the articles were about AI, directly or indirectly. Now it's that, plus half of them are written by Claude too.
What meaningful community is going to be left for these guidelines to protect?
Moderation needs to put their foot down in some cases, as a matter of necessity. Sometimes users need to put their foot down, too.
I'm all for banning AI slop articles. The HN guidelines were recently updated to address slop comments[1], but they have not put their foot down yet about slop articles.
> I downvote them because they are tangential to the content. They are like complaints about scroll bars and back button hijacking, or annoyances about the website's color scheme
I don't agree with you. They are not at all like the examples you mentioned. Calling something "AI slop" signals that the writing either fails to raise any important point or, even when it raises a decent one, is so repetitive that it wastes readers' time. This is not only a style problem.
To put it in LLMish: It's not 'tangential to the content' – it's directly addressing (the lack of) the content.
If the LLM worked perfectly, we wouldn't have noticed the text was generated. I and others did. I feel it's important to point it out if we don't want low-quality texts to completely flood us.
> By the end of 2026, 90% of the articles here are probably going to be AI slop, and it will be totally useless to complain about each and every one of them.
Under the policy you've personally adopted, it surely will be. I don't think a news aggregator comprised of junk information is something that should be embraced, so maybe reconsider your position?
I think HN is just fucked. A lot of people either genuinely don't see the problem with having a bunch of AI-generated slop garbage on the frontpage, or they are themselves posting it so they have a personal stake in not seeing anything wrong with it.
Don't be too surprised: there are literally comments that are just blatantly written by Claude on HN, which seem to be coming from human accounts that predate Claude. Which means that there are people here who, in trying to respond, actually ask Claude to basically do it for them. I find this utterly stunning and honestly, truly alarming. Even if the person behind the keyboard is technically alive, what exactly are they becoming? Are they even going to think for themselves, or will they just ask Claude what they're supposed to think from now on?
And as much as HN moderation has been genuinely pretty great at keeping the community under control with a relatively light touch, it's already too late. Dang and friends needed to do something much sooner, and they didn't. It literally doesn't matter what they do now, so there's no point in bugging them, not that I expect they would be interested in listening anyway.
I'm not going to make a lot of dramatic "I'm leaving Twitter" type comments, but I'm losing respect for HN's rules and guidelines the more I see this page overrun with literal CRAP. And just so I can make my opinion clear, it's not crap because it's AI generated, it's crap because I can tell it's AI generated: full of fluff, clichés, and a lack of substance.
It says a lot about the taste of the average person voting on HN that this is what we get now, and it fucking sucks because I don't really like any of the competing news aggregators either. I actually had to log in to post this comment because lately I've been staying logged out of HN and visiting less frequently now that I'm not sure what I get out of it.
At least I won't miss HN when the internet becomes an inaccessible hellscape in part due to AI crap outnumbering human posts 1000:1 and in part due to horrible legislation screaming ahead at breakneck speeds with literally no opposition from anybody.
Intelligence for HN posters is like boobs for strippers: everyone knows that bigger is better when it comes to the attention they seek, so if they are lacking, or feel inadequate in that department, they seek augmentation which anyone can tell is fake but seems to get the job done.
How would you solve this problem? With AI detecting AI at scale and killing posts? I do get what you are saying, but I'm wondering what you would do if you were put in charge of tackling this problem today.
The side that faces your wrist is rounded - only the face is sharp. I haven't noticed any issues with the edge wearing the thing.
I was worried about scratches because I abuse the shit out of anything I wear, and sure enough, there are scratches in the titanium bezel, but they look good in a way that scratches on my (non-pro) steel Apple Watch did not.
No because the people who make car parts aren't promising to kill my livelihood and everyone else's.
The people who make car parts aren't telling me that the cars they build are likely to murder everyone I love.
The people who make car parts aren't writing long screeds about how if our dysfunctional government doesn't step up to implement a solution to the problems created by all the car parts, we're going to see mass poverty and social chaos.
(To be fair, I don't believe all these forecasts by AI companies, but when they're making them, why on earth would I support letting them go about their business?)
Right? Have any of the execs making these decisions ever ridden in an EV? They are so much better that, from what I've seen, no one ever goes back to preferring ICE after spending time with an EV. My family currently has 2 ICE vehicles (one is a PHEV). I really doubt we'll buy another.
The week I spent renting an EV (an Ioniq 5, so not even a high-end one) convinced me. Enjoyable to drive. Having to figure out where/how to charge it was sufficient to chase away the fears around that.
> I have a secret fear about AI - that at one point when AI models get good enough, AI companies will no longer give you the source these tools generate - you'll get the artifacts (perhaps hosted on a subscription website), but you won't get the code.
This is a likelier outcome than the various utopian promises (no more cancer!) that AI boosters have been making.
> AI as it is being developed is likely to centralize it
Depends on how you see it.
I know many people building OSS, local alternatives to enterprise software for specific industries that costs thousands of dollars, all thanks to AI.
If everyone can produce software now, and at much greater complexity and scale, it's much easier to create decentralized and free alternatives to long-standing closed projects.
You do understand that the above comment is talking about how the use of and reliance on LLMs is what centralizes power, right? It's great that people can build these tools, but if the means to build them are controlled by three central companies, where does that leave us?
I agree with you. One counterargument is that producing software was never a path to adoption unless you had distribution and the big companies (OpenAI, Anthropic) have distribution on a scale that individuals will not.
> - OSS is valuable for decentralizing power and influence
That was the intention and hope, but I think the past twenty years have shown that it largely had the opposite effect.
Let's say I write some useful library and open source it.
Joe Small Business Owner uses it in his application. It makes his app more useful and he makes an extra $100,000 from his 1,000 users.
Meanwhile Alice Giant Corporate CEO uses it in her application. It makes her app more useful by exactly the same amount, but because she has a million users, now she's a billion dollars richer.
If you assume that open source provides additive value, then giving it to everyone freely will generally have an equalizing effect. Those with the least existing wealth will find that additive value more impactful than someone who is already rich. Giving a poor person $10,000 can change their life. Give it to Jeff Bezos and it won't even change his dinner plans.
But if you consider that open source provides multiplicative value, then giving it to everyone is effectively a force multiplier for their existing power.
In practice, it's probably somewhere between the two. But when you consider how highly iterative systems are, even a slight multiplicative effect means that over time it's mostly enriching the rich.
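Here's a rough back-of-the-envelope sketch of that distinction. The starting amounts, the per-round gain, and the round count are made up purely for illustration:

    # Toy numbers, purely illustrative: compare an additive vs. a
    # multiplicative benefit from a freely shared resource, compounded
    # over repeated rounds.
    small_add, big_add = 1.0, 1000.0   # hypothetical starting "wealth"
    small_mul, big_mul = 1.0, 1000.0
    gain = 0.05                        # value the shared resource adds per round

    for _ in range(50):
        small_add += gain              # additive: same absolute boost for everyone
        big_add += gain
        small_mul *= 1 + gain          # multiplicative: boost scales with what you already have
        big_mul *= 1 + gain

    print(big_add / small_add)   # ~286: the 1000x starting gap shrinks -> equalizing
    print(big_mul / small_mul)   # still exactly 1000x, and the absolute gap has grown ~11x

In the additive case the relative gap keeps closing; in the multiplicative case it never does, and the absolute gap compounds every round.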
Seven of the ten richest people in the world got there from tech [1]. If the goal of open source was to lead to less inequality, it's clearly not working, or at least not working well enough to counter other forces trending towards inequality.
> AI as it is being developed is likely to centralize it
Access to AI is centralized, but the ability to generate code and customized tools on demand for whatever personal project you have certainly democratizes software.
And even though open source models are a year behind, they address your remaining criticism about the AI being centralized.
That's a fair question, so I'll try as best I can. And maybe this will serve as a meta-example for me because it is hard to explain.
In a real discussion, the messiness is an important signal. The mistakes that you made and _didn't_ catch, the clunky word choices, etc., show what you are actually thinking and how clearly you are thinking about it. If you have edited something for clarity, that's an important signal too. LLM editing destroys that signal.
And it gets worse because LLMs destroy that signal in one direction - towards homogeneity. They create the illusion of "what you were actually thinking, but better than you could express it" but what they are delivering is "generic, professional-sounding ideas phrased in a way to convince you they are your own".
I get what you are saying, but I disagree on the last part, "[...] way to convince you they are your own". If it managed to convince the author that it is their own, chances are it is their own. Especially so if the author does review and edit the output prior to posting it.
The messiness may show glimpses of the process, but, in isolation, will likely distort and corrupt the desired message via partial framing.
> And it gets worse because LLMs destroy that signal in one direction - towards homogeneity.
Oh, right, yes, if you're not careful they can definitely do that.
But look at what julius_eth_dev is actually saying they're doing:
> "rubber-ducking architecture decisions, pressure-testing arguments before I post them."
That's more like using the LLM as a sparring partner; they're not having the LLM write their comments for them.
I thought you were going to go somewhere really interesting actually, like maybe 'the LLM convinces you that their arguments are better than yours, and now you're acting like a meat puppet.' Or something equally slightly alarming and cool like that! ;-)
I've wondered the same. Back when Anthropic seemed like a niche alternative to OpenAI, I signed up for an account. Now that my company is using it heavily, I tried to change the account owner to one of the executives, and apparently that's not possible! It's also not possible to create separate work/personal accounts unless you have two different phone numbers.
There's a confusing disconnect between "we have this magic box that can write all the software we'd ever want" and their lack of basic account management functionality.
(Not really. That disconnect is because of something mature software engineers have known for decades - the bottleneck has never been the code)
I've also had success with Mountain Hardwear and Outdoor Research (jackets and pants).
(I do search and rescue, so a lot of focus on outdoor stuff. It is also really hard on gear so anything cheaply made gets destroyed pretty quickly.)