Hacker News

Respectfully, I think you're missing the point that this is a societal rather than an individual concern. What will the average person's response to AI be? Probably to not recognize it, let alone spurn it. The cumulative effects of your neighbors, particularly the young ones who will grow up amidst this, or the old and gullible, being led along by computers over years is the thing you need to be more concerned about.


Sure, and there are people who stuff themselves full of fast food, alcohol, and/or cigarettes. I get that those things are different in that it is possible to levy vice taxes on them, but the primary defense is and will be education.

What we can do as technologists is establish clear norms around information junk food for our children and close acquaintances, and influence others to do the same.

It's not going to happen overnight -- as with many such things, I expect it'll take decades of mistakes followed by decades of repairing them. What we've learned from other such mistakes is that saying "feel bad about the dumb thing" ("be worried") is less effective than "here's a smart thing you can do instead".


I’m not sure education or awareness is a solution. It doesn’t hurt, of course, but I think the real issue is that we’re frequently feeling “low energy” (for lack of a better term), so entry barriers become important and least-effort options start to win (“just picking up a phone/tablet” easily wins here most of the time), even if we’re well aware that they’re not as rewarding.

I blame all the background stress and I think it’s a more important factor.


When I look at the state of how humans have manipulated each other, how the media is noxious propaganda, how businesses have perfected emotional and psychological manipulation of us to sell us crap and control our opinions, I don't think AI's influence is worse. In fact I think it's better. When I have a spicy political opinion, I can either go get validated in an echo chamber like reddit or the news media, or let ChatGPT tell me I'm a f'n idiot and spell out a much more rational take.

Until the models are diluted to serve the true purpose of the thought control already in full effect in non-AI media, they're simply better for humanity.


ChatGPT has been shown to spend much more time validating people's poor ideas than it does refuting them, even in cases where specific guardrails have supposedly been implemented, such as to avoid encouraging self-harm. See recent articles about AI usage inducing god-complexes and psychoses, for instance[1]. Validation of the user giving the prompt is what it's designed to do, after all. AI seems to be objectively worse for humanity than what we've had before it.

[1]: https://www.psychologytoday.com/us/blog/urban-survival/20250...


Strongly disagree, and you've misread what you've linked. The linked cases are situations where people are staying in one chat and posting thousands and thousands of replies into a single context, diluting the system prompt and creating a fever-dream of hallucination and psychosis. These cases also rarely involve thinking and tool-calling models; they rely on raw LLM generation instead of thinking and sourcing (cheap/free models versus the high-powered, subscriber-only thinking models).

As we all know, the longer the context, the worse the reply. I strongly recommend you delete your context frequently and never stay in one chat.

What I'm talking about is using a fresh chat for questions about the world, often political questions. Grab statistics on something and walk through the major arguments for and against an idea.

If you think ChatGPT is providing worse answers than X.com and reddit.com for political questions, quite frankly, you've never used it before.

Try it out. Go to reddit.com/r/politics and find a +5,000 comment about something, or go to x.com and find the latest elon conspiracy, and run it by ChatGPT 5-thinking-high.

I guarantee you ChatGPT will provide something far more intellectual, grounded, sourced and fair than what you're seeing elsewhere.


Why would an LLM give you a more "rational take"? It's got access to a treasure trove of kooky ideas from Reddit, YouTube comments, various manifestos, etc., etc. If you'd like to believe a terrible idea, an LLM can probably provide all of the most persuasive arguments for it.


Apologies, it sounds like you have no experience with modern models. Yes, you can push and push and push to get it to agree with all manner of things, but off-rip, on the first reply in a new context, it will provide extremely grounded and rational takes on politics. It's a night-and-day difference compared to your average reddit comment or X post.

In my years of use and thousands and thousands of chats, I have literally never seen ChatGPT provide a radical answer to a political question without me forcing it, heavy-handedly, to do so.


ChatQanon is coming



