
Imagine how warped your personality might become if you used this as a complete substitute for human interaction. If people start using this as bf/gf material, we might just be further contributing to the declining fertility rate.

However, we might offset this by reducing the suicide rate somewhat too.



In general, it's getting harder and harder for men and women to find people they want to be with.

https://www.pewresearch.org/social-trends/2021/10/05/rising-...

> roughly four-in-ten adults ages 25 to 54 (38%) were unpartnered – that is, neither married nor living with a partner. This share is up sharply from 29% in 1990.

https://thehill.com/blogs/blog-briefing-room/3868557-most-yo...

> More than 60 percent of young men are single, nearly twice the rate of unattached young women

> Men in their 20s are more likely than women in their 20s to be romantically uninvolved, sexually dormant, friendless and lonely.

> Young men commit suicide at four times the rate of young women.

Yes, chatbots aren't going to help, but the real issue is something else.


> More than 60 percent of young men are single, nearly twice the rate of unattached young women

Or is it rather a data problem? Who are those young women in relationships with? Sure, relationships with an age gap are a thing, and so are polyamorous relationships and homosexual relationships, but is there any indication that these are on the rise?


I tend to believe that a big part of the real issue is that we don't communicate how we feel, which is why I'm worried about how chatbots may influence our ability (and willingness) to communicate such things. But they may also help us open up more to them, and therefore to other humans; I'm not sure.


With the loneliness epidemic, I fear that it's exactly what it will be used for.


I just find this idea ridiculous.

While I don't agree with you at all, I very much appreciate reading something I disagree with this strongly. This, to me, encapsulates the beauty of human interaction.

It is exactly what will be missing from language model interaction. I don't want something that agrees with me and I don't want something that is pretending to randomly disagree with me either.

The fun of this interaction is that maybe one of us flips the other to their point of view.

I can completely picture how to take the HN API and the ChatGPT API to make my own personal HN to post on and be king of the castle. Everyone would just upvote my responses to prove what a genius I am. That obviously would be no fun. There is no fun configuration of that app either, though, even with random disagreements and algorithmically varied points of view.
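Roughly like this, using the public HN Firebase API and OpenAI's chat completions endpoint. This is just a sketch; the model name, key, and system prompt are placeholders, not a real app:

    import requests

    HN = "https://hacker-news.firebaseio.com/v0"
    OPENAI_KEY = "sk-..."  # your key here

    def top_story():
        # Grab the current #1 story from the public HN Firebase API.
        ids = requests.get(f"{HN}/topstories.json").json()
        return requests.get(f"{HN}/item/{ids[0]}.json").json()

    def sycophant_reply(story, my_comment):
        # Ask the model to play an adoring fellow commenter.
        resp = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {OPENAI_KEY}"},
            json={
                "model": "gpt-3.5-turbo",  # placeholder model name
                "messages": [
                    {"role": "system",
                     "content": "You are an HN commenter who agrees "
                                "enthusiastically with everything the user says."},
                    {"role": "user",
                     "content": f"Story: {story['title']}\n"
                                f"My comment: {my_comment}"},
                ],
            },
        )
        return resp.json()["choices"][0]["message"]["content"]

    print(sycophant_reply(top_story(), "Obviously I'm right about this."))

Wiring that up is trivial, which is kind of the point: the hard part isn't the code, it's that nothing about it produces a conversation worth having.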

I think you can pretty much apply that to all domains of human interaction that are not based on pure information transfer.

There is a reason we are a year in and the best we can do is news stories about someone making X amount of money with their AI girlfriend, and follow-up news about how it's the doom of society. It has nothing to do with reality.


>Imagine how warped your personality might become if you use this as an entire substitute for human interaction.

I was thinking this could be a good conversation or even dating simulator where more introverted people could practice and receive tips on having better social interactions, picking up on vocal cues, etc. It could have a business / interview mode, a social / bar mode, a public speaking mode, a negotiation tactics mode, or even a talking-to-your-kids-about-whatever mode. It would be pretty cool.


Since GPT is a universal interface I think this has promise, but the problem it's actually solving is that people don't know where to go for the existing good solutions to this.

(I've heard https://ultraspeaking.com/ is good. I haven't started it myself.)


Yeah, that's where I'm not sure in which direction it'll go. I played with GPT-3 to try to get it to reject me so I could practice dealing with rejection and it took a lot of hacking to make it say mean things to me. However, when I was able to get it to work, it really helped me practice receiving different types of rejections and other emotional attacks.
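The kind of thing that eventually worked for me looked roughly like this. This is a sketch only; the prompt wording and model name are placeholders, not exactly what I used:

    import requests

    # Coax the model into playing a rejecting persona for practice.
    resp = requests.post(
        "https://api.openai.com/v1/completions",
        headers={"Authorization": "Bearer sk-..."},  # your key here
        json={
            "model": "text-davinci-003",  # placeholder model name
            "prompt": ("We are doing a rejection-practice roleplay. "
                       "Stay in character as someone who turns the "
                       "user down firmly, without softening it.\n\n"
                       "User: Would you like to get coffee sometime?\n"
                       "Character:"),
            "max_tokens": 80,
        },
    )
    print(resp.json()["choices"][0]["text"].strip())

Framing it explicitly as roleplay was what got past the model's reluctance to be mean.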

So I see huge potential in using it for training and also huge uncertainty in how it will suggest we communicate.


I've worked in emotional communication and conflict resolution for over 10 years and I'm honestly just feeling a huge swirl of uncertainty on how this—LLMs in general, but especially the genAI voices, videos, and even robots—will impact how we communicate with each other and how we bond with each other. Does bonding with an AI help us bond more with other humans? Will it help us introspect more and dig deeper into our common humanity? Will we learn how to resolve conflict better? Will we learn more passive aggression? Become more or less suicidal? More or less loving?

I just, yeah, feel a lot of fear of even thinking about it.


I think there are a few categories of people:

1) People with rich and deep social networks. People in this category probably have pretty narrow use cases for AI companions -- maybe for things like therapy where the dispassionate attention of a third party is the goal.

2) People whose social networks are not as good, but who have a good shot at forming social connections if they put in the effort. I think this is the group to worry most about. For example, a teenager who withdraws from their peers and spends that time with AI companions may form some warped expectations of how social interaction works.

3) People whose social networks are not as good, and who don't have a good shot at forming social connections. There are, for example, a lot of old people languishing in care homes and hardly talking to anybody. An infinitely patient and available conversation partner seems like it could drastically improve the quality of those lives.


I appreciate how you laid this out. I would most likely fall into category one and I don't see a huge need for the chatbots for myself, although I can imagine I might like an Alan-Watts-level companion more than many human friends.

I think I also worry the most about category two: people almost asking their human friends, "Why can't you be more like Her (or Alan Watts)?" and then retreating into the "you never tell me I'm wrong" chatbot, preferring the "peace" of the chatbot over the "drama" of interacting with humans. I see a huge "I just want peace" movement that seems to run away from the messiness of human interactions and seek solace in things that seem less messy, like drugs, video games, and other attachments/bonds, and chatbots could probably perform that replacement role quite well, and yet deepen loneliness.

As for three, I agree it may help as a short-term solution, and I wonder what the long-term effects might be. I had a great aunt in a dementia care home, and I wonder what effect it would have if someone with dementia speaks to a chatbot that hallucinates and makes up emotions.


I read a comic with a good prediction of what will happen:

1. Humans get used to the robots' nice communication, so now humans use robots to communicate with each other and translate their speech.

2. Humans stop talking without using robots, so now it's just robots talking to robots with humans standing around listening.

3. Humans stop knowing how to talk and no longer understand the robots; the robots start to just talk to each other and keep the humans around as pets they are programmed to walk around with.


Do you remember where you read that comic? Sounds like a fun read



Created my first HN account just to reply to this. I've had these same (very strong) concerns since ChatGPT launched, but haven't seen much discussion about it. Do you know of any articles/talks/etc. that get into this at all?


You might like Gary's blog on potential AI harms: https://garymarcus.substack.com/


Gary is an anti-ML crank with no more factual grounding than people who think AI is going to conquer the world and enslave you.


> AI is going to conquer the world and enslave you

That is actually a plausible outcome, if humans willingly submit to AI.


Dunno if you’d want a conversation partner with the memory of a goldfish though.


Memory is solvable tho.

Either through hacky means via RAG + prompt injections + log/db of interaction history or through context extensions.

If you have a billion tokens of effective context, you might spend years before filling it up.
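The hacky route is basically this. A toy sketch, where embed() and chat() are stand-ins for whatever real embedding and chat-completion APIs you'd actually use:

    import numpy as np

    DIM = 256
    memory = []  # the interaction log: (vector, text) pairs

    def embed(text):
        # Toy hashed bag-of-words; swap in a real embedding model.
        v = np.zeros(DIM)
        for word in text.lower().split():
            v[hash(word) % DIM] += 1.0
        return v

    def chat(prompt):
        # Placeholder for your real chat-completion call.
        return f"(model reply to: {prompt[:40]}...)"

    def remember(text):
        memory.append((embed(text), text))

    def recall(query, k=3):
        # Cosine similarity against everything logged so far.
        q = embed(query)
        def score(item):
            v, _ = item
            denom = (np.linalg.norm(q) * np.linalg.norm(v)) or 1.0
            return float(np.dot(q, v)) / denom
        return [t for _, t in sorted(memory, key=score, reverse=True)[:k]]

    def reply(user_msg):
        # Inject the retrieved history into the prompt, then log both sides.
        context = "\n".join(recall(user_msg))
        answer = chat(f"Relevant history:\n{context}\n\nUser: {user_msg}")
        remember(f"User: {user_msg}")
        remember(f"Assistant: {answer}")
        return answer

    print(reply("Do you remember my dog's name?"))

Crude, but it gets you unbounded "memory" today without waiting for bigger context windows.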


This is the case for now, but won't the context window keep getting bigger and bigger?


The movie "Her" became reality.



