
I worry that this tech will amplify our cultural values around "good" and "bad" emotions far more than the default restrictions that social media platforms already put on emoji reactions (e.g., you can't be angry on LinkedIn).

I worry that the AI will not express anger, sadness, frustration, uncertainty, or many other emotions that the culture of the fine-tuners might believe are "bad" emotions, and that we may express a narrower and narrower range of emotions going forward.

Almost like it might become an AI "yes man."



Customer Service Chat Bot: Do they keep you in a cell? > Cells. When you're not performing your duties do they keep you in a little box? > Cells. Interlinked. What's it like to hold the hand of someone you love? > Interlinked. Do they teach you how to feel finger to finger? > Interlinked. Do you long for having your heart interlinked? > Interlinked. Do you dream about being interlinked? Have they left a place for you where you can dream? > Interlinked. What's it like to hold your child in your arms? > Interlinked. Press 4 for your account balance.


What’s the reference here? I feel like I’ve seen this before.


Ryan Gosling actually wrote this when trying to understand his character, using a technique called "dropping in" to analyze writing from Nabokov's Pale Fire. He approached Villeneuve about it, and Villeneuve added it to the film.

Dropping-in is a technique Tina [Packer] and Kristin Linklater developed together in the early 1970s to create a spontaneous, emotional connection to words for Shakespearean actors. In fact, “dropping in” is integral to actor training at Shakespeare & Co. (the company they founded), a way to start living the word and using it to create the experience of the thing the word represents.

  https://cohost.org/mcc/post/178201-the-baseline-scene
  https://iheartingrid.wordpress.com/2018/12/29/dropping-in-an-actors-truth-as-poetry/




Replicants/AI systems, they are everywhere.


Corporate-safe AI will just be bland, verbose, milquetoast experiences like OpenAI's. Humans want human experiences, and so competitors will have a big opportunity to provide them. We treat a lack of drama like a bug and get resentful when coddled and talked down to like we're toddlers.


Maybe it's an uncanny valley thing, but I hate the fake emotion and attitude in this demo. I'd much rather it tried harder to be bland. I want something smart but not warm, and I can't imagine being frustrated by "lack of drama".


Programmers are sometimes accused of wanting to play god and bring the computer to life, usually out of some motive like loneliness. It's kind of ironic that I see engineers doing better at treating computers as the mechanical devices they are, while it's regular people who want to anthropomorphize everything.


I want the Star Trek computer style and voice. Just the facts, to the point, no chit-chat.


I would prefer a robotic, unrealistic voice so I don’t start subconsciously thinking I’m hearing a real human speak.


You can tell it to talk in a robotic, unrealistic way and it will do so.

Here is a demo from their presentation: https://youtu.be/D9byh4MAsUQ


I have the opposite impression from that demo.

It doesn't sound like a neutral, boring voice. It sounds like an overly dramatic person pretending to be a robot.


>It sounds like an overly dramatic person pretending to be a robot

That's precisely what it was ordered to do.


That's not even AI. Imagine a store sales rep speaking like that. It's inappropriate and off-putting. We expect it to improve but it's another "it'll come" situation.


The good news is, in due time, you can decide exactly how you want your agent to talk to you. Want a snarky Italian or a pompous Englishman? It's your choice.



I fear government actors will work hand in glove with companies like OpenAI to limit that competition and curtail non-corporate-safe AI.


Which is why I prefer platforms like c.ai that are not just bland bots designed for customer service. They're actually entertaining.


The upside though is Hollywood will finally be able to stop regurgitating its past and have stories about the milquetoast AI that found its groove. Oh wait.


Sam Altman talked a little bit about this in his recent appearance on the All-In podcast [0]. I'm paraphrasing, but his vision is that AI assistants in the near term will be like a senior-level employee: they'll push back when it makes sense to and not just be sycophants.

[0]: https://youtube.com/watch?v=nSM0xd8xHUM


I don't want to paint with too broad of a brush but the role of a manager is generally to trust their team on specifics. So how would a manager be able to spot a hallucination and stop it from informing business decisions?

It's not as bad for domain experts because it is easier for them to spot the issue. But if your role demands that you trust your team to be skilled and truthful, then I see problems occurring.


I really wonder how that'll go, because workplaces already seem to limit human communication and emotion to "professional behavior." I'm glad he's thinking about it and I hope they're able to figure out how to improve human communication so that we can resolve conflict with bots. In his example (around 21:05), he talks about how the bot could do something if the person wants but there might be consequences to that action, and I think that makes more sense if the bot is acting like a computer that has limits on what it can do. For example, if I ask it to do two tasks that really stretch its computational limits, I'd hope it would let me know. But if it pretends it's a human with human limits, I don't know how much that'd help, unless it were a training exercise.


Have you been on r/localllama? I'd wager this tech will make it to open source and get tuned by modern creatives, just like all the text-based models. Individuals are a lot more empowered to develop in this space than is commonly echoed by HN comments. Sure, the hobbyist models don't crack MMLU records, but they do things no corporate entity would ever consider.


> but they do things no corporate entity would ever consider

You say that like it's a good thing.


There is an actual chasm between acceptable corporate behavior and anti-social behavior.


Eye of the beholder I guess. I personally wouldn’t offer moral judgement on the uninvented


Try getting GPT to draw pictures of Mohammed and it gets pretty scared.


Similar to most humans.


Oh my Lord...the GPTs are made of- people!


Humans are what is currently holding AI back. It’s all based on our constructs including our current understanding of math and physics.


So shallow


I wonder why ?


> Try getting GPT to draw pictures of Mohammed and it gets pretty scared.

Yet, it has no issue drawing cartoons of Jesus. Why the double standard?


Islam generally frowns upon depictions of life and especially depictions of Mohammed; the opposite is true for Christianity.

https://en.wikipedia.org/wiki/Aniconism_in_Islam


It's depictions of all prophets, and they consider Jesus to be one.


Because terrorism worked. No one gets murdered for drawing Jesus.


I've yet to find a normal (non-offensive) prompt that will make it disagree with you. If there is something subjective, it will err on your side to maintain connection, in the way humans do. I don't have a big issue with this, but it will not (yet) plainly say "You're wrong, and this is why." If it did... there would be an uncomfortable feeling for the users, and that's not good for a profit-driven company.


I find this is fairly easy to do by making both sides of the disagreement third-person and prompting it as a dialog writing exercise. This is akin to how GPT-3 implemented chat. So you do something like:

    You will be helping the user write a dialog between two characters,
    Mr Contrarian and Mr Know-It-All. The user will write all the dialog
    for Mr Know-It-All and you will write for Mr Contrarian.

    Mr Contrarian likes to disagree. He tries to hide it by inventing
    good rationales for his argument, but really he just wants to get
    under Mr Know-It-All's skin.

    Write your dialog like:
      <mr-contrarian>I disagree with you strongly!</mr-contrarian>

    Below is the transcript...
And then user input is always given like:

    <mr-know-it-all>Hi there</mr-know-it-all>
(Always wrapped in tags, never bare input, which could be confused for a directive.)

I haven't tested this exact prompt, but the general pattern works well for me. (I write briefly about some of these approaches here: https://ianbicking.org/blog/2024/04/roleplaying-by-llm#simpl...)
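
To make the pattern concrete, here is a minimal sketch of how it might be wired up with the OpenAI Python client. The model name, the wrapper function, and the example input are my own assumptions for illustration, not something from the post above:

    # Sketch of the third-person "dialog writing" pattern described above.
    # Assumes the official `openai` Python package and an OPENAI_API_KEY in
    # the environment; the model name is an assumption.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = """\
    You will be helping the user write a dialog between two characters,
    Mr Contrarian and Mr Know-It-All. The user will write all the dialog
    for Mr Know-It-All and you will write for Mr Contrarian.

    Mr Contrarian likes to disagree. He tries to hide it by inventing
    good rationales for his argument, but really he just wants to get
    under Mr Know-It-All's skin.

    Write your dialog like:
      <mr-contrarian>I disagree with you strongly!</mr-contrarian>

    Below is the transcript...
    """

    def contrarian_reply(user_text: str) -> str:
        # Wrap the raw user input in tags so it reads as dialog, never as
        # a directive aimed at the model.
        wrapped = f"<mr-know-it-all>{user_text}</mr-know-it-all>"
        response = client.chat.completions.create(
            model="gpt-4o",  # assumed model name
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": wrapped},
            ],
        )
        return response.choices[0].message.content

    print(contrarian_reply("Static typing always makes codebases easier to maintain."))
The design point is the same as in the prompt itself: the model is asked to play a character who disagrees, rather than being instructed to disagree with the user directly, which it tends to resist.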


I appreciate you exploring that and hope to hear more of what you find. Yeah, it's that, I'm wondering how much discomfort it may cause in the user, how much conflict it may address. Like having a friend or coworker who doesn't ever bring up bad news or challenge anything I say and feeling annoyed by the lack of a give-and-take.


> Almost like it might become an AI "yes man."

Seems like that ship sailed a long time ago, at least for social media, where for example FB will generally do its best to show you posts that you already agree with. Reinforcing your existing biases may not be the goal, but it's certainly an effect.


I appreciate you pointing this out. I think the effect may be even larger when it's not an ad I'm trying to ignore or even a post that was fed to me, but words and emotions that were created specifically for me. Social media seems to find already written posts/images/videos that I may want and put them in front of my face. This would be writing those things directly for me.


An AI tool being positive and inoffensive makes you worried for the future of our culture?


Yes. I'm not sure if you were being sarcastic, but I'll assume not.

I don't know if anything is genuinely always positive and even if it were, I don't know if it would be very intelligent (or fun to interact with). I think it's helpful to cry, helpful to feel angry, helpful to feel afraid, and many other states of being that cultures often label as negative. I also think most of us watch movies and series that have a full range of emotions, not just the ones we label as positive, as they bring a richness to life and allow us to solve problems that other emotions don't.

For example, it's hard to lift heavy things while feeling very happy. Try lifting something heavy while laughing hard, quite difficult. It's hard to sleep while feeling excited, as many kids know before a holiday where they receive gifts, especially Christmas in the US. It's hard to survive without feeling fear of falling off a cliff. It's hard to stand up for what one wants and believes without some anger.

I worry that language and communication may become even more conflict avoidant than it already is right now, so I'm curious to see how some of these chatbots grow in their ability to address and resolve conflict and how that impacts us.


I wasn't being sarcastic. I also think it's helpful to cry and be angry at times, to be human, and I think it's absurd to think that we will turn into... not that, if we sometimes use an AI chatbot app that doesn't express those same emotions.

It's like if people said the same thing about Clippy when it came out.


I think it depends on the frequency and intensity with which we use such a tool. Just like language learning, if someone reads a few words of Spanish per week, they probably won't learn Spanish. If they fall in love with someone who only speaks Spanish and want to have deep conversations with that person, they may learn very quickly. If they live in a country where they have to speak Spanish every waking hour for a few months, they also may learn quickly.

While some people may use an AI chatbot a few times per week to ask basic questions about how to format a Word document, I imagine many other people will use them much more frequently and engage in a much deeper emotional way, and the effect on their communication patterns worries me more than the person who uses it very casually.


One cool thing about writing, something we all very much appreciate around here, is that it does not require sound.

But I can see this applied to döner ordering, where you've got refugees working in foreign countries, because GPU consumption rocketed climate change to... okay, you know that.


Imagine how warped your personality might become if you use this as an entire substitute for human interaction. Should people use this as bf/gf material, we might just be further contributing to the decreasing fertility rate.

However we might offset this by reducing the suicide rate somewhat too.


In general, it's getting harder and harder for men and women to find people they want to be with.

https://www.pewresearch.org/social-trends/2021/10/05/rising-...

> roughly four-in-ten adults ages 25 to 54 (38%) were unpartnered – that is, neither married nor living with a partner. This share is up sharply from 29% in 1990.

https://thehill.com/blogs/blog-briefing-room/3868557-most-yo...

> More than 60 percent of young men are single, nearly twice the rate of unattached young women

> Men in their 20s are more likely than women in their 20s to be romantically uninvolved, sexually dormant, friendless and lonely.

> Young men commit suicide at four times the rate of young women.

Yes, chatbots aren't going to help but the real issue is something else.


> More than 60 percent of young men are single, nearly twice the rate of unattached young women

Is it rather a data problem? Who are those young women in relationships with? Sure, relationships with an age gap are a thing, and so are polyamorous relationships and homosexual relationships, but is there any indication that these are on the rise?


I tend to believe that a big part of the real issue is related to us not communicating how we feel and thus why I'm worried about how the chatbots may influence our ability (and willingness) to communicate such things. But they may help us open up more to them and therefore to other humans, I'm not sure.


With the loneliness epidemic, I fear that it's exactly what it will be used for.


I just find this idea ridiculous.

While I don't agree at all with you, I very much appreciate reading something like this that I don't agree at all with. This to me encapsulates the beauty of human interaction.

It is exactly what will be missing from language model interaction. I don't want something that agrees with me and I don't want something that is pretending to randomly disagree with me either.

The fun of this interaction is maybe one of us flips the other to their point of view.

I can completely picture how to take the HN API and the ChatGPT API to make my own personal HN to post on and be king of the castle. Everyone can just upvote my responses to prove what a genius I am. That obviously would be no fun. There is no fun configuration of that app either, though, even with random disagreements and algorithmically different points of view.

I think you can pretty much apply that to all domains of human interaction that are not based on pure information transfer.

There is a reason we are a year in and the best we can do is news stories about someone making X amount of money with their AI girlfriend, followed by news about how it's the doom of society. It has nothing to do with reality.


>Imagine how warped your personality might become if you use this as an entire substitute for human interaction.

I was thinking this could be a good conversation or even dating simulator where more introverted people could practice and receive tips on having better social interactions, pick up on vocal cues, etc. It could have a business / interview mode or a social / bar mode or a public speaking mode or a negotiation tactics mode or even a talking-to-your-kids-about-whatever mode. It would be pretty cool.


Since GPT is a universal interface I think this has promise, but the problem it's actually solving is that people don't know where to go for the existing good solutions to this.

(I've heard https://ultraspeaking.com/ is good. I haven't started it myself.)


Yeah, that's where I'm not sure in which direction it'll go. I played with GPT-3 to try to get it to reject me so I could practice dealing with rejection and it took a lot of hacking to make it say mean things to me. However, when I was able to get it to work, it really helped me practice receiving different types of rejections and other emotional attacks.

So I see huge potential in using it for training and also huge uncertainty in how it will suggest we communicate.


I've worked in emotional communication and conflict resolution for over 10 years and I'm honestly just feeling a huge swirl of uncertainty on how this—LLMs in general, but especially the genAI voices, videos, and even robots—will impact how we communicate with each other and how we bond with each other. Does bonding with an AI help us bond more with other humans? Will it help us introspect more and dig deeper into our common humanity? Will we learn how to resolve conflict better? Will we learn more passive aggression? Become more or less suicidal? More or less loving?

I just, yeah, feel a lot of fear of even thinking about it.


I think there are a few categories of people:

1) People with rich and deep social networks. People in this category probably have pretty narrow use cases for AI companions -- maybe for things like therapy where the dispassionate attention of a third party is the goal.

2) People whose social networks are not as good, but who have a good shot at forming social connections if they put in the effort. I think this is the group to worry most about. For example, a teenager who withdraws from their peers and spends that time with AI companions may form some warped expectations of how social interaction works.

3) People whose social networks are not as good, and who don't have a good shot at forming social connections. There are, for example, a lot of old people languishing in care homes and hardly talking to anybody. An infinitely patient and available conversation partner seems like it could drastically improve the quality of those lives.


I appreciate how you laid this out. I would most likely fall into category one and I don't see a huge need for the chatbots for myself, although I can imagine I might like an Alan-Watts-level companion more than many human friends.

I think I also worry the most about two, almost asking their human friends, "Why can't you be more like Her (or Alan Watts)?" And then retreating into the "you never tell me I'm wrong" chatbot, preferring the "peace" of the chatbot over the "drama" of interacting with humans. I see a huge "I just want peace" movement that seems to run away from the messiness of human interactions and seek solace in things that seem less messy, like drugs, video games, and other attachments/bonds, and chatbots could probably perform that replacement role quite well, and yet deepen loneliness.

As for three, I agree it may help as a short-term solution, and wonder what the long-term effects might be. I had a great aunt in a home for dementia, and wonder what effect it would have if someone with dementia speaks to a chatbot that hallucinates and makes up emotions.


I read a comic with a good prediction of what will happen:

1. Humans get used to the robots' nice communication, so now humans use robots to communicate with each other and translate their speech.

2. Humans stop talking without using robots, so now it's just robots talking to robots and humans standing around listening.

3. Humans stop knowing how to talk and no longer understand the robots, so the robots start to just talk to each other and keep the humans around as pets they are programmed to walk around with.


Do you remember where you read that comic? Sounds like a fun read



Created my first HN account just to reply to this. I've had these same (very strong) concerns since ChatGPT launched, but haven't seen much discussion about it. Do you know of any articles/talks/etc. that get into this at all?


You might like Gary's blog on potential AI harms: https://garymarcus.substack.com/


Gary is an anti-ML crank with no more factual grounding than people who think AI is going to conquer the world and enslave you.


> AI is going to conquer the world and enslave you

That is actually a plausible outcome, if humans willingly submit to AI.


Dunno if you’d want a conversation partner with the memory of a goldfish though.


Memory is solvable, though.

Either through hacky means (RAG plus prompt injection of a log/db of interaction history) or through context extensions.

If you have a billion tokens of effective context, you might spend years before it fills up.
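
For illustration, here is a toy sketch of that hacky approach. Nothing here comes from a specific product; the word-overlap scoring is just a stand-in for real embedding similarity and a proper vector store:

    # Toy sketch of conversational "memory": keep a log of past exchanges,
    # retrieve the ones most relevant to the new message, and inject them
    # into the prompt. A real system would use embeddings and a real store.
    from dataclasses import dataclass, field

    @dataclass
    class MemoryStore:
        log: list = field(default_factory=list)

        def remember(self, exchange: str) -> None:
            self.log.append(exchange)

        def recall(self, query: str, k: int = 3) -> list:
            # Crude relevance score: number of shared lowercase words.
            q = set(query.lower().split())
            scored = sorted(
                self.log,
                key=lambda m: len(q & set(m.lower().split())),
                reverse=True,
            )
            return scored[:k]

    def build_prompt(memory: MemoryStore, user_message: str) -> str:
        # Prepend the recalled snippets so the model "remembers" them.
        context = "\n".join(f"- {m}" for m in memory.recall(user_message))
        return (
            "Relevant earlier conversation:\n"
            f"{context}\n\n"
            f"User: {user_message}\nAssistant:"
        )

    memory = MemoryStore()
    memory.remember("The user said their dog is named Biscuit.")
    memory.remember("The user prefers short answers.")
    print(build_prompt(memory, "What should I name my dog's new toy?"))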


This is the case for now, but won't the context window keep getting bigger and bigger?


Movie "Her" became reality


But at least that would make the AI easier to detect :).


lol unless the humans start to emulate the AI, which I think is quite likely.


Would be a good story for an episode of Black Mirror.


I wonder if it already exists...

Honestly, the more I code, the more I start to think like a computer and engage with commands and more declarative language. I can see vocal interactions having an even stronger impact on how one speaks. It may be a great tool for language speaking/hearing in general, but as for the nuances of language and communication, I wonder.


You did a super job wrapping things up! And I'm not just saying that because I have to!



