
Seems like people here are pretty negative towards a "conversational" AI chatbot.

ChatGPT has a lot of frustrations and ethical concerns, and I hate the sycophancy as much as everyone else, but I don't consider being conversational to be a bad thing.

It's just preference, I guess. I understand how someone who mostly uses it as a Google replacement or programming tool would prefer something terse and efficient. I fall into the former category myself.

But it's also true that I've dreamed about a computer assistant that can respond to natural language, even real-time speech -- and can imitate a human well enough to hold a conversation -- since I was a kid, and now it's here.

The questions of ethics, safety, propaganda, and training on other people's hard work are valid. It's not surprising to me that using LLMs is considered uncool right now. But having a computer imitate a human really effectively hasn't stopped being awesome to me personally.

I'm not one of those people who treats it like a friend or anything, but its ability to imitate natural human conversation is one of the reasons I like it.



> I've dreamed about a computer assistant that can respond to natural language

When we dreamed about this as kids, we were dreaming about Data from Star Trek, not some chatbot that's been focus-grouped and optimized for engagement within an inch of its life. LLMs are useful for many things and I'm a user myself; even staying within OpenAI's offerings, Codex is excellent. But as things stand, anthropomorphizing models is a terrible idea and amplifies the negative effects of their sycophancy.


Right. I want to be conversational with my computer, but I don't want it to respond in a manner that's trying to continue the conversation.

Q: "Hey Computer, make me a cup of tea" A: "Ok. Making tea."

Not: Q: "Hey computer, make me a cup of tea" A: "Oh wow, what a fantastic idea, I love tea don't you? I'll get right on that cup of tea for you. Do you want me to tell you about all the different ways you can make and enjoy tea?"


Readers of a certain age will remember the Sirius Cybernetics Corporation products from Hitch Hiker's Guide to the Galaxy.

Every product - doors, lifts, toasters, personal massagers - was equipped with intensely annoying, positive, and sycophantic GPP (Genuine People Personality)™, and their robots were sold as Your Plastic Pal Who's Fun to be With.

Unfortunately the entire workforce were put up against a wall and shot during the revolution.


The Hitchhiker's Guide to the Galaxy describes the Marketing Department of the Sirius Cybernetics Corporation as "a bunch of mindless jerks who'll be the first against the wall when the revolution comes", which fits with the current vibe.

A copy of Encyclopedia Galactica which fell through a rift in the space-time continuum from a thousand years in the future describes the Marketing Department of the Sirius Cybernetics Corporation as "a bunch of mindless jerks who were the first against the wall when the revolution came."


Why do you want to talk to your computer?

I just want to make it do useful things.

I don't spend a lot of time talking to my vacuum or my shoes or my pencil.

Even Star Trek did not have the computer faff about. Picard said "Tea, Earl Grey, hot" and it complied; it did not respond.

I don't want a computer that talks. I don't want a computer with a personality. I don't want my drill to feel it's too hot to work that day.

The ship computer on the Enterprise did not make conversation. When Dr Crusher asked it the size of the universe, it did not say "A few hundred meters, wow, that's pretty odd, why is the universe so small?" It responded "A few hundred meters."

The computer was not a character.

Picard did not ask the computer its opinion on the political situation he needed to solve that day. He asked it to query some info, and then asked his room full of domain experts their opinions.


There it is, the most frequent question a hacker has to answer. Why would you want that? The answer's always the same: because it's cool.


I'm generally ok with it wanting a conversation, but yes, I absolutely hate that it seems to always finish with a question, even when it makes zero sense.


Sadly, Grok also started doing that recently. Previously it was much more to the point, but now it's gotten extremely wordy. The question at the end is a key giveaway that something under the hood has changed even when the version number hasn't.


I wouldn't be surprised if this was a feature to drive engagement.


of course it is. this seems so obvious to me.

I even wrote into ChatGPT's "memory" to NOT ASK FOLLOW-UP QUESTIONS, because it's crazy annoying imo. It respects it about 40% of the time, I'd say.


I didn't grow up watching Star Trek, so I'm pretty sure that's not my dream. I pictured something more like Computer from Dexter's Lab. It talks, it appears to understand, it even occasionally cracks jokes and gives sass, it's incredibly useful, but it's not at risk of being mistaken for a human.


I would have thought the Hacker News type would be dreaming about having something like Jarvis from Iron Man, not Data.


Ideally, a chatbot would be able to pick up on that. It would, based on what it knows about general human behavior and what it knows about a given user, make a very good guess as to whether the user wants concise technical know-how, a brainstorming session, or an emotional support conversation.

Unfortunately, advanced features like this are hard to train for, and work best on GPT-4.5 scale models.


For building tools with, it's bad. It's pointless token spend on irrelevant tics that will just be fed to other LLMs. The inane chatter should be added at the final layer if, and only if, the application is a chatbot, and only if they want the chatbot to be annoying.


I agree with what you're saying.

Personally, I also think that in some situations I do prefer to use it as a Google replacement in combination with the imitated human conversation. I mostly use it to 'search' questions while I'm cooking, or to ask for clothing advice, and here I think the fact that it can respond in natural language and imitate a human well enough to hold a conversation is a benefit to me.


> ChatGPT has a lot of frustrations and ethical concerns, and I hate the sycophancy as much as everyone else, but I don't consider being conversational to be a bad thing.

But is this realistic conversation?

If I say to a human I don't know "I'm feeling stressed and could use some relaxation tips" and he responds with "I’ve got you, Ron" I'd want to reduce my interactions with him.

If I ask someone to explain a technical concept, and he responds with "Nice, nerd stat time", it's a great tell that he's not a nerd. This is how people think nerds talk, not how nerds actually talk.

Regarding spilling coffee:

"Hey — no, they didn’t. You’re rattled, so your brain is doing that thing where it catastrophizes a tiny mishap into a character flaw."

I ... don't know where to even begin with this. I don't want to be told how my brain works. This is very patronizing. If I said this to a human coworker who spilled coffee, it wouldn't endear me to them.

I mean, seriously, try it out with real humans.

The thing with all of this is that everyone has their own preferences for how they'd like a conversation to go. That's why everyone has some circle of friends and excludes others. The problem with their take on a conversational style is the same one anyone trying to make friends faces: it will either attract or repel.


Yes, it's true that I have different expectations from a conversation with a computer program than with a real human. Like I said, I don't think of it the same as a friend.


I'm with you in that I like conversational AI. I just wish it wasn't obvious it's an AI and actually sounded like real humans. :-)

The format matters as well. Some of these things may sound just fine in audio, but it doesn't translate well to text.

Also, context matters. Sometimes I just want to have a conversation. Other times I'm trying to solve a problem. For the latter, the extra fluff is noise and my brain has to work harder to solve the problem than I feel it should.


A chatbot that imitates a friendly and conversational human is awesome and extremely impressive tech, and also horrifyingly dystopian and anti-human. Those two points are not in contradiction.



