My experience is that it tries to look at your situation objectively and helps you analyse your thoughts and actions. It comes across as very empathetic, though, so there's a danger if you're easily persuaded into seeing it as a friend.
That analysis is so reductive it's almost worthless. Technically true, but very unhelpful in terms of actually using an LLM.
It is a first principle, though, so it helps to "stir the context window's pot" by having it pull in research and other shit from the web that will ground it, rather than just telling you exactly what you prompt it to say.
Hmmmm, I didn't know that... so a machine is not human, is your point? Look, I know it doesn't try, just like a sorting algo doesn't try to sort, an article doesn't try to convey an opinion, and a law doesn't try to make society more organized.
Claudes have lots of empathy. The issue is the opposite - it isn't very good at challenging you and it's not capable of independently verifying you're not bullshitting it or lying about your own situation.
But it's better than talking to yourself or an abuser!
It's about the same as talking to yourself; LLMs simply agree with anything you say unless it is directly harmful. Definitely agree about talking to an abuser, though.
Sometimes people really do just need validation, and it helps them a lot; in that case LLMs can work. Alternatively, I assume for some people just putting the whole situation into words is what helps.
But if someone needs something else, they can be straight up dangerous.
> It's about the same as talking to yourself, LLMs simply agree with anything you say unless it is directly harmful.
They have world knowledge and are capable of explaining things and doing web searches. That's enough to help. I mean, sometimes people just need answers to questions.
In one way it's potentially worse than talking to yourself. Some part of you might recognize that you need to talk to someone other than yourself; an LLM might make you feel like you've done that, while reinforcing whatever you think rather than breaking you out of patterns.
Also, LLMs have access to more resources and can do some "creative" enabling of a person stuck in a loop, so if you're thinking dangerous things but lack the wherewithal to put them into action, an LLM could make you more dangerous (to yourself or to others).
Using an LLM for therapy is like using an iPad as an all-purpose child attention pacifier. Sure, it's convenient. Sure, there's no immediate harm. Why a stressed parent would be attracted to the idea is obvious... and of course it's a terrible idea.
It’s nothing like that. Using an iPad for study assistance is a conduit to many credible sources and tools. They can be evaluated using context, reputation, reviews, etc.
An LLM generates non-deterministic output from sources you can't even know, let alone evaluate, and it is primed more toward agreeing with you than toward critical, objective evaluation. It is, at best, like asking your closest parent to help you through a difficult interpersonal situation: the interaction is probably, subconsciously, going to be skewed so far toward soothing you that you just can't consider it objective. The difference is that with an LLM, that skew is deliberate. It's designed in.
Using LLMs for therapy is deeply dystopian and disgusting; people need human empathy in therapy, and LLMs do not provide empathy.
Complete disaster waiting to happen for that individual.