Is the news-worthy surprise that so many people find life so horrible that they are contemplating ending it?
I really don't see that as surprising. The world and life aren't particularly pleasant things.
What would be more interesting is how effective ChatGPT is being in guiding them towards other ideas. Most suicide prevention notices are a joke - pretending that "call this hotline" means you've done your job and that's that.
No, what should instead happen is for the AI to try to guide them towards making their lives less shit - i.e. at least bring them towards a life of _manageable_ shitness, where they feel some hope and don't feel horrendous 24/7.
>what should instead happen is the AI try to guide them towards making their lives less shit
There aren't enough guardrails in place for LLMs to safely interact with suicidal people who are possibly an inch from taking their own life.
Severely suicidal/clinically depressed people are beyond looking to improve their lives. They are looking to die. Even worse - and this is what people who haven't been there can't fully understand - is the severe inversion that happens after months of warped reality and extreme pain, where hope and happiness greatly amplify the suicidal thoughts and can make the situation far more dangerous. It's hard to explain; it's a unique emotional space. Almost a physical effect, like colors draining from the world and reality inverting in many dimensions.
It's really a job for a human professional and will be for a while yet.
Agree that "shut down and refer to hotline" doesn't seem effective. But it does reduce liability, which is likely the primary objective...
Referring directly to a human seems like it would be far more effective - or at least make it easy to get into a chat with a professional via a simple (yes/no) prompt, with the chat continuing after the handoff. It would take a lot of resources, though. As it stands, most of this happens in silence and very few do something like call a phone number.
Guess how I know you're wrong on the "beyond" bit.
The point is you don't get to intervene until they let you. And they've instead decided on the safer feeling conversation with the LLM - fuck what best practice says. So the LLM better get it right.
I could be mistaken, but my understanding was that the people most likely to interact with the suicidal or near-suicidal (i.e. 988 suicide hotline attendants) aren't actually mental health professionals; most of them are volunteers. The script they run through is fairly rote and by the numbers (the Question, Persuade, Refer framework). Ultimately, of course, a successful intervention will result in people seeing a professional for long-term support and recovery, but preventing a suicide and directing someone to that provider seems well within the capabilities of an LLM like ChatGPT or Claude.
> What would be more interesting is how effective ChatGPT is being in guiding them towards other ideas. Most suicide prevention notices are a joke - pretending that "call this hotline" means you've done your job and that's that.
I've triggered its safety behavior (for being frustrated, which it helpfully decided was the same as being suicidal), and it gave exactly the joke of a statement you described. It suddenly reads off a script that came from either Legal or HR.
Although weirdly, other people seem to get a much shorter message that's obviously not part of the chat itself, while I got an in-chat message, so maybe my messages just made it regurgitate something similar. The shorter "safety" message is the same concept though, it's just: "It sounds like you’re carrying a lot right now, but you don’t have to go through this alone. You can find supportive resources here."
AI should help people achieve their ultimate goals, not their proximate goals. We want it to provide advice on how to alleviate their suffering, not how to kill themselves painlessly. This holds true even for subjects less fraught than suicide.
I don't want a bot that blindly answers my questions; I want it to intuit my end goal and guide me towards it. For example, if I ask it how to write a bubblesort script to alphabetize my movie collection, I want it to suggest that maybe that's not the most efficient algorithm for my purposes, and ask me if I would like some advice on implementing quicksort instead.
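To make the sorting example concrete, here's a minimal sketch (the movie titles and function name are made up for illustration) of the gap between the proximate ask and the ultimate goal: the requested bubblesort works, but Python's built-in sort with a case-insensitive key already alphabetizes the collection without hand-rolling any algorithm.

```python
# Hypothetical movie collection (illustrative data, not from the thread).
movies = ["Zodiac", "alien", "Heat", "Brazil"]

# Proximate goal: the bubblesort that was literally asked for (O(n^2)).
def bubble_sort(items):
    items = list(items)  # don't mutate the caller's list
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            # Compare case-insensitively so "alien" sorts before "Brazil"
            if items[j].lower() > items[j + 1].lower():
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

# Ultimate goal: an alphabetized collection. The built-in sorted()
# (Timsort, O(n log n)) with a case-insensitive key gets there directly -
# the answer a goal-aware assistant might steer you toward instead.
alphabetized = sorted(movies, key=str.lower)

assert bubble_sort(movies) == alphabetized
```

For a handful of titles either approach is fine; the point is that an assistant tracking the end goal would mention `sorted()` rather than silently implementing whatever algorithm was named.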
I agree. I also think this ties in with personalization in being able to understand long term goals of people. I think the current personalization efforts of models are more of a hack than what they should be.
That implies there's some deep truth about reality in that statement rather than what it is, a completely arbitrary framing.
An equally arbitrary frame is "the world and life are wonderful".
The reason you may believe one instead of the other is not because one is more fundamentally true than the other, but because of a stochastic process that changed your mind state to one of those.
Once you accept that both states of mind are arbitrary and not a revealed truth, you can give yourself permission to try to change your thinking to the good framing.
And you can find the moral impetus to prevent suicide.
It’s not a completely arbitrary framing. It’s a consequence of other beliefs (ethical beliefs, beliefs about what you can or should tolerate, etc.), which are ultimately arbitrary, but it is not in and of itself arbitrary.
I don't mean to imply that it's easy to change or that whatever someone might be dealing with is not unbearable agony, just that it's not a first principle truth that has more value than other framings.
In the pits of depression that first framing can seem like the absolute truth and it's only when it subsides do people see it as a distortion of their thoughts.
I think this is certainly part of the problem. There's no shortage of narcissists in the English-speaking world who - if they heard the woes of someone in pain - would be ready to gleefully treat it as an opportunity to pontificate down to them about "stochastic processes" and so on, rather than consider how their lives actually are.
Of course, only thereby, through being quite as superior to all others and their thought processes as me [pauses to sniff fart] can one truly find the moral impetus to prevent suicide.
The randomness of the world and individual situations means no one can ever know for sure that their case is hopeless. It is unethical to force them to live, but it is also unethical not to encourage them to keep searching for the light.