
People are overcomplicating this; the big picture is simple:

If a therapist were ever found to have said this to a suicidal person, they would be immediately stripped of their license and possibly jailed.



True. But a fairer comparison would be with a huge healthcare company that failed to vet one of its therapists properly, so a deranged pro-suicide therapist slipped through the net. Would we petition to shut down the whole company over this rare event? I suppose it would depend on whether the company could demonstrate what it is doing to ensure it doesn't happen again.


Maybe you shouldn't shut down OpenAI over this. But each instance of a particular ChatGPT model is identical to all the others. This is like a company with a magical superhuman therapist who can see a million patients a day: if that therapist is found to be encouraging suicide, they need to be stopped from providing therapy. The fact that this is the company's only source of revenue might mean the company has to shut down as a result, but that's just a consequence of putting all your eggs in one basket.


But that only applies if you're a therapist. If a suicidal person went up to a stranger and started a conversation, there would be no consequences for the stranger. That's more analogous to ChatGPT.



