Hacker News

What are the harmful things ChatGPT is saying? I thought it only said wrong things with great confidence.


OpenAI is explicitly trying to filter ChatGPT so it won't say things that may insult people, describe criminal activity (like how to hotwire a car or how to make methamphetamine), incite self-harm, and the like. Instead, you get a generic "I'm a language model and can't answer that" response. I have not seen many people complain about ChatGPT in particular (most likely because the filter works quite well in non-adversarial scenarios), but previous language models have famously started insulting people and have gone as far as trying to convince people to commit suicide. Assuming good faith, though, the grandparent most likely referred to worries about it expressing non-politically-correct views.




