Here, maybe this article will help make you feel more sure. What you're describing is parody or satire. At least in the US, it's a very protected form of speech.

https://www.theguardian.com/law/2022/oct/04/the-onion-defend...

And here's their actual brief. It was submitted to the actual Supreme Court, despite being funny, something nobody on the court has ever been, nor appreciated.

https://www.supremecourt.gov/DocketPDF/22/22-293/242596/20221006144840674_Novak%20Parma%20Onion%20Amicus%20Brief.pdf



But Bing doesn’t present its results as parody or satire, and they don’t intrinsically appear to be such. They’re clearly taken as factual by the public, which is the entire problem. So how is this relevant?

> funny, something nobody on the court has ever been nor appreciated.

Scalia had his moments.


I agree that "you're talking to an algorithm that isn't capable of exclusively telling the truth, so your results may vary" isn't QUITE parody/satire, but IDK that I can take "everyone believes ChatGPT is always telling the truth about everything" as a good-faith read either, and parody felt like the closest fit, as IANAL.

Intent is the cornerstone of defamation law in the US, and you would need a LOT of discovery to prove that the devs are weighting the scales in favor of bad outcomes for some people (and not just, like, end users feeding information into the AI).

TL;DR: Everyone's stance on this specific issue seems to depend on whether you believe people think these AI chatbots exclusively tell them the truth, and I just don't buy that worldview (but hey, I'm an optimist who believes humanity has a chance, so wtf do I know?)



