
> I think a reasonable person would immediately understand that the outputs of the bigram model were not statements of fact.

Interesting, considering the context: that assumes people know what a bigram model is, or how its outputs would differ from anything else.

Any other kind of model isn't any less "dangerous" to unreasonable people like the blog post writer; it's just more obscure than ChatGPT, especially right now.
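For anyone unfamiliar: a bigram model just picks each next word by looking at the single word before it, so its output is locally plausible word-salad with no relation to facts. A minimal sketch in Python (the toy corpus and names are mine, purely illustrative):

    import random
    from collections import defaultdict

    # "Train" by counting which word follows which in the corpus.
    corpus = "the professor wrote the post and the model wrote nonsense".split()
    bigrams = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev].append(nxt)

    # Generate by repeatedly sampling a word that followed the current one.
    word = random.choice(corpus)
    output = [word]
    for _ in range(10):
        followers = bigrams.get(word)
        if not followers:
            break  # current word never had a successor in the corpus
        word = random.choice(followers)
        output.append(word)

    print(" ".join(output))

Nothing in that loop models truth; it only models which word tends to follow which. Larger models differ in scale and fluency, not in that basic point.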



I'm not a fan of conservative law professors generally, but I can't see what's unreasonable about the argument he's making here. Broadcasting lies about someone is bad for them, and the "those fools should know this is bs so I'm not responsible" defense is itself bs.

Edit: While I might not agree with Turley, his Wikipedia bio makes him sound far more principled and consistent than the average "public intellectual" today.


"those fools should know this is bs so I'm not responsible"

is actually something you would absolutely argue in a US defamation trial. Defamation damages need to stem from people actually believing the falsehood. If the bad press from a false statement leads to someone, say, losing their job, those are damages, but the plaintiff would need to prove it was because their employer believed the lie.


The only one broadcasting the lies seems to be Turley himself: ChatGPT didn't share the conversation with the internet, he did.



