Hacker News

This model is unlikely to be self-aware or conscious, but when we eventually get there, we should be using better methods than training our models to intentionally say untrue things (the "browsing: disabled" prompt is probably the most obvious example).


> better methods than training our models to intentionally say untrue things

That's what we do with children and propaganda.


And you think that's sufficient justification for doing it to AI too? (For whatever it's worth, I don't lie to my children. You shouldn't either.)



