
> Maybe AI detection is more ethically fraught since you'd need to keep hold of the CSAM until the next training run,

Why?

The damage is already done.

I would think there's a greater risk of it leaking or being abused when it sits in a giant stockpile. Those training sets would undoubtedly be commercialized in some way, which some might see as adding insult to injury.

Some victims feel this way. Some do not.

Why would you think that? Every distribution and every view adds to the damage, even if the original victim doesn't know about it (or would rather not know).

I don't think AI training on a dataset counts as a view in this context. The concern is predators getting off on what they've done, not people developing tools to stop them.

Debating what counts as a view is irrelevant. Some child pornography subjects feel violated by any storage or use of their images. Government officials store and use them regardless.

I don't think that's how it works.
