The penalties for unknowingly possessing or transmitting child porn are far too harsh, both in this case and in general (far beyond just Google's corporate policies).
Again, to avoid misunderstandings, I said unknowingly - I'm not defending people who knowingly possess or traffic in child porn, except where possession serves one of the few appropriate purposes, like reporting it to the proper authorities when it is discovered.
The issue is that when you make ignorance a valid defense, the optimal strategy is to deliberately turn a blind eye, since that reduces your risk exposure. It also gives refuge to those who can convincingly feign ignorance.
We should make tools readily available and user-friendly so it is easier for people to detect CSAM that they have unintentionally interacted with. This both shields the innocent from being falsely accused and makes it easier to stop bad actors, since their activities are detected earlier.
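As a rough illustration of what such a user-facing tool could look like, here is a minimal sketch that hashes local files and compares them against a hypothetical list of known-bad hashes. The hash-list file name, the directory argument, and the use of exact SHA-256 matching are all assumptions for illustration; real signature-based systems (e.g. PhotoDNA) use perceptual hashes so that re-encoded or resized copies still match.

```python
# Minimal sketch of a signature-based "check my own files" tool.
# Assumptions (not from the discussion above): a plain-text file of
# known-bad SHA-256 hashes supplied by a trusted authority, and exact
# hash matching rather than perceptual hashing.
import hashlib
import sys
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(directory: Path, hash_list: Path) -> list[Path]:
    """Return files under `directory` whose hashes appear on the known-bad list."""
    known_bad = {line.strip() for line in hash_list.read_text().splitlines() if line.strip()}
    flagged = []
    for path in directory.rglob("*"):
        if path.is_file() and sha256_of(path) in known_bad:
            flagged.append(path)
    return flagged

if __name__ == "__main__":
    # Usage: python scan.py <directory-to-check>
    hits = scan(Path(sys.argv[1]), Path("known_bad_hashes.txt"))
    for hit in hits:
        print(f"MATCH: {hit}  -- report to the proper authorities, do not redistribute")
```

The point of the sketch is only that detection can run locally, on the user's initiative, against published signatures - the opposite of a provider silently scanning everything.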
No, it should be law enforcement's job to determine intent, not a blanket "you're guilty." Treating this as pure actus reus, with no mens rea required, is a huge mess that makes it easy to frame people and to get into trouble without any guilty intent.
Determining intent takes time, is often not possible, and requiring it encourages people to specifically avoid doing the work of checking whether something needs to be flagged. Not checking is at best negligent. Having everybody check and flag is the sensible option.
"Everyone" in this case meaning "people demonstrated to be in possession of child porn who took no action." And they are not assumed guilty; they are exactly as innocent as anyone with a dead body in their fridge that they also "had no idea about."
That's the root problem with all mandated, invasive CSAM scanning. Non-signature-based scanning creates an unreasonable panopticon that leads to lifelong banishment based on imprecise, evidence-free guessing. It also hyper-criminalizes every parent who accidentally takes a picture of their kid who isn't fully dressed. And what about DoS victims who are anonymously sent CSAM without their consent to get them banned for "possession"? Pedophilia is gross and evil, no doubt, but extreme "think of the children" measures that sacrifice liberty and privacy create a different evil of their own. Handing over total responsibility and ultimate decision-making for critical matters to a flawed algorithm is lazy, negligent, and immoral. There's no easy solution here, but requiring human review before any drastic measure (a human in the loop, HITL) should be the moral and ethical minimum standard.
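To make the HITL point concrete, here is a minimal sketch of routing automated flags into a human review queue rather than taking automatic account action. All the names, types, and the score threshold are hypothetical illustrations, not any real provider's pipeline.

```python
# Minimal sketch of a human-in-the-loop (HITL) gate: an automated match
# only *queues* a case for review; no ban or report happens until a
# trained reviewer confirms it. Names and thresholds are hypothetical.
from dataclasses import dataclass, field
from enum import Enum, auto

class Decision(Enum):
    NO_ACTION = auto()
    CONFIRMED = auto()        # reviewer confirmed the match -> escalate
    FALSE_POSITIVE = auto()

@dataclass
class Flag:
    account_id: str
    content_id: str
    match_score: float        # e.g. similarity to a known-bad signature

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, flag: Flag, auto_threshold: float = 0.9) -> None:
        # Even a very confident automated match only gets queued;
        # the algorithm never bans or reports on its own.
        if flag.match_score >= auto_threshold:
            self.pending.append(flag)

    def review(self, flag: Flag, reviewer_decision: Decision) -> str:
        self.pending.remove(flag)
        if reviewer_decision is Decision.CONFIRMED:
            return f"escalate {flag.account_id} to the proper authorities"
        return f"no action against {flag.account_id}; recorded as false positive"
```

The design point is purely structural: every irreversible step sits behind the human `review` call, never behind the automated `submit`.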
On one hand, I'd like to say this could happen to anyone. On the other hand, what the F? Why are people passing around a dataset that contains child sexual abuse material? And on yet another hand, the whole thing reeks of techy bravado, and I don't exactly blame him. If one of the inputs to your product (OpenAI, Google, Microsoft, Meta, X) is a dataset that you can't even say for sure does not contain child pornography, that's pretty alarming.