> Either they automate removing content and end up removing some of the good with the mountains of bad, or they do it manually which costs a fortune and is too slow.
I think this is a false dichotomy; they could automate flagging for human review instead. This would address the cost though not the speed.
Content awaiting review could be disabled, or it could be disabled only if it is deemed sufficiently egregious.
>they could automate flagging for human review instead.
I'm not sure this would save them in many cases. Most of the abusive content on Facebook is not easily identifiable, and I don't think current AI systems are up to the task. They may be able to identify that a human and a dog are in a video, but not tell the difference between petting the dog and punching it.
Similarly, they likely cannot tell the difference between someone at a shooting range and someone committing a mass shooting.
Just as it is too much to expect police to stop all crime before it happens, I think it is too much to expect platforms to remove violating content before it is reported.