Hacker News

I think you should read the article again (I'm assuming you did read it before commenting, and must have just missed the relevant parts):

> contrary to what many have been led to believe, Parler’s Terms of Service includes a ban on explicit advocacy of violence, and they employ a team of paid, trained moderators who delete such postings. Those deletions do not happen perfectly or instantaneously — which is why one can find postings that violate those rules — but the same is true of every major Silicon Valley platform.



In AWS’ letter to Parler (https://int.nyt.com/data/documenttools/acc4b9b6a31b55bf/2/ou...), AWS explicitly called out that Parler said its plan going forward was to rely on volunteer moderators.

AWS also pointed out clear examples of illegal content that Parler did not remove.

Greenwald has a good point to make, but not everything he asserts appears to be true.


A central point of Parler was that they don't surveil and analyze their users. From reading that letter, it seems that was exactly the problem AWS had with them: AWS wanted Parler to violate its principle of not surveilling users by introducing automated moderation bots.

I would guess AWS had a solid point that without automation there was no way Parler could keep up with moderation, but without knowing more about what Parler was planning, it's hard to say.

Perhaps Parler had an interesting plan to crowd-source moderation? Sites like Stack Overflow and Hacker News do that sort of thing with some success (augmented by real moderators). I have no idea if that's what they were thinking of, but it's an interesting thought.
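For what it's worth, crowd-sourced moderation in the Stack Overflow / HN style usually boils down to a flag threshold: enough distinct, established users flag a post and it gets hidden pending human review. A minimal sketch of that idea (all names and numbers here are hypothetical, not anything Parler actually described):

```python
from dataclasses import dataclass, field

HIDE_THRESHOLD = 3   # distinct flags before a post is hidden (hypothetical value)
MIN_REPUTATION = 50  # only established accounts may flag, to resist brigading

@dataclass
class Post:
    body: str
    flaggers: set = field(default_factory=set)  # user ids who flagged
    hidden: bool = False

def flag(post: Post, user_id: str, reputation: int) -> None:
    """Record a flag; hide the post once enough distinct users flag it."""
    if reputation < MIN_REPUTATION:
        return  # ignore flags from brand-new accounts
    post.flaggers.add(user_id)  # a set dedupes repeat flags from one user
    if len(post.flaggers) >= HIDE_THRESHOLD:
        post.hidden = True  # hidden pending human moderator review

p = Post("some borderline content")
for uid in ("alice", "bob", "carol"):
    flag(p, uid, reputation=100)
print(p.hidden)  # True once three distinct established users have flagged
```

The human moderators then act as the backstop, which is why sites that run this way still employ a paid team on top of it.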



