> it is clear that the web creates an environment in which extremism thrives.
No, it's not clear. Please elaborate.
The strongest weapon against extremist ideas is other views on the issues that created the extremism in the first place. Most extremists don't want their people to communicate freely, for a reason.
Your argument presumes that the natural tendency of the web is to foster a free and open market of ideas, rather than a system of closed echo chambers each inhabiting distinct realities in a post-truth landscape.
That is by no means a sure assumption today. If anything, the question now is whether the latter state of affairs is in fact the default, and if so, whether it is reversible at all.
Just like people, not markets, create deadweight loss, rent, and negative externalities, and people, not institutions, cause systemic racism? /s
The web is a complex system, more than capable of producing outcomes drastically different from the sum of its inputs (i.e. "people"). Depending on the mechanism design and incentive structures in place, it is not just possible but mathematically likely that even a web populated entirely by rational correspondents will eventually devolve into a mess of falsehoods and echo chambers.
The important corollary to this insight? There's an entire class of structural remedies we can take that you're not even considering, before we even approach anything remotely resembling censorship on the individual.
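To make that claim concrete, here is a minimal sketch (my own toy example, not anything the parent committed to) of a bounded-confidence opinion model in Python. Every agent is perfectly "rational" under one simple rule, namely that it only engages with views inside a fixed confidence radius, and the population still fragments into disjoint, non-communicating opinion clusters. All parameters are arbitrary and purely illustrative.

    # Toy bounded-confidence (Deffuant-style) opinion dynamics.
    # Assumption for illustration: agents only update toward opinions within
    # a fixed confidence radius -- a crude stand-in for attention/incentive
    # structures. No bad faith is modeled, yet clusters ("bubbles") emerge.
    import random

    def simulate(n_agents=200, radius=0.2, step=0.5, rounds=50_000, seed=0):
        rng = random.Random(seed)
        opinions = [rng.random() for _ in range(n_agents)]  # opinions in [0, 1]
        for _ in range(rounds):
            i, j = rng.randrange(n_agents), rng.randrange(n_agents)
            if i == j:
                continue
            diff = opinions[j] - opinions[i]
            if abs(diff) < radius:           # only engage with "nearby" views
                opinions[i] += step * diff   # both parties move toward the middle
                opinions[j] -= step * diff
        return sorted(opinions)

    def count_clusters(opinions, gap=0.05):
        # Count groups of opinions separated by more than `gap`.
        return 1 + sum(1 for a, b in zip(opinions, opinions[1:]) if b - a > gap)

    if __name__ == "__main__":
        print("opinion clusters:", count_clusters(simulate()))  # typically > 1

With a confidence radius of 0.2, runs like this typically end in a few isolated clusters rather than consensus, which is the "echo chambers from rational agents" point in miniature.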
You're talking about social media specifically, not the web in general. Yes, social media create echo chambers; that's what they're for. The web never did: it gave us search engines for finding arbitrary content, self-published blogs, online courses, Wikipedia, etc.
The web most certainly supported bubbles: who you link to has a huge impact on where readers go next, and the language people use affects which search results they get (even before Google started personalizing results based on your past history).
It was slower than Twitter/Facebook, but the effect is exactly the same as the social media problem. I have a relative who went down the conservative conspiracy theory rabbit hole in the years after 9/11, and the web was a huge accelerant for his intellectual decline: once he found a community of fellow travelers, he stayed in that network of blogs, where outside sources and the very idea of objective reality were strenuously resisted. When those communities moved primarily onto social media, they moved faster and gained wider reach, but the practices didn't change at all.
You are arguing that the choices the web offers result in bubbles because some people choose to stick with particular types of content after discovering them through links. The web, however, offers just as many ways out of bubbles, and there is no better remedy than that freedom. Your relative would not have been saved by a "nanny web" strictly policed by authorities; he would simply have stayed in a social bubble of personal relationships where the information choices available on the web are inaccessible. That is how extremism thrived before social media.
Having such freedom available is of no use if people won't make use of it. You assume a "nanny web" is about taking away these freedoms, when in fact it's about incentivizing people to actually utilize them.
I agree with you insofar as Mill's free marketplace of ideas is the best remedy against bigotry and extremism, but you have the wrong idea about how to achieve it -- unregulated laissez-faire "freedom" on the web does not produce actual free discourse. On the contrary, that's a recipe for a web overrun by all manner of adware, malware, spyware, unconscious self-censorship driven by universal targeted and mass surveillance, and straight-up disinformation ops and public-opinion manipulation from hostile actors. How are you going to have sincere exchanges of ideas at scale over all that noise?
People are just as free to follow diverse viewpoints on social media, but for many people that's not the point. They want that bubble and are determined to stay in it. That doesn't change with the choice of forum.
Similarly, there's not that big a difference on "policing": the social media companies enforce fairly basic terms of service, and most people use web hosts with relatively similar terms. The major difference is the discovery mechanism (plus the fact-checking applied to certain politicians' accounts that aren't banned on public-interest grounds), which is closer to how search engines avoid sending traffic to sites with bad reputations. A lot of what Twitter/Facebook block would also be dropped by Google or many free web hosts. You can definitely get away with more, but I don't see much reason to believe that it's a major difference rather than minor variation depending on which companies and content you're talking about.