Recent moves by social media platforms to remove radio shows, videos, and other content by controversial Infowars pundit Alex Jones for violating their rules against hate speech have once again prompted complaints from conservatives about the legal protections (specifically, Section 230 of the Communications Decency Act) that allow sites to host user-generated content.
Every day brings new headlines about problematic content online — from misinformation to harassment to criminal activity — and with them come new calls to regulate online platforms that host user speech. While there is certainly an important conversation to be had about how to improve online discourse, the debate about amending Section 230 is fundamentally counterproductive.
In 1996, Congress made the wise decision to help the growth of online platforms by ensuring that legal challenges to online speech should generally be directed at the speakers rather than the platforms. Section 230 of the Communications Decency Act enshrined this principle into law and has been the bedrock of the Internet ever since.
Last month, the House Judiciary Committee held a hearing on the content moderation practices of tech companies. Many of the lawmakers in attendance complained of a supposed anti-conservative bias in these companies’ content moderation practices. Online platforms, they argued, were systematically censoring speech from conservative users. Maybe it’s time to eliminate Section 230’s protections, they mused.
Putting aside the question of whether or not such anti-conservative bias even exists online (the Internet has, after all, allowed more conservative voices to reach an audience than was ever possible in the pre-digital era), amending Section 230 would only increase online censorship and cement the market power of the largest platforms accused of engaging in these practices.
Why? Without Section 230’s protections, websites would face an avalanche of lawsuits over every piece of questionable content users put online. Given the massive volume of content even small online platforms host, there is simply no way for a website to manually (and accurately) review and disable access to all this material. This technical reality was the motivating force behind Section 230’s creation in the first place.
If Section 230's protections disappear, platforms will simply delete any content that has even the slightest chance of giving rise to legal liability. The costs of defending a lawsuit over user speech are vastly greater than the costs of simply deleting it.
Silencing opposing viewpoints online will be as simple as complaining to a website that some content is illegal. Lacking the resources to either thoroughly verify the accuracy of these claims or bear the risk of lawsuits, websites — particularly startups — will make heavy use of the delete button instead. This will lead to more than just anti-conservative bias; it will threaten the ability of any unpopular opinion to be voiced online and make it virtually impossible for all but the largest and richest companies to host user speech.
If lawmakers are truly concerned about ideological bias restricting free expression online, Section 230 is the very tool that can help counter that trend. Section 230 allows companies to engage in balanced moderation of user content, meaning a platform can both host unpopular or controversial speech and remove legitimately objectionable speech without fear of ruinous litigation. Under the protection of this law, platforms are less likely to err toward over-moderation because the fear of legal repercussions is reduced.
Section 230 encourages free speech and the most responsible moderation of all content on online platforms. It is because of this foundational law that voices across the political and ideological spectrum are able to find a home, and an audience, online.
To learn more about Section 230, read our report on intermediary liability here.