Millions of people post comments and share information through social media and other online platforms. Some of that content can be objectionable, sometimes labeled as hate speech or disinformation. What, if anything, should Big Tech and social media companies do to moderate that content? Should government be involved?
Some platforms ban users, remove controversial posts, or add disclaimers, among other efforts. Do these tactics abridge users' free speech? Should free speech principles even apply to Big Tech companies, which are not traditional government actors? Section 230 of the federal Communications Decency Act, a central law in the debate over content moderation, shields Internet platforms from liability for their users' speech. What should be the scope of Section 230? Content moderation is a complicated aspect of Big Tech regulation that engenders debate and disagreement.
This section compiles select laws related to Section 230, as well as Congressional hearings, books, academic articles, Congressional Research Service reports, and popular press articles on Section 230 and content moderation. The academic articles are organized under various sub-topics.
Big Tech, CDA, censorship, collateral censorship, Communications Decency Act, content moderation, content regulation, digital, disinformation, Facebook, fake news, filter, First Amendment, free speech, good samaritan, Google, immunity, intermediary immunity, Internet, misinformation, online, platform, Section 230, social media, Twitter