Moderating Hate Speech and Harassment Online: Are Internet Platforms Responsible for Managing Harmful Content?

Introduction

Major Internet companies, including search engines, social media platforms, and internet service providers, have recently been at the center of a hotly contested debate over the moderation of user-generated content online. As technology evolves and communicating online becomes more accessible, so too does sharing harmful and sometimes abusive content. With more than 2.7 billion users on Facebook alone, social media platforms have seen a sharp rise in hate speech and harassing content.[1]

Research from the Pew Research Center found that approximately 40% of Americans have experienced some form of harassment online, with 25% reporting that they have experienced severe harassment, including sexual harassment, stalking, physical threats, swatting, doxing, and sustained harassment.[2] The majority of respondents who reported being harassed online say they were targeted because of their political views, gender, or racial or ethnic background. Fully three-fourths of online abuse targets say their most recent experience with online harassment occurred on social media specifically.[3]

As a result, many Americans are critical of how major companies like Facebook and Twitter handle this type of abusive content on their platforms. Nearly 80% believe that social media companies are doing no better than a fair or poor job of addressing online harassment and bullying. However, only 33% say individuals who have been bullied or harassed on social media should be able to hold the platforms legally responsible.

The debate over how and whether online platforms should manage harmful content shared by their users is often centered around the First Amendment and protected speech. While some believe that these companies are not doing enough to counter harmful content, others argue that they are unfairly restricting access to valuable speech or hampering specific groups’ rights to express themselves freely.[4] This ongoing debate and lack of consensus have made it difficult for Congress to pass any meaningful legislation on the moderation of content on Internet platforms.

Laws Governing Hate Speech or Abusive Content

There are currently no federal laws prohibiting hate speech, online or off. Hate speech, however offensive or distasteful, is considered protected speech under the First Amendment[5] and can be criminalized in only two cases: when it directly incites imminent criminal activity or when it consists of specific threats of violence targeted at a person or group (the true threat doctrine).

The First Amendment also protects social media companies’ right to decide whether to allow or how to present a user’s content online. The courts have held that the First Amendment prohibits only state action against speech and does not constrain private companies.[6] Moreover, the courts have also concluded that Section 230 of the Communications Decency Act, 47 U.S.C. § 230, provides immunity to content hosts for actions taken “voluntarily” and “in good faith” to restrict access to “objectionable” material.[7] So while there are no laws prohibiting hate speech, there are also no laws mandating the removal of such content or prohibiting a private company’s decision to remove it. As a result, content moderation has been ad hoc and left to the discretion of individual private companies. Some platforms have relied heavily on machine-learning algorithms to monitor and immediately remove questionable content, while others have refused to touch any content, regardless of the harm it may cause. Many companies fall somewhere in between, using a mix of human and machine moderation, which raises the question of the harm posed to the human moderators responsible for reviewing such content for approval or removal. That harm, however, is outside the scope of this paper.

With the combination of hate speech protected by the First Amendment and private companies’ right to determine what content is allowed on their platforms, many cases brought before the courts challenging the moderation of online content have been dismissed, including Davison v. Facebook, Inc., and Prager Univ. v. Google LLC, and it seems likely that future cases will be rejected on the same grounds. Given that this precedent is likely to continue, Congress, not the courts or government agencies, should take responsibility for establishing regulation for the moderation of content.

Discussion

Given the nature of the Internet and the high occurrence of cross-posting[8] across platforms, the moderation of hate speech and abusive content online cannot work successfully without a baseline standard for all content-sharing platforms. Establishing such a baseline for how, or whether, content-sharing platforms moderate user content will depend primarily on three factors:

1. how we perceive their analog equivalent;

2. the type of content being moderated; and

3. how we understand the harm perpetrated by the content.

There are three main arguments for the analog equivalent of content-sharing platforms: public town squares, common carriers, and publishers/editors. In 2018, Twitter co-founder and CEO Jack Dorsey claimed in a Senate committee hearing that Twitter is a “digital public square,” signaling the platform’s importance in facilitating the “free and open exchange” of ideas on the web.[9] While this argument may make sense on its face, a public town square is not the most direct equivalent of a content-sharing platform. Public town squares are spaces where all community members have equal access and an equal audience for their speech. They cannot turn away individuals who want to speak, and they do not typically amplify certain voices over others. Content-sharing social media platforms, like Twitter, Facebook, TikTok, and Instagram, restrict access to their services and use algorithms and social listening tools to make some content more visible, or “heard,” than other content. By using these tools and restricting access to the platform for specific users (e.g., underage users or users who have violated community standards), these companies and their platforms are no longer straightforward public town squares. Furthermore, because the Supreme Court has affirmed that the First Amendment’s prohibition on restricting speech reaches state action and not private companies, the public town square analogy carries little legal weight, and the debate has shifted toward treating these platforms as common carriers of content.

In Biden v. Knight First Amendment Institute at Columbia University, Justice Clarence Thomas suggested that social media platforms be regulated as common carriers or public accommodations, similar to broadcast carriers, railroads, or telephone companies. Common carriers are available to everyone and are not originators of messages or content; they simply carry content from one place to another. If content-sharing platforms were established as common carriers, the companies would no longer have the right to remove content, as the courts have historically allowed stricter regulation of common carriers in the interest of public access.[10] Justice Thomas is correct that content-sharing platforms provide a means of transporting user-generated content from one place to another. However, that is not all these platforms or companies do. As mentioned previously, many, if not most, social media companies engage in some form of algorithmic manipulation of content, and some outright curate content for viewers (e.g., Twitter “Moments,” prioritized sponsored content or advertisements, or organic content reshared to official accounts owned by the company). Therefore, content-sharing companies cannot be classified solely as common carriers and should not be regulated as such.

The last analog equivalent for content-sharing social media companies is publishers or editors. Although social media companies may prefer to align themselves with public squares, and public officials may want to categorize them as common carriers, the current iteration of social media companies and the nature of content sharing online most closely resemble publishers and content editors. By allowing certain advertised content and by prioritizing some content over other content through algorithms or even personal curation, content-sharing social media companies are engaging in a form of content editing. They are no longer facilitating the open sharing of communication and information online but are instead actively shaping it. If they are going to participate in this activity, then they, like other major publishers, should be held to a specific standard and set of regulations, and the companies themselves should not be the ones to set that standard.

As evidenced by the 2016 U.S. presidential election and the proliferation of well-funded information operation campaigns in the years since, social media platforms often act not in the best interest of consumers but in the company’s own best interest. Many have begun to adjust their approach to moderating content because of the genuine harm that hate speech and harassment cause, but only after being held accountable by Congress and the public.[11]

Research has shown a direct correlation between the proliferation of unmoderated hate speech and harassment online and the occurrence of violence offline.[12] Moreover, there is evidence that hate speech online can be tied to higher rates of suicide among victims.[13] So the question is not whether hate speech causes harm but who is responsible for that harm. As transmitters and curators of content on their platforms, social media companies bear at least partial responsibility for harm perpetrated by their users, and they should be held accountable for it.

Conclusion

The Internet, and social media specifically, presents a new frontier for how we communicate and for the effects that communication has on others. Before the advent of social media platforms and machine-learning algorithms, hate speech and harassing language did not have the same ability to spread like wildfire in the analog world that it does online. What used to take days, months, or even years to reach widespread audiences via traditional media now takes mere minutes to be transmitted worldwide. As a result, the potential for harm and the scale of that harm are significantly greater than ever before.

As both carriers and publishers of content, social media companies have the right to determine what kind of content is allowable on their platforms and an obligation to act in good faith to prevent harm from objectionable material. Hate speech and harassment online have been linked to a global increase in violence toward minorities and other marginalized groups. It is not coincidental that the perpetrators of some of the United States’ most heinous acts of mass violence in recent years were frequent users of social media networks with lax rules, like Gab, Parler, and 4Chan.[14] It is also not a coincidence that anti-Asian sentiment and hate crimes have increased in the year since the former President of the United States used his social media accounts to perpetuate derogatory and hateful language against Chinese people.[15]

While the right to free expression under the First Amendment must be protected, hate speech and harassment online should not be protected as free expression, given their harmful effects on their intended victims, their propensity for inciting violent behavior in the real world, and their ability to reach a significant number of people in a short amount of time, which expands their potential for harm.

Although a majority of Americans may not believe that victims of online hate speech or harassment should be allowed to sue social media companies for their role in enabling such content, that should not absolve the companies of their responsibility. Congress must step in to set a baseline standard for how social media companies regulate harmful content before it is too late.

[1] Eileen Hershenov & David L. Sifry, Online Hate and Harassment Report: The American Experience 2020 26 (2020), https://www.adl.org/online-hate-2020.

[2] Emily Vogels, The State of Online Harassment, Pew Research Center: Internet, Science & Tech (2021), https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/.

[3] Id.

[4] Valerie C. Brannon, Free Speech and the Regulation of Social Media Content 46 (2019), https://fas.org/sgp/crs/misc/R45650.pdf.

[5] Snyder v. Phelps, 562 U.S. 443 (2011), https://www.oyez.org/cases/2010/09-751.

[6] Manhattan Community Access Corp. v. Halleck, 587 U.S. ___ (2019), https://www.oyez.org/cases/2018/17-1702.

[7] Brannon, supra note 4.

[8] Cross-posting refers to the act of posting a message, link, image, video, etc., on more than one online location.

[9] Jack Dorsey, Foreign Influence Operations’ Use of Social Media Platforms: Hearing Before the S. Select Comm. on Intelligence (2018), https://www.intelligence.senate.gov/hearings/open-hearing-foreign-influence-operations%E2%80%99-use-social-media-platforms-company-witnesses.

[10] Brannon, supra note 4.

[11] Arisha Hatch, Big Tech companies cannot be trusted to self-regulate: We need Congress to act, TechCrunch (2021), https://social.techcrunch.com/2021/03/12/big-tech-companies-cannot-be-trusted-to-self-regulate-we-need-congress-to-act/.

[12] Zachery Laub, Hate Speech on Social Media: Global Comparisons, Council on Foreign Relations (2019), https://www.cfr.org/backgrounder/hate-speech-social-media-global-comparisons.

[13] David D. Luxton, Jennifer D. June & Jonathan M. Fairall, Social Media and Suicide: A Public Health Perspective, 102 Am J Public Health S195–S200 (2012).

[14] Kevin Roose, On Gab, an Extremist-Friendly Site, Pittsburgh Shooting Suspect Aired His Hatred in Full, The New York Times, October 28, 2018, https://www.nytimes.com/2018/10/28/us/gab-robert-bowers-pittsburgh-synagogue-shootings.html.

[15] Eileen Hershenov & David L. Sifry, Online Hate and Harassment: The American Experience 2021 38 (2021), https://www.adl.org/online-hate-2021.