History is full of ideas that were at some point considered heretical or deviant. The struggles for religious liberty, women’s rights, reproductive freedom, civil rights, LGBTQIA+ rights, and many other forms of progress were thwarted by restrictions on voicing what were once seen as dangerous ideas. For decades, laws prevented the dissemination of information about birth control; in 1929, reproductive freedom pioneer Margaret Sanger was arrested after giving a speech advocating women’s rights. Not until 1977 did the Supreme Court extend full legal protection to the ideas Sanger was advancing, ruling that the First Amendment prohibited bans on advertising for contraception. Free speech protections have been essential to ensuring that champions of once-revolutionary ideas could make their case.
When you bring up “free speech” to Americans, there’s a good chance that, in their response, they’ll use the words “First Amendment.” It’s almost a reflex. Yet many free speech conflicts lie outside the purview of constitutional law. When we consider why we value free speech—its truth-finding, democratic, and creative functions—it also becomes clear that the freedom to speak, narrowly construed, isn’t enough to guarantee these benefits. If harassment deters individuals from taking part in public debate, if disinformation drowns out truth, and if thinkers dismiss the possibility of reaching audiences of different views, free expression forfeits its value. Free speech includes the right to persuade, to galvanize, to seek out truth alongside others, to reach new understandings, and to shape communities and societies. But these benefits can be enjoyed only in a climate that protects open discourse.
One of the most heated free speech debates of the digital age centers on the degree to which online platforms should remove or hide offensive or harmful speech and bar its persistent purveyors from the platforms. With Google, Facebook, and Twitter holding dominion over vast swaths of public discourse, their platforms have become prime vehicles for messages, photos, and videos that bully, harass, spread inaccurate information, stoke hatred, extol violence, and advocate criminality. The global reach and viral potency that make the internet so compelling as a communication tool have weaponized speech in ways that were previously unimaginable. Technological advances—including the rise of so-called deepfake videos that aim to defame and mislead and are almost impossible to definitively discredit—hold the potential to further erode trust in our discourse. Figuring out how to strike a balance that sustains what is best about a free and open internet while mitigating its manifest harms has bedeviled Silicon Valley executives, regulators, scholars, and civil libertarians alike.
Pandemic misinformation causes avoidable deaths. Cyberbullying contributes to rising rates of teenage suicide. The glorification of violence influences perpetrators of assaults and killings. Dangerous quackery, including anti-vaccination pseudoscience, has fueled public health crises. Targeted misinformation has skewed election outcomes, pulling the rug out from under democracy. These damaging digital side effects are now recognized not just as bugs in the system, but as entrenched features of it. There is growing evidence that online platforms may structurally favor some of the most nefarious forms of content. Their algorithms are designed to select for content that users find most compelling, and it turns out users gravitate toward more intense and extreme messages.1
Cascading evidence that digital media has intensified the harms of speech has strengthened calls for social media companies to more aggressively moderate content on their platforms. But there are clear risks to empowering private, profit-driven companies to exert untrammeled control over the huge proportion of our public discourse under their purview. Many of the fears we associate with government controls over speech—that dissent will be suppressed, that the open exchange of ideas will shrivel or skew, and that powers over speech will be abused to benefit those who wield them—are as applicable to conglomerates as they are to a national government. While a tech company doesn’t have the power to arrest and prosecute you, its ability to delete your posts and shut down your accounts is a potent form of social control.
One of the biggest obstacles to taming the negative impact of Big Tech on our discourse is that so much of these companies’ decision-making occurs in secret. Perhaps the most far-reaching, elusive facet of content moderation occurs passively, through algorithmic amplification of content that elicits the most user activity. Many analysts have argued that white supremacist, misogynist, and politically polarizing content has surged in the digital era because of the way algorithms are calibrated to serve us content we are most likely to view and share. The platforms themselves don’t actively promote specific content but instead allow the algorithms to work their will, prioritizing content only by click- and share-worthiness.
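To make that dynamic concrete, here is a deliberately simplified sketch of an engagement-only feed ranker. It is not any platform’s actual code; the posts, weights, and scoring function are invented for illustration. The point it captures is that when the only ranking signal is predicted clicks and shares, whatever users find most provocative rises to the top by design.

```python
# Illustrative sketch only: a toy, engagement-only feed ranker.
# All data and weights here are hypothetical, not drawn from any real platform.
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    predicted_clicks: float   # model's estimated probability of a click
    predicted_shares: float   # model's estimated probability of a share


def engagement_score(post: Post) -> float:
    # Prioritize solely by predicted "click- and share-worthiness,"
    # with no weight given to accuracy, civility, or downstream harm.
    return 0.5 * post.predicted_clicks + 0.5 * post.predicted_shares


def rank_feed(posts: list[Post]) -> list[Post]:
    # Whatever users are most likely to click and share floats to the top,
    # which in practice tends to favor the most intense and extreme material.
    return sorted(posts, key=engagement_score, reverse=True)


if __name__ == "__main__":
    for post in rank_feed([
        Post("Measured policy explainer", 0.10, 0.05),
        Post("Outrage-bait conspiracy claim", 0.40, 0.35),
        Post("Neighborhood bake sale announcement", 0.15, 0.10),
    ]):
        print(f"{engagement_score(post):.2f}  {post.text}")
```

Nothing in a ranker like this “promotes” extremism on purpose; the skew is simply what optimizing for engagement alone produces.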
Platforms are also honing algorithms and artificial intelligence to screen impermissible content without human intervention. But machines can’t always be trusted to make nuanced and consequential distinctions. In one instance, YouTube removed a video channel tied to California State University, San Bernardino’s Center for the Study of Hate and Extremism—a channel that was educating users about bigotry, not promoting it.2
Increasingly, instead of removing content outright, the companies demote problematic posts that skirt platform rules, limiting how often they are seen without excising them entirely. While perhaps preferable to out-and-out deletion, this system creates a shadowy realm of quasi-censorship that is almost invisible to users. Websites may observe sharp reductions in traffic due to tweaks by Google, Facebook, or other referring sites, but usually cannot find out why their content was downgraded or what they can do about it. Most ordinary users have no way to know whether a tweet or post failed to get traction because it was simply unexciting or because, unbeknownst to them, it was demoted and barely saw the light of day.
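A stylized way to picture this quasi-censorship, again purely hypothetical, is a demotion multiplier applied on top of the engagement score sketched above: the post is never deleted, but its effective score, and with it its reach, quietly collapses, and the author is told nothing.

```python
# Illustrative sketch only, extending the hypothetical ranker above.
# The moderation labels and multipliers are invented, not any platform's policy.
DEMOTION_FACTORS = {
    "ok": 1.0,          # distributed normally
    "borderline": 0.1,  # kept up, but surfaced to far fewer users
}


def visible_score(base_score: float, moderation_label: str) -> float:
    # The post survives, yet its reach is quietly throttled; the author sees
    # only an unexplained drop in traffic, never a notice that it was demoted.
    return base_score * DEMOTION_FACTORS.get(moderation_label, 1.0)
```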
Meaningful Accountability
While platforms suppress particular types of content (however imperfectly), it is not clear that they have done enough to address the algorithmic propensity to prioritize content that hits sensitive societal nerves. This thorniest and most important aspect of content moderation needs to be opened up to far greater scrutiny and to debate over the values that inform how algorithms prioritize content. Platforms must allow researchers to probe how content moves and escalates across populations, how it correlates with offline actions, and how well countermeasures—including downgrading, fact checking, and algorithmic adjustments—work to counteract harmful material.
Mandated transparency is one area where government regulation of online platforms may be a positive step, and one that would not entail content-based restrictions in violation of the First Amendment. Past practice suggests that the only way to get companies to provide meaningful transparency may be to require it by law.
To better guarantee that an evolving internet continues to respect freedom of speech, internet companies and civil society organizations should come together to ensure that, as companies take responsibility for cleaning up their platforms, expressive rights remain intact. With a reliable, universally accessible, and publicly accountable system to ensure that erroneous content removals could be quickly reversed, the prospect of companies becoming more aggressive with removals would be less worrisome. We would have a fail-safe to address the inevitable false positives quickly enough so that the impairment to free speech resulting from content moderation would be minimized (though, admittedly, not eliminated).
While the initial focus of such a service would be claims of content unfairly removed or suppressed, it might eventually also address claims from individuals who believe that certain content (for example, nude pictures or a defamatory message) should be removed for violating companies’ terms of service or local law. It could augment current “flagging” systems by offering expert assistance in mounting more complex claims and by ensuring that such claims can be tracked. But the main initial purpose of such a service would be to mitigate the risk that more assertive content moderation strategies—demanded to curtail harmful speech in particular categories—will encroach upon legitimate content. By empowering users with expert assistance to challenge contestable determinations, such a service would balance considerations of mitigating harm against the importance of preserving freedom of expression. By operating independently and transparently, the service would also provide a check on the unfettered power and discretion of the companies. Ultimately, if the companies are to strike a balance between mitigating the worst harms of online content and avoiding undue impairment of free expression, it will be because users, citizens, and civil society groups pushed them to do so.
The case in favor of free speech involves affirmative steps to make sure all individuals and groups have the means and opportunity to be heard. If free speech matters, we need to ask not only whether the government is respecting it, but also whether individuals feel able to exercise it in daily life. Unleashing both the individual and the collective benefits of free speech requires the creation of an enabling environment for a broad array of speech and a public discourse open to all.
Suzanne Nossel is the chief executive officer of PEN America, the leading human rights and free expression organization. Previously, she served as the chief operating officer of Human Rights Watch and as executive director of Amnesty International USA; she also held high-level positions in the Obama and Clinton administrations. This article is an excerpt from DARE TO SPEAK: Defending Free Speech for All. Copyright © 2020 by Suzanne Nossel. Used by permission of Dey Street Books. All rights reserved.
1. C. Lane, “Flaws in the Algo: How Social Media Fuel Political Extremism,” Psychology Today, February 9, 2018.
2. S. Hussain and S. Masunaga, “YouTube’s Purge of White Supremacist Videos Also Hits Anti-Racism Channels,” San Francisco Chronicle, June 7, 2019.
3. J. M. Balkin, “Free Speech Is a Triangle,” Columbia Law Review 118, no. 7 (2018): 2011-56.
4. B. Amerige, “Facebook Has a Right to Block ‘Hate Speech’—But Here’s Why It Shouldn’t,” Quillette, February 7, 2019.
5. S. Van Zuylen-Wood, “‘Men Are Scum’: Inside Facebook’s War on Hate Speech,” Vanity Fair, February 26, 2019.