Social Media and Political Speech: The President’s Executive Order

By Jeannette Maurer Carmadella and Nick Feldstern

The debate over social media’s role in moderating user content, specifically political speech, heated up last month when Twitter, for the first time, added fact-checking and warning labels to a number of President Trump’s tweets. Traditionally, social media giants like Twitter and Facebook have remained neutral, intervening to moderate user content only when it directly violates the company’s terms of service. These private companies take an even more hands-off approach when it comes to speech by politicians and world leaders. In recent days, however, Twitter has taken action by hiding and adding warning labels to a number of the President’s tweets, finding that they variously violated the company’s rules against voter suppression, incitement of violence, and, most recently, the sharing of manipulated media.

In response, the President threatened all social media companies, claiming Twitter’s labels were “stifling free speech.” Then, on May 28, 2020, the President signed the Executive Order on Preventing Online Censorship (the “Executive Order”), which lays the foundation for federal oversight of political speech on social media platforms by targeting Section 230 of the 1996 Communications Decency Act (CDA). The Justice Department followed up by releasing recommendations that call on Congress to repeal the portions of the CDA that shield online service providers (OSPs), like Twitter and Facebook, from liability for harmful content posted by their users. This two-decade-old law has been lauded as the bedrock of the modern Internet, and many recognize the Executive Order and the Justice Department’s recommendations as a threat to the future of online free speech.

Section 230 says, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 47 U.S.C. § 230(c)(1). In other words, Section 230 grants OSPs civil immunity for defamatory or otherwise tortious content posted by their users. For example, although a newspaper can be held liable for publishing the defamatory content of one of its editors, Twitter cannot be held liable for the defamatory tweet of one of its users.

Another, often overlooked, aspect of Section 230 is its grant of civil immunity for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” 47 U.S.C. § 230(c)(2)(A). Thus, OSPs are encouraged to self-moderate, and most tech companies, including Twitter and Facebook, have incorporated community standards for moderation into their terms of service. Facebook’s Community Standards and The Twitter Rules are similar in allowing the platforms to moderate content that they determine violates their standards. Facebook’s Community Standards state:

The goal of our Community Standards has always been to create a place for expression and give people a voice... We want people to be able to talk openly about the issues that matter to them, even if some may disagree or find them objectionable. In some cases, we allow content which would otherwise go against our Community Standards – if it is newsworthy and in the public interest. We do this only after weighing the public interest value against the risk of harm and we look to international human rights standards to make these judgments.

The Twitter Rules state:

Twitter’s purpose is to serve the public conversation. Violence, harassment and other similar types of behavior discourage people from expressing themselves, and ultimately diminish the value of global public conversation. Our rules are to ensure all people can participate in the public conversation freely and safely.

It is this immunity and these community standards that critics of Section 230 believe give social media companies both too much control over, and not enough responsibility for, the content on their platforms.

Section 230(c)(1) grants an OSP immunity for hosting harmful third-party content, while Section 230(c)(2) grants immunity for an OSP’s efforts to restrict access to harmful content. Each subsection requires that certain elements be met for Section 230 immunity to apply.

In determining whether an OSP is entitled to immunity under Section 230(c)(1), courts typically ask whether the legal claim treats the OSP as a publisher and whether the disputed content originated from a third party. If the court answers both questions in the affirmative, the OSP is likely entitled to the liability shield. In a recent example, the Second Circuit Court of Appeals held that Section 230(c)(1) shielded an OSP from civil liability for hosting a platform used by a terrorist organization to encourage violent attacks. In Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019), plaintiffs representing those harmed by Hamas attacks claimed Facebook aided and abetted the terrorist organization by providing a forum and “actively bringing Hamas’ message to interested parties” through its algorithms. The court found that because Facebook acted as a publisher when it exercised its discretion not to remove the content, Facebook was entitled to protection under Section 230(c)(1). The court also concluded that Facebook was not a content provider because its algorithms did not “materially contribute to what made the content itself unlawful.”

For Section 230(c)(2), courts consider whether an OSP’s restriction of content was “taken in good faith” and whether the OSP considered the content harmful or otherwise objectionable. Because of the high burden of proving a lack of good faith, the lion’s share of Section 230(c)(2) claims turns on whether the OSP considered the content objectionable. Courts generally agree, however, that “objectionable” is a highly subjective standard, and they have afforded OSPs broad discretion in moderating their platforms. This approach has allowed OSPs to moderate as much or as little as they want. Courts do recognize that while the shield is broad, it is not unlimited, and many judges have sought to narrow Section 230 protection.

The “good faith” requirement is notably absent from Section 230(c)(1), and courts have explicitly stated there is no “good faith” element required for immunity as a publisher. See Levitt v. Yelp! Inc., No. C-10-1321 EMC, 2011 U.S. Dist. LEXIS 124082, at *24 (N.D. Cal. Oct. 26, 2011). In fact, OSPs have even been entitled to immunity under Section 230(c)(1) for exercising publishing functions in apparent bad faith. See Zeran v. Am. Online, Inc., 129 F.3d 327, 331–33 (4th Cir. 1997).

“Good faith” only comes into play when an OSP decides to restrict or moderate content under Section 230(c)(2).

The President has repeatedly questioned whether social media moderation is implemented in good faith, and it was Twitter’s exercise of this “good faith” discretion that led to the issuance of the Executive Order. In terms of its application, the Executive Order directs the Secretary of Commerce, acting through the National Telecommunications and Information Administration (NTIA), to file a petition for rulemaking with the Federal Communications Commission (FCC) proposing regulations to narrow Section 230 immunity. If that sounds confusing, it’s because it is. From a separation of powers perspective, the order is a quagmire of conflicting constitutional directives. First, the President does not have the authority to amend or clarify Section 230; that power belongs to Congress and the courts. Second, the President does not have authority to command FCC action because the FCC is an independent agency of the United States government. The White House itself recognized this limitation and sought to bypass the legal hurdle by directing the NTIA to ask the FCC to initiate rulemaking. And third, the FCC has limited authority to rewrite or enforce Section 230 or to regulate OSPs. But, based on recent comments, FCC Chairman Ajit Pai appears to be open to the possibility.

In addition to the constitutional issues raised by the President’s challenge to Twitter, critics have also attacked the Executive Order as an attempt to chill the exercise of constitutionally protected speech by OSPs and their users. On June 2, 2020, the Center for Democracy and Technology (CDT), a tech advocacy group, sued the President over the Executive Order, claiming two First Amendment violations. CDT first argues that the order is a “plainly retaliatory” response to Twitter’s exercise of its First Amendment right to comment on the President’s statements. Second, CDT argues the “[o]rder seeks to curtail and chill the constitutionally protected speech of all online platforms and individuals—by demonstrating the willingness to use government authority to retaliate against those who criticize the government.”

Although the Executive Order faces a steep climb, it demonstrates the President’s willingness to crack down on perceived bias on social media, regardless of the free speech implications, and it echoes critics’ position that social media platforms exercise too much control over user content. From a legal perspective, Twitter’s fact-checking and the efforts of other social media platforms to prevent harmful conduct are protected from government interference by the First Amendment. Section 230 provides OSPs with civil immunity for the harmful speech of their users, encouraging a more hands-off approach to content moderation. Any action to regulate social media would constitute a fundamental restructuring of the First Amendment’s relationship with the Internet.

Ironically, President Trump should be careful what he wishes for. If the Executive Order or legislative proposals restructuring Section 230 were to come to fruition, the outcome might be the opposite of what the President wants. If social media companies were forced to exercise more control over their platforms in order to avoid civil liability, a platform like Twitter, with more than 300 million active users, would find case-by-case moderation nearly impossible and would likely have to err on the side of removal. Thus, narrowing Section 230 could lead to wholesale intervention by social media companies in the content on their own platforms. The result could fundamentally change Twitter, one of the President’s most valuable communication tools.

Absent Section 230, OSPs would be able to avoid publisher liability only by dismantling their content moderation schemes. For example, if Facebook decided to forgo any kind of moderation, it would be considered not a publisher but a distributor. Distributors cannot be held liable for defamatory speech unless they know or have reason to know of its defamatory nature. While some, including the President, might see this as a preferable alternative, others would be rightfully concerned about the proliferation of harmful content and misinformation.

Whichever way the pendulum swings, the President’s attack on Section 230 is reverberating throughout the tech world and will further embroil these outlets in content debates in a nation that has become dependent on them as a platform for speech.