Deepfakes: The New Frontier of Disinformation

By Carolyn Wimbly Martin and Ethan Barr

In June 2019, Facebook founder and CEO Mark Zuckerberg appeared to criticize himself in a video posted to Instagram: “One man with total control of billions of people’s stolen data—all their secrets, their lives, their futures…” It was a moving image of Zuckerberg, except that the audio sounded like an unconvincing impression, and the individual posting the video was not Zuckerberg, let alone a representative of Facebook. The account was run by conceptual artist Bill Posters and a collective of other avant-garde artists as part of a larger project titled “Spectre,” which placed a spotlight on big data. These artists used artificial intelligence (AI) to produce what are known as deepfakes, which use neural networks to alter faces in existing images and videos so that one person appears to be another.

Deepfakes may be exploited in the entertainment industry to create more lifelike animations or give life to deceased individuals. For example, filmmakers are using the technology to digitally recreate James Dean, who passed away in 1955, for a new Vietnam War film. The Massachusetts Institute of Technology also recently released a deepfake of Richard Nixon delivering the contingency speech prepared in case the Apollo 11 mission had failed. However, many see the development of deepfakes as overwhelmingly negative. At their core, deepfakes gravely risk the spread of disinformation, which has been at the forefront of legal and political discourse. With a critical election season on the horizon and a population largely stuck at home perusing the Internet, limiting the proliferation and advancement of deepfakes is of utmost concern. While some platforms have begun using verification and moderation systems, they remain the minority at this time.

Legislation has been introduced to curb the use of deepfakes, but it has made little to no progress. Some experts have opined that AI is advancing at such an exponential rate that the law and detection technology may never catch up. Although solutions are not clear, it is important to have a basic understanding of deepfake creation, potential legal remedies and their shortcomings, and proposed legislation.

Deepfake Creation

Deepfakes, named by an anonymous Reddit user who began sharing AI computer code in 2017, are created through machine learning, an AI process by which computers learn patterns from data without being explicitly programmed. Many deepfake tools use autoencoders, neural networks that learn compressed representations of images, such as faces, and develop ways to reconstruct them without human supervision. Two systems work in tandem to produce deepfake videos: a generator and a discriminator. The generator is a neural network that creates the falsified content in the first place. The discriminator is a second neural network that scrutinizes each frame to determine whether it is fake and, if so, “teaches” the generator how to be more effective in its deception. Together, these systems form a generative adversarial network (GAN), which can be used for 3D printing, video game modifications, scientific models and deepfakes.
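
For readers who want a concrete picture of that generator/discriminator loop, below is a minimal, illustrative sketch in Python using PyTorch. It trains on toy random vectors rather than real video frames, and the network sizes, learning rates and training length are arbitrary assumptions for demonstration; no production deepfake tool is this simple.

```python
# A minimal, illustrative GAN sketch (PyTorch). Toy vectors stand in for
# video frames; all sizes and hyperparameters are assumptions for demo.
import torch
import torch.nn as nn

DATA_DIM, NOISE_DIM = 64, 16  # toy dimensions; real face models are far larger

# Generator: turns random noise into a fake "frame" (here, a flat vector).
G = nn.Sequential(nn.Linear(NOISE_DIM, 128), nn.ReLU(), nn.Linear(128, DATA_DIM))

# Discriminator: scores whether a "frame" looks real (logit > 0) or fake.
D = nn.Sequential(nn.Linear(DATA_DIM, 128), nn.ReLU(), nn.Linear(128, 1))

loss_fn = nn.BCEWithLogitsLoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

real_data = torch.randn(256, DATA_DIM)  # stand-in for genuine footage

for step in range(1000):
    real = real_data[torch.randint(0, 256, (32,))]
    fake = G(torch.randn(32, NOISE_DIM))

    # The discriminator learns to tell real frames from generated ones...
    d_loss = (loss_fn(D(real), torch.ones(32, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(32, 1)))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # ...and its feedback "teaches" the generator to fool it more often.
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```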

This technology may seem complex and reserved for only the most astute programmers, but toolkits, including some free open-source code, are widely available to laypeople and to hobbyists with basic coding knowledge. Some of these unsophisticated deepfakes may be more easily detectable. According to the US Government Accountability Office (GAO), inconsistent eye blinking, asymmetrical jewelry and undefined facial features are characteristics that allow users to spot a basic deepfake quite quickly.
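
To illustrate how one of these GAO cues could be checked programmatically, here is a toy Python sketch that flags footage whose subject blinks implausibly rarely. It assumes per-frame eye-aspect-ratio (EAR) values have already been extracted by a facial-landmark tool; the function names, thresholds and blink-rate figures are illustrative assumptions, not forensic standards.

```python
# Toy heuristic for one GAO cue: implausibly infrequent blinking.
# Assumes per-frame eye-aspect-ratio (EAR) values were already extracted
# with a facial-landmark library; all thresholds are illustrative.

def count_blinks(ear_per_frame, closed_thresh=0.2):
    """Count blinks as runs of frames where the eyes fall below a
    'closed' threshold (an assumed value, not a forensic standard)."""
    blinks, eyes_closed = 0, False
    for ear in ear_per_frame:
        if ear < closed_thresh and not eyes_closed:
            blinks += 1          # eyes just closed: start of a blink
            eyes_closed = True
        elif ear >= closed_thresh:
            eyes_closed = False  # eyes reopened
    return blinks

def looks_suspicious(ear_per_frame, fps=30.0, min_blinks_per_min=5.0):
    """People typically blink roughly 15-20 times per minute; a far
    lower rate is one weak signal of manipulated footage."""
    minutes = len(ear_per_frame) / fps / 60.0
    rate = count_blinks(ear_per_frame) / max(minutes, 1e-9)
    return rate < min_blinks_per_min

# Example: 60 seconds of footage in which the subject never blinks.
print(looks_suspicious([0.3] * 1800))  # True -> worth a closer look
```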

Even so, less discerning viewers may not be able to detect even the simplest of deepfakes. Imagine a fake video of a politician punching someone in the face. Although the politician might be able to cognitively separate the fake elements from reality, her eight-year-old son may stumble upon the video and be unable to detect the technological manipulation. The emotional impact could be staggering. Coupled with AI-generated audio that mimics the individual’s voice, even the most observant users may have difficulty detecting the fraud. The implications of and solutions to widespread deepfake usage span multiple legal theories.

Copyright Law

Copyright law initially appears to be the appropriate avenue to regulate deepfakes. Many experts have suggested enlisting online providers like Google and YouTube to police them. Individuals can wield Digital Millennium Copyright Act (DMCA) takedown notices as a weapon, notifying online providers of doctored content that may be damaging or even defamatory. For example, in 2019, multiple online providers removed an altered video that made Nancy Pelosi appear inebriated while giving a speech.

But not all providers followed suit. Facebook left the video up and did not ban deepfakes until January 2020. Other platforms may simply place a warning on such content rather than remove it. There are very few laws expressly covering non-pornographic deepfakes, and each platform may regulate them differently, which creates more uncertainty and fails to limit the proliferation of damaging content.

From a practical viewpoint, there is a significant problem of authentication. While some platforms use algorithms to detect spam or copyright infringement in general, that approach may prove difficult for deepfakes. Because some of the content is hyper-realistic, algorithms may be unable to determine what has been altered with any degree of certainty. As a result, host platforms cannot effectively monitor these video clips.
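
To see why authenticity is harder than matching, consider a simplified sketch of the frame-fingerprinting that duplicate-detection systems rely on. The `average_hash` and `hamming` helpers below are toy stand-ins for real fingerprinting pipelines, not any platform’s actual system: a doctored frame can still hash close enough to the original to register as the “same” video for copyright purposes, while the match says nothing about whether the depicted events ever occurred.

```python
# Toy fingerprinting demo: matching a doctored frame to its source is
# feasible, but the match says nothing about authenticity. The helpers
# below are simplified stand-ins for real fingerprinting pipelines.
import numpy as np

def average_hash(frame, size=8):
    """Tiny perceptual fingerprint: threshold an 8x8 grid of block means
    against their overall mean. Real systems are far more robust."""
    blocks = frame.reshape(size, frame.shape[0] // size,
                           size, frame.shape[1] // size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()

def hamming(h1, h2):
    """Number of differing fingerprint bits; small means 'same' video."""
    return int(np.count_nonzero(h1 != h2))

rng = np.random.default_rng(0)
original = rng.random((64, 64))   # stand-in for a grayscale video frame
doctored = original.copy()
doctored[24:40, 24:40] += 0.05    # subtle edit to the "face" region

# The doctored frame still matches the original closely (useful for a
# copyright claim), yet nothing here reveals whether it is truthful.
print(hamming(average_hash(original), average_hash(doctored)))  # small
```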

More importantly, some individuals may be unable to claim copyright infringement because they do not own the videos of themselves. For instance, many news outlets own the copyright to videos of politicians and celebrity figures that may be vulnerable to alteration. Accordingly, the individuals portrayed would be unable to bring a copyright infringement claim for those clips.

Even if someone owns the copyright in such a video, there is a distinct possibility that the creator of the deepfake could mount a viable defense and prevail under the four-factor fair use exception. (For a discussion of this exception, see our earlier blog.) Many deepfakes will involve either critiques or tabloidesque commentary about political figures, celebrities or other individuals. These may constitute parody or amount to transformative use under the first fair use factor. 17 U.S.C. § 107(1); Yang v. Mic Network, Inc., 405 F. Supp. 3d 537, 545 (S.D.N.Y. 2019); Weinberg v. Dirty World, LLC, No. CV 16-9179-GW(PJWx), 2017 U.S. Dist. LEXIS 221759, at *20–21 (C.D. Cal. July 27, 2017).

As for the “nature of the copyrighted work,” deepfakes derived from creative copyrighted works would be less likely to qualify as fair use, but those using informational news as a base may be more protected. 17 U.S.C. § 107(2). Similarly, the third factor would favor fair use for deepfakes that use less substance or the “heart” of the copyrighted videos, but again, this is not determinative. 17 U.S.C. § 107(3).

The fourth factor in a fair use analysis, the “effect of the use upon the potential market for or value of the copyrighted work,” tends not to favor fair use of deepfakes. 17 U.S.C. § 107(4). In the Nancy Pelosi example, although people may be more inclined to watch the initial video for verification purposes, the deepfake still devalues that footage. It is not a viable defense to say that direct infringement is doing the news media outlet a favor by drawing viewers to the original content.

All in all, there is no theory or area of copyright law that can definitively mitigate the spread of disinformation through deepfakes.

Other Avenues

Some experts have advocated that the right of publicity should control deepfakes. At first glance, this appears more appropriate because recovery does not depend on ownership of the copyrighted video; individuals can sue for unlawful appropriation of their name, image or likeness. Even the “newsworthiness exception,” which in many jurisdictions provides that matters of public interest are not subject to right of publicity protection, may not shield deepfakes that are “so infected with fiction.” Messenger v. Gruner + Jahr Printing & Publ'g, 94 N.Y.2d 436, 446, 706 N.Y.S.2d 52, 58, 727 N.E.2d 549, 555 (2000).

However, there are concerns that such decentralized, state-by-state regulation would lead to inconsistent decisions, because not every state recognizes the right of publicity. Moreover, the right leads to recovery only when the individual’s name, image or likeness is appropriated for commercial purposes, which is of no help when deepfakes are made as political commentary or critique. The Supreme Court of California has gone so far as to say that “the right of publicity cannot, consistent with the First Amendment, be a right to control the celebrity's image by censoring disagreeable portrayals.” Comedy III Prods., Inc. v. Gary Saderup, Inc., 25 Cal. 4th 387, 403, 106 Cal. Rptr. 2d 126, 139, 21 P.3d 797, 807 (2001). The right of publicity does not ensure that all deepfake disinformation will be quelled, because even the most outrageously disagreeable content may be seen as mere criticism.

The tort of defamation is also an appealing approach. The first element of a defamation claim under the Restatement (Second) of Torts § 558, and in many states, is that the “statement” is false, which seems fitting for videos quite literally dubbed “fakes.” Two other elements, “unprivileged publication” and “actionability…or special harm,” would not be difficult to prove, considering that most deepfakes are posted without the permission of the individual depicted and the reputational harm may be obvious. Restatement (Second) of Torts § 558.

But the third element, “fault amounting to at least negligence,” which the Supreme Court in New York Times Co. v. Sullivan elevated to “actual malice” for public officials (and, through later cases, public figures), would set a high bar for many hypothetical deepfake plaintiffs. 376 U.S. 254 (1964). Currently, it would not be difficult to prove that a technologically savvy individual created a fake video with the intention of deceiving viewers. Yet creators may circumvent liability entirely by adding a watermark or warning, however subtle, indicating that the video is falsified. Furthermore, as the technology advances at an exponential rate, deepfakes may be produced more easily or even accidentally.

Much like the right of publicity, a claim of defamation would also come down to the jurisdiction and determination of a trier of fact, not to mention having to overcome free speech arguments from some perpetrators of deepfakes. Although it may lead to recovery for some individuals, it is not a sufficiently robust paradigm to provide sweeping reform against such harmful content or provide relief in a timely fashion. Once the damage is done, such litigation may only provide a moral victory.

Proposed Legislation and Reform

Rather than relying on online platforms to fight the spread of deepfakes and other disinformation, the alternative is policing the platforms themselves. Section 230 of the Communications Decency Act (CDA) currently immunizes online service providers (OSPs) from civil liability for content posted by their users (though it does not shield them from federal criminal violations). Many creators of deepfakes are difficult to locate behind the curtains of social media platforms with pseudonyms and cryptic usernames and, as a result of OSP immunity, victims of deepfakes are robbed of a path to recovery. Although § 230 contains exceptions for intellectual property claims and federal crimes, it is unclear whether either would reach most deepfakes. As a result, some critics have suggested amending § 230 to impose liability specifically for deepfakes.

President Trump’s most recent Executive Order, requesting that Congress repeal sections of the CDA that grant immunity to OSPs, sweeps far too broadly. The reform proposed by the order would increase self-censorship among online platforms, and it would also chill speech protected by the First Amendment, especially if OSPs decide to protect themselves by excessively regulating user content. A narrower carve-out targeting deepfakes appears more appropriate.

Beyond these proposals, there has been some federal legislative activity. In December 2019, the National Defense Authorization Act for Fiscal Year 2020 (NDAA), the first federal law governing deepfakes, was signed into law. Its deepfake provisions focus on elections and on the future of the technology. Section 5709 of the NDAA requires the Director of National Intelligence (DNI) to submit an annual report on the international threats posed by deepfakes, including information about the technical advancements of China and Russia, deepfake detection technology and illicit government use of deepfakes. It also requires the DNI to notify the Congressional Intelligence Committees “each time” there is credible intelligence that a foreign entity has deployed, or is deploying, deepfakes “aimed at the elections or domestic political processes of the United States,” and to report whether the disinformation campaign can be attributed to a foreign government, entity or individual. While these provisions are a step in the right direction, they center myopically on foreign threats.

Other proposed legislation is in limbo. For example, the Deepfake Report Act of 2019, passed by the Senate in October 2019, is currently awaiting approval by the House of Representatives. The Act would require the Department of Homeland Security to publish reports every five years on deepfake content that threatens fraud, civil rights violations and other harmful illegal activity. Although this acknowledges the legitimate harm caused by deepfakes, AI technology will outpace the allotted reporting interval; after five years, a report would almost certainly be obsolete.

Similarly, the Identifying Outputs of Generative Adversarial Networks (IOGAN) Act was passed by the House of Representatives in December 2019 but still requires Senate approval. The IOGAN Act encourages research on deepfake behavior and authenticity and, more importantly, looks to establish standards for examining deepfakes. While the AI used to create deepfakes is well researched, the Act acknowledges that there should be an ongoing discussion as the technology continues to advance faster than the law. But the effectiveness of these prospective standards remains to be seen.

Finally, the Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability (DEEP FAKES Accountability) Act proposes that all deepfakes be accompanied by a watermark warning. However, this would provide very little relief to an individual victimized by a fraudulent video, and it would be difficult to enforce broadly.

As for state laws, California and Virginia have enacted laws that impose civil and criminal liability, respectively, for the creation and distribution of pornographic deepfakes. California, along with Texas, also provides an avenue for lawsuits against perpetrators of deepfakes maliciously used to influence elections. Some states have suggested even broader laws regulating deepfakes. Massachusetts has proposed legislation that would make it illegal to use deepfakes to aid in the commission of another crime. New York has even introduced a bill that would allow a family to exercise control over a deceased relative’s likeness. Many other states have begun to follow suit in an effort to get ahead of deepfakes before they can do more damage.

In Summary

With the 2020 election season looming, many social media and video platforms have begun to crack down on deepfakes. Nevertheless, the impact of such fraudulent content extends beyond the OSPs that host them and, once a deepfake reaches a wide audience, the damage has already been done.

As mentioned previously, state government officials have been wary of the harm caused by deepfakes, and the legislation passed in Texas, California and Virginia was met with little to no opposition. Although some experts have criticized these laws and the proposed federal legislation as overbroad, the negative uses and effects of deepfakes remain the more urgent part of the overall conversation.

While the existing frameworks of intellectual property, privacy and constitutional law may provide some answers in the interim, they do not offer long-term solutions. There are grey areas, such as using deepfakes to recreate James Dean in a modern film, but most people would agree that certain arenas, like pornography and politics, should be free of deepfakes, as some state laws already reflect. However, if Congress continues to sleep on federal legislation, the damage will continue to mount.

If you have a potential deepfake issue, L&L stands ready to use all available tools to help.