Curtailing Bias in Facial Recognition Technology

By Carolyn Wimbly Martin and Kavya Rallabhandi

In the wake of injustice, you take to the streets with your peers and allies to exercise your constitutionally protected rights to free speech and protest. While marching, an argument breaks out nearby, and due to the heightened presence of law enforcement, you decide to head home. The next morning, more than 50 police officers, some of them in riot gear, shut down your street and urge you to surrender voluntarily as police helicopters fly overhead. When you ask for a search or arrest warrant, the police refuse to show one and remain outside your building for six hours. This happened to 28-year-old Black Lives Matter (BLM) protestor Derrick Ingram, who was accused of assaulting a New York Police Department (NYPD) officer during a June 2020 BLM protest. The NYPD has confirmed that it used facial recognition technology to identify Ingram.

Facial recognition technology has developed at an exponential rate, raising legal and technical issues concerning individual privacy, data protection and the low accuracy rates that perpetuate bias and discrimination. This Insight article explains how facial recognition technology works and discusses the laws regulating it. It also addresses the technology’s accuracy problems and concerns about law enforcement’s unregulated use of facial recognition.

Defining Facial Recognition Technology

Facial recognition systems (FRS) are automated or semi-automated artificial intelligence technologies capable of identifying and authenticating individuals based on their physical facial features. The underlying machine learning technology can characterize an individual’s facial features and expressions from a photograph, a video or even in real time. No industry standards dictate the development of FRS, so different FRS algorithms can vary significantly in accuracy and sophistication. In addition to this lack of standardization, algorithmic bias against certain demographic groups has been documented in facial recognition. As part of its Face Recognition Vendor Test program, the National Institute of Standards and Technology (NIST) conducted a 2019 study that evaluated 189 facial recognition algorithms from 99 developers for identification accuracy across race and sex. More than 18 million images of more than eight million people were pulled from databases provided by U.S. federal agencies such as the U.S. Department of State, the Department of Homeland Security and the Federal Bureau of Investigation. NIST concluded that every developer’s FRS exhibited disproportionately higher false positive rates for Asian, Native American and African-American individuals, with African-American women facing the highest risk of misidentification.
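To make the study’s core metric concrete, the sketch below shows, in simplified form, how a false positive rate can be computed per demographic group from verification trials. The trial data, group labels and `false_positive_rates` helper are hypothetical illustrations of the metric, not NIST’s actual test harness.

```python
from collections import defaultdict

# Hypothetical verification trials: (demographic group, FRS said "match",
# the two photos actually show the same person). Toy data for illustration.
trials = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

def false_positive_rates(trials):
    """False positive rate per group: of all pairs showing two different
    people, the fraction the system wrongly declared a match."""
    wrong = defaultdict(int)   # impostor pairs wrongly declared a match
    total = defaultdict(int)   # all impostor pairs seen for the group
    for group, predicted_match, same_person in trials:
        if not same_person:
            total[group] += 1
            if predicted_match:
                wrong[group] += 1
    return {group: wrong[group] / total[group] for group in total}

print(false_positive_rates(trials))
# {'group_a': 0.333..., 'group_b': 0.666...}  -> a demographic differential
```

A system whose false positive rate is twice as high for one group than another, as in this toy output, is the kind of differential the NIST study measured.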

Facial recognition technology that can identify anyone and everyone was once considered socially taboo because of its erosion of personal privacy. Large tech companies, such as IBM, Amazon and Microsoft, traditionally refrained from publicly releasing their FRS for fear the technology could be used in harmful ways. In their place, smaller tech companies developed and distributed FRS tools that scrape billions of images from across the internet in order to identify any private individual. Facial recognition technology is now used for a wide variety of purposes. In response to the COVID-19 pandemic, public health agencies and private companies have used FRS to curb the spread of the virus by identifying infected individuals and implementing contact tracing through geolocation tracking. FRS is also widely used by private companies and law enforcement for security purposes such as airport screenings and shoplifting prevention. The main criticism of FRS is the technology’s algorithmic bias with regard to demographic differences in race and sex. When companies and governments deploy these flawed technologies, the result is exclusionary and discriminatory practices. Law enforcement in the United States has received backlash for using facial recognition during civil rights protests and other demonstrations. Because FRS has trouble accurately identifying people of color, law enforcement’s adoption of facial recognition can exacerbate racially discriminatory policing. Concerns about FRS use raise important legal questions about ethics, constitutionality and effective regulation.

Adoption of Facial Recognition by Federal and Local Law Enforcement

For almost 20 years, police departments have had access to low-accuracy facial recognition databases limited to searches of government-provided images such as driver’s license photos, mug shots and juvenile booking photos. Current FRS technology is not limited to government-provided images, because tech companies can scrape images of people’s faces from across the internet. The debate over law enforcement’s use of facial recognition technology has intensified as a result of the BLM protests following the killing of George Floyd. The goals of the BLM movement are to promote police accountability and fight systemic racial oppression. Ironically, the facial recognition technologies that law enforcement uses to identify and arrest protestors have inherent race and sex biases that raise serious accuracy concerns.

Many large tech companies, including IBM, Amazon and Microsoft, have stated that they will not permit law enforcement to use their FRS until federal law comprehensively regulates FRS use. Smaller FRS developers like NEC, Idemia and Clearview AI have not joined this voluntary moratorium and are actively selling their facial recognition technologies to federal and local law enforcement agencies around the world. Clearview AI, which has contracted with more than 600 law enforcement agencies within the last year and has stored more than three billion images, is a leader in this industry. U.S. federal law enforcement, including the FBI and the Department of Homeland Security, as well as countless local police forces, are customers. Clearview AI’s code reportedly includes programming that would pair its FRS with augmented reality glasses, meaning that users could eventually access the name, address, profession and other personal information of anyone they see on the street or, in the police’s case, protesting at a rally.

Police use a company’s FRS app by uploading someone’s photo, which is then saved to the company’s servers. An individual can be identified even if the photo is low-quality, the face is partially covered by glasses or a hat, or only the individual’s reflection is visible. The FRS algorithm then compares the uploaded photo to the billions of images the company has scraped from employment, news and educational sites, as well as social media, in order to find a match. During the 2020 BLM protests and after the January 2021 Capitol insurrection, the FBI and multiple city police departments asked the public to share images and videos of the protestors, with the intention of cross-referencing the protestors’ faces against body camera footage and FRS databases. Law enforcement has been criticized for not being transparent about its FRS use. Specific concerns about bias and accuracy arose in connection with FRS use during BLM protests because many of the attendees were people of color. Critics also worry about police using advanced FRS to scan the internet and target dissenters who are simply exercising their constitutional rights to free speech and protest. In the case of Derrick Ingram, the 28-year-old BLM protestor discussed above, the NYPD “Facial Identification Section Informational Lead Report” included a picture from Ingram’s Instagram. While NYPD’s established practices do not authorize services such as Clearview AI and apparently limit facial recognition to still images from surveillance videos and lawfully possessed arrest photos, NYPD does not specifically prohibit their use. As of 2020, more than 30 NYPD officers had personal Clearview AI accounts.
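The matching step described at the start of this section is typically implemented by converting each face into a numerical “embedding” vector and searching the gallery of scraped images for the most similar stored vector. The sketch below illustrates the idea using cosine similarity; the `embed` placeholder, the 0.6 threshold and the toy gallery are hypothetical assumptions, not any vendor’s actual pipeline.

```python
import numpy as np

def embed(image):
    """Hypothetical placeholder: a production FRS runs a trained neural
    network here to map a face image to a fixed-length vector."""
    raise NotImplementedError

def best_match(probe, gallery, threshold=0.6):
    """Return the gallery identity most similar to the probe embedding,
    or None if no similarity score clears the threshold."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    name, score = max(
        ((name, cosine(probe, vec)) for name, vec in gallery.items()),
        key=lambda pair: pair[1],
    )
    return name if score >= threshold else None

# Usage with synthetic vectors standing in for real embeddings.
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(3)}
probe = gallery["person_1"] + rng.normal(scale=0.1, size=128)  # noisy copy
print(best_match(probe, gallery))  # person_1
```

Where the operator sets the similarity threshold determines the trade-off between false positives and false negatives, which is precisely where the demographic differentials measured by NIST surface in practice.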

Legal Regulations

There is no federal legislation in the United States regulating the use of FRS. In November 2019, Senator Christopher Coons introduced the Facial Recognition Technology Warrant Act, which would mandate a warrant for the use of FRS in law enforcement surveillance. In February 2020, Senators Jeff Merkley and Cory Booker introduced the Ethical Use of Facial Recognition Act in the Senate. The bill would prohibit any officer, employee or contractor of a federal agency from utilizing facial recognition technology without a warrant. Specifically, the bill would prohibit any such individual from setting up cameras to be used in connection with facial recognition technology, assessing or using information obtained through FRS, or importing FRS to identify any individual in the United States. The FRS moratorium established by the bill would remain in effect until Congress, guided by a commission the bill would create, passes legislation governing limitations on both government and commercial FRS use. In June 2020, Senators Ed Markey and Jeff Merkley, backed by Representatives Ayanna Pressley and Pramila Jayapal, proposed the Facial Recognition and Biometric Technology Moratorium Act. That bill would prohibit facial recognition use by federal and state government entities unless an act of Congress permits the specific FRS use and names the specific authorizing entity and the auditing requirements applicable to the FRS. Under the legislation, state or local governments utilizing FRS would not receive federal law enforcement grants. Also in June 2020, Democratic lawmakers in the U.S. House of Representatives introduced the Justice in Policing Act, which would prohibit the real-time use of FRS on police body cameras.

The introduction of these federal bills follows pressure from international organizations, and the bills are modeled on FRS regulations already enacted at the state and city level. Internationally, the United Nations Human Rights Council has called for a moratorium on facial recognition technologies used to identify protestors because FRS can amplify discrimination against people of color and deter individuals from exercising their right to free speech. As early as 1981, the Council of Europe adopted the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data, which was the first, and remains the only, binding international instrument in the data protection field. Under the Convention, member parties are required to pass domestic legislation protecting individuals’ fundamental human rights with regard to the automated processing of personal data, a category that today encompasses FRS. Amnesty International, a global non-governmental organization, launched the 2021 Ban the Scan campaign to protect human rights and combat the weaponization of facial recognition by law enforcement against marginalized communities. The campaign calls for a total ban on the use, development, production and sale of facial recognition technology for mass surveillance purposes by the police and other government agencies. Citing the incident between the NYPD and BLM protestor Derrick Ingram, Amnesty International focused the campaign’s launch on New York.

Despite the lack of federal FRS regulation, many U.S. state and city regulations severely restrict or ban the use of FRS by law enforcement and commercial companies. By way of example, in 2019 San Francisco became the first city to completely bar police from using facial recognition technology. The city ordinance also created an accountability process requiring the San Francisco Police Department to disclose what types of surveillance technologies it uses, such as geolocation trackers and license plate readers. Two other California cities, Oakland and Berkeley, followed in San Francisco’s footsteps. In 2020, Boston became the largest city on the East Coast to ban FRS for the purpose of identifying individuals. The ordinance passed unanimously, again due to concerns that racially discriminatory facial recognition technology threatens basic rights. Boston’s FRS ban has some exceptions: city employees are permitted to use facial recognition to unlock their own devices and to automatically redact faces in images. Following the widespread protests against police in 2020, New York Mayor Bill de Blasio signed the Public Oversight of Surveillance Technology (POST) Act, which requires the NYPD to disclose more information about its surveillance capabilities. In 2020, Washington became the first state in the country to pass a facial recognition bill dictating how the government can and cannot use FRS. The bill was drafted and sponsored by State Senator Joe Nguyen, who, notably, is also a program manager at Microsoft. The bill drew backlash from the public and the ACLU for failing to limit FRS sales to law enforcement or to hold tech companies responsible for biased and inaccurate algorithmic outcomes.

Self-Help Measures for Individuals

Until comprehensive legislation regulating FRS use is passed, individuals need to protect themselves. Some recommendations include enabling full disk encryption on personal devices, installing an encrypted messenger app to communicate with friends and removing fingerprint or Face ID permissions on those devices. To secure data before attending a protest, consider turning off all location tracking, muting all notifications, backing up your device so it can be wiped if needed, and writing down emergency contacts in an easily accessible place.
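To illustrate the encryption-at-rest idea behind these recommendations, here is a minimal sketch using the Python cryptography library’s Fernet recipe to encrypt a local backup file. The file names and contents are hypothetical, and actual full disk encryption (FileVault, BitLocker or a phone’s built-in encryption) is enabled at the operating-system level rather than file by file.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Hypothetical backup contents, for illustration only.
with open("contacts_backup.txt", "wb") as f:
    f.write(b"Legal aid hotline: 555-0100\nEmergency contact: ...\n")

# Generate a key once and keep it somewhere safe, never next to the backup.
key = Fernet.generate_key()
fernet = Fernet(key)

with open("contacts_backup.txt", "rb") as f:
    plaintext = f.read()

with open("contacts_backup.enc", "wb") as f:
    f.write(fernet.encrypt(plaintext))  # authenticated symmetric encryption

# Later, the same key recovers the original data.
with open("contacts_backup.enc", "rb") as f:
    assert fernet.decrypt(f.read()) == plaintext
```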

Conclusion

Facial recognition technology is a complicated and controversial topic. Given the lack of FRS regulation and of accountability for tech companies, the technology’s bias and accuracy problems, and law enforcement’s questionable use of FRS during protests, curtailing facial recognition technology is a pressing national issue. Lutzker & Lutzker will continue to provide updates on this critical issue as the technology and the law evolve.