Facial Recognition Technologies Reproduce Inequalities

The documentary “Coded Bias” (2020)* highlights how racial and gender discrimination can multiply in technology-driven societies.

Outi Puukko and Minna Aslama Horowitz**

Artificial Intelligence (AI) and algorithms are the new gatekeepers in today’s society. While they may seem neutral to us, they entail many mechanisms of racial, gender, and other forms of oppression. This was the core message of Shalini Kantayya, the director of Coded Bias, in the panel discussion about facial recognition technologies and race organized by Aalto University on 25 March 2021.

Coded Bias is a documentary film about ongoing struggles against the development and use of discriminatory technologies. It opens with a case made famous by the M.I.T. Media Lab computer scientist Joy Buolamwini: while working on a project, she realized that facial recognition technologies would not recognize her face until she wore a white mask.

The underlying AI had been trained on data that overrepresented white male faces. Since then, Buolamwini has focused on technologies that discriminate, and her research on “algorithmic accountability” has received wide acclaim.

The documentary provides a different view from the mainstream discourses on artificial intelligence. It centers the voices of women of color, researchers and activists, as they expose the discrimination within facial recognition technologies and other algorithm-driven systems, and by doing so it questions the dominance of white heterosexual men in the field of technology.

The documentary also calls for regulation and audited algorithms, that is, transparency about the ways in which AI processes and uses data.

The broader message of Coded Bias pertains to the social responsibility and responsiveness of technologies. How much can more inclusive data actually change the core purpose of facial recognition? We urgently need critical examinations of, and debates on, who uses our data and how. We need to ask to what extent collecting data, be it for public service or commercial purposes, is acceptable and purposeful.

Can We Trust Authorities with Our Data?

Coded Bias brings up examples from the U.S., U.K., and China. Yet the themes are equally relevant in Finland when authorities employ new technologies. For instance, this spring the police announced the nationwide use of bodycams. According to the police, the development may require revising legislation, especially from a data protection perspective, and the possibilities of using facial recognition technologies should be clarified.

One common argument for the use of bodycams is that they will make police work more transparent. At the same time, research has shown that surveillance is racially skewed and focuses on people of color. In the U.S., civil society actors have for years argued that the use of technological tools such as bodycams by law enforcement increases mass surveillance of marginalized communities and thus supports structural discrimination. Recently, non-governmental organizations have demanded various restrictions, and even bans, on facial recognition technologies. Another pertinent point is that bodycams depict only the viewpoint of the police.

Unregulated use of surveillance technologies is a concern for all. Coded Bias shows how activists from the group Big Brother Watch raise awareness of automated camera surveillance by the Metropolitan Police in London.

Regulation in the Spotlight Also in Europe

The last part of Coded Bias focuses on the 2019 hearing of the U.S. House Committee on Oversight and Reform, addressing the impact of facial recognition technologies on civil rights and liberties. In 2020, the Black Lives Matter movement continued to address the related problems, and, for instance, Microsoft halted all sales of its facial recognition products to the police.

At the same time, the EU is engaged in debates around AI. A civil society campaign, Reclaim Your Face, demands a total ban on biometric surveillance, including facial recognition, by both states and commercial actors. According to European Digital Rights (EDRi), over half of EU countries are using facial recognition in ways that conflict with human rights. In addition, the Council of Europe, a human rights body of 47 nations, has recently published guidelines for facial recognition.

The Coded Bias documentary, and debates around technological developments in general, prove that we can no longer focus solely on the personal privacy aspects of data. As the Special Rapporteur to the UN Human Rights Council, E. Tendayi Achiume, reported in 2020, we can already detect different forms of racial discrimination in the design and use of emerging digital technologies. In some cases, this discrimination is explicitly motivated by intolerance or prejudice. In other cases, it results from disparate impacts on groups according to their race, ethnicity, or national origin, even when an explicit intent is absent. And in yet other cases, direct and indirect forms of discrimination exist in combination and can have significant holistic or systemic effects on human rights.

We urgently need these kinds of debates in the Finnish context as well. If we want digital democracy to be realized, we need to ensure that technologies are examined and, when necessary, regulated, to provide a non-discriminatory digital environment for us all. At the same time, we must foster digital literacy as well as the principles of design justice (https://designjustice.org/read-the-principles), that is, fundamentally more inclusive and equality-promoting technological tools.

Fundamentally, as the author Safiya Umoja Noble reminds us in Algorithms of Oppression, it is now time to demand more from technologies and to commit to creating truly multicultural and multiracial democracies. The latter means that libraries, schools, universities, and other knowledge institutions need to join this work. Shalini Kantayya agrees: she stresses that we need brave researchers, but also citizens, to demystify AI and to raise awareness about the discriminatory practices of different technologies.

The screening of Coded Bias and a related panel discussion with the documentary maker Shalini Kantayya and other experts was organized in March 2021 by the Color of Science initiative of Aalto University. * The documentary has been globally available on Netflix since April 2021.

Image: Codedbias.com/press.

**Outi Puukko is a doctoral candidate at the Faculty of Social Sciences of the University of Helsinki. Her PhD project focuses on civil society actors’ role in the digital rights discourse.

**Minna Aslama Horowitz is a Docent at the University of Helsinki, a Fellow at St. John’s University, New York, and the Expert on Advocacy and Digital Rights at the Central European University.