Every day millions of people unlock their phones with facial recognition software – but what happens when that software fails? For many Black, brown, Asian, Indigenous, and other racialized people, this isn't just a rare inconvenience but a frequent reminder of the software's systemic bias in favour of white faces.

What is facial recognition technology?

Facial recognition technology (FRT) is software that uses algorithms to compare a picture of your face against a database of faces to identify you. The technology is being put to an increasing number of uses: automatically unlocking your phone, letting a person into a building, estimating a person's sex or race, or determining whether a face matches a mugshot. As more systems become automated and digitized, FRT continues to spread into more areas of our everyday lives.
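To make the matching step concrete, here is a minimal sketch in Python. Everything in it is hypothetical: real FRT systems first run a neural network that turns each face image into a numeric vector (an "embedding"), and the names, vectors, and threshold below are invented purely for illustration.

```python
import math

# Hypothetical database of enrolled faces: in a real system each tuple would
# be a high-dimensional embedding computed by a neural network.
database = {
    "alice": (0.1, 0.9, 0.3),
    "bob": (0.8, 0.2, 0.5),
}

def identify(probe, threshold=0.5):
    """Return the closest enrolled identity, or None if nothing is close enough."""
    best_name = min(database, key=lambda name: math.dist(probe, database[name]))
    if math.dist(probe, database[best_name]) <= threshold:
        return best_name
    return None  # no confident match

print(identify((0.15, 0.85, 0.35)))  # a face similar to "alice"'s -> "alice"
print(identify((0.9, 0.9, 0.9)))     # an unenrolled face -> None
```

The threshold is the crucial design choice: set it too loose and different people get "matched" to each other (false positives); set it too strict and the system fails to recognize people it should.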

The problem is that no FRT system is perfectly accurate. That's a minor annoyance when it locks you out of your phone, but it becomes a major problem when the systems that governments and police use to track and identify criminal or terrorist suspects get it wrong.

How are FRT systems systemically racist?

Unfortunately, systemic racism is built into these FRT systems. They're awful at identifying Black, brown, Indigenous, Asian, and other racialized people, meaning they make a ton of life-changing mistakes, like labelling innocent racialized people as criminals or terrorists!

“In a study by the National Institute of Standards and Technology, FRT was more likely to flag a false positive match for Black people, especially Black women. If that threat sounds theoretical, think again: in Michigan, police facial recognition technology recently wrongly ID’d Robert Julian-Borchak Williams, a Black man, as a suspect in a shoplifting case, leading to his wrongful arrest.

“Considering just how many of our own Canadian police departments have started using FRT so far, it’s only a matter of time before similar stories rise to the surface here in Canada.

“For racialized folks, false positives are more than a mistake: they’re incredibly dangerous. It’s no secret that police often use violent force against Black and Indigenous people who are minding their own business. It happened in B.C. to Irene Joseph, a Wet’suwet’en elder who was hurt during a violent wrongful arrest for shoplifting.”[1]
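The NIST finding quoted above is about false-positive *rates*. A toy illustration, with entirely made-up numbers, of how a single decision threshold can produce very different error rates for different groups:

```python
# Hypothetical similarity scores a matcher assigns to pairs of photos of
# DIFFERENT people (all numbers invented for illustration). Any score above
# the decision threshold is a false positive: two different people declared
# a "match".
scores_group_a = [0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45]
scores_group_b = [0.30, 0.40, 0.50, 0.55, 0.60, 0.65, 0.70, 0.75]

THRESHOLD = 0.5  # tuned on group A, where it produces no false positives

def false_positive_rate(scores, threshold):
    # fraction of different-person pairs wrongly declared a match
    return sum(score > threshold for score in scores) / len(scores)

print(false_positive_rate(scores_group_a, THRESHOLD))  # 0.0
print(false_positive_rate(scores_group_b, THRESHOLD))  # 0.625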

The bias and inaccuracy this research reveals come down to how these tools are developed. Algorithms "learn" to identify a face after being shown millions of pictures of human faces. But if the faces used to train an algorithm are predominantly those of white men, the resulting system will have a harder time recognizing anyone who doesn't fit that mould.
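The effect of a skewed training set can be sketched with a toy classifier. Everything here is synthetic: two-dimensional "face features", a well-represented group A, and a group B seen only through a single, unrepresentative training example.

```python
import math

# Entirely synthetic training data. Group A has many examples; group B has
# one skewed example, mirroring training sets dominated by white male faces.
train = {
    "A": [(0.0, 0.0), (0.2, -0.1), (-0.1, 0.2), (0.1, 0.1), (-0.2, -0.2), (0.0, 0.3)],
    "B": [(2.0, 2.6)],  # a single, unrepresentative example of group B
}

# "Training": estimate each group's centre from its examples. With only one
# example, group B's estimate lands far from where B faces actually cluster
# (around (2.0, 2.0) in this toy setup).
centroids = {
    group: (sum(x for x, _ in pts) / len(pts), sum(y for _, y in pts) / len(pts))
    for group, pts in train.items()
}

def classify(point):
    # assign the face to the group whose estimated centre is nearest
    return min(centroids, key=lambda group: math.dist(point, centroids[group]))

# Borderline group-B faces: a representative centre at (2.0, 2.0) would get
# all four right, but the skewed estimate misidentifies two of them as "A".
test_b = [(1.2, 1.0), (1.2, 1.1), (1.9, 2.1), (2.1, 1.9)]
errors = sum(classify(p) != "B" for p in test_b)
print(f"{errors} of {len(test_b)} group-B test faces misidentified")
```

The mechanism is the same one the paragraph above describes: the model's picture of the underrepresented group is built from too little data, so its errors concentrate on exactly that group.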

Joy Buolamwini, a leading researcher on algorithmic bias, found this out the hard way as a computer science undergraduate. One of her assignments required interacting with a robot equipped with computer vision, but the robot was unable to “see” her. She later found that a computer camera did not recognize her face — until she put on a white mask.[2]

This kind of systemic, built-in bias is less surprising when you consider who creates these FRT systems. The software development field is overwhelmingly dominated by white men, concentrated in places like Silicon Valley, and is not representative of Black, Indigenous, and other racialized communities. When you don't have a diversity of perspectives and inputs going into a system, you don't get diverse, representative outcomes.

Why are these systems allowed to be racist?

While FRT systems are beginning to be used on a large scale by police, corporations, and other agencies in Canada, there are currently no laws meaningfully regulating FRT or other biometric technologies. The government has very little oversight of this technology and hasn't set out any rules to protect racialized people from discriminatory treatment by these systems. Without government oversight and accountability, it's hard to know the true scope of FRT's use and the harm it's inflicting on Black, brown, Indigenous, Asian, and other communities.

How can I help?

Canada needs solid laws governing the use of facial recognition technology to protect our privacy and ensure we are all treated fairly by the police and government agencies that use these systems. Click the button below to visit OpenMedia's website and sign the petition pressuring the federal government to stop the police's secret use of facial recognition software in Canada.

Click here to sign the petition!


For more information contact:

Mackenzie Francoeur

Vice President Equity

Dylan Robinson

Equity Coordinator


[1] Knight, E. (2020, July 6). Facial recognition technology is one of the most racist weapons in the police arsenal. Retrieved from https://rabble.ca/columnists/2020/07/facial-recognition-technology-one-most-racist-weapons-police-arsenal

[2] Ivanova, I. (2020, June 12). Why face-recognition technology has a bias problem. Retrieved from https://www.cbsnews.com/news/facial-recognition-systems-racism-protests-police-bias/