
Facial Recognition and Racial Bias

Three Black men in as many years have had their lives upended by wrongful arrests. Robert Williams, Michael Oliver, and Nijeer Parks were misidentified by facial recognition software.

Dr. Dédé Tetsubayashi | 8 min read

It is estimated that almost half of American adults, over 117 million people as of 2016, have photos in a facial recognition network used by law enforcement. This inclusion occurs without consent or even awareness, and it is enabled by a lack of legislative oversight. More disturbing still, the current implementation of these technologies carries significant racial bias, particularly against Black Americans.

Three Black men in as many years have had their lives upended by wrongful arrests. Robert Williams, Michael Oliver, and Nijeer Parks were misidentified by facial recognition software, arrested, and held under suspicion of crimes ranging from petty theft to assaulting a police officer. For Parks, who faced the more serious charges of assault and eluding police, the fight to clear his name dragged on for the better part of a year. Before his case was thrown out of court and his name cleared, Parks spent ten days in jail, all because of a hyper-reliance on technology.

In the lawsuit he later filed against the Woodbridge Police Department, its affiliates, and Idemia, the company behind the facial recognition software, Parks alleged that proper investigative techniques were forgone in favor of faulty technology.

The Technology's Systemic Failure

Despite widely published research documenting how facial recognition technologies misidentify darker-skinned faces, law enforcement's hyper-reliance on them persists. For BIPOC, and most notably dark-skinned Black women, who are misidentified as often as 33% of the time while white men are misidentified at a small fraction of that rate, this adds yet another layer of vulnerability to an already over-policed population.

Facial recognition systems consistently perform worse on darker-skinned faces, particularly those of Black women. This isn't a bug—it's the predictable result of training data that underrepresents these groups and testing processes that don't adequately assess performance across demographics.
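To make the testing failure concrete, here is a minimal sketch of a disaggregated audit. Nothing in it comes from any real vendor's pipeline: the similarity scores, group labels, and decision threshold are all hypothetical stand-ins. The point is structural: a single aggregate metric can look tolerable while the per-group breakdown reveals one population being falsely matched far more often than another.

```python
# Minimal sketch of a disaggregated audit for a face-matching system.
# All scores, labels, and group names below are synthetic illustrations.
from collections import defaultdict

# Each record: (similarity_score, is_true_match, demographic_group).
# In a real audit these would come from labeled verification pairs.
results = [
    (0.91, True,  "lighter-skinned men"),   (0.62, False, "lighter-skinned men"),
    (0.88, True,  "lighter-skinned men"),   (0.55, False, "lighter-skinned men"),
    (0.70, False, "lighter-skinned men"),   (0.48, False, "lighter-skinned men"),
    (0.90, True,  "darker-skinned women"),  (0.84, False, "darker-skinned women"),
    (0.79, True,  "darker-skinned women"),  (0.86, False, "darker-skinned women"),
    (0.58, False, "darker-skinned women"),  (0.61, False, "darker-skinned women"),
]

THRESHOLD = 0.80  # hypothetical decision threshold

def false_match_rate(records):
    """Fraction of non-matching pairs the system wrongly accepts as matches."""
    non_matches = [score for score, is_match, _ in records if not is_match]
    if not non_matches:
        return 0.0
    return sum(score >= THRESHOLD for score in non_matches) / len(non_matches)

# The aggregate number: one figure that can look acceptable on its own.
print(f"Overall false match rate: {false_match_rate(results):.0%}")

# The disaggregated numbers: the same system, broken out by group,
# which is where demographic disparities actually become visible.
by_group = defaultdict(list)
for score, is_match, group in results:
    by_group[group].append((score, is_match, group))
for group, records in sorted(by_group.items()):
    print(f"  {group}: false match rate {false_match_rate(records):.0%}")
```

This is the same disaggregation logic that published audits of commercial systems have used to surface the disparities cited above: if an evaluation only ever reports the aggregate line, the per-group failures never get seen.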

But the problem isn't just technical. It's the deployment of imperfect technology in high-stakes situations without adequate safeguards. It's treating algorithmic output as evidence rather than as one input among many. It's the assumption that technology is neutral even when the outcomes clearly are not.

The Path Forward

Some cities have banned facial recognition technology entirely. Others are implementing regulations around its use. But technical fixes and policy changes alone won't solve the problem. We need to fundamentally rethink how we deploy surveillance technology and who bears the cost of its failures.

The wrongful arrests of Robert Williams, Michael Oliver, and Nijeer Parks aren't isolated incidents—they're symptoms of a system that prioritizes technological efficiency over human rights. Until we address the underlying biases in both the technology and the institutions that deploy it, Black Americans will continue to bear the cost of algorithmic racism.

About Dr. Dédé Tetsubayashi

Dr. Dédé is a global advisor on AI governance, disability innovation, and inclusive technology strategy. She helps organizations navigate the intersection of AI regulation, accessibility, and responsible innovation.
