Civil rights groups, technologists, and social justice advocates have long warned that facial recognition technology could deepen existing racial inequities in law enforcement. As time passes, those concerns are proving accurate and well-founded.
A recent example highlights the issue: Porcha Woodruff, a mother from Detroit, Michigan, has come forward as the first woman known to have been wrongly identified as a suspect by police because of a flawed facial recognition match, as reported by The New York Times.
Woodruff is now the sixth known person to be falsely accused of a crime as a result of facial recognition technology, and she is the first woman among them. Notably, everyone wrongly accused in these cases has been Black, as reported by the Times. Detroit Police Chief James E. White said he found the allegations in the lawsuit concerning after reviewing them, and emphasized that the department is taking the matter seriously.
Disproportionate Arrests and Bias in Facial Recognition Technology
In May, several months after Woodruff's arrest in February, a study by criminal justice researchers Thaddeus L. Johnson and Natasha N. Johnson found that police departments' use of facial recognition technology results in a disproportionately high number of arrests of Black individuals.
“We believe this results from factors that include the lack of Black faces in the algorithms’ training data sets, a belief that these programs are infallible and a tendency of officers’ own biases to magnify these issues,” stated the researchers in an article they authored for Scientific American.

The study also found that mugshot databases contain a disproportionately high share of Black individuals, which degrades the accuracy of AI systems trained or searched against them. As a result, these systems are more likely to incorrectly flag Black individuals as potential criminals, exposing innocent members of the Black community to unjust targeting and wrongful arrest. Law enforcement agencies across the country use facial recognition technology to help identify suspects in investigations.
Concerns Surrounding the Technology’s Deployment and Effectiveness
According to a report by Wired, Deborah Levi, a public defender in Maryland, revealed concerning statistics about law enforcement's use of facial recognition technology. In 2022, the Baltimore Police Department reportedly conducted almost 800 facial recognition searches, far more than the Detroit Police Department, which carried out around 125 such searches over the same period, The New York Times reported.
The Baltimore Police Department did not respond to Insider's requests for comment before the article's publication. This lack of transparency has raised further questions about the extent and appropriateness of facial recognition technology's deployment.
The technology's effectiveness is also in question. In 2020, Detroit's police chief acknowledged that the department's facial recognition system had a 96% failure rate when used in isolation, as previously reported by Insider. This rate of inaccuracy has drawn criticism from civil liberties advocates, including Phil Mayor, a senior staff attorney at the American Civil Liberties Union of Michigan, who pointed out that the practice has already led to multiple false arrests, a concerning trend for individual rights and justice.
One notable case is that of Robert Williams, a Detroit resident who was falsely arrested in 2020 because of a flawed facial recognition match. Mayor currently represents Williams, a case that underscores the dangers of relying heavily on such technology within the criminal justice system.
Mayor told the Times: "Shoddy technology makes shoddy investigations, and police assurances that they will conduct serious investigations do not ring true."