The chief executive of Clearview AI recently said that the company’s contentious facial recognition database, used by law enforcement agencies across the country, was built in part from 30 billion images scraped without permission from Facebook and other social media users.
Critics say the practice amounts to a “perpetual police line-up,” even for people who have never committed a crime.
The firm, Clearview AI, touts its ability to help prevent child exploitation and abuse, identify rioters from the January 6 attack on the Capitol, and exonerate people wrongfully accused of crimes.
Yet critics point to cases in Detroit and New Orleans where false facial recognition matches led to wrongful arrests.
In a recent interview with the BBC, Clearview CEO Hoan Ton-That acknowledged that the company collected the images without users’ permission, allowing its enormous database — pitched to police departments on its website as a resource “to bring justice to victims” — to grow rapidly.
Even though the ties between police departments and Clearview AI remain unclear, Ton-That told the BBC that US police have searched the company’s facial recognition database around a million times since its founding in 2017. Insider was unable to independently verify Ton-That’s claim.
Clearview AI representatives did not immediately respond to Insider’s request for comment.
What happens when unauthorized scraping is detected:
Privacy advocates and digital platforms have long criticized the technology as invasive; in 2020, major social media companies including Facebook sent Clearview cease-and-desist letters over the scraping of their users’ data.
“Clearview AI’s actions invade people’s privacy, which is why we banned their founder from our services and sent them a legal demand to stop accessing any data, photos, or videos from our services,” a Meta spokesperson said in an email to Insider, quoting a statement the company made in April 2020, shortly after the first reports that the firm was harvesting user photos and working with law enforcement.
Since then, the spokesperson told Insider, Meta has “made significant investments in technology” and devoted “substantial team resources to combating unauthorized scraping on Facebook products.”
When unauthorized scraping is detected, the company may take action “such as sending cease and desist letters, disabling accounts, filing lawsuits, or requesting assistance from hosting providers” to protect user data, the spokesperson said.
Despite such safeguards, once Clearview AI manages to scrape a photo, a biometric faceprint is created and cross-referenced in the database, permanently linking the people pictured to their online profiles and other identifying information. People in the photos have little recourse to have themselves removed.