Argentina plans to use AI to predict future crimes. President Javier Milei, a far-right leader, established the Artificial Intelligence Applied to Security Unit this week. The unit will use machine-learning algorithms to analyze historical crime data with the aim of foreseeing and preventing future offenses. The initiative also includes facial recognition technology to identify wanted individuals, surveillance of social media, and real-time analysis of security camera footage for suspicious activity.
The Ministry of Security claims that the AI unit will enhance public safety by detecting potential threats, monitoring criminal group activities, and anticipating disturbances. They emphasize that the initiative will operate within existing legal frameworks, including compliance with the Personal Information Protection Act. The focus will be on applying AI, data analytics, and machine learning to uncover criminal patterns and trends.
Concerns from Human Rights Groups
The plan to use AI to predict future crimes has alarmed human rights organizations, who warn it could infringe on citizens’ privacy and civil liberties. Amnesty International cautioned that extensive surveillance could stifle freedom of expression: individuals may censor themselves if they fear their online activity is being monitored. Mariela Belski, executive director of Amnesty International Argentina, emphasized that large-scale monitoring can deter people from sharing ideas or criticisms.
The Argentine Center for Studies on Freedom of Expression and Access to Information cautioned that these technologies could be misused to profile academics, journalists, politicians, and activists, thereby threatening privacy. Without proper oversight, there is concern that multiple security forces might misuse the collected data. Digital policy expert Natalia Zuazo criticized the initiative as “illegal intelligence” masked as modern technology, warning of insufficient controls over who accesses the information.
The plan has sparked a particularly strong reaction due to Argentina’s history of state repression. During the 1976-83 dictatorship, an estimated 30,000 people were forcibly disappeared, with many subjected to torture and extrajudicial killings. The introduction of an AI-powered surveillance system raises fears of a return to authoritarian practices.
AI’s Role in Crime Prediction
Experts have questioned the reliability and ethics of using AI to predict crimes through social media monitoring and facial recognition. Martín Becerra, a professor and researcher in media and information technology, criticized the government’s approach as anti-liberal, citing a lack of transparency and the risk of increased state repression. He argued that AI’s track record in crime prediction is poor and that it should be approached cautiously.
The Argentine Observatory of Information Technology Law echoed these concerns, pointing out that comparative experiences cited by the government lack thorough analysis. They questioned whether security models from countries like China or India apply to Argentina’s context.
Challenges Ahead
The concept of predicting crimes, similar to the Philip K. Dick story that inspired the film “Minority Report,” raises ethical dilemmas. In the story, future criminals are apprehended before committing any act, leading to debates about their guilt. The story highlights the potential pitfalls of pre-emptive justice, a concern mirrored by critics of Argentina’s new AI initiative.
The use of AI to predict crimes relies on analyzing patterns in past data, and this approach has serious limitations. AI systems are only as good as the data they are trained on: if the historical data contains biases, the predictions will reproduce those biases. This could lead to over-policing of certain communities, particularly marginalized groups, exacerbating existing social inequalities.
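The bias dynamic described above can be illustrated with a minimal sketch. All data here is invented for illustration: two districts with identical true crime rates, where one was historically patrolled more heavily and therefore has more recorded incidents. A naive "predictive" model ranks districts by past records, and directing more patrols to its predicted hotspot generates still more records there, a feedback loop.

```python
from collections import Counter

# Hypothetical records: both districts have the SAME true crime rate,
# but "north" was patrolled twice as heavily, so twice as many
# incidents were recorded there.
historical_records = ["north"] * 200 + ["south"] * 100

def predict_hotspot(records):
    """A naive 'predictive policing' model: rank districts by past records."""
    return Counter(records).most_common(1)[0][0]

def simulate_feedback(records, rounds=5):
    """Each round, extra patrols go to the predicted hotspot, which in
    turn generates more recorded incidents there, reinforcing the bias."""
    records = list(records)
    for _ in range(rounds):
        hotspot = predict_hotspot(records)
        # More patrols -> more recorded incidents in the hotspot,
        # even though true crime rates are identical.
        records += [hotspot] * 50
    return Counter(records)

print(predict_hotspot(historical_records))   # the over-patrolled district
print(simulate_feedback(historical_records))
```

The model never observes true crime, only recorded incidents, so the patrol bias in the training data is amplified rather than corrected, which is precisely the over-policing concern critics raise.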
Experts argue that predicting individual criminal behavior is highly unreliable. The notion of stopping crimes before they happen, akin to the “Minority Report” scenario, raises ethical questions. Arresting or surveilling individuals based on predicted future actions, rather than actual criminal acts, challenges fundamental principles of justice, such as the presumption of innocence.