Brazil’s data protection agency, the ANPD, has banned Meta (owner of Facebook and Instagram) from using Brazilian user data to train its AI systems. The move follows similar action against Meta in the EU and reflects growing concern about user privacy as artificial intelligence advances.
Privacy Concerns and Potential Fines from ANPD
The ANPD’s decision was prompted by a revision Meta made to its privacy policy in May 2024. The update permitted Meta to use public data from Brazilian users of Facebook, Messenger, and Instagram, including posts, pictures, and captions, for AI training. The ANPD expressed concern about the potential risks to user privacy.
In a statement published in Brazil’s official gazette, the ANPD stated that Meta’s policy could cause “serious and irreversible harm, or harm that is difficult to repair, to the fundamental rights” of Brazilians. This worry is heightened by the vast number of Meta users in Brazil – over 102 million Facebook accounts, according to ANPD’s data.
Meta has been given five days to comply with the order. Failure to do so will result in daily fines of 50,000 reais (about $8,808).
Meta Defends Its Policy, Questions Innovation Impact
Meta has defended its updated policy, contending that it complies with Brazilian privacy laws and regulations. The company argues that the ANPD’s decision hinders innovation and competition in AI development and delays the benefits of AI for people in Brazil.
Meta offers an opt-out option for users who do not want their data used for AI training. However, the ANPD asserts that this opt-out process is encumbered with “unreasonable and unnecessary obstacles,” making it difficult for users to manage their data.
Earlier this year, Meta faced similar opposition from EU regulators, who forced the company to temporarily stop training AI models on data from European Facebook and Instagram users. This trend indicates that data protection agencies are increasingly prioritizing user privacy over AI advancements powered by user data.
In contrast, the situation in the US is different. Meta’s updated data collection policies are already in effect there, as the US lacks comprehensive data privacy laws comparable to those in the EU and now Brazil.
Human Rights Concerns: Preventing Deepfakes and Exploitation
The ANPD’s decision aligns with concerns highlighted in a Human Rights Watch report published last month. The report found that LAION-5B, a large image-caption dataset used globally to train AI models, contains identifiable photos of Brazilian children. This raises significant concerns about the possibility of misuse of such data for malicious activities, such as creating deepfakes or exploiting children.
The conflict between Meta and data protection agencies in Brazil and the EU underscores the complex challenge of balancing AI innovation with user privacy protection. As AI development progresses, regulatory bodies worldwide will likely face similar issues.
A solution fostering responsible AI development while protecting user privacy is crucial. This could involve stricter data anonymization practices, developing clearer and more user-friendly opt-out mechanisms, or establishing comprehensive legal frameworks for personal data use in AI training.
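As a purely illustrative sketch of the first of those ideas (not Meta’s or any regulator’s actual method), pseudonymization might replace direct identifiers in a training record with salted hashes before the data is ever used. The field names and salt below are hypothetical:

```python
import hashlib

def pseudonymize_record(record: dict, salt: str) -> dict:
    """Replace direct identifiers with salted hashes so a training record
    cannot be trivially linked back to a named account."""
    anonymized = dict(record)
    for field in ("user_id", "username"):  # hypothetical identifier fields
        if field in anonymized:
            digest = hashlib.sha256(
                (salt + str(anonymized[field])).encode("utf-8")
            ).hexdigest()
            anonymized[field] = digest[:16]  # truncated pseudonym
    return anonymized

record = {"user_id": "12345", "username": "maria", "caption": "Praia linda!"}
print(pseudonymize_record(record, salt="example-salt"))
```

The non-identifying content (here, the caption) is left untouched; only the linkable fields are transformed, and the same salt yields the same pseudonym, so records from one account can still be grouped without revealing who the account belongs to.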
Brazil’s decision to prohibit Meta from using user data for AI training marks a significant step toward prioritizing privacy in the digital age. The response of other countries and tech companies to this development will play a crucial role in shaping the future of AI and its societal impact.