Ireland’s Data Protection Commission (DPC) has confirmed that Google’s AI model is facing EU scrutiny as part of a broader investigation into privacy practices. European Union privacy regulators have launched an inquiry into whether Google adequately protected user data before using it to develop an AI model. The DPC-led investigation targets Google’s Pathways Language Model 2 (PaLM 2).
The DPC, responsible for overseeing the privacy compliance of numerous US tech firms operating in the EU, said this inquiry is part of a broader effort to regulate how personal data is processed in the development of AI systems. The probe will assess if Google properly evaluated the risks to individual rights and freedoms before processing data for PaLM 2.
PaLM 2 at the Center of the Inquiry
PaLM 2, Google’s next-generation language model, is a critical tool for the company’s AI-powered services, powering various generative AI applications, including email summarization. The DPC is now examining whether the data used to train PaLM 2 was handled in compliance with the EU’s stringent General Data Protection Regulation (GDPR).
Google has positioned PaLM 2 as a major advancement in AI, with improved capabilities in coding, multilingual tasks, reasoning, and classification. The company claims to have implemented responsible AI practices, but EU regulators remain concerned about how personal data is handled in the training of large language models like PaLM 2.
Other Tech Giants Under Fire
Google isn’t the only tech giant facing EU scrutiny over data privacy in AI development. Last month, X (formerly Twitter) agreed to stop processing EU users’ data for its AI chatbot, Grok, following legal action by the DPC. The watchdog had filed an urgent High Court application, citing concerns over the unauthorized use of public posts in training the AI model.
Meta Platforms, the parent company of Facebook, also came under pressure from Irish regulators earlier this year. The company paused plans to use content posted by European users to train its AI systems after “intensive engagement” with regulators. The decision was made amid fears of violating GDPR, and Meta has since adjusted its privacy policy to restrict how user data is used in AI training.
Italy’s Previous Action Against ChatGPT
Google is not the first AI developer to face EU scrutiny over its use of personal data. In 2023, Italy’s data privacy regulator temporarily banned ChatGPT over similar data privacy concerns. The ban was lifted only after OpenAI, the creator of ChatGPT, met specific demands to address the regulator’s issues. In response to such privacy concerns, AI companies like OpenAI have since worked on refining their models to ensure they respect user data.
The development of AI models, especially large language models like PaLM 2, has raised significant concerns among privacy advocates and regulators. As AI technologies evolve, companies such as Google and OpenAI must navigate the EU’s complex privacy regulations.
Why is PaLM 2 Unique?
PaLM 2, Google’s advanced language model, stands out in tasks such as reasoning, translation, and code generation. It is an improvement over its predecessor, PaLM, incorporating three key advancements in large language models.
- First, PaLM 2 employs compute-optimal scaling, a technique that scales the model size and the training dataset proportionally. This makes PaLM 2 smaller but more efficient than PaLM, delivering faster inference, better performance, and lower costs (a rough numerical sketch of the idea follows this list).
- Second, PaLM 2 uses an improved dataset mixture. While earlier models like PaLM relied mostly on English text, PaLM 2’s pre-training corpus is more diverse, covering multiple human and programming languages, scientific papers, mathematical equations, and web pages.
- Lastly, the model architecture has been updated. PaLM 2 was trained on a wide variety of tasks, enabling it to learn different aspects of language more effectively.
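To make the compute-optimal scaling point concrete, the sketch below balances parameter count against training tokens for a fixed compute budget. It uses the widely cited "Chinchilla" heuristic (roughly 20 training tokens per parameter, with training FLOPs approximated as 6 × parameters × tokens). Google has not published PaLM 2’s actual ratios, so the constants and budgets here are illustrative assumptions, not the model’s real training recipe.

```python
# Illustrative sketch of compute-optimal scaling; NOT Google's actual PaLM 2 recipe.
# Assumes the Chinchilla-style heuristics: tokens ~= 20 * parameters and
# training FLOPs ~= 6 * parameters * tokens.

def compute_optimal_split(flops_budget: float, tokens_per_param: float = 20.0):
    """Return (parameters, tokens) for a FLOPs budget under the assumed heuristics."""
    # flops ~= 6 * N * D and D ~= r * N  =>  N ~= sqrt(flops / (6 * r))
    params = (flops_budget / (6.0 * tokens_per_param)) ** 0.5
    tokens = tokens_per_param * params
    return params, tokens


if __name__ == "__main__":
    for budget in (1e21, 1e22, 1e23):  # hypothetical compute budgets in FLOPs
        n, d = compute_optimal_split(budget)
        print(f"budget={budget:.0e} FLOPs -> ~{n / 1e9:.1f}B params, ~{d / 1e9:.0f}B tokens")
```

The takeaway matches the claim above: for the same compute budget, a smaller model trained on proportionally more data can match or beat a larger, under-trained one, which is why PaLM 2 can be smaller yet cheaper and faster at inference than PaLM.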