Meta has announced that it will not release its upcoming multimodal AI model in the European Union, citing regulatory uncertainty. Although the model will be available under an open license, European companies will not have access to it.
Regulatory Environment in the EU
Meta spokesperson Kate McLaughlin confirmed that while the multimodal Llama model is set for release in the coming months, it will not be offered in the EU, with the unpredictable regulatory environment cited as the primary concern. The EU recently finalized deadlines for AI companies to comply with its stringent new AI Act by August 2026, covering areas such as copyright, transparency, and uses of AI like predictive policing.
Meta’s move echoes a similar decision by Apple, which has held back its Apple Intelligence platform in the EU over concerns about the Digital Markets Act. Meta has also paused plans to introduce its AI assistant in the EU and halted its generative AI tools in Brazil over data protection concerns.
Impact on European and Global Markets
The exclusion of Meta’s multimodal model from the EU carries substantial implications for both European and global companies. European firms will be unable to build on these advanced AI capabilities, limiting their ability to innovate and offer new services. Likewise, non-EU companies that want to operate in Europe will face hurdles, since they will not be able to rely on these models there.
Despite this setback, Meta still plans to release a larger, text-only version of its Llama 3 model that will be available to EU customers. The decision underscores Meta’s commitment to continuing its AI initiatives, including integrating its models into products such as the Ray-Ban Meta smart glasses. Nonetheless, the absence of the full multimodal model poses challenges for companies that rely on comprehensive AI capabilities for product development.
GDPR Compliance Issues
Meta’s decision is driven less by the new AI Act than by the difficulty of complying with the General Data Protection Regulation (GDPR). In May, Meta announced plans to use publicly available posts from Facebook and Instagram to train future models. Despite engaging proactively with EU regulators and addressing their initial feedback, Meta was ordered in June to pause training on EU data and subsequently received numerous queries from data privacy regulators across the region.
Notably, Meta faces fewer regulatory uncertainties in the United Kingdom, whose data protection law closely mirrors the GDPR, and plans to launch the new model for UK users without the obstacles it faces in the EU. A Meta representative said European regulators are taking far longer to interpret existing law than their counterparts in other regions.
Meta’s decision highlights growing tensions between US-based tech giants and European regulators, who are known for stringent privacy and antitrust enforcement. Critics argue that these regulations could delay benefits to consumers and diminish the competitiveness of European businesses. Meta stresses that training on European data is essential for its products to accurately reflect regional terminology and culture. Meanwhile, competitors such as Google and OpenAI are already training AI models on European data, intensifying market competition.
The EU has not yet publicly responded to Meta’s decision, although Apple’s similar move drew criticism from EU officials, including Competition Commissioner Margrethe Vestager. The Irish Data Protection Commission, Meta’s lead privacy regulator in Europe, has yet to comment officially on the matter.