The European Union has implemented the world’s first major law regulating artificial intelligence. The landmark legislation, which governs how AI is developed and used, received final approval in May from EU member states, lawmakers, and the European Commission.
The AI Act is the first comprehensive legal framework for artificial intelligence in the world. Proposed by the European Commission in 2021, the law addresses the potential negative impacts of AI technology and establishes a harmonized regulatory framework across the EU. While the rules chiefly affect the large American tech companies that dominate AI development, they also apply to non-tech firms that use AI systems.
Tanguy Van Overstraeten, head of the technology, media, and telecommunications practice at law firm Linklaters in Brussels, noted the AI Act’s broad reach: it affects any business that develops, deploys, or, in certain circumstances, merely uses AI systems.
Risk-Based Regulation
The AI Act employs a risk-based approach, regulating AI applications according to their potential risks to society. High-risk AI systems, such as autonomous vehicles, medical devices, loan decisioning systems, and biometric identification systems, face strict requirements. These include comprehensive risk assessments, high-quality training datasets to prevent bias, detailed documentation, and mandatory compliance checks.
Some AI applications deemed to pose “unacceptable” risk are banned outright. These include social scoring systems, predictive policing, and emotion recognition technology used in sensitive environments such as workplaces and schools.
Impact on U.S. Tech Firms
Major U.S. technology companies, including Microsoft, Google, Amazon, Apple, and Meta, are significantly affected by the AI Act. These firms have invested heavily in AI technology and infrastructure, particularly in the cloud platforms essential for training and running AI models. The new rules bring increased scrutiny to their operations in the EU, especially regarding the handling of EU citizen data.
Meta has already restricted access to its AI models in Europe, citing regulatory concerns. This move highlights the broader implications of EU regulations, such as the General Data Protection Regulation (GDPR), on global tech companies.
Generative AI and Open-Source Models
The AI Act specifically addresses “general-purpose” AI, including generative AI models like OpenAI’s GPT, Google’s Gemini, and Anthropic’s Claude. These systems must comply with EU copyright laws, provide transparency on model training, and maintain robust cybersecurity measures.
Open-source AI models, which are freely available to the public, have some exceptions under the AI Act. However, these models must make their parameters, including weights and architecture, publicly available. They must also enable access, use, modification, and distribution of the model. High-risk open-source models are not exempt from the rules.
Penalties for Non-Compliance
Companies that violate the AI Act face significant fines: up to €35 million or 7% of global annual revenue, whichever is higher, for the most serious breaches, down to €7.5 million or 1.5% of global annual revenue for lesser ones. The severity of the fine depends on the nature of the breach and the company’s size. These penalties surpass those under the GDPR, which can impose fines of up to €20 million or 4% of annual global turnover.
The European AI Office, established by the European Commission in February 2024, will oversee compliance with the AI Act. Jamil Jiva, global head of asset management at fintech firm Linedata, emphasized the EU’s intention to enforce these regulations with significant fines, mirroring the global impact of GDPR on data privacy.
Although the AI Act is now in force, most of its provisions will not take effect until at least 2026. Restrictions on general-purpose AI systems begin 12 months after the law enters into force, while existing generative AI systems, such as ChatGPT and Google’s Gemini, have a 36-month transition period to comply.