Artificial intelligence (AI) has drawn intense scrutiny over concerns about potential harms and misuse. With the adoption of the AI Act, the European Union (EU) has taken a major step towards addressing these concerns. This landmark legislation aims to regulate AI systems and applications, ensuring that they are created and utilised responsibly. This article examines the main points of the EU’s AI Act, the risk categories it defines, and the potential effects it might have on companies engaged in AI development.
Understanding the AI Act:
The EU’s AI Act, the first law of its kind in the West, marks a crucial turning point in the regulation of artificial intelligence. It takes a risk-based approach to governing the development and deployment of AI systems, including generative AI. The act defines four risk categories for AI applications: unacceptable risk, high risk, limited risk, and minimal or no risk.
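To make the tiered structure concrete, here is a minimal sketch of the four risk categories as a Python enum, with a hypothetical mapping of example applications to tiers. The tier descriptions and example classifications are illustrative paraphrases only, not legal text; classifying a real system requires legal analysis of the act itself.

```python
from enum import Enum


class RiskTier(Enum):
    """The AI Act's four risk tiers (illustrative labels, not legal text)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted subject to strict conformity requirements"
    LIMITED = "permitted with transparency obligations"
    MINIMAL = "permitted with no additional obligations"


# Hypothetical mapping of example applications to tiers, for
# illustration only -- not an authoritative legal classification.
EXAMPLE_TIERS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening tool for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for application, tier in EXAMPLE_TIERS.items():
    print(f"{application}: {tier.name} ({tier.value})")
```

The key design point of the act is visible in the enum: obligations scale with the tier, from an outright ban at the top to no additional obligations at the bottom.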
Unacceptable Risk Applications:
One of the AI Act’s standout features is its category for applications posing an “unacceptable risk”. The European Union prohibits certain AI systems outright because of the serious risks they present. The act defines several types of applications falling under this category, including:
- Subliminal or manipulative techniques: AI systems that exploit deceptive techniques to distort behavior, potentially leading to harm or manipulation of individuals or groups.
- Exploitation of vulnerabilities: AI systems that take advantage of the vulnerabilities of individuals or specific groups, potentially leading to discrimination or harm.
- Biometric categorization systems based on sensitive attributes: AI systems that categorize individuals based on sensitive attributes or characteristics, raising concerns regarding privacy and discrimination.
- Social scoring and trustworthiness evaluation: AI systems used for social scoring or assessing trustworthiness, potentially infringing upon individual privacy and leading to unfair treatment.
- Risk assessments predicting offenses: AI systems used for risk assessments that predict criminal or administrative offenses, raising concerns about accuracy, fairness, and potential bias.
- Facial recognition databases through untargeted scraping: AI systems that create or expand facial recognition databases through untargeted scraping, raising privacy concerns and potential misuse of personal data.
- Emotion inference in sensitive contexts: AI systems inferring emotions in law enforcement, border management, the workplace, and education, raising ethical and privacy concerns.
Impact on Companies:
Companies engaged in AI development will likely be affected by the EU’s AI Act, particularly those working with foundation models such as those behind ChatGPT, or with high-risk applications. Before making their technology available to the general public, developers of AI models must comply with governance requirements, risk-mitigation measures, and safety checks. They must also ensure that the training data used to build these models complies with copyright law.
- Compliance and Responsibility: To comply with the AI Act, businesses will need to invest in robust governance practices and safety measures. This will require putting ethical principles into practice, maintaining transparency, and addressing any potential biases or discriminatory effects.
- Research and Development: The AI Act’s emphasis on risk categories will likely encourage companies to prioritise ethical research and development practices. To navigate the regulatory landscape successfully, they will need to invest in thorough risk assessments and mitigation plans.
- Innovation and Competition: Although the AI Act imposes stricter controls, it could also spur advances in AI technology. Companies that can successfully balance innovation with ethical AI development will likely gain an advantage over rivals in the market.
The EU AI Act is an important step towards risk-based regulation of AI technologies. By classifying applications into different risk levels, including an “unacceptable risk” category, the act aims to ensure the responsible development and use of AI systems in the European Union. Businesses engaged in AI development will need to adapt to the new regulatory environment, giving compliance, accountability, and ethical considerations top priority. The AI Act poses challenges, but it also offers an opportunity for innovation and the promotion of safer AI applications for the benefit of society as a whole.