The European Union Artificial Intelligence Act came into effect on August 1, 2024.
This Act establishes a comprehensive framework for regulating and overseeing AI systems by classifying them into risk levels.
These obligations aim to ensure that AI systems are used safely, fairly, and transparently, while establishing oversight mechanisms to prevent harm to fundamental rights and freedoms. The risk levels are categorized as minimal risk, limited risk, high risk, and unacceptable risk, as discussed in more detail below.
Additionally, the European Union Artificial Intelligence Act addresses GPAI (“General-Purpose Artificial Intelligence”) models, promoting international cooperation and alignment with global standards. A GPAI model is designed to serve a wide range of purposes and can be used directly or integrated into other AI systems. The Act imposes certain obligations on GPAI providers, similar to those imposed on providers of high-risk AI systems (Article 53).
The implementation of the Act will be carried out progressively to allow organizations sufficient time to comply with its provisions. In this context:
- From February 2, 2025, the prohibitions on AI systems posing an unacceptable risk will apply.
- From August 2, 2025, all obligations for GPAI models will apply.
- From August 2, 2026, all obligations for high-risk AI systems listed in Annex III will apply.
- From August 2, 2027, all obligations for high-risk AI systems listed in Annex I will apply.
1) AI Systems Posing Unacceptable Risk
The Act prohibits AI systems that pose an “Unacceptable Risk.” These systems exhibit features such as:
- Systems that use subliminal, manipulative, or deceptive techniques to materially distort behavior and impair informed decision-making, causing significant harm.
- Systems that exploit vulnerabilities related to age, disability, or socioeconomic status, distorting behavior and causing significant harm.
- Social scoring systems that evaluate or classify individuals or groups based on social behavior or personal characteristics, leading to harmful or adverse treatment.
- Systems that assess an individual’s risk of committing a crime based solely on profiling or the assessment of personality traits.
- Systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or security camera footage.
- Systems that infer emotions in workplaces or educational institutions (except for medical or safety reasons).
- Biometric categorization systems that infer sensitive attributes (e.g., race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation).
- Real-time remote biometric identification systems used for law enforcement in publicly accessible spaces (with certain exceptions). Exceptions include searching for missing persons, abducted children, and victims of human trafficking or sexual abuse; preventing specific, significant, and imminent threats to life or physical safety; and identifying suspects in serious crimes.
2) AI Systems Posing High Risk
High-risk AI systems are the primary focus of the Act.
Most of the obligations introduced by the Act apply to providers-developers of high-risk AI systems.
Providers-developers are organizations or individuals typically responsible for developing the software and hardware, monitoring performance and safety, organizing user training, and ensuring legal compliance. Providers are often defined as the commercial manufacturers or hosts of the AI system, while developers are engineers, programmers, and researchers responsible for the technical aspects of the system.
Providers-developers intending to market or deploy high-risk AI systems in the EU are subject to the Act regardless of whether they are located within or outside the EU. Additionally, providers established in third countries are subject to the Act where the output produced by their high-risk AI systems is used within the EU.
The obligations imposed on providers-developers of high-risk AI systems include (Articles 8-17):
- Establishing a risk management system throughout the lifecycle of high-risk AI systems.
- Implementing data governance to ensure that training, validation, and testing datasets are relevant, adequately representative, as error-free as possible, and suitable for the intended purpose.
- Preparing technical documentation to demonstrate compliance and provide information to authorities for compliance assessment.
- Designing high-risk AI systems to automatically log relevant events, so that national-level risks and substantial modifications can be identified throughout the system’s lifecycle (a minimal illustration follows this list).
- Providing instructions for use to downstream deployers to enable their compliance.
- Designing high-risk AI systems to allow deployers to implement human oversight.
- Designing high-risk AI systems to achieve appropriate levels of accuracy, robustness, and cybersecurity.
- Establishing a quality management system to ensure compliance.
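The Act does not prescribe any particular technical means of meeting these obligations. Purely as an illustration of the automatic logging obligation mentioned above, the following minimal Python sketch shows one hypothetical way a provider might record timestamped events over a system’s lifecycle; the names (`EventLogger`, `log_event`, the JSONL file) are assumptions made for this example, not anything mandated by the Act.

```python
import json
import uuid
from datetime import datetime, timezone


class EventLogger:
    """Hypothetical record-keeping helper: appends one timestamped entry
    per relevant event of a high-risk AI system to an append-only log."""

    def __init__(self, log_path="ai_system_events.jsonl"):
        self.log_path = log_path  # assumed append-only JSON Lines file

    def log_event(self, event_type, details):
        entry = {
            "event_id": str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event_type": event_type,  # e.g. "inference", "model_update"
            "details": details,        # input reference, output summary, operator, etc.
        }
        with open(self.log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")
        return entry["event_id"]


# Example: record a single inference event for later review
logger = EventLogger()
logger.log_event("inference", {"model_version": "1.4.2", "operator": "deployer-42",
                               "input_ref": "case-001", "output_summary": "score=0.87"})
```

In practice, providers would also need retention periods, access controls, and integrity safeguards around such logs; the sketch only illustrates the basic idea of recording events as they occur.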
3) AI Systems Posing Limited Risk
Limited-risk AI systems are subject to lighter obligations under the Act and do not fall under the high-risk category. These systems may be subject to specific transparency and monitoring requirements, such as drawing users’ attention to the use of AI and providing appropriate warnings. The Act emphasizes the need for transparency regarding limited-risk AI systems, stating that end-users should be informed when they are interacting with AI, as sketched below.
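As a purely illustrative sketch (the Act does not prescribe any particular wording or mechanism), a chatbot operator might surface a notice before the first AI-generated reply. The helper name and message text below are assumptions made for this example.

```python
AI_DISCLOSURE = (
    "Notice: you are interacting with an AI system. "
    "Responses are generated automatically and may be inaccurate."
)


def start_chat_session(send_message):
    """Hypothetical helper: shows an AI-interaction notice before the conversation begins."""
    send_message(AI_DISCLOSURE)  # inform the end-user up front
    # ... continue with the normal conversation flow ...


# Example usage with a trivial transport that prints to the console
start_chat_session(print)
```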
4) AI Systems Posing Minimal Risk
Minimal-risk AI systems, such as AI-powered video games and spam filters, are generally permitted for use with minimal obligations under the Act. These systems can be used without specific restrictions but must still adhere to basic obligations, such as compliance with applicable legal, transparency, and security standards.