Understanding the European Union (EU) Artificial Intelligence (AI) Act: Key Requirements and Provisions
Whether you are a CEO, a product manager, or an organization intending to build or adopt AI into your product or business processes, regulations should undoubtedly form part of your considerations. The reason is not far-fetched: laws define what is prohibited and what is permitted, and you shouldn't waste resources building something prohibited, not only because of the risk of criminal liability but also because of public acceptance.
Unlike in the past, when our understanding of AI and robots was limited to science fiction, AI has become deeply ingrained in our world, and disregarding its capabilities could be like dancing a tango on a cliff. Come to think of it, where is AI not being mentioned or discussed today? AI now forms part of key conversations in every organization; from the Fortune 500 to mid-sized organizations, start-ups, and even organizations focused on managing people, all are caught by the wave. Everyone is pressing toward adoption and finding a market fit using AI.
However, amid this wave of enthusiasm and the desire to tap into the technology's possibilities, regulations are required to serve as a guide because, as humans, our desires are diverse and endless, and our thought processes and biases differ. For instance, precision AI, an excellent tool for healthcare and marketing, can become a dangerous weapon in military applications. Regulation must also define the boundaries of innovation so that we invent to improve rather than to destroy.
So, why should you read this content? This article offers a comprehensive guide to one of the most significant governance attempts in AI and answers pressing questions about the EU AI Act and its effect on individuals and businesses.
On July 12, 2024, the EU Artificial Intelligence Act (AI Act) was published in the EU's Official Journal, marking the conclusion of a long drafting and adoption process. The EU AI Act entered into force on August 1, 2024, and the first of its requirements take effect on February 2, 2025. The Act aims to harmonize AI rules across member states while addressing ethical and safety concerns.
In this article, we will be discussing the following:
- What is the EU AI Act, and Why Does It Matter?
- When does the EU AI Act come into force?
- What entities or organizations are affected by the EU AI Act?
- Who is not affected by the EU AI Act?
- How does the EU AI Act classify AI systems? / What are the risk categories defined by the EU AI Act?
- What are high-risk AI systems under the EU AI Act?
- What are prohibited AI practices in the EU AI Act?
- How does the EU AI Act address AI ethics and human rights?
- What are the compliance requirements for businesses under the EU AI Act?
- What penalties exist for non-compliance with the EU AI Act?
- How does the EU AI Act compare to GDPR?
- What is the impact of the EU AI Act on AI innovation?
- What are the key provisions of the EU AI Act?
- Does the AI Act only affect EU businesses?
- What is next for companies?
What is the EU AI Act, and Why Does It Matter?
Before discussing the specifics of the regulation, it's crucial to clearly understand what the Act defines as Artificial Intelligence (AI). "‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments." This definition is crucial because it clarifies which technologies the regulation addresses.
Whilst this definition seems broad, the objective is nonetheless to distinguish AI systems from more straightforward traditional software or programming approaches and to ensure the definition doesn’t, for instance, capture pure automation.
Amongst other things, the key characteristics include:
(a) The capability to infer how to generate outputs that can influence physical or virtual environments;
(b) Machine-based operation;
(c) Varying degrees of autonomy from human involvement, including the capability to operate without human intervention.
Like the EU's General Data Protection Regulation (GDPR), which caused global data privacy and protection changes, experts think the EU AI Act will set standards for AI governance, affecting ethics and standards worldwide. The AI Act, widely recognized as the first comprehensive legal framework for AI governance, prohibits specific AI applications while requiring others to adhere to strict risk management, transparency, and accountability criteria. It also introduces governance for general-purpose AI models.
The regulation aims to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI) while ensuring high-level protection of health, safety, and fundamental rights enshrined in the Charter, including democracy, the rule of law, and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation.
Why is AI being regulated?
As much as AI has enormous potential, it bears significant associated risks. These risks range from high-concern issues, such as deepfakes, driverless vehicles used as weapons, tailored phishing, disruptions to AI-controlled systems, large-scale blackmail, and AI-authored fake news, to moderate concern risks, like the misuse of military robots, snake oil scams, data poisoning, learning-based cyberattacks, autonomous attack drones, denial of access to online activities, tricking facial recognition systems, and manipulating financial or stock markets. Even at the low concern level, risks like burglar bots, evading AI detection, AI-authored fake reviews, AI-assisted stalking, and forgery of creative content such as art or music remain significant.
These risks include broader issues like bias, cyber threats, data privacy concerns, environmental harms, existential risks, job displacement, surveillance, content moderation challenges, intellectual property infringement, lack of accountability and transparency, and potential misuse in critical areas like health and autonomous weapons. Considering AI's disruptive impact on society and the need to establish trust, the EU AI Act is a significant matter today.
When does the EU AI Act come into force?
Implementing the AI Act in the EU will be gradual, with transition periods for various requirements.
The key dates are:
- February 2, 2025: The provisions of the Act addressing prohibited AI practices, AI literacy obligations, and general provisions take effect.
- August 2, 2025: The rules for general-purpose AI and penalties will take effect for all new GPAI models. Providers of GPAI models placed on the market before 2 August 2025 will have until 2 August 2027 to comply.
- August 2, 2026: High-risk AI systems (e.g., credit scoring, health insurance risk assessment), transparency requirements, and the use of regulatory sandboxes will come into force.
- August 2, 2027: High-risk AI in specific sectors (e.g., medical devices, toys, machinery) and existing general-purpose AI models must comply.
These staggered timelines aim to balance regulatory enforcement with practical adaptation for stakeholders.
What entities or organizations are affected by the EU AI Act?
As provided under Article 2(1) of the Act, the regulation applies to the following persons, entities, or organizations:
(a) Providers placing on the market or putting into service AI systems or general-purpose AI models in the EU, regardless of their location within the Union or a third country.
(b) Deployers of AI systems established or located within the EU.
(c) Providers and deployers of AI systems based in third countries if their AI system outputs are used in the EU.
(d) Importers and distributors of AI systems.
(e) Product manufacturers placing AI systems on the market or in service with their product under their name or trademark.
(f) Authorized representatives of providers not established in the EU.
(g) Affected persons located within the EU.
Who is not affected by the EU AI Act?
As provided under Article 2(3-12), the EU AI Act does not apply to the following persons, entities, or activities:
(a) Entities conducting activities outside the scope of Union law or concerning national security tasks, regardless of the entity type.
(b) AI systems developed, placed on the market, or used exclusively for military, defense, or national security purposes.
(c) AI systems not placed on the market or put into service in the Union, where their output is used in the Union exclusively for military, defense, or national security purposes.
(d) Public authorities from third countries or international organizations engaged in international law enforcement or judicial cooperation with the EU or Member States, provided adequate safeguards are in place.
(e) Scientific research and development activities exclusively intended for research purposes.
(f) Activities related to testing or developing AI systems before they are placed on the market, except real-world condition testing.
(g) Obligations of deployers using AI systems in purely personal and non-professional activities.
(h) AI systems released under free and open-source licenses unless they are high-risk or fall under specific provisions like Article 5 or 50.
(i) Other Union legal acts covering consumer protection and product safety.
(j) Laws or agreements that provide more favorable protections to workers, safeguarding their rights when employers use AI.
How does the EU AI Act classify AI systems? / What are the risk categories defined by the EU AI Act?
The Act introduced a risk-based framework to categorize AI systems into four tiers, and they are:
- Prohibited AI Practices
- High-Risk AI systems
- General-Purpose AI
- Minimal or No-Risk
These risk classifications are grounded primarily in fundamental rights, including privacy, non-discrimination, and social justice, rather than enterprise needs alone.
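To make the tiered structure concrete, here is a minimal, purely illustrative Python sketch that triages a hypothetical use-case description into the four tiers. The keyword lists and the `classify` function are my own assumptions for illustration; real classification requires legal analysis of Articles 5 and 6 and Annex III, not string matching.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    GPAI = "general-purpose"
    MINIMAL = "minimal-or-no-risk"

# Hypothetical keyword sets loosely echoing Article 5 and Annex III;
# they are illustrative only, not an exhaustive or authoritative list.
PROHIBITED_USES = {"social scoring", "subliminal manipulation",
                   "untargeted face scraping"}
HIGH_RISK_USES = {"credit scoring", "biometric identification",
                  "medical device", "hiring"}

def classify(use_case: str, is_general_purpose: bool = False) -> RiskTier:
    """Toy triage of an AI use case into the Act's four tiers."""
    text = use_case.lower()
    if any(term in text for term in PROHIBITED_USES):
        return RiskTier.PROHIBITED
    if any(term in text for term in HIGH_RISK_USES):
        return RiskTier.HIGH_RISK
    if is_general_purpose:
        return RiskTier.GPAI
    return RiskTier.MINIMAL
```

In practice, the same system can fall into more than one bucket (a GPAI model embedded in a high-risk product, for example), which is why the Act layers obligations rather than assigning a single label.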
What are prohibited AI practices in the EU AI Act?
The EU AI Act, under Article 5, explicitly lists certain prohibited AI practices that are deemed to pose an unacceptable level of risk. The following AI practices are not permitted:
(a) Deploying AI systems that utilize subliminal techniques beyond human consciousness to manipulate behavior, causing significant harm or impairing informed decision-making.
(b) Exploiting vulnerabilities of individuals or groups due to age, disability, or social and economic circumstances, resulting in significant harm.
(c) Using AI for social scoring based on personal characteristics, leading to unjustified or disproportionate treatment.
(d) Assessing or predicting the likelihood of criminal offenses solely based on personality traits or profiling.
(e) Creating or expanding facial recognition databases through untargeted scraping of images from the internet or CCTV footage.
(f) Inferring emotions in workplace or educational settings, except for medical or safety purposes.
(g) Biometric categorization of individuals to deduce sensitive attributes such as race, religion, or sexual orientation unless legally justified.
(h) Using real-time remote biometric identification systems in public spaces for law enforcement, except under strictly defined circumstances such as finding missing persons or preventing terrorist threats.
What are high-risk AI systems under the EU AI Act?
Under Article 6, AI systems are considered high-risk under the EU AI Act if they are a product, or a safety component of a product, regulated under specific EU laws referenced by the Act, or if they fall within the use cases listed in Annex III. In summary:
(a) AI systems intended as safety components of products, or as standalone products, covered by Union harmonization legislation and required to undergo third-party conformity assessment before market placement or putting into service.
(b) AI systems listed in Annex III, including applications in biometric identification, critical infrastructure, healthcare, education, and law enforcement, particularly where they involve profiling natural persons.
(c) By exception, Annex III systems intended only to perform narrow procedural or preparatory tasks, or merely to support human decision-making, are not considered high-risk, provided they meet the conditions set out in Article 6(3).
(d) However, Annex III AI systems that perform profiling of natural persons are always considered high-risk, given their potential impact on health, safety, or fundamental rights.
What are the compliance requirements for businesses under the EU AI Act?
This section addresses compliance requirements for high-risk AI and general-purpose AI.
What are the compliance requirements for high-risk AI under the EU AI Act?
- Organizations shall establish, implement, document, and maintain a risk management system for high-risk AI systems throughout their lifecycle.
- Organizations should prepare technical documentation before the system is placed on the market. This documentation must be updated and include all required elements outlined in Annex IV.
- Organizations should maintain detailed and automated records (logs) of system operations to ensure traceability and facilitate monitoring and compliance checks over the life cycle of the AI system.
- Organizations must design high-risk AI systems with mechanisms for effective human oversight, enabling operators to monitor and intervene as needed.
- Organizations must ensure that the design, development, and operation of AI systems are transparent and that human operators understand the system’s capabilities and limitations, avoid over-reliance, and override the system when necessary.
- Organizations should provide training and guidance for deployers so they can interpret AI outputs correctly and operate the system safely. The system must be subject to data governance and management practices and used only for its intended purpose.
- Organizations should also use high-quality data for training, validation, and testing, ensuring it is representative, error-free, and adheres to the AI system's intended purpose.
- Organizations should identify and mitigate biases in data that could impact safety or fundamental rights or lead to discrimination.
- Organizations should ensure that high-risk AI systems achieve appropriate accuracy, robustness, and cybersecurity levels throughout their lifecycle.
- Organizations should implement measures to prevent, detect, and respond to threats like data poisoning and model manipulation.
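The record-keeping requirement above (automatic logs to ensure traceability) can be sketched as append-only, machine-readable event records. The schema, field names, and values below are my own assumptions for illustration; the Act does not prescribe a particular log format:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class InferenceLogEntry:
    """One traceability record per model decision (illustrative schema)."""
    timestamp: float
    model_version: str
    input_reference: str   # pointer to the input data, not the data itself
    output_summary: str
    operator_id: str       # who was overseeing the system at the time

def log_inference(entry: InferenceLogEntry, sink: list) -> None:
    # Append-only, machine-readable records support later audits
    # and compliance checks over the system's lifecycle.
    sink.append(json.dumps(asdict(entry)))

audit_log: list = []
log_inference(InferenceLogEntry(time.time(), "scoring-v1.2",
                                "req-001", "score=0.82", "op-7"), audit_log)
```

Storing a reference to the input rather than the input itself is a deliberate choice here: it keeps the log auditable without duplicating personal data, which also eases GDPR alignment.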
What is General-Purpose AI under the EU AI Act?
Under Article 51, general-purpose AI models (GPAI) with systemic risk are classified based on the following criteria:
- (a) GPAI models with high-impact capabilities as determined by technical tools, indicators, and benchmarks.
- (b) GPAI models identified by the European Commission as having capabilities or impacts equivalent to those outlined in point (a), based on criteria in Annex XIII.
Additionally, GPAI models are presumed to have high-impact capabilities if the computation used for training exceeds a specific threshold. The Commission is empowered to adjust these thresholds and criteria to reflect technological advancements and ensure alignment with the state of the art.
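That compute presumption can be stated concretely: under Article 51(2), a GPAI model is presumed to have high-impact capabilities when the cumulative computation used for its training exceeds 10^25 floating-point operations (FLOPs), a threshold the Commission may amend. A minimal sketch of the check, with the threshold kept configurable for that reason:

```python
# Presumption threshold from Article 51(2) of the AI Act: training
# compute strictly greater than 10^25 FLOPs. The Commission may
# adjust this figure over time, so treat it as configurable.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_high_impact(training_flops: float,
                         threshold: float = SYSTEMIC_RISK_FLOP_THRESHOLD) -> bool:
    """True if the model is presumed to have high-impact capabilities
    based on training compute alone (other routes to the systemic-risk
    classification exist, e.g. Commission designation under Annex XIII)."""
    return training_flops > threshold
```

For example, a model trained with roughly 5 × 10^24 FLOPs falls below the presumption, while one trained with 2 × 10^25 FLOPs triggers it.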
What are the compliance requirements for general-purpose AI (GPAI) under the EU AI Act?
- GPAI providers must maintain updated technical documentation of the AI model, including training, testing, and evaluation details.
- Providers should provide detailed information and documentation to providers integrating the GPAI model into their systems, including its capabilities and limitations.
- GPAI providers are to publish a summary of the training content used for the AI model.
- Providers must ensure the technical documentation is available to the AI Office and national authorities for ten years.
- Providers should implement a copyright and related rights compliance policy, including mechanisms for identifying reserved rights.
- Providers should evaluate and mitigate systemic risks associated with the GPAI model through adversarial testing and other protocols.
- GPAI providers must ensure cybersecurity protection for GPAI models with systemic risks, covering digital and physical infrastructure.
- GPAI providers must monitor, document, and immediately report serious incidents and corrective actions to the AI Office.
- GPAI providers should cooperate with the Commission and national authorities to demonstrate compliance, including reliance on codes of practice or harmonized standards where applicable.
- Providers based outside the EU should appoint an authorized representative within the Union to act on their behalf and ensure regulatory compliance.
What is minimal-risk AI as defined by the EU AI Act?
AI systems that fall outside the prohibited, high-risk, and general-purpose categories are considered minimal or no risk. These are permitted under the Act without specific obligations, though organizations may adopt a voluntary code of conduct.
What penalties or fines are there for non-compliance with the EU AI Act?
Penalties are structured as follows:
- Prohibited AI Practices: Organizations that engage in prohibited activities such as social scoring or manipulative AI techniques face fines of up to EUR 35 million or 7% of their global annual turnover, whichever is higher.
- High-Risk AI and Transparency Violations: Noncompliance with requirements for high-risk AI systems or specific transparency obligations can result in fines of up to EUR 15 million or 3% of global annual turnover, whichever is higher.
- General-Purpose AI Models (GPAI): Providers of GPAI models that fail to meet their obligations may incur fines of up to EUR 15 million or 3% of global annual turnover, whichever is higher.
- Misleading Information: Supplying incorrect, incomplete, or misleading information to authorities may lead to fines of up to EUR 7.5 million or 1% of global annual turnover, whichever is higher.
Note that small and medium-sized enterprises (SMEs), including start-ups, are subject to special provisions: for these businesses, the fine is the lower of the two possible amounts specified above.
EU Member States are predominantly responsible for establishing rules on enforcement and penalties, even though the regulation provides penalty guidelines. The European Commission only oversees compliance for GPAI models. It is important to note that sanctions may be assessed based on the turnover of the entire group (or "undertaking") rather than the noncompliant entity alone, which could have substantial financial repercussions.
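The fine structure above reduces to a simple rule: the higher of a fixed cap or a percentage of worldwide annual turnover, flipped to the lower of the two for SMEs and start-ups. A sketch of that arithmetic (the function name and interface are mine; actual fines are set case by case by enforcement authorities within these ceilings):

```python
def fine_ceiling(fixed_eur: float, pct: float, turnover_eur: float,
                 is_sme: bool = False) -> float:
    """Maximum fine under the AI Act's penalty structure.

    fixed_eur    -- fixed cap in EUR (e.g., 35_000_000 for prohibited practices)
    pct          -- turnover percentage (e.g., 7 for 7%)
    turnover_eur -- total worldwide annual turnover in EUR
    For SMEs and start-ups, the lower of the two amounts applies.
    """
    pct_amount = pct * turnover_eur / 100
    return min(fixed_eur, pct_amount) if is_sme else max(fixed_eur, pct_amount)

# A large firm with EUR 1bn turnover breaching a prohibition:
# 7% of 1bn = EUR 70m, which exceeds the EUR 35m fixed cap.
```

Note how the same breach caps at EUR 70 million for a large firm but EUR 35 million for an SME with identical turnover, which is the point of the special SME provision.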
How does the EU AI Act compare to GDPR?
GDPR is primarily about protecting personal data. The AI Act, on the other hand, addresses broader ethics and safety issues with AI systems. Both work to protect fundamental rights, but they do so in different areas.
What is the impact of the EU AI Act on AI innovation?
The EU AI Act promotes innovation by providing regulatory sandboxes and clear guidelines, which allow businesses to ideate and create ethical AI solutions in safe settings. Concerns have been raised that strict rules might stifle novel creations. However, the Act's focus on openness, responsibility, and risk reduction is meant to build public trust, which is necessary for AI to be widely adopted and for the industry to grow sustainably.
How does the EU AI Act address AI ethics and human rights?
The EU AI Act protects fundamental rights like privacy, non-discrimination, and dignity by enacting standards of fairness, accountability, and transparency. For instance, it requires Fundamental Rights Impact Assessments (FRIAs) for specific high-risk AI systems used for public services or to assess credit scores and insurance risks. The regulation provides that before deploying these systems, deployers must carefully scrutinize the potential harm, know the groups affected, and understand how the harm can be mitigated through data governance and complaint processes.
What are the key provisions of the EU AI Act?
All the provisions mentioned above are key parts of the Act that individuals and businesses must adhere to. Essentially, however, the key takeaways of the EU AI Act are:
a) Protection of Fundamental Rights: The Act protects privacy, equality, and the right not to be subjected to unwarranted surveillance. It also prohibits harmful AI practices such as real-time remote biometric identification, except where strictly necessary.
b) Transparency Requirements: Developers must provide clear documentation on how AI systems are developed, how they work, how decisions are made, and any possible risks, to ensure accountability and public trust.
c) Promotes Innovation: Regulatory sandboxes allow researchers and startups to test and iterate AI in safe environments, encouraging creativity while ensuring that regulations are complied with.
d) Ensures Corporate Accountability: Companies using high-risk AI must conduct risk assessments and keep thorough records.
Does the AI Act only affect EU businesses?
The EU AI Act applies globally, affecting both EU-based and non-EU-based businesses.
- EU-based businesses: This includes providers, deployers, authorized representatives, and importers that place AI systems or general-purpose AI (GPAI) models on the EU market or put them into service.
- Non-EU-based businesses: Covers third-country providers and deployers if their AI systems or outputs are used within the EU.
What is next for companies?
The EU AI Act is a turning point for companies globally because it changes how AI is created, used, and regulated. To make the most of these changes, organizations must ensure that relevant stakeholders, within and outside the organization, understand the new requirements and plan how to apply them.
At this point, businesses should focus on compliance as the rule goes into effect by asking essential questions, evaluating risks, putting in place necessary frameworks, and monitoring market developments.
Conclusion
After a critical review of the regulation, I am left to ask many questions: As much as the EU’s AI Act is a bold step toward shaping a future where AI serves humanity without compromising rights, safety, or values, with this regulation, what is left to be invented? To what extent should AI be regulated? Are these fines justifiable? To what extent can laws be enforced? What is the visibility of enforcement? Are regulations helpful or denial of a disruption? The questions are never-ending.
Like me, stakeholders have expressed both appreciation and criticism of the EU AI Act, because it changes how AI works globally. While its framework protects fundamental rights, ensures accountability, and promotes openness, critics worry that too much regulation could slow down innovation, that unclear definitions could make enforcement harder or unnecessarily widen its scope, and that it might make European businesses less competitive in a global market. Interestingly, some argue that the Act doesn't do enough to protect human rights, especially on issues like the misuse of facial recognition technology. Really! So, what else would you like to have been considered?
However, even with these problems, the EU AI Act is a big step toward ensuring AI is used safely and responsibly. Now, businesses need to adapt quickly, follow the rules, and take advantage of the opportunities to create reliable new AI solutions that align with values that put people first.
Now that the rules are in place, change is not only necessary; it is a must.
I hope you found value in the article.
Until I write again.
Isaac Ijuo.
Resources Consulted
- This piece is meant to give you information; it should not be considered legal advice, and it is not a replacement for consulting a lawyer or getting advice relevant to your case. Readers who need help with the EU AI Act or any other legal issue are urged to seek advice from qualified professionals. The author is not responsible for liability incurred after reading this article.
- Isaac Ijuo, Attorney and AI Researcher.
- European Union, Regulation (EU) 2024/1689 of the European Parliament and of the Council of 12 July 2024 laying down harmonised rules on artificial intelligence (AI Act) and amending certain Union legislative acts. Official Journal of the European Union, L 189, 12 July 2024, pp. 1-144. Available at: http://data.europa.eu/eli/reg/2024/1689/oj.
- Kosinski, M., & Scapicchio, M. (2024, September 20). What is the Artificial Intelligence Act of the European Union (EU AI Act)? IBM Blog https://www.ibm.com/think/topics/eu-ai-act accessed 8th January 2025.
- Caballar, R. (2024, September 3). 10 AI dangers and risks and how to manage them. IBM Blog. https://www.ibm.com/blog/10-ai-dangers-and-risks-and-how-to-manage-them/ accessed 9th January 2025.
- Leprince-Ringuet, D. (n.d.). Evil AI: These are the 20 most dangerous crimes that artificial intelligence will create. ZDNet. https://www.zdnet.com/article/evil-ai-these-are-the-20-most-dangerous-crimes-that-artificial-intelligence-will-create/ accessed 9th January 2025.