The Artificial Intelligence Act, endorsed by the European Parliament yesterday, sets a global precedent by intertwining AI development with fundamental rights, environmental sustainability, and innovation. Below are the key takeaways:

- Banned Applications: Certain AI applications will be prohibited because they threaten citizens' rights. These include biometric categorization and the untargeted scraping of images for facial recognition databases; emotion recognition in workplaces and educational institutions; social scoring and predictive policing based solely on profiling; and AI that manipulates behavior or exploits vulnerabilities.
- Law Enforcement Exemptions: Use of real-time biometric identification (RBI) systems by law enforcement is largely prohibited, with exceptions under strictly regulated circumstances, such as searching for missing persons or preventing terrorist attacks.
- Obligations for High-Risk Systems: High-risk AI systems, which could significantly affect health, safety, and fundamental rights, must meet stringent requirements, including risk assessment, transparency, accuracy, and human oversight.
- Transparency Requirements: General-purpose AI systems must adhere to transparency norms, including compliance with EU copyright law and the publication of training-data summaries.
- Innovation and SME Support: The act encourages innovation through regulatory sandboxes and real-world testing environments, particularly benefiting SMEs and start-ups, to foster the development of innovative AI technologies.
- Next Steps: Pending a final legal review and formal endorsement by the Council, the regulation becomes enforceable 20 days after publication in the Official Journal, with phased applicability for different provisions ranging from 6 to 36 months after entry into force.

It will be interesting to watch this unfold and to see its potential impact on other nations as they consider regulation.
#aiethics #responsibleai #airegulation https://lnkd.in/e8dh7yPb
EU Strategic Plan for AI Development
Summary
The “EU Strategic Plan for AI Development” refers to the European Union’s comprehensive framework to develop, regulate, and govern artificial intelligence (AI) technologies while ensuring safety, ethical use, and alignment with fundamental rights. This plan includes the landmark AI Act, which introduces guidelines for managing risks, fostering innovation, and protecting citizens from potentially harmful AI applications.
- Understand the risk-based approach: The AI Act categorizes AI systems into minimal, specific transparency, high, and unacceptable risk levels, with stricter rules for higher-risk systems to safeguard public safety and rights.
- Stay compliant with new rules: Organizations deploying AI in the EU must ensure transparency, conduct risk assessments for high-risk systems, and adhere to regulations regarding prohibited applications.
- Prepare for implementation: Businesses should use the transitional period to align with the requirements of the AI Act by adopting voluntary compliance measures and participating in regulatory testing programs.
The EU Council sets the first rules for AI worldwide, aiming to ensure AI systems in the EU are safe, respect fundamental rights, and align with EU values. It also seeks to foster investment and innovation in AI in Europe.

🔑 Key Points
🤖 Described as a historic milestone, this agreement aims to address global challenges in a rapidly evolving technological landscape, balancing innovation and the protection of fundamental rights.
🤖 The AI Act follows a risk-based approach, with stricter regulations for AI systems that pose higher risks.
🤖 Key elements of the agreement:
⭐️ Rules for high-risk and general-purpose AI systems, including those that could cause systemic risk.
⭐️ Revised governance with enforcement powers at the EU level.
⭐️ Extended prohibitions list, with allowances for law enforcement to use remote biometric identification under safeguards.
⭐️ Requirement for a fundamental rights impact assessment before deploying high-risk AI systems.
🤖 The agreement clarifies the AI Act's scope, including exemptions for military or defense purposes and AI used solely for research or non-professional reasons.
🤖 Includes a high-risk classification to protect against serious rights violations or risks, with lighter obligations for lower-risk AI.
🤖 Bans certain AI uses deemed unacceptable in the EU, such as cognitive behavioral manipulation and certain biometric categorizations.
🤖 Specific provisions allow law enforcement to use AI systems under strict conditions and safeguards.
🤖 Special rules for foundation models and high-impact general-purpose AI systems, focusing on transparency and safety.
🤖 Establishment of an AI Office within the Commission and an AI Board comprising member states' representatives, along with an advisory forum for stakeholders.
🤖 Sets fines based on global annual turnover for violations, with provisions for complaints about non-compliance.
🤖 Includes provisions for AI regulatory sandboxes and real-world testing conditions to foster innovation, particularly for smaller companies.
🤖 The AI Act will apply two years after its entry into force, with specific exceptions for certain provisions.
🤖 Finalizing details, endorsement by member states, and formal adoption by the co-legislators are pending.

The AI Act represents a significant step in establishing a regulatory framework for AI, emphasizing safety, innovation, and the protection of fundamental rights within the EU market. #ArtificialIntelligenceAct #EUSafeAI #AIEthics #AIRightsProtection #AIGovernance #RiskBasedAIRegulation #TechPolicy #AIForGood #AISecurity #AIFramework
-
European Union Artificial Intelligence Act (AI Act): On December 9, 2023, the European Parliament and the Council reached a provisional agreement on the Artificial Intelligence Act (AI Act) proposed by the Commission.

Entry into force: The provisional agreement provides that the AI Act should apply two years after its entry into force, with some exceptions for specific provisions.

The main new elements of the provisional agreement can be summarised as follows:
1) rules on high-impact general-purpose AI models that can cause systemic risk in the future, as well as on high-risk AI systems;
2) a revised system of governance with some enforcement powers at the EU level;
3) extension of the list of prohibitions, but with the possibility for law enforcement authorities to use remote biometric identification in public spaces, subject to safeguards;
4) better protection of rights through the obligation for deployers of high-risk AI systems to conduct a fundamental rights impact assessment prior to putting an AI system into use.

The new rules will apply directly and in the same way across all Member States, based on a future-proof definition of AI. They follow a risk-based approach with four levels: minimal, specific transparency, high, and unacceptable risk.

Penalties: Fines for violations of the AI Act are set as a percentage of the offending company's global annual turnover in the previous financial year or a predetermined amount, whichever is higher: €35 million or 7% for violations of the banned AI applications, €15 million or 3% for violations of the AI Act's obligations, and €7.5 million or 1.5% for the supply of incorrect information. However, the provisional agreement provides for more proportionate caps on administrative fines for SMEs and start-ups in case of infringements of the provisions of the AI Act.

Next Steps: The political agreement is now subject to formal approval by the European Parliament and the Council.
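The "whichever is higher" penalty rule lends itself to a quick arithmetic sketch. The function and tier names below are illustrative only (the tier labels and the example turnover figure are invented for this sketch); the percentages and fixed amounts are the ones quoted in the provisional agreement.

```python
def ai_act_fine(turnover_eur: float, pct: float, fixed_eur: float) -> float:
    """Applicable fine: a percentage of global annual turnover in the
    previous financial year or a fixed amount, whichever is higher."""
    return max(turnover_eur * pct, fixed_eur)

# Tiers per the provisional agreement: (percentage, fixed amount in EUR).
# Tier labels are illustrative, not official terminology.
TIERS = {
    "banned_applications":   (0.07,  35_000_000),  # prohibited AI uses
    "other_obligations":     (0.03,  15_000_000),  # other AI Act obligations
    "incorrect_information": (0.015,  7_500_000),  # supplying incorrect info
}

# Example: a hypothetical company with EUR 1 billion global annual turnover.
# Here every percentage-based amount exceeds the fixed floor.
turnover = 1_000_000_000
for tier, (pct, fixed) in TIERS.items():
    print(f"{tier}: EUR {ai_act_fine(turnover, pct, fixed):,.0f}")
```

For a smaller company (say EUR 100 million turnover), 7% is only EUR 7 million, so the EUR 35 million fixed amount would apply instead; this is why the agreement adds more proportionate caps for SMEs and start-ups.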
Once the AI Act is adopted, there will be a transitional period before the Regulation becomes applicable. To bridge this period, the Commission will launch an AI Pact, convening AI developers from Europe and around the world who commit, on a voluntary basis, to implementing key obligations of the AI Act ahead of the legal deadlines. Link to press releases: https://lnkd.in/gXvWQSfv https://lnkd.in/g9cBK7HF #ai #eu #euaiact #artificialintelligence #threats #risks #riskmanagement #aimodels #generativeai #cyberdefense #risklandscape