A very useful global AI law comparison from Oliver Patel: "As the global AI race heats up, take stock of the 3 main players. This snapshot focuses on laws which a) apply across the whole jurisdiction and b) apply to companies developing & using AI."

| Category | 🇪🇺 EU | 🇨🇳 China | 🇺🇸 U.S. |
| --- | --- | --- | --- |
| Comprehensive AI law | ✅ AI Act applies across EU | ❌ National AI law in development | ❌ No comprehensive federal AI law |
| Narrow AI laws | ✅ Digital Services Act, Product Liability Directive etc. | ✅ Deep Synthesis Regulations, Generative AI Services Measures etc. | ✅ National AI Initiative Act, Removing Barriers to American AI Leadership etc. |
| Regional or local laws | ❌ AI Act creates harmonised legal regime | ✅ Regional laws in Shenzhen & Shanghai | ✅ AI laws in California, Colorado, Utah etc. |
| Technical standards | ❌ CEN/CENELEC technical standards in development | ✅ TC260 published standard on generative AI security | ✅ NIST AI Risk Management Framework |
| Promoting AI innovation | ✅ AI Act regulatory sandboxes & SME support | ✅ Strategy to be the global AI leader by 2030 | ✅ New Executive Order strongly prioritises AI innovation |
| Trade and/or export controls | ✅ Restrictions on export of dual-use technology | ✅ Updated export control regulations restrict AI-related exports | ✅ Restrictions on exports of advanced chips & model weights |
| Prohibited AI | ✅ AI practices prohibited (e.g., emotion recognition in the workplace) | ✅ Prohibitions on which AI systems can be used in public-facing applications | ❌ Although various AI uses would be illegal, there are no explicit prohibitions |
| High-risk AI | ✅ Various AI systems classified as high-risk, including AI used in recruitment | ✅ Generative AI systems for public use considered high-risk | ❌ No specific high-risk AI systems in U.S. federal law |
| AI system approval | ✅ Third-party conformity assessment required for certain high-risk AI systems | ✅ Government approval required before public release of LLMs | ✅ FDA approval required for AI medical devices |
| Development requirements | ✅ Extensive requirements for high-risk AI system development | ✅ Detailed requirements for development of public-facing generative AI | ❌ No explicit AI development requirements in U.S. federal law |
| Transparency & disclosure | ✅ Extensive requirements in AI Act | ✅ Content labelling required for deepfakes | ✅ FTC enforces against unfair & deceptive AI use |
| Public registration of AI | ✅ Public database for high-risk AI systems | ✅ Central algorithm registry for certain AI systems | ❌ No general requirements to register AI systems |
| AI literacy requirements | ✅ AI Act requires organisations to implement AI literacy | ❌ No corporate AI literacy requirements, but schools must teach AI | ❌ No corporate AI literacy requirements |
How Global Policies Impact AI Innovations
Summary
Global policies are increasingly shaping the development and application of artificial intelligence (AI) by setting regulations that prioritize ethics, safety, and innovation. These legal frameworks address diverse issues such as data privacy, risk management, transparency, and the balance between fostering technological growth and mitigating societal harm.
- Examine regional regulations: Understand the nuances of AI policies across major markets like the EU, U.S., and China, as they vary in scope, risk classifications, and compliance requirements.
- Consider global implications: Recognize how export controls, trade restrictions, and international standards influence AI development and geopolitics, potentially impacting market access and growth strategies.
- Align with evolving governance: Proactively integrate transparency, safety measures, and ethical considerations within your AI systems to meet current and future policy expectations.
-
California is debating SB 1047, a bill that could reshape how AI is developed and regulated. If passed, it would require tech companies to conduct safety tests on powerful AI technologies before release and would make them liable for any serious harm caused by their systems, with the state able to take legal action. That prospect has sparked concern among major AI companies. Proponents believe the bill will help prevent AI-related disasters, while critics argue it could hinder innovation, particularly for startups and open-source developers.

🛡️ Safety First: SB 1047 mandates AI safety testing before companies release new technologies to prevent potential harm.
⚖️ Legal Consequences: Companies could face lawsuits if their AI systems cause significant damage, adding a new layer of liability.
💻 Tech Industry Pushback: Tech giants like Google, Meta, and OpenAI are concerned that the bill could slow AI innovation and create legal uncertainties.
🔓 Impact on Open Source: The bill might limit open-source AI development, making it harder for smaller companies to compete with tech giants.
🌐 Potential Global Effects: If passed, the bill could set a precedent for AI regulations in other states and countries, influencing the future of AI governance globally.

#AI #AIBill #TechRegulation #CaliforniaLaw #ArtificialIntelligence #OpenSource #Innovation #TechPolicy #SB1047 #AIRegulation
-
https://lnkd.in/g5ir6w57

The European Union has adopted the AI Act as its first comprehensive legal framework specifically for AI, published in the EU's Official Journal on July 12, 2024 and in force from August 1, 2024. The Act is designed to ensure the safe and trustworthy deployment of AI across various sectors, including healthcare, by setting harmonized rules for AI systems in the EU market.

1️⃣ Scope and Application: The AI Act applies to all AI system providers and deployers within the EU, including those based outside the EU if their AI outputs are used in the Union. It covers a wide range of AI systems, including general-purpose models and high-risk applications, with specific regulations for each category.

2️⃣ Risk-Based Classification: The Act classifies AI systems by risk level. High-risk AI systems, especially in healthcare, face stringent requirements and oversight, while general-purpose AI models carry additional transparency obligations. Prohibited AI practices include manipulative or deceptive uses, though certain medical applications are exempt.

3️⃣ Innovation and Compliance: To support innovation, the AI Act includes provisions like regulatory sandboxes for testing AI systems and exemptions for open-source AI models unless they pose systemic risks. High-risk AI systems must comply with both the AI Act and relevant sector-specific regulations, like the Medical Device Regulation (MDR) and the In Vitro Diagnostic Medical Device Regulation (IVDR).

4️⃣ Global Impact and Challenges: The AI Act may influence global AI regulation by setting high standards, and its implementation alongside existing sector-specific regulations could create complexities. The evolving nature of AI technology necessitates ongoing updates to the regulatory framework to balance innovation with safety and fairness.
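To make the risk-based classification in point 2 concrete, here is a minimal, illustrative triage sketch in Python. The tier names mirror the Act's broad structure (prohibited, high-risk, limited-risk transparency obligations, minimal risk), but the attribute names, rule set, and example systems are simplified assumptions for illustration, not legal analysis.

```python
from dataclasses import dataclass, field

# Illustrative only: attribute names, rules, and examples are simplified
# assumptions, not a substitute for legal analysis under the AI Act.

PROHIBITED_PRACTICES = {"social_scoring", "manipulative_techniques"}
HIGH_RISK_DOMAINS = {"medical_devices", "recruitment", "credit_scoring"}

@dataclass
class AISystem:
    name: str
    practices: set = field(default_factory=set)  # techniques the system uses
    domain: str = "general"                      # where it is deployed
    interacts_with_humans: bool = False          # e.g., chatbots

def classify(system: AISystem) -> str:
    """Map a rough system description to an AI Act-style risk tier."""
    if system.practices & PROHIBITED_PRACTICES:
        return "prohibited"
    if system.domain in HIGH_RISK_DOMAINS:
        return "high-risk"      # stringent requirements and oversight
    if system.interacts_with_humans:
        return "limited-risk"   # transparency obligations apply
    return "minimal-risk"

print(classify(AISystem("triage-assistant", domain="medical_devices")))  # high-risk
print(classify(AISystem("support-bot", interacts_with_humans=True)))     # limited-risk
```

The point of the sketch is the ordering: prohibited practices are checked first, then high-risk domains, then transparency tiers, which is how the Act's tiers nest in practice.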
-
The OECD - OCDE published the paper "Assessing potential future AI risks, benefits and policy imperatives," summarizing insights from a survey of its #artificialintelligence Expert Group and discussing the top 10 priorities in each category.

Priority risks:
- Facilitation of increasingly sophisticated malicious #cyber activity
- Manipulation, #disinformation, fraud and resulting harms to democracy and social cohesion
- Races to develop and deploy #AIsystems cause harms due to a lack of sufficient investment in AI safety and trustworthiness
- Unexpected harms result from inadequate methods to align #AI system objectives with human stakeholders' preferences and values
- Power is concentrated in a small number of companies or countries
- Minor to serious AI incidents and disasters occur in critical systems
- Invasive surveillance and #privacy infringement that undermine human rights
- Governance mechanisms and institutions unable to keep up with rapid AI evolution
- AI systems lacking sufficient explainability and interpretability erode accountability
- Exacerbated inequality or poverty within or between countries

Priority benefits:
- Accelerated scientific progress
- Better economic growth, productivity gains and living standards
- Reduced inequality and poverty
- Better approaches to address urgent and complex issues
- Better decision-making, sense-making and forecasting through improved analysis of present events and future predictions
- Improved information production and distribution, including new forms of #data access and sharing
- Better healthcare and education services
- Improved job quality, including by assigning dangerous or unfulfilling tasks to AI
- Empowered citizens, civil society, and social partners
- Improved institutional transparency and governance, instigating monitoring and evaluation

Policy priorities to help achieve desirable AI futures:
- Establish clearer rules for AI harms to remove uncertainties and promote adoption
- Consider approaches to restrict or prevent certain "red line" AI uses (uses that should not be developed)
- Require or promote the disclosure of key information about some types of AI systems
- Ensure risk management procedures are followed throughout the lifecycle of AI systems
- Mitigate competitive race dynamics in AI development and deployment that could limit fair competition and result in harms
- Invest in research on AI safety and trustworthiness approaches, including AI alignment, capability evaluations, interpretability, explainability and transparency
- Facilitate educational, retraining and reskilling opportunities to help address labor market disruptions and the growing need for AI skills
- Empower stakeholders and society to help build trust and reinforce democracy
- Mitigate excessive power concentration
- Take targeted actions to advance specific future AI benefits

Annex B contains the matrices with all identified risks, benefits and policy imperatives (not just the top 10).
-
Why did NVIDIA, the darling of the AI market, drop 2.5% today? The Biden administration dropped the mic (and some weighty export controls) on AI chips and models—arguably the most aggressive attempt yet to regulate the flow of transformational tech. Let's break it down:

🧩 A Three-Tier System of Access
🥇 Top Tier: AI flows freely for 19 nations (G7 + allies like Japan, South Korea, and Taiwan).
🥈 Middle Tier: Most of the world faces caps but can negotiate for more chips by aligning with U.S. policy interests.
🥉 Bottom Tier: China and Russia? Completely locked out—no chips, no dice, no exceptions.

🔐 Locks on AI's Crown Jewels
Firms must keep 75% of their AI computing power in the U.S. or allied nations, with no more than 7% in any other country. Data center operators like Microsoft and Google will need accreditation to trade AI tech freely, tightly aligning with U.S. security goals.

🤖 New AI Model Parameters
For the first time, restrictions extend to the very DNA of AI: model weights. Overseas data centers must implement strict safeguards to protect this intellectual property.

Officially, it's about national security: keeping AI away from adversaries like China and Russia. But unofficially? It's about locking in dominance. It's a strategic move to control the future of AI innovation and adoption.

Pushback is already fierce. Nvidia has called the rules "misguided," warning that global buyers will pivot to non-U.S. suppliers. Restricting friendly nations like Israel, Mexico, and Switzerland could also strain diplomatic ties. And let's not forget the unintended consequence: Balkanization of the AI ecosystem. Countries and companies excluded from the U.S.-led framework may double down on domestic R&D or turn to less-restricted alternatives (hello, China). That could erode America's soft power over time.

This is the tech Cold War. Chips are the new oil. Code is the new currency. If these controls stick, the big question is whether they will cement U.S. dominance—or just fuel the competition.
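For a rough sense of how the 75% / 7% allocation thresholds described above would bite, here is a back-of-the-envelope Python sketch. The allied-country set and the compute figures are invented placeholders; only the two thresholds come from the rule as summarized in this post, and real compliance would turn on the rule's actual legal definitions.

```python
# Back-of-the-envelope check of the compute-allocation thresholds described
# above (75% in U.S./allies, <=7% in any single other country). The country
# classifications and numbers below are illustrative placeholders only.

ALLIED = {"US", "JP", "KR", "TW"}  # placeholder subset of the top tier

def check_allocation(compute_by_country: dict[str, float]) -> list[str]:
    """Flag violations of the 75% / 7% allocation thresholds."""
    total = sum(compute_by_country.values())
    issues = []
    allied_share = sum(v for c, v in compute_by_country.items() if c in ALLIED) / total
    if allied_share < 0.75:
        issues.append(f"only {allied_share:.0%} in allied nations (< 75%)")
    for country, v in compute_by_country.items():
        if country not in ALLIED and v / total > 0.07:
            issues.append(f"{country} holds {v / total:.0%} (> 7% cap)")
    return issues

fleet = {"US": 60.0, "JP": 12.0, "IN": 9.0, "BR": 5.0, "SG": 14.0}  # arbitrary units
print(check_allocation(fleet) or "allocation within thresholds")
```

Running it on the placeholder fleet flags all three problems at once: the allied share (72%) falls short, and both India and Singapore exceed the single-country cap, which is exactly the kind of portfolio rebalancing the rule forces on operators.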
-
𝗠𝗬 𝗪𝗘𝗘𝗞 𝗜𝗡 𝗔𝗜 — Power, Policy & Product

This week, $𝟭𝗧+ 𝗶𝗻 𝗱𝗲𝗮𝗹𝘀 𝗮𝗻𝗱 𝗿𝘂𝗹𝗲-𝗰𝗵𝗮𝗻𝗴𝗲𝘀 reminded us that AI is now a lever of statecraft 𝘢𝘯𝘥 a consumer feature—sometimes in the same news cycle.

🏛️ 𝗡𝗮𝘁𝗶𝗼𝗻𝘀 𝗱𝗼𝘂𝗯𝗹𝗲 𝗱𝗼𝘄𝗻
𝗦𝗮𝘂𝗱𝗶 𝗔𝗿𝗮𝗯𝗶𝗮 unveiled Humain, backed by 18k NVIDIA H200s and new AMD infrastructure, tying its AI future to U.S. silicon. Meanwhile, 𝗪𝗮𝘀𝗵𝗶𝗻𝗴𝘁𝗼𝗻 scrapped its "diffusion rule" (a tiered chip-export cap), favoring exports over restrictions—expect other capitals to follow, ordering now to avoid GPU scarcity. 𝗖𝗮𝗻𝗮𝗱𝗮 created the world's first Cabinet-level AI ministry, while 𝗨.𝗦. 𝗹𝗮𝘄𝗺𝗮𝗸𝗲𝗿𝘀 floated a 10-year freeze on state AI laws. 𝙏𝙧𝙖𝙣𝙨𝙡𝙖𝙩𝙞𝙤𝙣: 𝘗𝘰𝘭𝘪𝘤𝘺 𝘪𝘴 𝘣𝘦𝘤𝘰𝘮𝘪𝘯𝘨 𝘢 𝘮𝘰𝘢𝘵.

♟️ 𝗕𝗶𝗴 𝗧𝗲𝗰𝗵 𝗿𝗲𝗮𝗹𝗹𝗼𝗰𝗮𝘁𝗲𝘀 𝗳𝗼𝗿 𝘁𝗵𝗲 𝗹𝗼𝗻𝗴 𝗴𝗮𝗺𝗲
𝗠𝗶𝗰𝗿𝗼𝘀𝗼𝗳𝘁 cut 7,000 roles to fund model training and energy contracts. 𝗚𝗼𝗼𝗴𝗹𝗲 took the opposite route—pushing AI Mode to its homepage and launching an AI Futures Fund to keep developers close. 𝙈𝙚𝙨𝙨𝙖𝙜𝙚 𝙩𝙤 𝙢𝙖𝙧𝙠𝙚𝙩𝙨: 𝘈𝘐 𝘪𝘴 𝘯𝘰𝘸 𝘢 𝘒𝘗𝘐, 𝘯𝘰𝘵 𝘢𝘯 𝘦𝘹𝘱𝘦𝘳𝘪𝘮𝘦𝘯𝘵.

🛠️ 𝗜𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲 𝗺𝗲𝗲𝘁𝘀 𝗶𝗻𝘁𝗲𝗿𝗳𝗮𝗰𝗲
Export curbs eased, but U.S. execs warned: "𝗰𝗵𝗶𝗽𝘀 𝘄𝗶𝘁𝗵𝗼𝘂𝘁 𝗴𝗿𝗶𝗱 𝘂𝗽𝗴𝗿𝗮𝗱𝗲𝘀 𝗮𝗿𝗲 𝘀𝘁𝗿𝗮𝗻𝗱𝗲𝗱 𝗮𝘀𝘀𝗲𝘁𝘀." Meanwhile, 𝗚𝗲𝗺𝗶𝗻𝗶 shipped into 250M cars and 𝗧𝗶𝗸𝗧𝗼𝗸 rolled out image-to-video generation. Yet 𝗣𝗿𝗼𝗷𝗲𝗰𝘁 𝗦𝘁𝗮𝗿𝗴𝗮𝘁𝗲 (the OpenAI/UAE megacenter) is already facing tariff overruns and risk pressures.

𝗪𝗵𝘆 𝗶𝘁 𝗺𝗮𝘁𝘁𝗲𝗿𝘀:
🔹 𝘊𝘢𝘱𝘪𝘵𝘢𝘭 𝘧𝘰𝘭𝘭𝘰𝘸𝘴 𝘱𝘰𝘭𝘪𝘤𝘺 – chip deals now align with defense pacts.
🔹 𝘋𝘪𝘴𝘵𝘳𝘪𝘣𝘶𝘵𝘪𝘰𝘯 𝘣𝘦𝘢𝘵𝘴 𝘪𝘯𝘷𝘦𝘯𝘵𝘪𝘰𝘯 – whoever controls the endpoint owns the next training dataset.
🔹 𝘊𝘰𝘮𝘱𝘶𝘵𝘦, 𝘱𝘰𝘸𝘦𝘳, 𝘵𝘢𝘭𝘦𝘯𝘵 = 𝘴𝘪𝘯𝘨𝘭𝘦 𝘴𝘵𝘳𝘢𝘵𝘦𝘨𝘺 – miss one and you fall behind.

𝗕𝗼𝘁𝘁𝗼𝗺 𝗹𝗶𝗻𝗲: The winners aren't waiting for rules or finished models. They're building full-stack AI strategies—now.

#AI #Leadership #Strategy #Infrastructure #Policy

𝗦𝗼𝘂𝗿𝗰𝗲𝘀:
• Google tests "AI Mode" on homepage – The Verge: https://lnkd.in/eVH77EGE
• Apple's Eddy Cue warns AI could displace Google – Fortune: https://lnkd.in/eJkSYt6x
• Google launches an "AI Futures Fund" for startups – TechCrunch: https://lnkd.in/e2dg9iRj
• TikTok debuts image-to-video generation – The Verge: https://lnkd.in/eEp7wWtz
• Project Stargate faces funding, tariff headwinds – LinkedIn News: https://lnkd.in/eiXjYC3R
• Data center costs up 5–15% – Bloomberg: https://lnkd.in/eb8VzTTh
• Daniel Newman (The Futurum Group) on data center funding concerns at the Milken Conference: https://lnkd.in/ecJSmGjJ
-
The U.S. has long been a global tech leader. However, countries around the world are learning from our mistakes, especially when it comes to the societal impact of unregulated technologies.

🇩🇰 Denmark has moved to amend its copyright law so individuals can copyright their faces, giving citizens more control over how their likeness is used in AI-generated content.
🇧🇷 Brazil has proposed sweeping AI legislation focused on transparency, discrimination, and accountability, warning against the kind of unmitigated platform power and surveillance creep that has taken root here.
🇪🇺 The EU AI Act is setting global precedent for risk-based regulation, while the U.S. still struggles to define what "responsible AI" even means at a policy level.

Technology without guardrails can undermine democracy, exploit identity, and outpace public understanding more quickly than lawmakers can respond. In the U.S., we've treated innovation as a zero-sum game in which a select few dictate the rate and speed of technology, defining what constitutes acceptable use for all of us along the way. However, more and more, global leaders are recognizing that regulation can be a catalyst for innovation when it protects rights, equity, and public trust.

The question is: How much harm are we willing to tolerate before we regulate in the U.S.?

#AIRegulation #TechPolicy #DigitalRights #EthicalAI #GlobalInnovation #ResponsibleTech
-
AI is rewriting the rules of innovation. But who owns the future?

The USPTO just unveiled its AI Strategy (January 2025), a blueprint for navigating AI's role in intellectual property. With patents, trademarks, and copyrights at stake, this is about more than just technology—it's about who controls the next era of innovation. Here are 5 takeaways:

1️⃣ AI Won't Own Inventions—Yet
AI can assist in innovation, but human inventors remain at the center of patent law. The USPTO is firm: AI can't be listed as an inventor, but AI-generated work may influence patentability. The legal line is being drawn.

2️⃣ Patent Reviews Are Getting Smarter
The USPTO is using AI to examine AI—leveraging machine learning for prior art searches, classification, and decision-making. This means faster approvals, better accuracy, and a more scalable patent process for the future.

3️⃣ IP Protection vs. AI Creativity: The Collision
Generative AI is churning out text, images, music, and designs—but who owns it? Trademark and copyright laws weren't built for AI-generated content. The USPTO is working to define rights and responsibilities in an AI-driven creative economy.

4️⃣ The U.S. is Playing for Global AI Leadership
AI innovation is a geopolitical race. The USPTO is working with international partners to shape global AI patent standards, ensuring U.S. leadership in AI regulation, enforcement, and competition. The message? Innovation without protection is just an idea.

5️⃣ AI for All, Not Just Tech Giants
The USPTO wants AI-driven innovation to be accessible, not just locked up by billion-dollar companies. From startups to underrepresented inventors, AI tools and patent protections need to be inclusive and equitable—or we risk leaving brilliant minds behind.

What's the bottom line? AI is not just a technology—it's an economic force. The USPTO is positioning the U.S. to lead the next chapter of AI innovation while ensuring IP laws evolve to keep up. But will regulations accelerate AI's potential—or slow it down?
-
The announcement of sweeping new U.S. tariffs represents a pivotal moment for corporate boards across America. As directors, we now face a dramatically altered global trade environment with significant implications for AI governance and corporate strategy:

🔹 New Tariffs Reshape Global Supply Chains
With rates ranging from 10% to 46% across major trading partners (China 34%, EU 20%, Japan 24%, Taiwan 32%, Korea 25%, Vietnam 46%), the economics of global AI technology procurement and deployment will fundamentally change.

🔹 AI Governance Implications
These tariffs will have a broad impact across many industries and areas of business:
- Technology procurement costs for AI infrastructure
- Data sovereignty considerations for global operations
- Supply chain resilience for critical AI components
- Competitive positioning against international players

🔹 Strategic Board Considerations
1. Reassess AI investment priorities in light of changing cost structures
2. Evaluate opportunities to localize AI capabilities that were previously outsourced
3. Review AI governance frameworks to account for new geopolitical realities
4. Prepare scenario plans for potential reciprocal actions from trading partners

In my opinion, the tariff strategy isn't going away: it appears designed for long-term revenue generation rather than just a negotiating stance, which means boards should prepare for the long haul rather than a temporary disruption. This moment calls for a proactive and meaningful reassessment of AI governance structures within a rapidly changing global context. How is your board preparing to adapt?

#BoardDirectors #AIGovernance #Tariffs #AI
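As a quick illustration of how these tariff rates would flow through an AI infrastructure budget, the sketch below applies the quoted percentages to a hypothetical procurement mix. The line items and base costs are invented for illustration; only the rates are taken from the post above.

```python
# Hypothetical procurement mix: line items and base costs are invented for
# illustration; only the tariff rates come from the post above.

TARIFF_RATES = {"China": 0.34, "EU": 0.20, "Japan": 0.24,
                "Taiwan": 0.32, "Korea": 0.25, "Vietnam": 0.46}

procurement = [  # (item, country of origin, pre-tariff cost in $M)
    ("GPU servers", "Taiwan", 40.0),
    ("Networking gear", "China", 10.0),
    ("Power/cooling", "EU", 15.0),
]

base = sum(cost for _, _, cost in procurement)
landed = sum(cost * (1 + TARIFF_RATES[origin]) for _, origin, cost in procurement)
print(f"pre-tariff: ${base:.1f}M, post-tariff: ${landed:.1f}M "
      f"(+{(landed / base - 1):.1%})")
```

Even this toy mix lands roughly 30% higher post-tariff, which is why item 1 under Strategic Board Considerations (reassessing AI investment priorities against changing cost structures) comes first.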
-
As #AI reshapes #healthcare, #legislation and #policy are shifting at both the state and #federal levels. California's proactive stance reflects a strong focus on #governance, #equity, and #transparency, while federal directives, including the recent Executive Orders, emphasize deregulation to promote AI innovation. This federal approach encourages faster development and adoption of AI tools by reducing regulatory barriers. While this fosters #innovation, it also places more responsibility on health systems and individual states to ensure ethical implementation and patient safety in the absence of stricter federal regulatory frameworks.

Here's what health systems can do now:

1. Establish Patient Consent Processes:
- Create AI-specific consent forms that clearly explain how AI tools are used in diagnosis, treatment, and administrative tasks.
- Highlight human oversight and data usage to build trust.

2. Strengthen Internal Governance Structures:
- Develop robust internal policies to manage AI implementation and align with evolving state and federal regulations.
- Invest in transparency and data governance to mitigate risks.

3. Educate and Empower Your Workforce:
- Train staff on AI ethics, capabilities, and limitations.
- Emphasize explainable AI (#XAI) and equip clinicians with the tools to explain AI-powered decisions to patients.

4. Conduct Ethical Risk Assessments:
- Regularly assess AI tools for biases, equity concerns, and patient safety risks to stay ahead of potential regulations (a minimal screening sketch follows below).

Deregulation provides exciting opportunities for AI to accelerate innovation, but it also raises the stakes for healthcare systems to ensure ethical, transparent, and patient-centered use of technology.

Summary of current federal and California legislation below - find your state's information at https://lnkd.in/gXP8iXF5

#artificialintelligence #technology #patientsafety #aitransparency #aiethics
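For item 4 in the post above, one common starting point when screening an AI tool for bias is a simple demographic-parity check on its outputs. This minimal sketch, with made-up group labels and predictions, computes the gap in positive-outcome rates between groups; a real assessment would use richer fairness metrics and clinical context.

```python
from collections import defaultdict

# Minimal demographic-parity screen for item 4 above. Group labels and
# predictions are made-up; real assessments need richer fairness metrics
# and clinical context.

def positive_rates(groups: list[str], predictions: list[int]) -> dict[str, float]:
    """Rate of positive predictions (1s) per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, p in zip(groups, predictions):
        totals[g] += 1
        positives[g] += p
    return {g: positives[g] / totals[g] for g in totals}

groups = ["A", "A", "B", "B", "B", "A", "B", "A"]
preds  = [1,   0,   1,   0,   0,   1,   0,   1]   # e.g., "flagged for follow-up"
rates = positive_rates(groups, preds)
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap: {gap:.2f}")  # large gaps warrant closer review
```

A gap near zero is not proof of fairness, but a large one (here 0.50 between the toy groups) is exactly the kind of signal that should trigger the deeper equity and safety review the post recommends.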