🚨 On November 27, 2023, the Beijing Internet Court issued a ruling in China's first copyright infringement case involving an AI-generated image, setting a major precedent for AI and copyright law.

📅 Timeline:
• Feb 2023: Plaintiff creates an AI-generated image using Stable Diffusion and shares it on Little Red Book.
• Mar 2, 2023: Defendant uses the image in an article without permission.
• Plaintiff files a lawsuit for copyright infringement.

🔑 Key Issues:
• Can AI-generated images be copyrighted?
• Who holds the copyright: the user or the AI?
The Beijing court ruled yes to copyright, affirming the user's ownership.

🧑‍⚖️ Court's Findings:
• The image reflected the plaintiff's intellectual efforts (the plaintiff provided a video and extensive prompt and parameter documentation).
• It possessed originality.
• The plaintiff was recognized as the copyright holder, not the AI or the system developer.

📜 Outcome:
• The defendant was found liable for copyright infringement and was ordered to issue a public apology and pay the plaintiff 500 yuan (~$70) for economic losses.

This ruling could influence future copyright cases for AI-generated works in China and possibly globally. The court emphasized the importance of the plaintiff's intellectual input (specific prompts, model adjustments, and selections), which was pivotal in recognizing the image's originality and the plaintiff's authorship. (A small illustrative sketch of such documentation follows below.)

🔗 https://lnkd.in/eXD_BrxF
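For creators who want to preserve the same kind of evidence the plaintiff relied on, here is a minimal sketch of recording generation inputs alongside each output. It is illustrative only: the sidecar-JSON schema and the `save_provenance_record` helper are assumptions of this example, not anything described in the ruling or built into Stable Diffusion.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def save_provenance_record(image_path: str, prompt: str, negative_prompt: str,
                           model: str, seed: int, steps: int,
                           guidance_scale: float) -> Path:
    """Write a JSON sidecar documenting the inputs used to generate an image.

    Hypothetical helper: the schema is illustrative, meant to mirror the kind
    of prompt/parameter documentation the plaintiff presented in court.
    """
    image_bytes = Path(image_path).read_bytes()
    record = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),  # ties record to the exact file
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "seed": seed,                # a fixed seed lets the image be regenerated
        "steps": steps,
        "guidance_scale": guidance_scale,
    }
    sidecar = Path(image_path).with_suffix(".provenance.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar

# Example usage with made-up values:
# save_provenance_record("portrait.png", "ultra-detailed portrait, soft light",
#                        "blurry, low quality", "stable-diffusion-v1-5",
#                        seed=42, steps=30, guidance_scale=7.5)
```

A hash plus the full parameter set makes the record verifiable after the fact, which is closer to evidence than a screenshot of a prompt.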
Understanding Legal Precedents in Artificial Intelligence
Explore top LinkedIn content from expert professionals.
Summary
Dive into the evolving intersection of artificial intelligence and law as legal systems grapple with issues like copyright in AI-generated content, regulatory frameworks, and AI's role in intellectual property and free speech. This critical legal territory is shaping the future of innovation, ethics, and fairness in AI-powered technologies.
- Evaluate intellectual input in AI creations: Courts are beginning to recognize human input, such as specific prompts or adjustments, as grounds for copyright over AI-generated works.
- Understand global AI regulations: Familiarize yourself with regional AI laws like the EU’s AI Act, which introduces risk-based classifications and transparency requirements, as these standards may influence global practices.
- Protect your AI interactions: Treat AI-generated outputs and chat transcripts as discoverable business records by implementing retention policies, training staff, and carefully managing sensitive data input.
-
https://lnkd.in/g5ir6w57 The European Union has adopted the AI Act as its first comprehensive legal framework specifically for AI, published in the EU's Official Journal on July 12, 2024, and entering into force on August 1, 2024. The Act is designed to ensure the safe and trustworthy deployment of AI across various sectors, including healthcare, by setting harmonized rules for AI systems in the EU market.

1️⃣ Scope and Application: The AI Act applies to all AI system providers and deployers within the EU, including those based outside the EU if their AI outputs are used in the Union. It covers a wide range of AI systems, including general-purpose models and high-risk applications, with specific regulations for each category.

2️⃣ Risk-Based Classification: The Act classifies AI systems by risk level. High-risk AI systems, especially in healthcare, face stringent requirements and oversight, while general-purpose AI models carry additional transparency obligations. Prohibited AI practices include manipulative or deceptive uses, though certain medical applications are exempt. (A simplified triage sketch follows this post.)

3️⃣ Innovation and Compliance: To support innovation, the AI Act includes provisions such as regulatory sandboxes for testing AI systems and exemptions for open-source AI models unless they pose systemic risks. High-risk AI systems must comply with both the AI Act and relevant sector-specific regulations, such as the Medical Device Regulation (MDR) and the In Vitro Diagnostic Medical Device Regulation (IVDR).

4️⃣ Global Impact and Challenges: The AI Act may influence global AI regulation by setting high standards, and its implementation alongside existing sector-specific regulations could create complexities. The evolving nature of AI technology necessitates ongoing updates to the regulatory framework to balance innovation with safety and fairness.
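To make the risk-based logic in point 2️⃣ concrete, here is a minimal sketch of how an organization might triage its own AI use cases into the Act's broad tiers. The tier names follow the Act's structure, but the keyword lists and the `classify_system` function are simplified assumptions for illustration, not the Act's actual legal test.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"      # e.g., manipulative or deceptive practices
    HIGH_RISK = "high_risk"        # e.g., medical devices, recruitment
    LIMITED_RISK = "limited_risk"  # transparency obligations (e.g., chatbots)
    MINIMAL_RISK = "minimal_risk"

# Illustrative keyword sets only -- the Act enumerates use cases in detail
# (e.g., Annex III); do not rely on this mapping for compliance.
PROHIBITED_USES = {"subliminal_manipulation", "social_scoring_by_public_authorities"}
HIGH_RISK_USES = {"medical_diagnosis", "recruitment_screening", "credit_scoring"}
TRANSPARENCY_USES = {"chatbot", "deepfake_generation"}

def classify_system(use_case: str) -> RiskTier:
    """Hypothetical first-pass triage of a use case into an AI Act tier."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH_RISK
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED_RISK
    return RiskTier.MINIMAL_RISK

assert classify_system("medical_diagnosis") is RiskTier.HIGH_RISK
assert classify_system("chatbot") is RiskTier.LIMITED_RISK
```

In practice, classification turns on the Act's detailed annexes and legal analysis; a lookup like this can only flag candidates for expert review.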
-
Should AIs have the right to free speech? Buckle up - we're diving into one of the most controversial debates in tech right now. Not a theoretical late-night-over-beers discussion, but one that will reshape everything from AI regulation to consciousness itself!

The First Amendment right to free speech is a cornerstone of America. And we've now given the keys of the human OS, language, to AIs. Free speech is a big part of today's tech and politics, from Elon's Twitter takeover to Meta's $25M settlement with Trump for suspending his accounts. But AI just added an explosive twist, which will affect everything from algorithms to liabilities and ad marketing.

Since 1791, the First Amendment has protected free speech under one crucial assumption: that speakers *understand* what they're saying. But what happens when the "speaker" is an AI that doesn't actually understand anything?

Here's the problem: LLMs like ChatGPT don't "think" or "know" what they're saying. They generate words through probability, not intent or understanding. Yet their 'content' reaches millions daily.

A fascinating legal analysis by Mackenzie A. and Max Levy in the Stanford Law Review exposes this gray area:

1. Existing precedents don't clearly address AI speech
- Courts have ruled corporations have First Amendment rights (Citizens United v. FEC, 2010), but these involve human agency
- The Supreme Court has protected algorithmic outputs (Sorrell v. IMS Health, 2011) as speech
- However, in Denver Area Educational Telecommunications Consortium, Inc. v. FCC (1996), the Court suggested fully automated systems lack constitutional rights

2. Potential consequences of granting AI free speech rights
- Tech companies could avoid liability for misinformation, deepfakes, and manipulation
- Platforms might claim immunity under Section 230, equating AI content with user speech
- This could set a precedent where AI, not humans, shields false narratives

3. Implications of denying AI speech protections
- Could enable broad censorship of search results, news summaries, and more
- May stifle AI-driven innovations in journalism, education, and legal analysis
- Risks creating a chilling effect where human speech using AI tools faces scrutiny

Having worked extensively with AI systems, I've seen how they can generate brilliant insights one moment and concerning misinformation the next, often in ways we can't predict. While the First Amendment has evolved with technology, AI presents an unprecedented challenge: a "speaker" that can influence millions without understanding its own words.

Ultimately, our First Amendment right boils down to intent, understanding, and agency. The big question becomes: how much of this can and should we ascribe to AI, now and in the future?

What do you think? Should AI-generated content have the same protections as human speech? Especially curious to hear from those working in tech or law! 💭

#entrepreneurship #AI #technology
-
If you want to understand the state of the law underlying AI developers’ claims of fair use in the pending copyright cases involving generative AI, this new article by Prof. Pam Samuelson is required reading. It’s a soup-to-nuts analysis of existing case law involving three successive waves of new and disruptive technological uses of in-copyright works. Anticipating that the fulcrum in these cases is likely to be the fourth factor of the fair use analysis — market effects — she examines how past cases have defined the standard for proof of market harm; why courts have rejected the circular argument that willingness to license negates fair use; and how courts account for the public benefits of challenged uses in order to vindicate the constitutional purpose of copyright. https://lnkd.in/e-RZFBrd
-
Today, I want to discuss a topic that's rapidly reshaping the landscape of both technology and intellectual property (IP) law: Generative AI. I'm going to take a firm stance here: we must tread carefully to ensure that innovation thrives while creators are fairly rewarded. In this era of AI-generated art, music, literature, and video, failing to address this issue could derail the essence of human creativity and innovation.

The IP Dilemma: A Gray Area
The burning question is: Who owns the rights to this generated content? As it stands, IP law is ill-equipped to handle this nuance. Traditional IP law was designed under the assumption that the creator is a human. AI-generated works fall into a legal gray area. The article below from Benedict Evans presents why it's so complicated.

Evidence and Research: A Wake-Up Call
1. Stanford's Study on AI & Law: A Stanford study revealed that only a small percentage of legal professionals believe that existing IP law can adequately address the concerns of AI-generated content.
2. European Parliament's Report: Their recent report on IP and AI stressed the need for a comprehensive approach that balances innovation and creators' rights.
3. AI-Generated Art Sales: In 2018, an AI-generated artwork sold for $432,500 at Christie's auction house. This sale made it clear that AI-generated works have significant commercial value, yet the IP concerns remain unaddressed.

My Stance: AI Is a Tool for Creators, Not a Creator in Itself
I firmly believe that generative AI should be treated as a 'tool' rather than a 'creator.' In the same way that Adobe Photoshop doesn't own the rights to the art it helps produce, the AI should not hold rights to the content it generates. Instead, the rights should go to the human operator who guided the AI's creative process. AND if that human operator simply copies other people's work, then we have legal precedent for how to deal with that.

Here's why:
- Human Ingenuity: Even the most advanced AI systems require a human touch. Parameters must be set, input data must be selected, and the results must be curated.
- Precedents in Creative Tools: Tools like the piano or a paintbrush don't own the rights to the art they help create. This notion should extend to AI systems as well.
- Encouragement of Innovation: Defining AI as a tool would mean that innovation is rewarded, not stifled. It would encourage more creators to utilize AI for advanced applications, enriching the creative landscape.

Why now? We must revisit and adapt IP laws to handle the issues that generative AI brings to the table. This is not just a legal challenge, but also an ethical one. We're at a pivotal juncture where law and technology intersect, and the decisions we make now will dictate the trajectory of human creative endeavors for years to come.

Would love to hear your thoughts on this 💬

#GenerativeAI #Innovation #creativity #HumanIngenuity
-
You're reviewing a contract. You pop open your favorite AI chat and type: "What's a more aggressive indemnity clause here?" A few follow-ups, some back-and-forth, and you've got a solid draft.

Fast-forward a few months. That contract is now in dispute. And guess what? Opposing counsel wants your AI chat history.

Scary? It should be. In Tremblay v. OpenAI, a federal court confirmed what many feared: AI prompts and outputs can be discoverable. Courts are starting to treat AI transcripts just like emails or memos, i.e. business records subject to eDiscovery.

And GenAI isn't like traditional legal research tools such as Lexis or Westlaw. These chats often contain:
- Client-specific facts
- Draft language
- Internal legal reasoning
...and are likely not formal work product.

Here's what legal teams should do now (see the sketch below for a starting point on the first item):
1/ Create a GenAI retention policy, just like you have for emails
2/ Train staff to treat chats like email: intentional, professional, retrievable
3/ Avoid "scratchpad" use for sensitive or strategic work

What do you folks think?
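As a starting point for item 1/, here is a minimal sketch of archiving a GenAI chat the way an email might be retained for eDiscovery: matter tagging, a retention clock, and a retrievable JSON record. The `ChatRecord` schema and `archive_transcript` helper are assumptions of this example, not features of any actual eDiscovery or GenAI product.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timedelta, timezone
from pathlib import Path

@dataclass
class ChatRecord:
    """Hypothetical retention record for a single GenAI session."""
    matter_id: str           # client/matter the chat relates to
    user: str
    started_at: str          # ISO 8601, UTC
    transcript: list[dict]   # [{"role": "user" or "assistant", "content": ...}]
    retention_until: str     # date after which the record may be disposed of

ARCHIVE_DIR = Path("genai_archive")  # illustrative location

def archive_transcript(matter_id: str, user: str, transcript: list[dict],
                       retention_years: int = 7) -> Path:
    """Serialize a chat session to the archive with a retention deadline."""
    now = datetime.now(timezone.utc)
    record = ChatRecord(
        matter_id=matter_id,
        user=user,
        started_at=now.isoformat(),
        transcript=transcript,
        retention_until=(now + timedelta(days=365 * retention_years)).isoformat(),
    )
    ARCHIVE_DIR.mkdir(exist_ok=True)
    out = ARCHIVE_DIR / f"{matter_id}_{now.strftime('%Y%m%dT%H%M%S')}.json"
    out.write_text(json.dumps(asdict(record), indent=2))
    return out

# Example usage with made-up values:
# archive_transcript("M-2025-001", "jdoe",
#                    [{"role": "user", "content": "Draft an indemnity clause..."}])
```

The point of the retention clock is symmetry with email policy: records are kept long enough to satisfy holds, then disposed of deliberately rather than lingering by default.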
-
🧠 "Data systems are designed to remember data, not to forget data." – Debbie Reynolds, The Data Diva

🚨 I just published a new essay in the Data Privacy Advantage newsletter called:
🧬 An AI Data Privacy Cautionary Tale: Court-Ordered Data Retention Meets Privacy 🧬

🧠 This essay explores the recent court order from the United States District Court for the Southern District of New York in the New York Times v. OpenAI case. The court ordered OpenAI to preserve all user interactions, including chat logs, prompts, API traffic, and generated outputs, with no deletion allowed, not even at the user's request.

💥 That means:
💥 "Delete" no longer means delete
💥 API business users are not exempt
💥 Personal, confidential, or proprietary data entered into ChatGPT could now be locked in indefinitely
💥 Even if you never knew your data would be involved in litigation, it may now be preserved beyond your control

🏛️ This order effectively overrides deletion rights under global privacy laws such as the GDPR and CCPA, highlighting how litigation can erode deletion rights and intensify the risks associated with using generative AI tools.

🔍 In the essay, I cover:
✅ What the court order says and why it matters
✅ Why enterprise API users are directly affected
✅ How AI models retain data behind the scenes
✅ The conflict between privacy laws and legal hold obligations
✅ What businesses should do now to avoid exposure

💡 My recommendations include (a minimal input-curation sketch follows below):
• Train employees on what not to submit to AI
• Curate all data inputs with legal oversight
• Review vendor contracts for retention language
• Establish internal policies for AI usage and audits
• Require transparency from AI providers

🏢 If your organization is using generative AI, even in limited ways, now is the time to assess your data discipline. AI inputs are no longer just temporary interactions; they are potentially discoverable records. And now, courts are treating them that way.

📖 Read the full essay to understand why AI data privacy cannot be an afterthought.

#Privacy #Cybersecurity #DataDiva #DataPrivacy #AI #LegalRisk #LitigationHold #PrivacyByDesign #TheDataDiva #OpenAI #ChatGPT #Governance #Compliance #NYTvOpenAI #GenerativeAI #DataGovernance #PrivacyMatters
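One way to act on "curate all data inputs with legal oversight" is a pre-submission filter that redacts obviously sensitive strings before a prompt ever leaves the organization. The sketch below is deliberately simple and hypothetical; the regex patterns and the `redact_prompt` helper are illustrative assumptions and would need security and legal review before real use.

```python
import re

# Illustrative patterns only -- a real deployment would use vetted PII/PHI
# detectors, not a handful of regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace likely-sensitive substrings before a prompt is sent to an AI tool.

    Returns the redacted prompt plus the names of the patterns that fired,
    which can feed an internal audit log.
    """
    hits = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt, hits

redacted, hits = redact_prompt("Client SSN is 123-45-6789, email a@b.com")
assert hits == ["ssn", "email"]
```

Returning the hit list alongside the redacted text matters: if "delete" no longer means delete downstream, the audit trail of what was blocked becomes part of your data discipline.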
-
A very useful Global AI Law comparison from Oliver Patel: "As the global AI race heats up, take stock of the 3 main players. This snapshot focuses on laws which a) apply across the whole jurisdiction and b) apply to companies developing & using AI.

Comprehensive AI law
🇪🇺 ✅ AI Act applies across EU
🇨🇳 ❌ National AI law in development
🇺🇸 ❌ No comprehensive federal AI law

Narrow AI laws
🇪🇺 ✅ Digital Services Act, Product Liability Directive etc.
🇨🇳 ✅ Deep Synthesis Regulations, Generative AI Services Measures etc.
🇺🇸 ✅ National AI Initiative Act, Removing Barriers to American AI Leadership etc.

Regional or local laws
🇪🇺 ❌ AI Act creates harmonised legal regime
🇨🇳 ✅ Regional laws in Shenzhen & Shanghai
🇺🇸 ✅ AI laws in California, Colorado, Utah etc.

Technical standards
🇪🇺 ❌ CEN/CENELEC technical standards in development
🇨🇳 ✅ TC260 published standard on generative AI security
🇺🇸 ✅ NIST AI Risk Management Framework

Promoting AI innovation
🇪🇺 ✅ AI Act regulatory sandboxes & SME support
🇨🇳 ✅ Strategy to be the global AI leader by 2030
🇺🇸 ✅ New Executive Order strongly prioritises AI innovation

Trade and/or export controls
🇪🇺 ✅ Restrictions on export of dual use technology
🇨🇳 ✅ Updated export control regulations restrict AI related exports
🇺🇸 ✅ Restrictions on exports of advanced chips & model weights

Prohibited AI
🇪🇺 ✅ AI practices prohibited (e.g., emotional recognition in the workplace)
🇨🇳 ✅ Prohibitions on which AI systems can be used in public facing applications
🇺🇸 ❌ Although various AI uses would be illegal, there are no explicit prohibitions

High-risk AI
🇪🇺 ✅ Various AI systems classified as high-risk, including AI used in recruitment
🇨🇳 ✅ Generative AI systems for public use considered high-risk
🇺🇸 ❌ No specific high-risk AI systems in U.S. federal law

AI system approval
🇪🇺 ✅ 3rd party conformity assessment required for certain high-risk AI systems
🇨🇳 ✅ Government approval required before public release of LLMs
🇺🇸 ✅ FDA approval required for AI medical devices

Development requirements
🇪🇺 ✅ Extensive requirements for high-risk AI system development
🇨🇳 ✅ Detailed requirements for development of public facing generative AI
🇺🇸 ❌ No explicit AI development requirements in U.S. federal law

Transparency & disclosure
🇪🇺 ✅ Extensive requirements in AI Act
🇨🇳 ✅ Content labelling required for deepfakes
🇺🇸 ✅ FTC enforces against unfair & deceptive AI use

Public registration of AI
🇪🇺 ✅ Public database for high-risk AI systems
🇨🇳 ✅ Central algorithm registry for certain AI systems
🇺🇸 ❌ No general requirements to register AI systems

AI literacy requirements
🇪🇺 ✅ AI Act requires organisations to implement AI literacy
🇨🇳 ❌ No corporate AI literacy requirements, but schools must teach AI
🇺🇸 ❌ No corporate AI literacy requirements"
-
The Future of Privacy Forum (FPF) analyzes trends in U.S. state legislation on AI regulation in areas impacting individuals' livelihoods, such as healthcare, employment, and financial services.

🔎 Consequential Decisions: Many state laws target AI systems used in "consequential decisions" that affect essential life opportunities, in sectors such as education, housing, and healthcare.
🔎 Algorithmic Discrimination: Legislators are concerned about AI systems leading to discrimination. Some proposals outright ban discriminatory AI use, while others impose a duty of care to prevent such bias.
🔎 Developer and Deployer Roles: Legislation often assigns different obligations to AI developers (those who create AI systems) and deployers (those who use them). Both may be required to ensure transparency and conduct risk assessments.
🔎 Consumer Rights: Commonly proposed rights for consumers include the right to notice, explanation, correction of errors, and appeals against automated decisions.
🔎 Technology-Specific Regulations: Some laws focus on specific AI technologies like generative AI and foundation models, requiring transparency and safety measures, including AI-generated content labeling.

This report can help companies identify which obligations are emerging as trends and use them to forecast future requirements (a minimal tracking sketch follows this post), e.g.:

🔹 Obligations 🔹
👉 Transparency: Developers and deployers are often required to provide clear explanations about how AI systems work.
👉 Assessments: Risk assessments and audits are used to evaluate potential AI biases and discrimination risks.
👉 Governance Programs: AI governance programs are encouraged to oversee AI systems, ensuring they meet legal and ethical standards.

#airegulation #responsibleai Future of Privacy Forum, Ryan Carrier, FHCA, Khoa Lam, Jeffery Recker, Jovana Davidovic, Borhane Blili-Hamelin, PhD, Dr. Cari Miller, Heidi Saas, Patrick Sullivan
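As a sketch of how the obligations above might be tracked internally, here is a minimal, hypothetical inventory record per AI system. The `AISystemRecord` fields mirror the report's themes (developer/deployer roles, assessments, consumer notice and appeals), but the schema itself is an assumption of this example, not an FPF or statutory template.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Hypothetical inventory entry for an AI system making consequential decisions."""
    name: str
    role: str                      # "developer" or "deployer" -- obligations differ
    decision_domain: str           # e.g., "employment", "housing", "healthcare"
    risk_assessment_done: bool = False
    bias_audit_done: bool = False
    consumer_notice_provided: bool = False
    appeal_channel: str = ""       # how a consumer contests an automated decision
    open_obligations: list[str] = field(default_factory=list)

    def outstanding(self) -> list[str]:
        """List governance steps still missing for this system."""
        gaps = list(self.open_obligations)
        if not self.risk_assessment_done:
            gaps.append("complete risk assessment")
        if not self.bias_audit_done:
            gaps.append("run bias/discrimination audit")
        if not self.consumer_notice_provided:
            gaps.append("provide consumer notice and explanation")
        return gaps

# Example usage with a made-up system:
screener = AISystemRecord("resume-screener-v2", role="deployer",
                          decision_domain="employment")
print(screener.outstanding())
```

Even a simple inventory like this makes the developer/deployer distinction operational: the same system can carry different open obligations depending on which role your organization plays.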
-
📢 European Law Institute Council Approves Interim Report on EU Consumer Law and Automated Decisionmaking (ADM)

A timely report, given the rise of chatbots and the use of AI systems for contracting . . .

From the European Law Institute (ELI):
"The pervasive use of algorithms and AI learning systems in transactional contexts raises the pressing need to address the legal challenges of autonomous contracts and reconsider the basis of contract law in AI-dominated scenarios. . . ."

"The ELI Report provides eight general principles that should guide the adaptation of existing EU consumer law to ADM. It also reviews the ADM-readiness of key EU consumer law directives for the conclusion of contracts through the use of AI by consumers and/or traders, with recommendations for clarifications and additions to improve their ADM-readiness."

Christoph Busch, Marie Jull Sørensen, Teresa Rodríguez de las Heras Ballell, Dariusz Szostek, Christian Twigg-Flesner, Pascal Pichonnaz, Anne Birgitte Gammeljord, Geoffrey Vos, European Law Institute (ELI), The American Law Institute

https://lnkd.in/eJsRKfsH