The Oregon Department of Justice released new guidance on legal requirements when using AI. Here are the key privacy considerations, and four steps for companies to stay in line with Oregon privacy law. ⤵️

The guidance details the AG's views on how uses of personal data in connection with AI, or to train AI models, trigger obligations under the Oregon Consumer Privacy Act, including:

🔸Privacy Notices. Companies must disclose in their privacy notices when personal data is used to train AI systems.
🔸Consent. Updated privacy policies disclosing uses of personal data for AI training cannot justify the use of previously collected personal data for AI training; affirmative consent must be obtained.
🔸Revoking Consent. Where consent is provided to use personal data for AI training, there must be a way to withdraw consent, and processing of that personal data must end within 15 days.
🔸Sensitive Data. Explicit consent must be obtained before sensitive personal data is used to develop or train AI systems.
🔸Training Datasets. Developers purchasing or using third-party personal data sets for model training may be controllers of that data, with all the obligations controllers have under the law.
🔸Opt-Out Rights. Consumers have the right to opt out of AI uses for certain decisions, like housing, education, or lending.
🔸Deletion. Consumer #PersonalData deletion rights must be respected when using AI models.
🔸Assessments. Using personal data to train AI models, or processing it in connection with AI models for profiling or other activities with a heightened risk of harm, triggers data protection assessment requirements.

The guidance also highlights a number of scenarios where sales practices using AI, or misrepresentations caused by AI use, can violate the Unlawful Trade Practices Act.

Here are a few steps to help stay on top of #privacy requirements under Oregon law and this guidance (a consent-handling sketch follows this post):

1️⃣ Confirm whether your organization or its vendors train #ArtificialIntelligence solutions on personal data.
2️⃣ Validate that your organization's privacy notice discloses AI training practices.
3️⃣ Make sure your organization's individual rights processes are scoped to cover personal data used in AI training.
4️⃣ Where required, set protocols to conduct and document data protection assessments that address the requirements of Oregon and other states' laws and are maintained in a format that can be provided to regulators.
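To make the "Revoking Consent" point concrete, here is a minimal sketch of a consent registry that records affirmative, purpose-specific consent and enforces the 15-day window for ending processing after withdrawal. `ConsentRegistry`, `ConsentRecord`, and the field names are hypothetical illustrations, not anything defined in the guidance.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Outer limit for ending processing after consent is withdrawn,
# per the guidance summarized above.
WITHDRAWAL_DEADLINE = timedelta(days=15)

@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str                       # e.g. "ai_model_training"
    granted_at: datetime
    withdrawn_at: datetime | None = None

class ConsentRegistry:
    def __init__(self) -> None:
        self._records: dict[tuple[str, str], ConsentRecord] = {}

    def grant(self, subject_id: str, purpose: str) -> None:
        # Affirmative, purpose-specific consent; a privacy-policy update
        # alone would not create a record here.
        self._records[(subject_id, purpose)] = ConsentRecord(
            subject_id, purpose, granted_at=datetime.now(timezone.utc))

    def withdraw(self, subject_id: str, purpose: str) -> None:
        record = self._records.get((subject_id, purpose))
        if record is not None:
            record.withdrawn_at = datetime.now(timezone.utc)

    def may_process(self, subject_id: str, purpose: str) -> bool:
        record = self._records.get((subject_id, purpose))
        if record is None:
            return False      # consent was never granted: no processing
        if record.withdrawn_at is None:
            return True       # consent is active
        # Withdrawal received: 15 days is the outer limit for winding
        # down; stopping immediately is the safer practice.
        return datetime.now(timezone.utc) < record.withdrawn_at + WITHDRAWAL_DEADLINE
```

In practice this would live in a database with an audit trail, and the `may_process` check would gate every training-pipeline run.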
Importance of Consent in AI Training
Explore top LinkedIn content from expert professionals.
Summary
Consent in AI training matters because it ensures ethical data use, protects privacy, and builds trust in AI systems. Consent means obtaining clear, explicit permission from individuals before their personal information is used to train artificial intelligence models.
- Obtain explicit permissions: Always seek affirmative consent before using personal data for AI training and clearly outline how the data will be used.
- Respect opt-out and withdrawal rights: Provide individuals with the ability to opt out or revoke their consent, and ensure their data is deleted upon request.
- Prioritize data transparency: Maintain clear documentation and communication about data sources and how personal data is used in AI systems to promote trust and compliance (a minimal consent-gating sketch follows this list).
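These takeaways reduce to a simple rule: personal data enters a training set only behind an explicit consent check, and leaves it on request. A minimal sketch, reusing the hypothetical `ConsentRegistry` from the Oregon guidance post above; the record layout and function names are assumptions for illustration.

```python
# Hypothetical gate: personal data enters the training set only with an
# active, purpose-specific opt-in on record in the ConsentRegistry.
def build_training_set(records, registry):
    return [
        r for r in records
        if registry.may_process(r["subject_id"], "ai_model_training")
    ]

def delete_subject_data(records, subject_id):
    # Deletion on request: remove the subject's records from stored data.
    # Unlearning obligations for models already trained on that data are
    # a separate, harder problem this sketch does not address.
    return [r for r in records if r["subject_id"] != subject_id]
```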
How AI Companies Can Win—Without Stealing Training Data 🤖⚖️

Let’s face it: the fastest path to powerful AI has often looked like this:
1. Scrape the internet
2. Train on everyone’s work—without consent
3. Profit

But that model is already facing lawsuits, creator backlash, and growing public distrust. Here’s the good news: there’s a smarter, more sustainable way to build competitive AI—without stealing. Here’s how:

✅ Pay for Data (Fairly): Compensate writers, artists, and developers for licensing their work. Create opt-in platforms where people want to share their data in exchange for real value.

✅ Partner with Institutions: Work with universities, research labs, and creators directly to access curated, high-quality, domain-specific datasets.

✅ Use Synthetic Data Wisely: Generative models can train each other—by simulating conversations, edge cases, or even entire environments. Less human exploitation, more innovation.

✅ Reward Community Contributions: Build ecosystems where users voluntarily contribute data in return for perks, credits, or co-ownership. Think: GitHub meets Patreon meets AI.

✅ Invest in Transparency: Make it easy to audit where training data comes from. If your AI is built ethically, show it off.

Ethical AI isn’t a PR stunt—it’s a competitive advantage. The next generation of leaders won’t just be the most powerful… they’ll be the most trusted.

What would make you feel good about contributing to an AI training dataset?

#ResponsibleAI #EthicalAI #DataEthics #GenerativeAI #CreatorEconomy #AITraining #Transparency #AIForGood #InnovationWithIntegrity
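The "pay for data" and transparency points are concrete enough to sketch. Below is a minimal opt-in contribution ledger: every training item carries its license and an explicit opt-in, so batches are auditable and payouts computable. `Contribution`, its fields, and `auditable_batch` are hypothetical names, not an existing platform's API.

```python
from dataclasses import dataclass

# Hypothetical opt-in contribution ledger: each training item records
# its license terms and the contributor's explicit opt-in, so provenance
# can be audited and contributors compensated per training run.
@dataclass(frozen=True)
class Contribution:
    contributor_id: str
    item_id: str
    license: str           # e.g. "CC-BY-4.0" or a paid license ID
    opted_in: bool
    rate_per_use: float    # compensation owed per training run

def auditable_batch(contributions):
    """Admit only explicitly opted-in, licensed items; return the usable
    batch plus the total payout owed for this training run."""
    usable = [c for c in contributions if c.opted_in and c.license]
    payout = sum(c.rate_per_use for c in usable)
    return usable, payout
```

Making the ledger the only path into a training batch is what turns "transparency" from a promise into something auditable.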
-
During seed round due diligence, we found a red flag: the startup didn’t have rights to the dataset used to train its LLM and hadn’t set up a privacy policy for data collection or use.

AI startups need to establish certain legal and operational frameworks to ensure they have and maintain the rights to the data they collect and use, especially for training their AI models. Here are the key elements for compliance:

1. Privacy Policy: A comprehensive privacy policy that clearly outlines data collection, usage, retention, and sharing practices.
2. Terms of Service/User Agreement: Agreements that users accept, which should include clauses about data ownership, licensing, and how the data will be used.
3. Data Collection Consents: Explicit consents from users for the collection and use of their data, often obtained through clear opt-in mechanisms.
4. Data Processing Agreements (DPAs): If using third-party services or processors, DPAs are necessary to define the responsibilities and scope of data usage.
5. Intellectual Property Rights: Ensure that the startup has clear intellectual property rights over the collected data, through licenses, user agreements, or other legal means.
6. Compliance with Regulations: Adherence to relevant data protection regulations such as GDPR, CCPA, or HIPAA, which may dictate specific requirements for data rights and user privacy.
7. Data Anonymization and Security: Implementing data anonymization where necessary and ensuring robust security measures to protect data integrity and confidentiality.
8. Record Keeping: Maintain detailed records of data consents, privacy notices, and data usage to demonstrate compliance with laws and regulations (a minimal consent-ledger sketch follows this post).
9. Data Audits: Regular audits to ensure that data collection and usage align with stated policies and legal obligations.
10. Employee Training and Policies: Training for employees on data protection best practices and establishing internal policies for handling data.

By having these elements in place, AI startups can help ensure they have the legal rights to use the data for training their AI models and can mitigate risks associated with data privacy and ownership.

#startupfounder #aistartup #dataownership
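Item 8 is the element most often missing in diligence and the cheapest to fix early. Here is a minimal sketch of an append-only consent ledger; the file format, field names, and `log_consent` helper are illustrative assumptions, not a standard.

```python
import json
from datetime import datetime, timezone

# Hypothetical consent ledger: an append-only JSON Lines file recording
# what each user agreed to, and under which policy version, so rights to
# training data can be demonstrated during diligence.
def log_consent(path, user_id, purposes, policy_version):
    entry = {
        "user_id": user_id,
        "purposes": purposes,              # e.g. ["model_training"]
        "policy_version": policy_version,  # ties consent to the exact text shown
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")  # append-only: never rewritten
    return entry
```

Tying each entry to a `policy_version` matters: it shows which privacy-policy text the user actually agreed to, which is exactly the distinction the Oregon guidance draws when it says an updated policy cannot justify past collection.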
-
The Risks of Misinformation in AI: Lessons from Healthcare for AdTech and MarTech.

A recent study from NYU highlights a critical vulnerability in large language models (LLMs): if even 0.001% of training data is poisoned with misinformation, the integrity of the entire model can be compromised. While the study focuses on biomedical LLMs, where the stakes are life and death, it serves as a stark warning for all industries leveraging AI.

In healthcare, unconsented, low-quality, or maliciously injected data can lead to disastrous outcomes, compromising patient safety and eroding trust. The research underscores the importance of data provenance, transparency, and stringent safeguards in training AI models.

This cautionary tale is highly relevant for the AdTech and MarTech industries. Just as healthcare professionals must ensure that AI tools don’t hallucinate medical advice, marketing professionals must ensure AI-driven decisions don’t propagate biased or tainted insights. Poorly sourced or unconsented data can erode consumer trust, violate privacy regulations, and undermine brand integrity, and these issues are only becoming more critical as privacy regulations tighten and third-party data becomes obsolete.

The healthcare sector’s rigorous focus on accuracy and data compliance could be a leading indicator for the future of AdTech and MarTech. Consider these parallels:

Consent is King: In healthcare, patient data must be protected and shared transparently. Similarly, marketing leaders must adopt robust consent frameworks to build consumer trust.

Provenance Matters: Just as healthcare LLMs require clean, verifiable training data, marketing models must be free of unverified or compromised data sources (a provenance-check sketch follows this post).

Risk Amplification: Misinformation in an LLM doesn’t just stop at flawed outputs; it perpetuates systemic failures across its entire ecosystem, whether it’s diagnosing illnesses or driving ad spend.

At the core of this challenge lies a fundamental truth: privacy, accuracy, and performance must coexist in any AI-powered system. As industries like healthcare push the boundaries of safeguarding sensitive data, it’s a call to action for marketing and advertising to take the same approach: building models that not only drive results but also uphold ethical standards.

The path forward isn’t going to be easy, but it’s clear: data transparency, quality assurance (“PROOF”), and compliance are no longer optional. They’re the foundation for innovation, trust, and long-term success.

#IABALM2025 #privacy #consent #proofofprovenance #advertising #marketing Precise.ai Qonsent https://lnkd.in/egjwwps2
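The 0.001% figure is worth making concrete: in a corpus of 100 million documents, that is on the order of only 1,000 poisoned items. Below is one hedged sketch of the kind of provenance gate the post argues for; `TRUSTED_SOURCES`, the manifest format, and the record fields are assumptions for illustration, not the study's method.

```python
import hashlib

# Hypothetical provenance gate: admit a record into the training corpus
# only if its source is on a vetted allowlist AND its content hash matches
# the manifest captured at ingestion. This guards against post-hoc
# tampering, not against misinformation already present in a trusted source.
TRUSTED_SOURCES = {"peer_reviewed_corpus", "licensed_publisher_feed"}

def verify_record(record, manifest):
    digest = hashlib.sha256(record["text"].encode("utf-8")).hexdigest()
    return (
        record["source"] in TRUSTED_SOURCES
        and manifest.get(record["id"]) == digest
    )

def clean_corpus(records, manifest):
    # Drop anything unverifiable rather than flagging it.
    return [r for r in records if verify_record(r, manifest)]
```

A hash manifest only proves the data is what you ingested; vetting what you ingest in the first place remains the harder, human part of provenance.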
-
21/86: 𝗜𝘀 𝗬𝗼𝘂𝗿 𝗔𝗜 𝗠𝗼𝗱𝗲𝗹 𝗧𝗿𝗮𝗶𝗻𝗶𝗻𝗴 𝗼𝗻 𝗣𝗲𝗿𝘀𝗼𝗻𝗮𝗹 𝗗𝗮𝘁𝗮?

Your AI needs data, but is it using personal data responsibly?

🛑 Threat Alert: If your AI model trains on data linked to individuals, you risk: privacy violations, legal & regulatory consequences, and erosion of digital trust.

🔍 𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻𝘀 𝘁𝗼 𝗔𝘀𝗸 𝗕𝗲𝗳𝗼𝗿𝗲 𝗨𝘀𝗶𝗻𝗴 𝗗𝗮𝘁𝗮 𝗶𝗻 𝗔𝗜 𝗧𝗿𝗮𝗶𝗻𝗶𝗻𝗴
📌 Is personal data necessary? If not essential, don't use it.
📌 Are unique identifiers included? Consider pseudonymization or anonymization.
📌 Do you have a legal basis? If the model uses PII, document your justification.
📌 Are privacy risks documented & mitigated? Ensure privacy impact assessments (PIAs) are conducted.

✅ What You Should Do
➡️ Minimize PII usage – Only use personal data when absolutely necessary.
➡️ Apply de-identification techniques – Use pseudonymization, anonymization, or differential privacy where possible (a pseudonymization sketch follows this post).
➡️ Document & justify your approach – Keep records of privacy safeguards & compliance measures.
➡️ Align with legal & ethical AI principles – Ensure your model respects privacy, fairness, and transparency.

Privacy is not a luxury, it’s a necessity for AI to be trusted. Protecting personal data strengthens compliance, ethics, and public trust in AI systems.

💬 How do you ensure AI models respect privacy? Share your thoughts below! 👇
🔗 Follow PALS Hub and Amaka Ibeji for more AI risk insights!

#AIonAI #AIPrivacy #DataProtection #ResponsibleAI #DigitalTrust
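Of the de-identification techniques named above, keyed-hash pseudonymization is the simplest to sketch. A minimal example; the `pseudonymize` helper and key handling are illustrative, and a real deployment would keep the key in a KMS and rotate it per dataset.

```python
import hmac
import hashlib

# Hypothetical pseudonymization helper: replaces a direct identifier with
# a keyed hash (HMAC-SHA256). Without access to the key, the token cannot
# be reversed or re-linked to the person. Note this is pseudonymization,
# not anonymization: whoever holds the key can still re-link.
def pseudonymize(identifier: str, secret_key: bytes) -> str:
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Usage: the same input always maps to the same stable pseudonym, so
# records can still be joined for training without exposing the raw value.
token = pseudonymize("jane.doe@example.com", secret_key=b"store-me-in-a-kms")
```

The limit matters for compliance: because the key holder can re-link tokens to people, the output is still personal data under regimes like GDPR; true anonymization requires that re-identification be impossible for anyone.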