"Collecting, storing, using, and sharing people’s sensitive information without their informed consent violates their privacy, and exposes them to substantial secondary harms like stigma, discrimination, physical violence, and emotional distress. The Federal Trade Commission will not stand for it" - says FTC in new blog post recapping its actions in Avast, X-Mode and InMarket. Key points re some common themes: 🔹 Browsing and location data are sensitive. Full stop. 🔹 Browsing and location data paint an intimate picture of a person’s life, including their religious affiliations, health and medical conditions, financial status, and sexual orientation. 🔹 What makes the underlying data sensitive springs from the insights they reveal and the ease with which those insights can be attributed to particular people. 🔹 Years of research shows that datasets often contain sensitive and personally identifiable information even when they do not contain any traditional standalone elements of PII, and re-identification gets easier every day—especially for datasets with the precision of those at issue 🔹 People have no way to object to—let alone control—how their data is collected, retained, used, and disclosed when these practices are hidden from them. 🔹 When a developer incorporates a company’s code into their app through an SDK, that developer amplifies any privacy risks inherent in the SDK by exposing their app’s users to it. 🔹 Data handling must align with the purposes for which it was collected. 🔹 Purpose matters: Firms do not have free license to market, sell, and monetize people’s information beyond purposes to provide their requested product or service. 🔹 Any safeguards used to maintain people’s privacy are often outstripped by companies’ incentives and abilities to match data to particular people - make sure that you control the sharing and use of data by your downstream. 🔹 Promises and contract clauses are important, but they must be backed up by action. 🔹 Firms should not let business model incentives that focus on the bottom line outweigh the need for meaningful privacy safeguards. #dataprivacy #dataprotection #privacyFOMO https://lnkd.in/eAuTmutG
Risks of Inadequate Data Privacy Practices
Summary
Failing to implement strong data privacy practices exposes organizations and individuals to significant risks, including the misuse of sensitive personal information, security breaches, and loss of trust. Strong practices govern how data is collected, processed, stored, and shared so that individuals’ privacy is protected and legal requirements are met.
- Limit data collection: Only collect personal data that is essential for the specific purpose, and avoid gathering unnecessary or excessive information to reduce the risk of breach and misuse (see the sketch after this list).
- Prioritize user transparency: Clearly communicate how data is collected, used, and stored, and provide users with accessible options to manage their privacy preferences or revoke consent.
- Monitor and secure data: Regularly audit data storage and third-party practices while implementing robust security measures like encryption and access controls to safeguard sensitive information.
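As a concrete illustration of the first bullet, here is a minimal Python sketch of data minimization at the point of collection: an allowlist maps each processing purpose to the only fields that may be stored, so unnecessary data is dropped before it ever lands in a database. The purposes and field names are hypothetical, not drawn from any particular system.

```python
# A minimal sketch of purpose-based data minimization.
# Purposes and field names are illustrative assumptions.
ALLOWED_FIELDS = {
    "order_fulfillment": {"email", "shipping_address"},
    "fraud_check": {"email", "payment_token"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Drop every field not strictly required for the stated purpose."""
    allowed = ALLOWED_FIELDS.get(purpose, set())
    return {k: v for k, v in record.items() if k in allowed}

submitted = {
    "email": "a@example.com",
    "shipping_address": "1 Main St",
    "birth_date": "1990-05-01",          # not needed -> never stored
    "precise_location": "40.74,-73.98",  # not needed -> never stored
}
print(minimize(submitted, "order_fulfillment"))
# {'email': 'a@example.com', 'shipping_address': '1 Main St'}
```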
We're kicking off our deep dive on AI risks and internal controls with the first privacy concern: 𝘂𝗻𝗮𝘂𝘁𝗵𝗼𝗿𝗶𝘇𝗲𝗱 𝗱𝗮𝘁𝗮 𝗰𝗼𝗹𝗹𝗲𝗰𝘁𝗶𝗼𝗻 𝗮𝗻𝗱 𝘂𝘀𝗮𝗴𝗲.

❌ 𝗧𝗵𝗲 𝗥𝗶𝘀𝗸: AI systems can collect personal or sensitive data without individuals’ knowledge or consent. This includes scraping publicly available information, repurposing data for unintended uses, and failing to inform users about how their data will be processed or stored.

✅ 𝗧𝗵𝗲 𝗖𝗼𝗻𝘁𝗿𝗼𝗹𝘀: To mitigate this risk, organizations should implement controls across the entire data lifecycle, from collection to processing to secure deletion, using a four-pronged approach:

🧾 𝗣𝗼𝗹𝗶𝗰𝗶𝗲𝘀 & 𝗚𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲
- Establish and enforce clear data collection, usage, and retention policies
- Require Data Protection Impact Assessments before deploying AI tools
- Mandate transparency documentation for all AI models that use personal data

✒️ 𝗖𝗼𝗻𝘀𝗲𝗻𝘁 𝗠𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁
- Obtain informed, explicit consent for data use
- Provide clear, accessible privacy notices at the point of data collection
- Allow users to opt out or revoke consent easily

📊 𝗗𝗮𝘁𝗮 𝗠𝗶𝗻𝗶𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻 & 𝗔𝗻𝗼𝗻𝘆𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻
- Collect only data that is strictly necessary for the AI model’s purpose
- Apply de-identification or anonymization techniques
- Regularly review data sets to purge unnecessary or outdated information

🔎 𝗢𝘃𝗲𝗿𝘀𝗶𝗴𝗵𝘁 & 𝗠𝗼𝗻𝗶𝘁𝗼𝗿𝗶𝗻𝗴
- Conduct regular audits of data collection practices
- Monitor third-party data sources and vendors for compliance
- Implement data usage logs and alerts to detect misuse (see the sketch after this post)

By putting the right controls in place, across policies, consent, data handling, and monitoring, you can reduce the risk of unauthorized data collection and build more trustworthy AI systems. Remember, it’s not just about what your AI can do; it’s about what it 𝙨𝙝𝙤𝙪𝙡𝙙 do with people’s data.

🦦 𝗕𝗲𝗳𝗼𝗿𝗲 𝘆𝗼𝘂 𝗱𝗶𝘃𝗲 𝗯𝗮𝗰𝗸 𝗶𝗻𝘁𝗼 𝘆𝗼𝘂𝗿 𝗱𝗮𝘆, 𝗮𝘀𝗸 𝘆𝗼𝘂𝗿𝘀𝗲𝗹𝗳:
- Do we know exactly what data our AI systems are collecting, and why?
- Are users fully informed and empowered to control their own data?
- Have we reviewed whether the data we store is still necessary, or should it be purged?
- What safeguards do we have if a third-party vendor mishandles data?

Thoughtful questions today help prevent privacy headlines tomorrow. Stay tuned: next week, we’ll explore the murky waters of 𝗱𝗮𝘁𝗮 𝘀𝘁𝗼𝗿𝗮𝗴𝗲 𝗮𝗻𝗱 𝘀𝗲𝗰𝘂𝗿𝗶𝘁𝘆.

#internalaudit #audit #auditforward #swimwithaudie #auditsmarter #AI #ArtificialIntelligence #AuditingAI #AuditTheFuture
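As a rough illustration of two of the controls above (revocable consent and data usage logs with alerts), here is a minimal Python sketch assuming a toy in-memory consent store: every read is gated on current consent and logged, and any access without consent is denied and flagged. All identifiers, purposes, and field names are invented for illustration.

```python
# A minimal sketch of consent-gated access with usage logging.
# The in-memory store and all names are illustrative assumptions.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

consents = {}  # (user_id, purpose) -> granted?

def grant(user_id, purpose):
    consents[(user_id, purpose)] = True

def revoke(user_id, purpose):
    # Opting out should be as easy as opting in.
    consents[(user_id, purpose)] = False

def access(user_id, purpose, field):
    """Gate every read on current consent and log it for auditability."""
    allowed = consents.get((user_id, purpose), False)
    ts = datetime.now(timezone.utc).isoformat()
    if not allowed:
        # A denied read is an alertable event for the oversight function.
        logging.warning("DENIED %s field=%s purpose=%s at %s",
                        user_id, field, purpose, ts)
        return None
    logging.info("ACCESS %s field=%s purpose=%s at %s",
                 user_id, field, purpose, ts)
    return f"<{field} of {user_id}>"

grant("u1", "model_training")
access("u1", "model_training", "chat_history")   # logged, allowed
revoke("u1", "model_training")
access("u1", "model_training", "chat_history")   # logged, denied + alert
```

The design point is that consent is checked at every use, not just at collection time, so a revocation takes effect immediately and the log gives auditors a trail of who touched what data, for which purpose, and when.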