How To Evaluate Third-Party Data Privacy Practices

Explore top LinkedIn content from expert professionals.

Summary

Evaluating third-party data privacy practices ensures that external vendors handle sensitive information responsibly, aligning with your organization’s data protection standards and compliance requirements.

  • Conduct thorough assessments: Evaluate third-party vendors by reviewing their data privacy policies, attestations and certifications such as SOC 2 or ISO 27001, and their approach to managing sensitive data.
  • Monitor ongoing compliance: Regularly review vendor security practices, incident response protocols, and audit results to ensure continued adherence to agreed data protection measures.
  • Establish clear governance: Define roles and responsibilities for managing vendor data privacy, including tracking their data usage and obtaining necessary consents for data collection and sharing.
  • Sam Castic, Privacy Leader and Lawyer; Partner @ Hintze Law

    A court recently let a California CCPA class action lawsuit proceed against a company for its website's use of Google Analytics. Here's what to know and do ⬇️

    A federal district court in California allowed a CCPA #ClassAction to survive a motion to dismiss. The defendant offers a website-based service for connecting people to mental health therapists, and allegedly allowed #GoogleAnalytics to collect information like mental health conditions entered into its website. Google offered an IP address anonymization feature that the defendant allegedly didn't use.

    The court ruled that the CCPA claim under its limited private right of action (Cal. Civ. Code § 1798.150) could proceed even though there was no data breach. It reasoned that a data breach isn't required: a claim could proceed if personal information is subject to unauthorized disclosure as a result of the business's failure to maintain reasonable security procedures (presumably the use of the Google IP address anonymization feature). While this isn't a ruling on the merits, the fact that the CCPA allows statutory damages of $100-$750 per consumer per incident (or actual damages if greater) could lead to claims against other companies on this theory for using cookies, pixels, and other tracking technologies for common business practices like #TargetedAdvertising and #website #analytics.

    What should your company do? Here are four steps to consider:

    1️⃣ Don't panic. This case isn't a ruling on the merits, and it's not clear this theory will ultimately prevail.

    2️⃣ Assessments. Validate that your privacy or tracking-technology assessment processes:
    🔹 Identify what data is passed by each tracking technology;
    🔹 Determine whether all data need to be passed, and remove any that don't; and
    🔹 Use privacy-protective tracking technology provider tools and settings. (Know what team at your company identifies what options are available, and determine whether they have the privacy knowledge to know what to look for and use. Reviews of providers' documentation and settings are often needed.)

    3️⃣ Governance. Establish or validate an approach to governing the use of tracking technologies on your company's website and mobile #apps, including:
    🔹 Keeping an up-to-date understanding of the technologies used and the business purposes they serve;
    🔹 Knowing what specific data types are passed;
    🔹 Triggering reviews or re-assessments when there are changes to the data passed or the business purposes the technologies are used for; and
    🔹 Getting buy-in and alignment on roles and responsibilities with stakeholders who can place, use, or configure the technologies.

    4️⃣ Consider consent. Especially when website/app events or other data types passed could reveal something sensitive, obtain opt-in consent before allowing the data to be transmitted. This is viewed as required by the FTC, and is required under some of the state comprehensive #privacy laws. A minimal sketch of consent-gated tag loading follows this post.
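To make step 4️⃣ concrete, here is a minimal TypeScript sketch of consent-gated, privacy-protective tag loading. The hasAnalyticsConsent helper is a hypothetical stand-in for a consent-management platform, and the anonymize_ip flag applies to Universal Analytics (GA4 truncates IP addresses by default); treat this as an illustration under those assumptions, not the configuration at issue in the case.

```typescript
// Sketch: nothing is configured or transmitted without opt-in consent.
// `hasAnalyticsConsent` is a hypothetical stand-in for your CMP's API.

declare function gtag(...args: unknown[]): void; // provided by the GA snippet

function hasAnalyticsConsent(): boolean {
  // Wire this to your consent-management platform; default to "no".
  return false;
}

export function initAnalytics(measurementId: string): void {
  if (!hasAnalyticsConsent()) {
    return; // No opt-in consent: skip analytics entirely.
  }
  gtag("config", measurementId, {
    // Universal Analytics exposed an explicit anonymization flag;
    // GA4 truncates IP addresses by default. Keeping the setting
    // explicit documents intent either way.
    anonymize_ip: true,
  });
}
```

The design choice worth noting: defaulting to "no consent" fails closed, so a misconfigured consent check results in no data being passed rather than an unauthorized disclosure.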

  • Patrick Sullivan, VP of Strategy and Innovation at A-LIGN | TEDx Speaker | Forbes Technology Council | AI Ethicist | ISO/IEC JTC1/SC42 Member

    🗺️ ISO42001: The Path Forward for Third-Party Risk Management in AI 🗺️

    Third-party risk has always mattered, but AI raises the stakes. When external vendors build, train, or deploy AI systems, your exposure is no longer limited to data handling practices. These systems can influence customer experience, content generation, decision logic, and other high-impact areas. Managing that risk requires more than due diligence forms. It requires a system.

    Microsoft's Supplier Security and Privacy Assurance (#SSPA) program shows what that system can look like. Suppliers that process Microsoft confidential data or personal data, or that operate AI systems, must meet clearly defined Data Protection Requirements. These requirements are not limited to general privacy practices; they also outline expectations specific to AI systems.

    Suppliers can meet these expectations in several ways. One option is to complete an independent assessment that aligns with Microsoft's own criteria. Another is to provide certification to recognized international standards, like #ISO42001. This recognition is not a formality. It reflects an understanding that ISO42001 provides the structure needed to govern AI risk at scale.

    An AI Management System built on ISO42001 offers what legacy vendor programs often lack. It defines who is responsible for AI oversight and requires impact assessments on individuals, groups, and societies. It connects those assessments to documented risk treatment plans. It forces alignment between internal controls and the controls needed from third parties. And it requires organizations to produce a statement of applicability that justifies what has been included or excluded. When these requirements are applied to vendors, they create a shared foundation for risk evaluation.

    This is where #ISO27036-1 becomes useful. That standard outlines how to manage information security across supplier relationships, focusing on accountability, transparency, and lifecycle engagement. Together, ISO42001 and ISO27036 provide the internal structure and the external coordination model needed for effective third-party AI oversight (#TPRM).

    Microsoft's program uses these same principles. Suppliers must document how they manage AI incidents, how they train personnel, how they test for failure modes, and how they align AI use with responsible practices. These expectations are not loose guidelines. They reflect a disciplined model that treats AI risk as a business risk, not just a technical one.

    ISO42001 gives other organizations the same foundation, creating the operational clarity that third-party AI oversight demands. It does not replace the need for careful vendor selection, but it gives that process structure, repeatability, and accountability.

    Microsoft has acted on this. You should learn from that example. Now is the time to stop asking if a vendor is ethical and start asking if their systems are governed. Just as important: are yours? A-LIGN
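As a rough illustration of the documentation expectations described above, here is a TypeScript sketch of a supplier AI-assurance record. The field names and the follow-up rule are invented for this example; they do not reproduce Microsoft SSPA's actual schema or the text of ISO 42001.

```typescript
// Illustrative only: fields are assumptions modeled on the post's
// description, not an official SSPA or ISO 42001 data model.

type AssuranceEvidence =
  | { kind: "independent-assessment"; assessor: string; date: string }
  | { kind: "iso42001-certification"; certBody: string; expires: string };

interface SupplierAiRecord {
  supplier: string;
  handlesPersonalData: boolean;
  operatesAiSystems: boolean;
  evidence?: AssuranceEvidence; // one of the two acceptance paths
  // Documentation the post says suppliers must provide:
  incidentManagementDocumented: boolean;
  personnelTrainingDocumented: boolean;
  failureModeTestingDocumented: boolean;
  responsibleUseAlignmentDocumented: boolean;
}

/** Flags suppliers whose AI assurance is incomplete for follow-up. */
function needsFollowUp(r: SupplierAiRecord): boolean {
  if (!r.operatesAiSystems) return false; // out of scope for AI review
  const documented =
    r.incidentManagementDocumented &&
    r.personnelTrainingDocumented &&
    r.failureModeTestingDocumented &&
    r.responsibleUseAlignmentDocumented;
  return !documented || r.evidence === undefined;
}
```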

  • Jodi Daniels, Practical Privacy Advisor / Fractional Privacy Officer / AI Governance / WSJ Best Selling Author / Keynote Speaker

    In AI tools, the fine print isn't optional. It's everything.

    Recently checked out a cool new AI tool that promised awesome graphics.

    First red flag? No mention of data use, privacy, or security on the site.
    Second red flag? The terms of service say it takes no responsibility; that all falls to the LLMs it uses.
    Third red flag? The same terms say it can use the data for its own purposes.
    Fourth red flag? The same terms specifically state: do not upload confidential information.

    Even if my content would be outward facing, I don't want to knowingly share my information with a third party who then shares it with LLMs and uses it for themselves. And this was just one simple AI tool review.

    Managing AI privacy risk is critical for all companies, no matter the size. Here are 5 tips to help manage AI risk:

    1. Strengthen Your Data Governance
    Create a cross-functional team to develop clear policies on AI use cases. Consider third-party data access and usage, how AI will be used within the business, and whether it involves sensitive data.
    Pro Tip: Use frameworks like the NIST Privacy Framework to guide your efforts.

    2. Conduct Privacy Impact Assessments (PIAs) for AI
    Review your existing PIA processes to determine if AI can be integrated into the assessment process. Assess AI-specific risks like bias, ethics, discrimination, and the data inferences often made by AI models.

    3. Train Your Team on AI Transparency
    Develop ongoing training programs to increase awareness of AI and how it intersects with privacy and employee roles.

    4. Address Privacy Rights Challenges Posed by AI
    Determine how you will uphold privacy rights once data is embedded in a model. Consider how you will handle requests for access, portability, rectification, erasure, and processing restrictions. Remember, privacy notices should include provisions about how AI is used.

    5. Manage Third-Party AI Vendors Carefully
    Ask vendors where they get their AI model, what kind of data is used to train it, and how often they refresh their data. Determine how vendors handle bias, inaccuracies, or underrepresentation in the AI's outputs. Audit AI vendors and contracts regularly to identify new risks. (A sketch of a vendor question checklist follows this post.)

    AI's potential is immense, but so are the challenges it brings.

    Be proactive. Build trust. Stay ahead.

    Learn more in our carousel and blog link below 👇
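To turn tip 5 into something trackable, here is a hypothetical TypeScript checklist builder. The question wording follows the post; the types, the extra training-data question, and the open-items filter are illustrative assumptions.

```typescript
// Hypothetical due-diligence checklist for AI vendors; a sketch,
// not a complete questionnaire.

interface VendorAiAnswer {
  question: string;
  answered: boolean;
  notes?: string;
}

const vendorAiQuestions: string[] = [
  "Where does your AI model come from (built, licensed, open source)?",
  "What data is used to train the model, and how often is it refreshed?",
  "How do you handle bias, inaccuracies, or underrepresentation in outputs?",
  "Can customer data be used to train or improve your models?",
  "When were your AI contracts and practices last audited?",
];

// Start every vendor at "unanswered" so gaps are visible by default.
const checklist: VendorAiAnswer[] = vendorAiQuestions.map((question) => ({
  question,
  answered: false,
}));

// Unanswered items become the follow-ups for the next vendor review.
const openItems = checklist.filter((a) => !a.answered);
```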

  • AD E., GRC Visionary | Cybersecurity & Data Privacy | AI Governance | Pioneering AI-Driven Risk Management and Compliance Excellence

    #30DaysOfGRC, Day 28

    Vendor assessments are not just about checking who has SOC 2. They are about understanding who has access to your data, what they do with it, and whether they take that responsibility seriously.

    When evaluating a vendor, go beyond the certificate. Ask for their data flow diagrams. Review how they handle incidents. Find out who actually reviews their security reports internally. Look at how often they test their controls, not just whether they say they have them.

    A strong vendor today could become a weak link tomorrow if no one is watching. Make sure your assessment process actually reduces risk and is not just a formality.

    #30DaysofGRC #ThirdPartyRisk #VendorRiskManagement #GRC #RiskAssessment #Compliance #Cybersecurity #Infosec #TPRM #GovernanceMatters
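One way to operationalize "go beyond the certificate" is to record the evidence itself and flag gaps, as in the TypeScript sketch below. The fields and the one-year staleness rule are assumptions for illustration, not a formal TPRM methodology.

```typescript
// Sketch of an assessment record capturing evidence beyond a
// certificate; thresholds and fields are illustrative assumptions.

interface VendorAssessment {
  vendor: string;
  hasSoc2Report: boolean;
  dataFlowDiagramsReviewed: boolean;
  incidentResponseTested: boolean;
  securityReportOwner?: string; // who internally reviews their reports
  lastControlTest?: Date;       // when controls were last exercised
}

const MAX_CONTROL_TEST_AGE_DAYS = 365; // assumed review cadence

function assessmentGaps(a: VendorAssessment, now = new Date()): string[] {
  const gaps: string[] = [];
  if (!a.hasSoc2Report) gaps.push("no SOC 2 report on file");
  if (!a.dataFlowDiagramsReviewed) gaps.push("data flow diagrams not reviewed");
  if (!a.incidentResponseTested) gaps.push("incident response untested");
  if (!a.securityReportOwner) gaps.push("no internal owner for security reports");
  const ageDays = a.lastControlTest
    ? (now.getTime() - a.lastControlTest.getTime()) / 86_400_000
    : Infinity;
  if (ageDays > MAX_CONTROL_TEST_AGE_DAYS) gaps.push("control testing is stale");
  return gaps; // an empty list means the review found no open gaps
}
```

Returning named gaps rather than a pass/fail flag keeps the assessment actionable: each string is a concrete follow-up, which is what makes the process more than a formality.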
