Data Protection Practices

  • View profile for Sid Trivedi

    Partner at Foundation Capital

    16,810 followers

    $400M – that’s the price tag when sensitive #data ends up in the wrong hands.

    On May 11th, Coinbase – the largest US-based #crypto exchange (100M+ users, $330B in assets) – received a ransom demand for $20M. A threat actor claimed to have internal account documentation and customer data. Coinbase has refused to pay, instead boldly offering a $20M reward for information on the attackers.

    Coinbase’s May 14th SEC disclosure revealed the troubling root cause: overseas support agents were bribed to leak customer data, enabling targeted social-engineering attacks. While passwords and private keys appear safe, personal details – emails, phone numbers, addresses, government IDs, and account data – might have been compromised. The company is estimating a cost of $180M-$400M for remediation and voluntary customer reimbursements relating to the incident.

    This breach underscores a critical truth: insider access to sensitive data remains a massive, underestimated threat. Coinbase’s detection tools worked – identifying unauthorized access and firing the responsible individuals months earlier – but the data had already escaped. Identity management, DLP, and proactive data monitoring have never mattered more. AI agents add powerful new capabilities but also complicate the risk picture.

    If you’re a #founder building solutions around identity, insider risk, or data protection, I’d love to connect.

  • View profile for Murtuza Lokhandwala

    Project Manager @ Team Computers | Aspiring Compliance & Risk Management | Cybersecurity | IT Infrastructure & Operations | Service Delivery | IT Service Management (ITSM) | Information Security

    5,320 followers

    Think Before You Share: The Hidden Cybersecurity Risks of Social Media 🚨🔐

    In an era where data is the new currency, every post, check-in, or status update can serve as an intelligence goldmine for cybercriminals. What seems like harmless sharing—your vacation photos, workplace updates, or even a "fun fact" about your first pet—can be weaponized against you.

    🔥 How Oversharing Exposes You to Cyber Threats

    🔹 Geo-Tagging & Real-Time Location Leaks
    Sharing your location makes you an easy target. Cybercriminals use this data to track routines, monitor absences, or even launch physical security threats such as home burglaries.

    🔹 Social Engineering & Credential Harvesting
    Those "what’s your mother’s maiden name?" or "which city were you born in?" quiz posts are a hacker’s playground. Attackers scrape these responses to guess password security questions or craft highly convincing phishing emails.

    🔹 Metadata & Digital Fingerprinting
    Every photo you upload contains EXIF metadata (including GPS coordinates and device details). Attackers can extract this information, identify locations, and even map out behavior patterns for targeted cyberattacks.

    🔹 OSINT (Open-Source Intelligence) Reconnaissance
    Threat actors don’t need sophisticated hacking tools when your social media profile provides a full dossier on your life. They correlate job roles, connections, and public interactions to execute whaling attacks, corporate espionage, or deepfake impersonations.

    🔹 Dark Web Data Correlation
    Your exposed social media details can be cross-referenced with breached databases. If your credentials have been compromised in past data leaks, attackers can launch credential-stuffing attacks to hijack your accounts.

    🔐 Cyber-Hygiene: Best Practices for Social Media Security
    ✅ Restrict Profile Visibility – Limit exposure by setting profiles to private and segmenting audiences for sensitive updates.
    ✅ Sanitize Metadata Before Uploading – Use tools to strip EXIF data from images before posting (see the sketch after this post).
    ✅ Implement Multi-Factor Authentication (MFA) – Enforce adaptive authentication to prevent unauthorized account access.
    ✅ Zero-Trust Mindset – Assume any publicly shared data can be aggregated, exploited, or weaponized against you.
    ✅ Monitor for Breach Exposure – Regularly check if your credentials are compromised using breach notification services like Have I Been Pwned.

    🔎 The Internet doesn’t forget. Every post contributes to your digital footprint—control it before someone else does.

    💬 Have you ever reconsidered a social media post due to security concerns? Drop your thoughts below! 👇

    #CyberSecurity #SocialMediaThreats #Infosec #PrivacyMatters #DataProtection #Phishing #ThreatIntelligence #ZeroTrust #CyberThreats #CyberSecurityTips #CyberSecurityAwareness #InformationSecurity #Networking #NetworkSecurity #CyberAttacks #CyberRisk #CyberHygiene #ITSecurity #InsiderThreats #InformationTechnology #TechnicalSupport
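    The "sanitize metadata" tip above is easy to automate. Below is a minimal sketch, assuming the Pillow imaging library is installed (pip install Pillow); the file names are illustrative:

    ```python
    # Strip EXIF metadata (GPS coordinates, device details) by re-saving
    # only the pixel data into a fresh image object, which carries no tags.
    from PIL import Image

    def strip_exif(src_path: str, dst_path: str) -> None:
        with Image.open(src_path) as img:
            clean = Image.new(img.mode, img.size)  # new image has no metadata
            clean.putdata(list(img.getdata()))     # copy pixels only
            clean.save(dst_path)

    strip_exif("vacation.jpg", "vacation_clean.jpg")  # hypothetical paths
    ```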

  • View profile for Darren Grayson Chng

    Regional Director | Privacy, AI, Cyber | Former Regulator | AI Law & IEEE AI Peer Reviewer | ISO 42001, AIGP

    9,701 followers

    In a panel discussion some months ago, I said that anonymising personal data (PD) was a promising means of facilitating cross-border data flows, because once you can no longer identify an individual from the data, it should no longer fall within many countries' definition of PD, and would not be subject to export requirements.

    Someone asked if it was worth talking about anonymisation since it is reversible. I wasn't sure if it was a rhetorical or philosophical question (if anonymisation is reversible, has there really been anonymisation?), or maybe we ascribed different meanings to the term, as do many jurisdictions. But in one of my LI posts I said that I would write more about it. This is what this post is about.

    When I use the term "anonymisation", I'm referring to the PDPC's definition of it, i.e. the process of converting PD into data that cannot identify any particular individual, which can be reversible or irreversible. And to me, anonymisation is not like flipping a light switch such that you get light or no light, 1 or 0. There are factors that can affect re-identification of a person, e.g.:
    - the number of direct and indirect identifiers in the dataset
    - what other data the organisation has access to (including publicly accessible info), which, when combined with the dataset, could re-identify the individual
    - what measures the organisation takes to reduce the possibility of re-identification, e.g. restricting access to / disclosure of the dataset.

    So I think it is not so useful to think about anonymisation in binary terms. What I suggest we should think about is the possibility of re-identification. Think of a dimmer switch instead. When you see the cleartext dataset, that's when the light is on. When you start turning the dial down - the more direct and indirect identifiers you remove, the more safeguards you implement vis-a-vis the dataset - the dimmer the light gets, and the possibility of re-identification is reduced.

    If you (a) remove all direct and indirect identifiers from a dataset, (b) encrypt it, and (c) only give need-to-know employees read access, I think the light's going to be pretty dim. It might no longer be PD, meaning that you can export it without being subject to export requirements. So yes, I think that anonymisation is a promising means of facilitating CBDF.

    Do note that you should also apply the "motivated intruder" test and consider whether someone who has the motive to attempt re-identification, is reasonably competent, has access to appropriate resources e.g. the internet, and uses investigative techniques, could re-identify the individual (see the ICO's excellent draft guidance https://lnkd.in/gmRfXj-W, and The Guardian's 2019 article https://lnkd.in/gZXfkHVC).
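    To make the "dimmer switch" concrete, here is a minimal sketch in pandas showing two turns of the dial: removing direct identifiers, then generalising indirect ones. The dataset and column names are hypothetical, and the post's steps (b) encryption and (c) access restriction would happen outside the dataframe itself:

    ```python
    import pandas as pd

    df = pd.DataFrame({
        "name":      ["Alice Tan", "Bob Lim"],   # direct identifiers
        "id_number": ["S1234567A", "S7654321B"],
        "age":       [34, 58],                   # indirect (quasi) identifiers
        "postcode":  ["049315", "569830"],
        "diagnosis": ["asthma", "diabetes"],
    })

    # Turn 1: remove direct identifiers.
    df = df.drop(columns=["name", "id_number"])

    # Turn 2: generalise indirect identifiers so each record blends into a group.
    df["age"] = (df["age"] // 10 * 10).astype(str) + "s"   # 34 -> "30s"
    df["postcode"] = df["postcode"].str[:2] + "xxxx"       # keep district only

    print(df)  # dimmer, though not dark: re-identification risk still needs assessing
    ```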

  • View profile for Brian Burnett

    Director of Enterprise Security | CC, SOC for Cybersecurity EnCE, ACE, CCFE

    2,891 followers

    I keep hearing leaders say, "Investment in cybersecurity is expensive and just another cost center." That is not reality; it's an investment in your organization's ability to operate. Here is one example with real numbers showing the cost difference between proactive and reactive cybersecurity: proactive measures aim to prevent threats before they occur, while reactive measures address incidents after they have happened.

    Scenario: A Mid-Sized Business
    Business Type: E-commerce company
    Size: 250 employees
    Annual Revenue: $50 million
    Cybersecurity Threat: Ransomware attack

    1. Proactive Cybersecurity Costs
    Proactive measures include investing in tools, training, and services to prevent cyberattacks. Estimated annual costs:
    - Endpoint protection software: $25,000
    - Regular penetration testing: $30,000
    - Cybersecurity awareness training: $15,000
    - Managed security service provider: $50,000
    - Backup and disaster recovery plan: $20,000
    Total annual proactive costs: $140,000

    By implementing these measures, the business can significantly reduce the likelihood of successful attacks and minimize downtime in the event of an incident.

    2. Reactive Cybersecurity Costs
    Reactive measures are taken after an attack has occurred. Let’s assume a ransomware attack encrypts critical data, halting operations for five days. Estimated costs:
    - Ransom payment: $250,000
    - Incident response team: $50,000
    - Forensics and investigation: $40,000
    - Downtime costs (5 days of lost revenue): $685,000
    - Legal fees and compliance fines: $100,000
    - Reputational damage and PR recovery: $150,000
    - Identity protection for customers: $75,000
    Total reactive costs: $1,350,000

    The above costs do NOT account for long-term revenue loss due to brand damage, potential lawsuits, or customer churn, which could escalate further.

    Cost Comparison
    - Proactive measures: $140,000/year
    - Reactive response: $1,350,000+

    Key Takeaways
    - Proactive cybersecurity is a fraction of the cost of responding to an incident.
    - Investments in prevention not only save money but also protect a business's reputation and customer trust.
    - Organizations that prioritize proactive measures can avoid the cascading effects of a cybersecurity breach.

    This example demonstrates how "an ounce of prevention is worth a pound of cure" when it comes to cybersecurity.
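    The post's totals are easy to verify, and a simple expected-loss view keeps the comparison fair even when a breach is not certain. A short sketch; the 30% annual breach probability is an assumption for illustration, not a figure from the post:

    ```python
    # Reproduce the post's totals and compare annual proactive spend
    # against the probability-weighted cost of one incident.
    proactive = {"endpoint_protection": 25_000, "pen_testing": 30_000,
                 "awareness_training": 15_000, "mssp": 50_000, "backup_dr": 20_000}
    reactive = {"ransom": 250_000, "incident_response": 50_000, "forensics": 40_000,
                "downtime": 685_000, "legal_fines": 100_000, "pr_recovery": 150_000,
                "identity_protection": 75_000}

    annual_proactive = sum(proactive.values())  # $140,000
    incident_cost = sum(reactive.values())      # $1,350,000

    p_breach = 0.30  # assumed annual probability of a successful attack
    expected_loss = p_breach * incident_cost    # $405,000/year
    print(f"Proactive: ${annual_proactive:,}/yr vs expected reactive loss: ${expected_loss:,.0f}/yr")
    ```

    Even after discounting the incident cost by an assumed breach probability, prevention comes out well ahead.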

  • View profile for Katharina Koerner

    AI Governance & Security | Trace3: All Possibilities Live in Technology | Innovating with risk-managed AI: Strategies to Advance Business Goals through AI Governance, Privacy & Security

    44,343 followers

    Today, the National Institute of Standards and Technology (NIST) published its finalized Guidelines for Evaluating ‘Differential Privacy’ Guarantees to De-Identify Data (NIST Special Publication 800-226), a very important publication in the field of privacy-preserving machine learning (PPML). See: https://lnkd.in/gkiv-eCQ

    The Guidelines aim to assist organizations in making the most of differential privacy, a technology that has been increasingly utilized to protect individual privacy while still allowing valuable insights to be drawn from large datasets. They cover:

    I. Introduction to Differential Privacy (DP):
    - De-Identification and Re-Identification: Discusses how DP helps prevent the identification of individuals from aggregated data sets.
    - Unique Elements of DP: Explains what sets DP apart from other privacy-enhancing technologies.
    - Differential Privacy in the U.S. Federal Regulatory Landscape: Reviews how DP interacts with existing U.S. data protection laws.

    II. Core Concepts of Differential Privacy:
    - Differential Privacy Guarantee: Describes the foundational promise of DP, which is to provide a quantifiable level of privacy by adding statistical noise to data.
    - Mathematics and Properties of Differential Privacy: Outlines the mathematical underpinnings and key properties that ensure privacy.
    - Privacy Parameter ε (Epsilon): Explains the role of the privacy parameter in controlling the level of privacy versus data usability.
    - Variants and Units of Privacy: Discusses different forms of DP and how privacy is measured and applied to data units.

    III. Implementation and Practical Considerations:
    - Differentially Private Algorithms: Covers basic mechanisms like noise addition and the common elements used in creating differentially private data queries.
    - Utility and Accuracy: Discusses the trade-off between maintaining data usefulness and ensuring privacy.
    - Bias: Addresses potential biases that can arise in differentially private data processing.
    - Types of Data Queries: Details how different types of data queries (counting, summation, average, min/max) are handled under DP.

    IV. Advanced Topics and Deployment:
    - Machine Learning and Synthetic Data: Explores how DP is applied in ML and the generation of synthetic data.
    - Unstructured Data: Discusses challenges and strategies for applying DP to unstructured data.
    - Deploying Differential Privacy: Provides guidance on different models of trust and query handling, as well as potential implementation challenges.
    - Data Security and Access Control: Offers strategies for securing data and controlling access when implementing DP.

    V. Auditing and Empirical Measures:
    - Evaluating Differential Privacy: Details how organizations can audit and measure the effectiveness and real-world impact of DP implementations.

    Authors: Joseph Near, David Darais, Naomi Lefkovitz, Gary Howarth, PhD
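    For a flavor of the "basic mechanisms like noise addition" the Guidelines cover, here is a minimal sketch of the Laplace mechanism applied to a counting query (a count has sensitivity 1, so the noise scale is 1/ε). The data and ε are illustrative, and a real deployment needs privacy-budget accounting and floating-point hardening well beyond this:

    ```python
    import numpy as np

    def dp_count(records, predicate, epsilon: float) -> float:
        """Return an epsilon-DP noisy count of records matching the predicate."""
        true_count = sum(1 for r in records if predicate(r))
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)  # scale = sensitivity / epsilon
        return true_count + noise

    ages = [34, 58, 41, 29, 63]
    print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))  # smaller epsilon -> more noise, more privacy
    ```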

  • View profile for Rosalia Anna D'Agostino

    Research and Marketing Associate at SpiritLegal | Head of the Interview Team at Legal4Tech | AI and Technology Expert

    8,940 followers

    🚨BREAKING: Expert Report on LLMs

    The report by Isabel Barberá and Murielle Popa-Fabre analyses the risks to privacy and data protection posed by LLMs. It applies Convention 108+ for the Protection of Individuals with regard to Automatic Processing of Personal Data of the Council of Europe.

    🚨 Findings: ‘privacy risks in LLM-based systems cannot be adequately addressed through ad-hoc organisational practices or existing compliance tools alone’; instead, a method to assess and mitigate risks must be deployed throughout the entire life-cycle of an #LLM. Risk mitigation focuses on:
    ❌ LLM architecture: reducing size/context, deduplicating the training dataset - the less effective strategies
    ✅ Life-cycle: takes into account data-related and output risks, implements cybersecurity at all levels - and it’s in line with international standards!

    🎙️ In breaking down LLM tech, three data-usage phases can be identified:
    1️⃣ Web-scraping and pretraining
    2️⃣ Fine-tuning
    3️⃣ Optimisation through data augmentation (RAG), agentic workflows
    👉🏼 Best practices can be successfully implemented in Phase 2, so that LLMs are privacy-fit when entering Phase 3, which involves vetting customer intentions and forming a working memory

    🎙️ The report breaks down risks:
    👉🏼 At the #model level: LLMs define the relationship between a data subject and personal data by the proximity that one data vector bears to the source vector for that data. Awareness of such a relation is not implied, but statistical. Vector proximity depends on how multiple features relatable to a vector are aggregated in LLM training
    ‼️ Risks include: LLM pretraining on personal data scraped off the internet (no legal basis), data regurgitation, hallucinations, bias amplification
    👉🏼 At the #system level: depending on how LLMs interact with their environment, risks go beyond privacy and impact autonomy and identity. Lastly, without human oversight, LLM-automated decisions defy Art. 9 of Convention 108+, while the likelihood of accurate profiling, also addressed in Art. 9, becomes a threat given the amount of information that LLMs are able to collect due to their increasingly multimodal application
    ‼️ Risk management also takes into account user interference in interaction and post-deployment adaptations

    Risk mitigation evaluation framework (RMEF):
    📌 Reflect real-world deployment conditions
    📌 Multiple re-assessments (ISO 42005)
    📌 Address emergent and interactive risks - not just performance metrics
    📌 Involve stakeholders
    📌 Accessible evaluation reports
    💡 The RMEF should be piloted in a multi-stakeholder collaboration whereby an LLM is built, deployed, interacted with, and assessed

    🎙️ Recommendations to stakeholders:
    👉🏼 Work on data protection AND data safety: the two don’t equate
    👉🏼 Implement privacy protection on day 0
    👉🏼 Use PETs and implement data protection benchmarks

    🚨 Regulators must issue clear guidance to help companies address these risks!

    CC: Peter Hense 🇺🇦🇮🇱 Itxaso Domínguez de Olazábal, PhD.
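    The "vector proximity" point is easiest to see with a toy example: embeddings that sit close together create a statistical link between a person and an attribute, whether or not that link was ever asserted in the training data. The vectors below are made-up stand-ins for real model embeddings:

    ```python
    import numpy as np

    def cosine_similarity(a, b) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    subject_vec   = np.array([0.8, 0.1, 0.3])  # stand-in for a person's embedding
    attribute_vec = np.array([0.7, 0.2, 0.3])  # stand-in for a sensitive attribute
    unrelated_vec = np.array([0.0, 0.9, 0.1])

    print(cosine_similarity(subject_vec, attribute_vec))  # high -> statistical association
    print(cosine_similarity(subject_vec, unrelated_vec))  # low  -> weak association
    ```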

  • View profile for Jan Beger

    Global Head of AI Advocacy @ GE HealthCare

    84,916 followers

    This paper presents international consensus-based recommendations to address and mitigate potential harms caused by bias in data and algorithms within AI health technologies and health datasets. It emphasizes the need for diversity, inclusivity, and generalizability in these technologies to prevent the exacerbation of health inequities.

    1️⃣ The paper emphasizes the importance of including a plain language summary in dataset documentation, detailing the data origin, and explaining the reasons behind the dataset's creation to help users assess its suitability for their needs.
    2️⃣ It is highlighted that dataset documentation should provide a summary of the groups present, explain the categorization of these groups, and identify any groups that are missing or at risk of disparate health outcomes.
    3️⃣ The paper addresses the need to identify and describe biases, errors, and limitations in datasets, including how missing data and biases in labels are handled, to improve the generalizability and applicability of the data.
    4️⃣ The importance of adhering to data protection laws, ethical governance, and the involvement of patient and public participation groups in dataset documentation is stressed to ensure ethical use of health data.
    5️⃣ Recommendations are provided for ensuring that datasets are used appropriately in the development of AI health technologies, including reporting on dataset limitations and evaluating the performance of AI technologies across different groups.

    ✍🏻 The STANDING Together collaboration. Recommendations for Diversity, Inclusivity, and Generalisability in Artificial Intelligence Health Technologies and Health Datasets. Version 1.0, published 30th October 2023. DOI: 10.5281/zenodo.10048356

    Xiao Liu, MBChB PhD, Alastair Denniston, Joseph Alderman, Elinor Laws, Jaspret Gill, Neil Sebire, Marzyeh Ghassemi, Melissa McCradden, Melanie Calvert, Rubeta Matin, Dr Stephanie Kuku, MD, Jacqui Gath, Russell Pearson, Johan Ordish, Darren Treanor, Negar Rostamzadeh, Elizabeth Sapey, Stephen Pfohl, Heather Cole-Lewis, PhD, Francis Mckay, Alan Karthikesalingam MD PhD, Charlotte Summers, Lauren Oakden-Rayner, Bilal A Mateen, Katherine Heller, Maxine Mackintosh

    ✅ Sign up for my newsletter to stay updated on the most fascinating studies related to digital health and innovation: https://lnkd.in/eR7qichj
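    One way to picture recommendations 1️⃣ through 3️⃣ is as a structured dataset "datasheet". The sketch below is a hypothetical shape for such documentation, not the STANDING Together schema itself:

    ```python
    from dataclasses import dataclass

    @dataclass
    class DatasetDatasheet:
        plain_language_summary: str
        data_origin: str
        purpose_of_creation: str
        groups_present: dict            # group attribute -> how it was categorised
        groups_missing_or_at_risk: list
        known_biases_and_limitations: list
        missing_data_handling: str

    # All field values below are invented, purely to show the shape.
    sheet = DatasetDatasheet(
        plain_language_summary="Chest X-rays from two urban hospitals, 2015-2020.",
        data_origin="Hospital PACS exports, de-identified before release",
        purpose_of_creation="Training pneumonia triage models",
        groups_present={"sex": "self-reported", "age_band": "derived from date of birth"},
        groups_missing_or_at_risk=["rural patients", "paediatric patients"],
        known_biases_and_limitations=["labels from a single radiologist"],
        missing_data_handling="Records without recorded age were excluded",
    )
    ```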

  • View profile for Ed Santow

    Co-Director of the Human Technology Institute and Professor of Responsible Technology at UTS

    11,460 followers

    Privacy & data protection regulators from around the world have issued a joint statement on data scraping. This is a big deal...

    When a company engages in data scraping, they automatically 'hoover up' data from websites and use it for their own purposes. Individuals whose personal information is gathered this way generally have no opportunity to consent or object.

    As the Office of the Australian Information Commissioner said, data scraping technologies "raise significant privacy concerns as these technologies can be exploited for purposes including monetisation through reselling data to third-party websites, including to malicious actors, private analysis or intelligence gathering." For example, data scraping was at the heart of our privacy regulator's finding that facial recognition company Clearview AI breached Australian privacy law.

    In their statement on data scraping, the regulators emphasised four key points:
    1. personal information on a public website is still subject to privacy law
    2. social media companies and other website operators must protect personal information on their platforms from unlawful data scraping
    3. mass data scraping incidents that harvest personal information can constitute reportable data breaches in many jurisdictions
    4. individuals can also take steps to protect their personal information from data scraping, and social media companies have a role to play in enabling users to engage with their services in a privacy-protective manner.

    The statement was endorsed by privacy regulators in Australia, Canada, the UK, Hong Kong, Mexico, New Zealand and elsewhere. https://lnkd.in/ey-5X9vw

  • A competitor told our prospect: "We can do it for 40% less." Here's what happened next.

    They chose the cheaper vendor. And 12 months later, they called us back.

    Their "budget-friendly" BPO had just suffered a security breach. Customer data was compromised. And when they asked their provider to investigate, they got a response that made their stomachs drop: "We don't have monitoring capabilities. There's no way to track what happened or who accessed the data."

    No audit trails. No security protocols. No accountability.

    What started as a cost-saving decision had become a compliance nightmare, a PR crisis, and a potential lawsuit rolled into one. They thought they'd saved 40%, but the real math probably looked like this:
    💲 Initial "savings": $50K annually
    💲 Legal fees and compliance remediation: $200K+
    💲 Lost customers: $300K+ in lifetime value
    Total cost of "cheap": over $500K for a decision that was supposed to save them money.

    When they came back to Peak Support, the first question wasn't about price. It was: "Can you show us your security monitoring dashboard?" That conversation happened more than four years ago. They've been our client ever since.

    Here's what we've learned: the cheapest option is rarely the least expensive. When you're handling customer data, customer relationships, and your brand reputation, cutting corners doesn't cut costs—it multiplies them.

    Before you choose your next customer service partner, ask:
    ❓ What security certifications do you maintain?
    ❓ How do you safeguard customer data?
    ❓ How do you monitor agent activity?
    ❓ What's your incident response protocol?
    ❓ What insurance coverage do you carry for data breaches?

    The brands that succeed long-term don't just ask "How much?" They ask "How safe?"

    What's the most expensive "cheap" decision you've seen in customer service?

  • View profile for SYED MUNEEB SHAH

    Cyber Security Analyst | Digital Forensics | Vulnerability Assessment

    9,563 followers

    🔍 Social Media OSINT: The Modern Intelligence Goldmine

    In today’s digital-first world, social media platforms are not just communication tools — they are vast intelligence sources. From individuals to corporations, we leave behind digital footprints that can be leveraged by both defenders (Blue Teams, SOCs, investigators) and adversaries (threat actors, scammers, hackers).

    📌 What is Social Media OSINT?
    Open-Source Intelligence (OSINT) on social media refers to the process of collecting and analyzing publicly available information from platforms like Facebook, X (Twitter), LinkedIn, Instagram, TikTok, Reddit, and niche forums & regional platforms. This data, once correlated, can reveal personal details, behavioral patterns, organizational structures, and even vulnerabilities.

    🛠️ Key Techniques in Social Media OSINT
    1. Profile Mapping – Gathering data on usernames, bios, employment history, geotags, and friends/followers.
    2. Geolocation Analysis – Tracking posts, photos, and hashtags to identify locations.
    3. Relationship Mapping – Identifying social connections and networks for threat profiling.
    4. Content Timeline Analysis – Studying posting habits to predict availability or routines.
    5. Username Correlation – Cross-referencing handles across multiple platforms to tie identities together (see the sketch after this post).
    6. Hashtag & Trend Monitoring – Detecting movements, protests, or coordinated disinformation campaigns.

    ⚠️ Risks of Social Media Exposure
    For individuals: identity theft, stalking, spear-phishing.
    For organizations: credential leaks, insider threats, impersonation scams.
    For governments: foreign intelligence gathering, disinformation warfare.

    #OSINT #CyberSecurity #SocialMediaOSINT #ThreatIntelligence #BlueTeam #RedTeam #Infosec #DigitalForensics #CyberDefense #DataPrivacy #IncidentResponse #CyberThreats #Hacking #CyberAwareness #SecurityOperations #CyberRisk #InfoSecCommunity #SOC #OpenSourceIntelligence #Privacy
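    As an illustration of technique 5 (username correlation), here is a hedged sketch that probes candidate profile URLs and records which respond. The URL patterns are assumptions, status-code checks are only approximate (sites redirect, rate-limit, or require login), and this should be used only for authorised investigations:

    ```python
    import requests

    # Assumed profile URL patterns; real platforms change these and may block bots.
    PROFILE_URLS = {
        "GitHub": "https://github.com/{u}",
        "Reddit": "https://www.reddit.com/user/{u}",
    }

    def correlate(username: str) -> dict:
        results = {}
        for site, pattern in PROFILE_URLS.items():
            try:
                resp = requests.get(pattern.format(u=username), timeout=5,
                                    headers={"User-Agent": "osint-demo"})
                results[site] = resp.status_code == 200  # 200 suggests the handle exists
            except requests.RequestException:
                results[site] = None  # blocked or unreachable
        return results

    print(correlate("example_handle"))  # hypothetical handle
    ```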
