The Identity Theft Resource Center recently reported a 312% spike in victim notices, now reaching 1.7 billion for 2024. AI is transforming identity theft from a manual craft into a full-scale industrialized operation. Look at what happened in Hong Kong: a clerk wired HK$200M to threat actors during a video call where every participant but one was an AI-generated deepfake. Only the victim was real. Here’s what you need to know 👇 1. Traditional authentication won’t stop these attacks. Get MFA on everything and prioritize high-value accounts. 2. Static identity checks aren't enough—switch to continuous validation. Ongoing monitoring of access patterns is essential after users log in. 3. Incident response plans have to address synthetic identity threats. Focus your response on critical assets. 4. Some organizations are using agentic AI to analyze identity settings in real time, catching out-of-place activity that basic rules miss. Passing a compliance audit doesn’t mean you’re protected against these attacks. The old “authenticate once” mindset needs to give way to a model where verification is continuous and context-aware. If your organization is seeing similar threats, how are you adapting to push back against AI-driven identity attacks? #Cybersecurity #InfoSec #ThreatIntelligence
How to Combat Identity Fraud in Business
Explore top LinkedIn content from expert professionals.
Summary
Identity fraud in business is the exploitation of stolen or fabricated identities to deceive organizations, often for financial or informational gain. With the rise of AI and sophisticated cybercrime tactics, protecting against identity fraud requires proactive measures beyond traditional security practices.
- Adopt continuous monitoring: Implement systems that track behaviors and access patterns in real time to detect and prevent suspicious activities beyond the initial login.
- Strengthen identity verification: Use advanced authentication methods like biometrics and multi-factor authentication (MFA) to secure both human and machine identities.
- Educate and train teams: Regularly educate employees on recognizing scams such as phishing and deepfakes, reinforcing security protocols for sensitive actions like financial transactions.
-
AI is the New Insider Threat – And It’s Already Inside the Building Once upon a time, insider threats were disgruntled employees, careless users, or rogue contractors. Now? They don’t even need to exist. AI-powered identity theft is changing the game. Attackers are no longer just phishing employees—they’re impersonating them, deepfaking voices, cloning credentials, and bypassing security with terrifying accuracy. It’s no longer about who you trust, but what you trust. And while businesses scramble to integrate AI into decision-making, attackers are using it to automate fraud, bypass security, and exploit human and machine identities. The result? An identity landscape more vulnerable than ever. Three Trends That Should Terrify Every CISO Right Now: 🔹 Deepfake Impersonation Attacks Are Getting Smarter – AI-generated voices, emails, and even video calls make it nearly impossible to distinguish real employees from fake ones. (Your boss just called? Are you sure it was them?) 🔹 Machines Are the New Humans – AI bots, service accounts, and machine identities now outnumber human users in many organizations. Attackers know this—and they’re stealing, abusing, and compromising them faster than security teams can respond. 🔹 Zero Trust is No Longer Optional – Traditional security models assumed trust based on credentials. That’s not enough anymore. Every request, every user (human or machine), every access point must be verified. How to Fight Back Against AI-Powered Identity Theft: ✅ Adopt Continuous Behavioral Monitoring – If identity can be faked, behavior is harder to spoof. Look for anomalies in user and machine actions. ✅ Reinforce Authentication Beyond MFA – Hardware tokens, biometric verification, and AI-driven risk analysis are must-haves. ✅ Secure Machine Identities – Don’t just protect human logins—monitor API keys, bots, and service accounts with the same level of scrutiny. 
✅ Train Employees to Spot AI-Powered Attacks – Teach teams how deepfake social engineering works—because if it looks and sounds real, they’ll fall for it. We’re entering a world where "trust, but verify" is no longer enough. It’s verify everything, trust nothing. #AIThreats #InsiderThreat #CyberSecurity #ZeroTrust #HumanRisk
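The "continuous behavioral monitoring" recommendation above can be sketched in a few lines: instead of trusting a session after login, score each action against that identity's own history. This is a minimal illustration with made-up names (`BaselineProfile`, `anomaly_score`), not a real product API; production systems would track many signals, not one metric.

```python
# Hypothetical sketch of continuous behavioral monitoring: score each
# event against a per-identity baseline instead of trusting the login.
# All names here are illustrative, not a real vendor API.
from dataclasses import dataclass, field
from statistics import mean, pstdev

@dataclass
class BaselineProfile:
    """Rolling history of one metric (e.g. MB downloaded per session)."""
    samples: list = field(default_factory=list)

    def record(self, value: float) -> None:
        self.samples.append(value)

    def anomaly_score(self, value: float) -> float:
        """Z-score of a new observation against history; returns 0.0
        until there is enough history to judge."""
        if len(self.samples) < 5:
            return 0.0
        mu, sigma = mean(self.samples), pstdev(self.samples)
        if sigma == 0:
            return 0.0 if value == mu else float("inf")
        return abs(value - mu) / sigma

profile = BaselineProfile()
for mb in [10, 12, 9, 11, 10, 12]:          # normal sessions
    profile.record(mb)

print(profile.anomaly_score(11))   # in-range value -> low score (~0.3)
print(profile.anomaly_score(500))  # mass download -> very high score
```

The point of the design is the one the post makes: a stolen or deepfaked identity presents valid credentials, but its *behavior* still has to diverge from the baseline to do damage.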
-
One thing people get wrong: assuming scams only happen to the gullible. Recently, crypto expert Vitalik Buterin, creator of Ethereum, was the victim of a SIM swap attack. Vitalik’s account was taken over by scammers who posted a fake NFT giveaway that prompted users to click a malicious link, creating losses of nearly $700,000. 🐟🐟🐟 What is a SIM swap? SIM swapping, or simjacking, is a technique used to gain control of a victim’s mobile phone number. Historically, this meant physically swapping the SIM in the device, but today that’s much harder. It’s far more likely that a fraudster socially engineered the telco into transferring the number (e.g., pretending to be Vitalik and claiming to have lost his phone). With control of the number, attackers can then intercept two-factor authentication (“2FA”) codes like one-time SMS messages and use them to log in to social or bank accounts. 🐟🐟🐟 For consumers, having your phone number stored on X/Twitter is probably not a good idea. Where possible, always use 2FA apps like Google Authenticator. 🐟🐟🐟 For businesses: 1. Invest in stronger account recovery processes. Don't allow someone to reset their password via just a phone number. Further, ensure that the password reset is coming from a previously trusted deviceId or IP associated with the user. 2. Check recent SIM changes: there are services that allow you to check if a phone number's SIM mapping was recently changed. 3. Check the device ID: if the password reset request is coming from a new IP or new deviceId, you can send the user through a selfie liveness check as a third-factor step-up – and then match the current selfie with a previous selfie for the user. 🐟🐟🐟 Sardine provides a fraud platform for orchestrating all of this – device ID binding, SIM-to-phone mapping and selfie step-up. We’re always happy to help, drop us a DM if we can :) #fraud #simswap #scam #scamdetection
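The three business recommendations above combine into one decision: allow the reset, step the user up, or deny. Here is a toy sketch of that logic; the inputs (a device-binding store and a carrier SIM-change timestamp) are hypothetical stand-ins for the services the post describes, and the 7-day window is an arbitrary illustrative threshold.

```python
# Illustrative account-recovery risk gate combining a known-device check
# with a recent-SIM-change check. Data sources and thresholds are
# assumptions for the sketch, not any vendor's actual API.
from datetime import datetime, timedelta, timezone

def reset_decision(user_devices: set, request_device: str,
                   last_sim_change: datetime) -> str:
    """Return 'allow', 'step_up' (e.g. selfie liveness), or 'deny'."""
    recently_swapped = (
        datetime.now(timezone.utc) - last_sim_change < timedelta(days=7)
    )
    known_device = request_device in user_devices
    if known_device and not recently_swapped:
        return "allow"
    if recently_swapped and not known_device:
        return "deny"          # highest-risk combination: classic SIM swap
    return "step_up"           # new device OR fresh SIM change alone

now = datetime.now(timezone.utc)
print(reset_decision({"dev-abc"}, "dev-abc", now - timedelta(days=90)))  # allow
print(reset_decision({"dev-abc"}, "dev-zzz", now - timedelta(hours=2)))  # deny
```

Note that the "deny" branch is exactly the Buterin scenario: a phone number whose SIM mapping just changed, presenting from a device the user has never used.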
-
Last week, 2 major announcements seemed to rock the identity world: The first one: A finance worker was tricked into paying $26M after a video call with deepfake creations of his CFO and other management team members. The second one: An underground website claims to use neural networks to generate realistic photos of fake IDs for $15. That these happened should not be a surprise to anyone. In fact, as iProov revealed in a recent report, deepfake face swap attacks on ID verification systems were up 704% in 2023, and I am sure the numbers in 2024 so far are only getting worse. Deepfakes, injection attacks, fake IDs – it is all happening. Someone asked me if the identity industry is now worthless because of these developments, and the answer is absolutely not. There is no reason to be alarmist. Thinking through these cases, it becomes obvious that the problem is with poor system design and authentication methodologies: - Storing personal data in central honeypots that are impossible to protect - Enabling the use of the data for creating synthetic identities and bypassing security controls - Using passwords, one-time codes and knowledge questions for authentication - Not having proper controls for high-risk, high-value, privileged-access transactions Layering capabilities can close these gaps: - Decentralized biometrics can help an enterprise maintain a secure repository of identities that can be checked against every time someone registers an account. (For example, for duplicates, synthetic identities and blocked identities.) If you just check a document for validity and don't run a selfie comparison on the document, or check the selfie against an existing repository, you could be exposing yourself to downstream fraud. - Liveness detection and injection detection can eliminate the risk of presentation attacks and deepfakes at onboarding and at any point in the authentication journey. 
- Biometrics should be used to validate a transaction and 2 or more people should be required to approve a transaction above a certain amount and/or to a new payee. In fact, adding a new payee or changing account details can also require strong authentication. And by strong authentication, I mean biometrics, not one time codes, knowledge questions or other factors that can be phished out of you. It goes back to why we designed the Anonybit solution the way we did. (See my blog from July on the topic.) Essentially, if you agree that: - Personal data should not be stored in centralized honeypots - Biometrics augmented with liveness and injection detection should be the primary form of authentication - The same biometric that is collected in the onboarding process is what should be used across the user journey Then Anonybit will make sense to you. Let's talk. #digitalidentity #scams #deepfakes #generativeai #fraudprevention #identitymanagement #biometricsecurity #privacymatters #innovation #privacyenhancingtechnologies
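The dual-control rule described above (two approvers for payments over a threshold or to a new payee) is simple to express in code. This is a minimal sketch under stated assumptions: the threshold is an arbitrary example figure, and the check presumes each listed approver has already passed strong authentication upstream.

```python
# Sketch of a dual-approval payment rule: amounts over a threshold, or
# payments to a new payee, require two *distinct* approvers. The
# threshold value is illustrative.
DUAL_APPROVAL_THRESHOLD = 10_000

def payment_allowed(amount: float, payee_is_new: bool,
                    approvers: list) -> bool:
    """Approvers are assumed to have each passed strong (e.g. biometric)
    authentication before appearing in the list."""
    needs_two = amount > DUAL_APPROVAL_THRESHOLD or payee_is_new
    distinct = set(approvers)
    return len(distinct) >= (2 if needs_two else 1)

print(payment_allowed(5_000, False, ["alice"]))           # True: routine
print(payment_allowed(50_000, False, ["alice"]))          # False: needs 2
print(payment_allowed(50_000, False, ["alice", "bob"]))   # True
print(payment_allowed(2_000, True, ["alice", "alice"]))   # False: same person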
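The dual-control rule described above (two approvers for payments over a threshold or to a new payee) is simple to express in code. This is a minimal sketch under stated assumptions: the threshold is an arbitrary example figure, and the check presumes each listed approver has already passed strong authentication upstream.

```python
# Sketch of a dual-approval payment rule: amounts over a threshold, or
# payments to a new payee, require two *distinct* approvers. The
# threshold value is illustrative.
DUAL_APPROVAL_THRESHOLD = 10_000

def payment_allowed(amount: float, payee_is_new: bool,
                    approvers: list) -> bool:
    """Approvers are assumed to have each passed strong (e.g. biometric)
    authentication before appearing in the list."""
    needs_two = amount > DUAL_APPROVAL_THRESHOLD or payee_is_new
    distinct = set(approvers)
    return len(distinct) >= (2 if needs_two else 1)

print(payment_allowed(5_000, False, ["alice"]))           # True: routine
print(payment_allowed(50_000, False, ["alice"]))          # False: needs 2
print(payment_allowed(50_000, False, ["alice", "bob"]))   # True
print(payment_allowed(2_000, True, ["alice", "alice"]))   # False: same person
```

Deduplicating approvers via a set is the key line: it is what stops one compromised (or deepfaked) identity from satisfying both approvals.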
-
Can fraud grow unchecked without anyone noticing? That's exactly what happened to one of my clients, because his business's basic internal controls were non-existent, allowing a single employee to process payments, reconcile accounts, and destroy evidence without oversight. Then we helped him. Here’s how: 1️⃣ Segregation of Duties – Strategically divide financial responsibilities so no single person controls multiple critical functions, creating natural checks and balances that make fraud exponentially more difficult. 2️⃣ Authorization Hierarchy – Establish clear approval thresholds and verification protocols for transactions, ensuring appropriate scrutiny based on risk and materiality. 3️⃣ Documentation Standards – Implement rigorous record-keeping requirements that create audit trails for every significant transaction, eliminating gaps where impropriety can hide. 4️⃣ Independent Reconciliation – Deploy regular account reconciliations performed by someone other than the transaction processor, catching discrepancies before they become systemic problems. 5️⃣ Periodic Internal Audits – Conduct surprise reviews of financial processes and transactions, creating accountability and deterrence through unpredictable oversight. The results? ✅ Fraud risk reduced by 94% ✅ Operational errors decreased by 76% ✅ Stakeholder confidence strengthened Later, the business owner confessed: "I trusted completely and verified never. I didn't realize that internal controls aren't about suspicion, they're about creating systems that protect everyone, including honest employees." Strong internal controls make fraud difficult and detection inevitable. Weak controls create temptation and opportunity. I help businesses implement effective internal controls without bureaucratic complexity. DM "Controls" to safeguard your financial future. #internalcontrols #finance #accounting
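The segregation-of-duties control above can even be checked automatically: enumerate conflicting duty pairs and flag any employee who holds both sides of one. A sketch, with an illustrative (far from exhaustive) conflict matrix; the duty names are made up for the example.

```python
# Minimal segregation-of-duties checker. The conflict pairs are example
# assumptions, not a complete control matrix.
CONFLICTS = [
    ("process_payments", "reconcile_accounts"),
    ("create_vendor", "approve_payment"),
]

def sod_violations(assignments: dict) -> list:
    """Return (employee, duty_a, duty_b) for each conflicting pair held
    by a single person."""
    found = []
    for person, duties in assignments.items():
        for a, b in CONFLICTS:
            if a in duties and b in duties:
                found.append((person, a, b))
    return found

staff = {
    # The failure mode from the story: one person pays AND reconciles.
    "pat": {"process_payments", "reconcile_accounts"},
    "kim": {"approve_payment"},
}
print(sod_violations(staff))  # flags pat, not kim
```

Running such a check periodically turns "no single person controls multiple critical functions" from a policy statement into something auditable.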
-
Phishing incidents have gone up 856% in the last year, and we're seeing the impact. It seems that every week there is a new ransomware attack or data breach that was the result of a compromised credential. As a result, identity security is top of mind for most technology and security teams. HYPR customers are proven to reduce account takeover (ATO) by more than 98%. Here is how it's done: 1. Eliminate shareable credentials wherever possible by deploying phishing-resistant passwordless MFA across your identity stores. 2. Implement a credential reset and enrollment process that is protected against social engineering attacks. Relying on KBA and other shareable methods is a weak link in the chain. 3. Correlate identity data and signals across your identity silos and enforce real-time step-up in the form of authentication or identity verification. Remember, in today's AI-enabled threat landscape, organizations must be able to not just verify accounts securely, but also identities. Stay safe out there, friends!
-
𝗜𝗻 𝗝𝘂𝗹𝘆, 𝗮 𝗡𝗼𝗿𝘁𝗵 𝗞𝗼𝗿𝗲𝗮𝗻 𝗵𝗮𝗰𝗸𝗲𝗿 𝗽𝗼𝘀𝗲𝗱 𝗮𝘀 𝗮𝗻 𝗜𝗧 𝘄𝗼𝗿𝗸𝗲𝗿 and duped a cybersecurity company into hiring him. 𝙉𝙤𝙬 𝙩𝙝𝙚𝙮’𝙧𝙚 𝙪𝙨𝙞𝙣𝙜 𝙚𝙭𝙩𝙤𝙧𝙩𝙞𝙤𝙣 𝙖𝙨 𝙖 𝙛𝙤𝙡𝙡𝙤𝙬-𝙪𝙥 𝙖𝙩𝙩𝙖𝙘𝙠. 𝗛𝗶𝗿𝗶𝗻𝗴 𝗳𝗿𝗮𝘂𝗱 𝗷𝘂𝘀𝘁 𝗿𝗲𝗮𝗰𝗵𝗲𝗱 𝗮 𝗻𝗲𝘄 𝗹𝗲𝘃𝗲𝗹. North Korean hackers are no longer satisfied with just infiltrating your company—they’re holding your data hostage and demanding ransoms to keep it from being leaked. It’s a sophisticated evolution in cybercrime, and Western companies are the primary target. 𝗛𝗲𝗿𝗲’𝘀 𝗵𝗼𝘄 𝗶𝘁 𝘄𝗼𝗿𝗸𝘀: Hackers pose as highly qualified IT professionals, using fake resumes, AI-generated identities, and stolen credentials. They go through the hiring process unnoticed, secure a job, and gain access to sensitive company data. But instead of just stealing it, they’re now threatening to expose it—unless you pay up. 𝗦𝗼, 𝘄𝗵𝗮𝘁 𝗰𝗮𝗻 𝘆𝗼𝘂 𝗱𝗼 𝘁𝗼 𝗽𝗿𝗲𝘃𝗲𝗻𝘁 𝘁𝗵𝗶𝘀? 1. 𝗧𝗶𝗴𝗵𝘁𝗲𝗻 𝗬𝗼𝘂𝗿 𝗛𝗶𝗿𝗶𝗻𝗴 𝗣𝗿𝗼𝗰𝗲𝘀𝘀 Use multi-layered identity verification tools and require video interviews with real-time identity checks. Look for red flags like unverified recruiters or unusual interview behaviors (e.g., candidates refusing to turn on their camera). 2. 𝗦𝗰𝗿𝗲𝗲𝗻 𝗝𝗼𝗯 𝗢𝗳𝗳𝗲𝗿𝘀 𝗖𝗮𝗿𝗲𝗳𝘂𝗹𝗹𝘆 Whether you’re a hiring manager or candidate, scrutinize job application invites and offers, especially those from email or messaging services like WhatsApp. Verify the recruiter’s identity and check if the company they represent is legitimate. 3. 𝗠𝗼𝗻𝗶𝘁𝗼𝗿 𝗡𝗲𝘄 𝗛𝗶𝗿𝗲𝘀’ 𝗕𝗲𝗵𝗮𝘃𝗶𝗼𝗿 Even after onboarding, monitor new employees for suspicious activity, such as unexpected access requests or attempts to install unauthorized software. Keep access levels restricted for new hires until they’ve been fully vetted. 4. 𝗨𝘁𝗶𝗹𝗶𝘇𝗲 𝗦𝘂𝘀𝗽𝗶𝗰𝗶𝗼𝘂𝘀 𝗘𝗺𝗮𝗶𝗹 𝗔𝗻𝗮𝗹𝘆𝘀𝗶𝘀 𝗧𝗼𝗼𝗹𝘀 Before clicking on links or opening attachments in unsolicited job offers or other suspicious emails, make use of tools like Field Effect’s Suspicious Email Analysis Service (SEAS) to ensure they’re benign. The rise in this type of extortion shows just how advanced cybercriminals are becoming. 
Protecting your business goes beyond cybersecurity—it’s about reinforcing every layer, 𝗶𝗻𝗰𝗹𝘂𝗱𝗶𝗻𝗴 𝘆𝗼𝘂𝗿 𝗵𝗶𝗿𝗶𝗻𝗴 𝗽𝗿𝗼𝗰𝗲𝘀𝘀. 𝗧𝗮𝗸𝗲𝗮𝘄𝗮𝘆: The next IT hire you make could be an undercover cybercriminal, but you can minimize the risk by staying vigilant, verifying identities, and implementing strict access controls. Intelligent Technical Solutions Mike Rhea #Cybersecurity #HiringFraud #DataExtortion #HRSecurity #RiskManagement #BusinessProtection #EndpointSecurity #ITSecurity #RemoteWork #Leadership #CyberRisk #RiskMitigation #BusinessLeaders #HR
-
Imagine this. You’re staring at a dispute queue flooded with claims. One is from a single mother, another is from a fraudster who’s filed 15 fake disputes this year on similar transactions—across 3 banks. They look exactly the same to the reviewer with the data they have available today. Until now, there was no way to instantly know the difference. So banks and fintechs spent billions trying to guess intent—and still lost over $100B to first-party fraud last year. After 2+ years of research, development and consortium building, this ends today as we finished building a solution that completely flips the script: Socure’s new Dispute Abuse Score. It’s the first and only model purpose-built to predict first-party fraud (like Reg E abuse) before or after a transaction, powered by Socure’s proprietary Identity Graph and consortium—the most advanced identity and risk intelligence network on the planet. Because Socure sits at the center of 10s of billions of identity and risk decisions for the modern enterprise, we uniquely see the full picture. We don’t just analyze a dispute—we recognize the consumer behind it, even across different institutions, even when their identities are purposefully manipulated. What looks like an isolated case to one bank or merchant is actually exposed by Socure as a serial abuser across the network. Built on 350M+ verified identities and 30B+ transactions across our First-Party Fraud Consortium, this score turns dispute teams into decisioning powerhouses: 🔹 Instantly auto-approve low-risk claims 🔹 Flag and stop high-risk repeat abusers in real time 🔹 Require verification (DocV, OTP, employment, bank statements, etc.) only when needed 🔹 Cut investigation backlogs and stop fraud losses before they happen 🔹 Continuously optimize by feeding outcomes back into our models This is how you fight fraud without punishing good customers. This is operational efficiency realized. 
This is the unique power of Socure having one persistent identity, on top of one global graph, delivered via one AI RiskOS platform. This is Socure. The place for the most accurate identity and risk decisions. And we’re just getting started. Stay tuned for more as we continue to innovate and ship at an accelerating rate. (Link to full blog post in the comments)
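The core cross-institution idea the post describes can be sketched in toy form: resolve every dispute to one persistent identity and count claims network-wide, so a pattern invisible to any single bank becomes obvious. This is purely an illustration of the concept, not Socure's actual model or data.

```python
# Toy sketch of consortium-level dispute counting: a serial filer looks
# moderate to each bank alone but extreme across the network. The
# identity resolution step is assumed to have already happened.
from collections import Counter

def network_dispute_counts(disputes: list) -> Counter:
    """Count disputes per resolved identity across all institutions."""
    return Counter(d["identity_id"] for d in disputes)

disputes = (
    [{"bank": "A", "identity_id": "id-77"}] * 5
    + [{"bank": "B", "identity_id": "id-77"}] * 6
    + [{"bank": "A", "identity_id": "id-12"}]
)
counts = network_dispute_counts(disputes)
print(counts["id-77"])  # 11 across two banks; each bank alone sees <= 6
print(counts["id-12"])  # 1: the legitimate, isolated claim
```

A real score would weight recency, amounts, and outcomes rather than raw counts, but the leverage is the same: the join across institutions.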
-
One phone call could have prevented a multinational firm from losing $25.6M. Here's the story – it involves an AI tool anybody with an internet connection and a few bucks can use⤵ An employee of the business got an email, supposedly from the organization's CFO, asking him to pay out $25.6M. He was initially suspicious that it was a phishing email… but sent the money after confirming on a Zoom call with the CFO and other colleagues he knew. The twist: Every other person on the Zoom was a Deepfake generated by scammers. It might sound like a crazy story. But it's one we're going to hear more – As long as cybersecurity practices lag behind publicly available AI. A premium subscription to Deepfakes Web costs $19/month. And the material scammers use to pull hoaxes like this is free – 62% of the world's population uses social media, which is full of: ✔️ Your voice ✔️ Your image ✔️ Videos of you But if that sounds apocalyptically scary, there's no need to panic – Two straightforward cybersecurity practices could have prevented this easily: 1. Monthly training Anyone who can control the movement of money in or out of accounts needs to be trained *monthly* on how to follow your security process. Don't just have them review the policy and sign off. Have them physically go through it in front of you. They need to be able to follow it in their sleep. 2. Identity Verification Integrate as many third-party forms of identity verification as you can stand – then double-check them before *and* during money transfers. A couple of ways to do this: → One-time passcode notifications Send an OTP code to the person asking for a money transfer and have them read it to you from their email or authenticator live on the call. → Push notifications Have a security administrator ask them to verify their identity via push notification. 
I can't guarantee that these 2 steps would've sunk this scam… But the scammers would have needed: - Access to the work phone of whoever they were impersonating (so the phone *and* its passcode) - The password to that person's authenticator or access to their email - At their fingertips the moment the push notification was sent In short: it's possible, but not probable. It's overwhelming to think we can't trust what's in front of our eyes anymore. But my hope is that stories like this will empower people to level up their cybersecurity game. The best practices that will keep us safe are the same as ever – Educated people and simple, secure processes. But the scams are getting more sophisticated. Make sure you're ready. P.S. Maybe you're wondering: "Is my company too small for me to worry about this stuff?" Answer: If more than one person is involved in receiving and sending funds to anyone for any reason at your company… it’s good to start implementing these security practices now.
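The "read me the code from your authenticator" step above works because authenticator codes are standard TOTP (RFC 6238): derived from a shared secret and the current time, so a deepfake on the call cannot produce one. The mechanics can be sketched with Python's stdlib; a real deployment would use a vetted library, this is only to show how the code is computed.

```python
# Minimal TOTP (RFC 6238) sketch using only the stdlib: HMAC-SHA1 over
# a 30-second time counter, dynamically truncated to 6 digits.
import base64
import hashlib
import hmac
import struct
import time
from typing import Optional

def totp(secret_b32: str, for_time: Optional[int] = None,
         digits: int = 6, step: int = 30) -> str:
    key = base64.b32decode(secret_b32)
    t = time.time() if for_time is None else for_time
    counter = int(t // step)                       # time-based counter
    msg = struct.pack(">Q", counter)               # 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

secret = base64.b32encode(b"12345678901234567890").decode()
# RFC 6238 test vector: at time 59 with this key, the 6-digit code is 287082
print(totp(secret, for_time=59))  # 287082
```

Because both sides derive the code independently from the shared secret, the verifier on the call can confirm the requester holds the real enrolled device, which is exactly the factor the Zoom deepfakes lacked.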
-
Is KYC Broken? Here’s the latest... (you need to know) Most companies think KYC is a bulletproof line of defense. The reality: it can be a giant blind spot. Fraudsters have figured out how to bypass identity verification at scale. AI-generated deepfakes, emulators, and app cloners make it easy to create synthetic identities that can pass KYC checks. KYC systems aren’t failing because they are weak; they're failing because they were never built to catch fraud in an AI world. Here’s the exploit: ▪️ Deepfake Technology: AI-generated videos that bypass facial verification. The KYC platform sees a “real” face, but it's not! ▪️ Device Spoofing: Emulators and cloners create multiple fake devices, masking fraudulent activity and enabling scaled attacks. ▪️ Hooking & Tampering: Fraudsters manipulate verification apps to inject fake data directly into the process. The result? Fraudsters pass KYC undetected. Fake accounts skyrocket – payment fraud and chargebacks escalate. Most companies don’t have a good grip on this yet. So what’s the fix? You have to start analyzing devices and behaviors in real time. ✅ Device intelligence: Identify syndicates tied to the same device, accurately. ✅ Behavioral analysis: Detect session anomalies in real time, before fraudsters can cash out. ✅ Continuous monitoring: Fraud doesn’t stop at onboarding or only happen at payment – think "anytime fraud" and monitor accordingly. Fraudsters know KYC is just a checkpoint. They know what you are checking for and how to fool the process. What do you think #fraudfighters?
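The "device intelligence" fix above boils down to a join: group accounts by device fingerprint and flag any device tied to suspiciously many identities, which is how emulator farms surface. A sketch with assumed field names and an arbitrary threshold; real fingerprinting is far richer than a single ID.

```python
# Sketch of device-intelligence syndicate detection: one physical (or
# emulated) device behind many accounts is a fraud-farm signal.
# Threshold and field names are illustrative assumptions.
from collections import defaultdict

MAX_ACCOUNTS_PER_DEVICE = 3

def flag_syndicates(events: list) -> dict:
    """Map device_id -> account set for devices over the threshold."""
    by_device = defaultdict(set)
    for e in events:
        by_device[e["device_id"]].add(e["account"])
    return {d: accts for d, accts in by_device.items()
            if len(accts) > MAX_ACCOUNTS_PER_DEVICE}

events = [
    {"device_id": "emu-1", "account": a}          # emulator: 5 accounts
    for a in ("a1", "a2", "a3", "a4", "a5")
] + [{"device_id": "phone-9", "account": "legit"}]  # normal user

print(flag_syndicates(events))  # flags 'emu-1' (5 accounts), not 'phone-9'
```

Each account in isolation may pass a KYC check; the signal only appears when onboarding events are correlated at the device level, which is the post's "anytime fraud" point.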