"AI will replace fraud analysts" is the wrong conversation.

Every fraud leader I talk to knows this. But they're still asking: "What can I actually do with AI today that won't freak out my team?"

And the pressure is real. Here's what I'm hearing:
• Boards want "AI strategy" yesterday
• Teams fear being replaced
• Leaders stuck in the middle
• Everyone pretending they have it figured out

Let's be honest... Nobody has this figured out yet. But the smartest fraud leaders I'm talking to share one approach:

Small. Specific. Human-in-the-loop.

That's it. That's the entire strategy that's actually working.

Opportunity 1: Start with investigation summaries
Don't automate decisions. Automate documentation.
• Feed transaction details into your tool
• Generate investigation summaries
• Save 2 hours per analyst per day
One team reduced case notes from 20 minutes to 2 minutes. That's 18 minutes back to catch actual fraud.

Opportunity 2: Pattern detection assistant
Not replacing analysis. Augmenting it.
• Upload daily fraud cases
• Ask: "What patterns do you see?"
• Use AI to spot trends humans might miss
One team found 3 new fraud patterns their rules missed.

Opportunity 3: Rule writing helper
The most underrated AI use case.
• Describe the fraud pattern in plain English
• AI drafts the rule logic
• A human reviews, tests, deploys
What took 3 hours now takes 30 minutes.

Stop thinking: AI vs. humans.
Start thinking: AI + humans vs. fraudsters.

Your people know fraud. AI knows patterns. Together, they're stronger.
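Opportunity 3's loop can be sketched in a few lines. This is a hypothetical illustration, not the post's actual tooling: the rule text, field names (`channel`, `device_age_hours`), and thresholds are all invented, and `deploy_after_review` stands in for the human review/test step.

```python
# Hypothetical sketch of "rule writing helper" with a human-in-the-loop
# gate. All field names and thresholds are invented for illustration.

def drafted_rule(txn: dict) -> bool:
    """Rule an AI might draft from the plain-English description:
    'Flag card-not-present transactions over $500 from a device
    first seen less than 24 hours ago.'"""
    return (
        txn["channel"] == "card_not_present"
        and txn["amount"] > 500
        and txn["device_age_hours"] < 24
    )

def deploy_after_review(rule, labelled_cases, min_precision=0.9):
    """Human-in-the-loop gate: the drafted rule only ships if it
    hits the required precision on labelled test cases."""
    flagged = [c for c in labelled_cases if rule(c)]
    if not flagged:
        return False  # a rule that fires on nothing is not deployable
    precision = sum(c["is_fraud"] for c in flagged) / len(flagged)
    return precision >= min_precision
```

The point is the division of labor: the AI drafts the predicate, but a human (plus a labelled backtest) decides whether it ships.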
AI Algorithms For Fraud Detection
Explore top LinkedIn content from expert professionals.
-
A recent TechCrunch article stuck out to me: "GenAI could make KYC effectively useless."

This is something I've been vocal about – the rise of deepfakes and their implications for fraud prevention.

Many companies, including financial institutions and marketplaces, rely on document scanning and facial recognition for identity verification. But here's the hard truth: creating fake documents is incredibly easy, and GenAI makes it even easier for fraudsters.

The bigger concern? Facial recognition can be easily duped. Our faces, often publicly available on social media and various websites, can be used by fraudsters to create masks and bypass facial recognition software. Even liveness detection isn't foolproof anymore – GenAI has become sophisticated enough to bypass both facial recognition and liveness tests.

Relying on public information for identity verification is no longer effective. Sure, it might check the compliance box 🤷🏻♂️ But it's not stopping fraud. The same goes for PII verification: with the sheer number of data breaches, much of this data is effectively public.

Document verification, facial recognition, PII verification – all these methods are vulnerable in the age of GenAI. This isn't just a temporary challenge; it's the future of fraud prevention.

So, if your company is using these traditional methods for KYC and IDV, it's time to rethink your strategy. At Incognia, we're ahead of the curve, developing solutions that address these evolving challenges.
-
𝐇𝐨𝐰 𝐀𝐈 𝐦𝐢𝐭𝐢𝐠𝐚𝐭𝐞𝐬 𝐟𝐫𝐚𝐮𝐝 𝐢𝐧 𝐀𝐜𝐜𝐨𝐮𝐧𝐭-𝐭𝐨-𝐀𝐜𝐜𝐨𝐮𝐧𝐭 𝐏𝐚𝐲𝐦𝐞𝐧𝐭𝐬 by Visa👇
—
𝐓𝐡𝐞 𝐏𝐫𝐨𝐛𝐥𝐞𝐦 𝐢𝐧 𝐀2𝐀 𝐏𝐚𝐲𝐦𝐞𝐧𝐭𝐬:
► Account-to-Account (A2A) payments are growing rapidly, with 161% growth forecast between 2024 and 2028.
► The fundamental characteristics of Real-Time Payments (RTP) - speed, 24/7 availability, irrevocability, and lack of network visibility - contribute to the increasing fraud risk.
► Fraud is evolving with the growth of A2A payments, making it crucial for financial institutions to implement real-time fraud prevention strategies.
—
𝐖𝐡𝐲 𝐢𝐬 𝐀𝐈 𝐂𝐫𝐢𝐭𝐢𝐜𝐚𝐥𝐥𝐲 𝐈𝐦𝐩𝐨𝐫𝐭𝐚𝐧𝐭 𝐢𝐧 𝐅𝐫𝐚𝐮𝐝 𝐏𝐫𝐞𝐯𝐞𝐧𝐭𝐢𝐨𝐧?
► 𝐒𝐩𝐞𝐞𝐝 𝐚𝐧𝐝 𝐀𝐜𝐜𝐮𝐫𝐚𝐜𝐲: AI enables real-time fraud detection and prevention, essential for instant payments that complete within 10 seconds.
► 𝐏𝐚𝐭𝐭𝐞𝐫𝐧 𝐑𝐞𝐜𝐨𝐠𝐧𝐢𝐭𝐢𝐨𝐧: AI can recognize patterns and detect irregularities linked to mule accounts or changed geolocations.
► 𝐀𝐝𝐚𝐩𝐭𝐢𝐯𝐞 𝐋𝐞𝐚𝐫𝐧𝐢𝐧𝐠: AI models adjust to new fraud trends in real time, unlike traditional rules-based systems that require post-loss analysis.
► 𝐑𝐞𝐝𝐮𝐜𝐞𝐝 𝐅𝐚𝐥𝐬𝐞 𝐏𝐨𝐬𝐢𝐭𝐢𝐯𝐞𝐬: AI-enhanced systems provide more accurate fraud detection, reducing the need for manual reviews and minimizing false positives.
► 𝐍𝐞𝐭𝐰𝐨𝐫𝐤-𝐋𝐞𝐯𝐞𝐥 𝐕𝐢𝐬𝐢𝐛𝐢𝐥𝐢𝐭𝐲: AI leverages a multi-financial-institution (FI) view, enabling a comprehensive picture of fraud across payment networks - crucial for detecting cross-network fraud schemes.
—
𝐑𝐮𝐥𝐞𝐬-𝐁𝐚𝐬𝐞𝐝 vs. 𝐀𝐈-𝐄𝐧𝐡𝐚𝐧𝐜𝐞𝐝 𝐒𝐲𝐬𝐭𝐞𝐦𝐬:

𝐑𝐮𝐥𝐞𝐬-𝐁𝐚𝐬𝐞𝐝 𝐒𝐲𝐬𝐭𝐞𝐦:
1️⃣ Transaction Initiated
2️⃣ Massive Volume of Transactions: A high volume of transactions is flagged for manual review due to basic rule triggers.
3️⃣ Manual Review: Transactions are manually reviewed, leading to delays and operational inefficiencies.
4️⃣ Transaction Assessed: Risk is evaluated based on pre-set rules.
5️⃣ Transaction Authorized: If no rule is violated, the payment is authorized.
𝐂𝐡𝐚𝐥𝐥𝐞𝐧𝐠𝐞𝐬: High false positives, time-consuming manual reviews, and delays in payment processing.
🆚 𝐀𝐈-𝐄𝐧𝐡𝐚𝐧𝐜𝐞𝐝 𝐒𝐲𝐬𝐭𝐞𝐦:
1️⃣ Transaction Initiated
2️⃣ Curated Volume of Transactions: AI intelligently filters transactions, reducing the volume that requires review.
3️⃣ AI-Assisted Review: Transactions are reviewed with AI input, providing real-time risk assessment.
4️⃣ Data & Model Assessment: AI evaluates transactions using data patterns and predictive models.
5️⃣ Transaction Authorized: If deemed low-risk, the payment is instantly authorized.
𝐁𝐞𝐧𝐞𝐟𝐢𝐭𝐬: Reduced false positives, real-time risk assessment, operational efficiency, and improved customer experience.
—
Source: Visa
—
► Sign up to 𝐓𝐡𝐞 𝐏𝐚𝐲𝐦𝐞𝐧𝐭𝐬 𝐁𝐫𝐞𝐰𝐬 ☕: https://lnkd.in/g5cDhnjC
► Connecting the dots in payments... and Marcel van Oost
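The contrast between the two flows can be made concrete in a few lines. This is a toy illustration, not Visa's system: the scoring weights, threshold, and field names (`new_geolocation`, `linked_to_mule`) are invented, with a hand-written score standing in for a trained model.

```python
# Toy contrast: a blunt amount rule vs. a model-style risk score
# that curates a smaller review queue. All values are illustrative.

def rules_based_queue(txns):
    # Rules-based flow: every transaction over a fixed amount
    # goes to manual review, regardless of other context.
    return [t for t in txns if t["amount"] > 1000]

def ai_enhanced_queue(txns, review_threshold=0.8):
    # AI-enhanced flow: a composite score (here hand-written, in
    # reality a trained model) combines amount with mule-account
    # and geolocation-change signals; only high scores are reviewed.
    def score(t):
        s = 0.0
        if t["amount"] > 1000:
            s += 0.3
        if t["new_geolocation"]:
            s += 0.3
        if t["linked_to_mule"]:
            s += 0.4
        return s
    return [t for t in txns if score(t) >= review_threshold]
```

On the same traffic, the rules queue is strictly a superset-style blunt filter, while the scored queue only keeps transactions where several signals stack up, which is the "curated volume" step above.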
-
Mastercard's recent integration of GenAI into its fraud platform, Decision Intelligence Pro, has caught my attention. The results are impressive and show the potential of GenAI in advanced business applications.

As someone who follows AI advancements in fraud across the FSI industry, this news is genuinely exciting. The transformative capabilities of GenAI in fortifying consumer protection against evolving financial fraud threats show how this integration can improve the robustness of the AI models detecting fraud.

The financial services sector faces an escalating threat from fraud, including evolving cyber threats that pose significant challenges. A recent study by Juniper Research forecasts global cumulative merchant losses exceeding $343 billion from online payment fraud between 2023 and 2027.

Mastercard's approach to fraud prevention with the GenAI-integrated Decision Intelligence Pro:
- Processing a staggering 143 billion transactions annually, DI Pro conducts real-time scrutiny of an unprecedented one trillion data points, enabling fraud detection in just 50 milliseconds.
- This innovation results in an average 20% increase in fraud detection rates, reaching up to 300% improvement in specific instances.

As we consider strategic imperatives for AI advancement in fraud, this news suggests what future AI models must prioritize:
- Rapidly analyzing vast datasets in real time, staying agile against emerging fraudulent tactics, and assessing relationships between the entities in a transaction.
- Taking a proactive approach: AI systems should anticipate and deflect potential fraudulent events, evolving and learning from emerging threats to bolster security.
- Addressing false positives: models that accurately distinguish legitimate transactions from fraudulent ones are vital to overall security accuracy.
- Committing to continuous innovation that embraces AI is essential to maintaining a secure and trustworthy financial ecosystem. #artificialintelligence #technology #innovation
-
Too many fraud solutions focus just on account opening. But risk evolves across the full user journey. Here's how we build the full picture at Sardine for dynamic scoring 👇

👉 When a user signs up, we create a baseline score based on identity, device, email, and behavior signals
👉 As they transact, we update the score dynamically based on activity like login patterns, transaction details, and behavior changes
👉 We build a holistic profile combining telco, email, device, merchant, and more data into their risk score
👉 Machine learning models continuously monitor and flag anomalies against the baseline
👉 Granular data + models trained on the user's unique activity = precise risk scoring as they grow with your product

Unlike legacy fraud tools, we don't just screen applicants. We provide ongoing monitoring across onboarding, transactions, account changes, and more. This full picture reduces false positives and keeps fraud low across the user lifecycle.
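The baseline-plus-dynamic-update idea can be sketched as a tiny profile object. This is a minimal illustration of the concept, not Sardine's implementation; the weights and the exponential-moving-average update are invented for the example.

```python
# Minimal sketch (illustrative only): a baseline risk score set at
# signup, then nudged by each observed event's risk.

class UserRiskProfile:
    def __init__(self, identity_score, device_score, email_score):
        # Baseline at signup from identity, device, and email signals
        # (0 = low risk, 1 = high risk); weights are invented.
        self.score = (0.5 * identity_score
                      + 0.3 * device_score
                      + 0.2 * email_score)
        self.history = []

    def observe(self, event_risk, weight=0.2):
        # Dynamic update: an exponential moving average, so recent
        # activity (logins, transactions, behavior changes) shifts
        # the score without erasing the baseline.
        self.history.append(event_risk)
        self.score = (1 - weight) * self.score + weight * event_risk
```

A real system would replace both the baseline formula and the update rule with trained models, but the shape is the same: score at onboarding, then keep re-scoring on every event.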
-
AMLintelligence.com is keeping me up to date on the Bunq and De Nederlandsche Bank ping-pong match - a €2.6MM penalty. After reading through the translated complaint (74 pages)... IMO, there are two options:

1) Bunq used an AI model that was not properly trained. Meaning, NVIDIA touted a model that learns 100x faster, but trained by... anyone?? Bueller?? 💥 AI is only as good as the human that trains it 💥, especially when it comes to AML/sanctions.

The other option is:

2) Bunq does not have hyper-suspicious AML analysts dispositioning alerts, which in turn makes the AI model dumber and also leaves them open to regulatory criticism/penalties.

The pdf below highlights some of the issues throughout, but mainly the failures can be rolled up into the following:

1) Alerts did not find the discrepancies that were obviously noted between the customer-facing questions/emails and the actual activity.
2) Alert dispositions did not address the discrepancies that were in the file and that were actually happening (similar to #1).
3) No OSINT information about the customer - especially not enough to justify the amount of money flowing through. This is literally in all of my training: if no one can find you online, how can you generate so much in revenue??
4) Found throughout the complaint - 📢 regurgitation of the transaction activity IS NOT A PROPER ALERT DISPOSITION. That is what a lot of AI models do - we don't need regurgitation of activity in sentence form. We need critical thinking: hyper-suspicious mindsets assuming the worst and proving themselves right (or wrong).
5) Alerts were not generated based on generic descriptions of payments (that seemed to be on repeat).
6) Pgs. 19-20, 24-25, 42, 50, 70 - it is apparent that these alert dispositions were not written by humans, but by a template that some person, somewhere, who has never actually done a proper AML investigation thought was "good".
7) Using templates that rely on LLM/AI models that have not been trained is not a good move - see the example on pg. 26: "A large portion of the comments bunq makes regarding the alerts are difficult to reconcile with the actual transactions. For example, several transactions indicate that 'The user uses the account for regular business expenses (e.g., telecom costs, office costs, salaries, etc.),' while the transaction description states 'PREPAYMENT FOR FLOWERS.'" To quote Dumb & Dumber: "Samsonite! I knew I was way off!" 🙄
8) There are over 30 "risk indicators" outlined by DNB that should be operationalized for all Dutch banks and fintechs. They are literally providing a road map to avoid penalties. Just look at the highlights.

I think this will play into whether or not Bunq gets to continue to use their AI to fight fincrime (this was a court case they won in 2022). #ifollowdirtymoney
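The mismatch in point 7 is mechanically checkable: a templated disposition that claims "regular business expenses" should be reconcilable with the actual transaction descriptions. A toy sketch of that sanity check; the keyword list, the template string it looks for, and the whole function are invented for illustration, not anything Bunq or DNB runs.

```python
# Toy QA check: does a templated "regular business expenses"
# disposition actually match the transaction descriptions?
# Keyword list and template detection are invented for illustration.

BUSINESS_EXPENSE_TERMS = {"telecom", "office", "salary", "salaries", "rent"}

def disposition_is_consistent(disposition: str, txn_descriptions: list) -> bool:
    if "regular business expenses" not in disposition.lower():
        return True  # only the one template we know about is checked here
    # At least one transaction should actually look like a business expense.
    return any(
        any(term in desc.lower() for term in BUSINESS_EXPENSE_TERMS)
        for desc in txn_descriptions
    )
```

Run against the pg. 26 example, a disposition of "regular business expenses" paired with "PREPAYMENT FOR FLOWERS" fails the check, which is exactly the kind of discrepancy a hyper-suspicious human reviewer would catch and a regurgitation template does not.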
-
Are fraudsters smarter than #FraudFighters?

It certainly seems like that sometimes, but having spent years working in big banks, processors, and merchants, I understand firsthand how they can be bogged down by bureaucracy and red tape for the smallest of changes needed to react to quickly changing trends.

While this story is about a criminal who used thousands of fraudulent identities to create accounts with gig economy companies, it also delves into (yes, I used "delve." No, this post wasn't written by ChatGPT, Jordan) why she did it - tackling themes of immigration and the ingenuity of those harmed by a broken system. This is not a political post, don't worry.

While fraud fighters hate when our companies experience loss from fraudsters, sometimes there's... I hesitate to say this, but an appreciation of the cleverness of their methods. This woman exploited gaps in documentary KYC, SSN verification, and device detection to create her own fraud empire.

Fraud technology has improved significantly over the past 5 years (in large part, it was forced to by COVID), but companies spend millions on system upgrades and new vendors and can still fail. Why?

- KYC checks are being bypassed by GenAI videos, images, and IDs
- SSN verification can be expensive and isn't available to most merchants
- Device ID at checkout isn't enough anymore

Just as the woman in this article evolved her methods in response to new challenges, WE should be evolving what we collect, when we collect it, and how we assess it - not just at a single point in time, but across the customer journey.

- Is the user spoofing a video with a virtual camera? (Synthetic fraud)
- Is the device stationary or at an unnatural angle for normal interaction? (Device farms)
- Is the user copying and pasting information like address or SSN? (More synthetic fraud, mules, ID theft)
- Is the user in an active phone call, or do they have remote access software running on their device? (Scam victims)

If your answer to these questions is "I don't know," I'd recommend researching which companies are innovating in this space so the next Priscila that comes along isn't exploiting you.

#fraud #scams #fraudtechnology
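The four questions above amount to a session-signal checklist. A hedged sketch of what evaluating it could look like; every field name here is hypothetical, and real products derive these signals from device/SDK telemetry rather than a plain dict.

```python
# Hypothetical session-signal checklist mirroring the four questions.
# Field names are invented; real signals come from device telemetry.

def session_risk_flags(session: dict) -> list:
    flags = []
    if session.get("virtual_camera"):
        # Spoofed video feed -> synthetic identity risk
        flags.append("synthetic_fraud:spoofed_video")
    if session.get("device_motion") == "perfectly_still":
        # No natural hand movement -> device-farm posture
        flags.append("device_farm:unnatural_posture")
    if session.get("pasted_fields", set()) & {"ssn", "address"}:
        # Pasted PII -> synthetic fraud, mules, or ID theft
        flags.append("synthetic_or_stolen_identity:pasted_pii")
    if session.get("active_call") or session.get("remote_access_tool"):
        # Coached in real time -> likely scam victim
        flags.append("scam_victim:coached_session")
    return flags
```

The value isn't any single flag; it's asking these questions continuously across the customer journey instead of once at onboarding.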
-
You’re hired as a GRC Analyst at a fast-growing fintech company that just integrated AI-powered fraud detection. The AI flags transactions as “suspicious,” but customers start complaining that their accounts are being unfairly locked. Regulators begin investigating for potential bias and unfair decision-making.

How would you tackle this?

1. Assess AI Bias Risks
• Start by reviewing how the AI model makes decisions. Does it disproportionately flag certain demographics or behaviors?
• Check historical false positive rates - how often has the AI mistakenly flagged legitimate transactions?
• Work with data science teams to audit the training data. Was it diverse and representative, or could it have inherited biases?

2. Ensure Compliance with Regulations
• Look at GDPR, CPRA, and the EU AI Act - these all have requirements for fairness, transparency, and explainability in AI models.
• Review internal policies to see if the company already has AI ethics guidelines in place. If not, this may be a gap that needs urgent attention.
• Prepare for potential regulatory inquiries by documenting how decisions are made and whether customers were given clear explanations when their transactions were flagged.

3. Improve AI Transparency & Governance
• Require “explainability” features - customers should be able to understand why their transaction was flagged.
• Implement human-in-the-loop review for high-risk decisions to prevent automatic account freezes.
• Set up regular fairness audits on the AI system to monitor its impact and make necessary adjustments.

AI can improve security, but without proper governance, it can create more problems than it solves. If you’re working towards #GRC, understanding AI-related risks will make you stand out.
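The false-positive check in step 1 can start as simply as computing per-group rates over historical decisions. A minimal sketch; the `(group, flagged, actually_fraud)` record shape is invented for illustration, and a real audit would add significance testing and legally appropriate group definitions.

```python
# Minimal bias-audit sketch: false positive rate per group, i.e.
# the share of legitimate transactions that were wrongly flagged.
from collections import defaultdict

def false_positive_rates(decisions):
    """decisions: iterable of (group, flagged: bool, actually_fraud: bool)."""
    flagged_legit = defaultdict(int)  # legit txns that were flagged
    legit = defaultdict(int)          # all legit txns, per group
    for group, flagged, fraud in decisions:
        if not fraud:
            legit[group] += 1
            if flagged:
                flagged_legit[group] += 1
    return {g: flagged_legit[g] / legit[g] for g in legit}
```

A large gap between groups' rates is the kind of disparity that would draw regulator attention and should trigger the training-data audit described above.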
-
The real breakthrough of AI in auditing is not automation - it’s autonomy.

For years, AI in audit has been used for task-based automation: scanning reports, reconciling transactions, and highlighting discrepancies. But Agentic AI takes it a step further. It doesn’t just flag issues - it investigates them, cross-referencing internal data with external risk factors to assess intent and likelihood of fraud.

Today’s AI: identifies anomalies, flags suspicious transactions, and requires human oversight.
Tomorrow’s AI: assesses fraud probability, suggests corrective actions, and autonomously detects new fraud schemes before they surface.

Agentic AI doesn’t just say, “This transaction looks off,” but rather: “This pattern suggests an employee is routing funds through a third-party shell company. Here’s supporting evidence, historical comparisons, and recommended next steps.”

This is the shift from audit assistant to AI-driven fraud investigator. The future isn’t just AI-powered auditing - it’s AI-led fraud prevention.

The financial world is changing. The companies that build AI-first risk management strategies will be the ones that stay ahead. Are we ready to let AI take a more active role in financial integrity, or are we still too reliant on human oversight?
-
I don’t say this lightly. Our new release of the Sigma V4 Fraud Engine is GAME CHANGING for companies losing millions of dollars annually to digital account opening fraud. I’m talking to the banks, fintechs, marketplaces, governments, gaming companies… Pay attention.

Here’s the performance data on Sigma Identity V4:
🔹 Captures up to 99% of identity fraud in the riskiest 5% of users, compared to just 37% by competitors at the same review rate
🔹 Reduces false positives by more than 40% over Socure's Sigma ID v3
🔹 Delivers an average 20x ROI for customers from increased revenue/false positive reduction, fraud loss reduction, and lower manual reviews

How did we do it? 10 years of making huge investments across 3 key areas:
1️⃣ Digital Signal creates a robust digital fingerprint of each customer, inclusive of devices and their OS, browser languages, geolocations, and relationships to multiple identities.
2️⃣ Entity Profiler allows us to see an identity from its inception in the digital economy, assessing every historical transactional, digital, and relational data point to make up-to-the-second risk decisions.
3️⃣ Integrated Anomaly Detection is a new model that assesses identity behavioral pattern differences at the company, industry, and financial network level, allowing us to identify thousands of risk-indicating variables.

Let’s use an analogy. Think of fighting identity fraud like playing a giant game of 'Spot the Difference' where most of the images are identical copies of a normal, everyday scene. The fraudulent activity is like one subtle but crucial difference hidden in one of these images. It's hard to find because it blends in so well. With the right tools, though, that one different detail lights up, making it easy to spot. This saves the fraud analysts - the players in this game - a lot of time and effort, since they don't have to scrutinize every part of the picture to find the anomaly.

#fraud #ai #banks #fintech
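The 'Spot the Difference' analogy maps directly onto anomaly detection: score every record by how far it sits from the population, and the one odd record lights up. A toy z-score version of the idea (not Socure's model, which draws on thousands of variables rather than one):

```python
# Toy 'Spot the Difference': flag values that sit far from the
# population mean, measured in standard deviations (z-score).
from statistics import mean, stdev

def highlight_anomalies(values, z_threshold=3.0):
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values)
            if sigma > 0 and abs(v - mu) / sigma > z_threshold]
```

With 20 near-identical "scenes" and one planted difference, only the planted index is returned, which is the time-saving the analogy describes: the analyst reviews one highlighted record instead of scrutinizing all 21.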