Fraud Detection Methods

Explore top LinkedIn content from expert professionals.

  • View profile for Rohit Tamma

    Here To Write About Cyber Attacks & Trends in Plain Language | Enterprise Security Operations @ Google

    19,907 followers

    "How Attackers Are Using Password-Protected Files To Bypass Detection and How to Stop Them?" In war, using enemy's weapon against them is a powerful tactic! Cyber attackers apply this meticulously: Using the same defenses meant to protect us to their advantage. It's like turning our shields into their secret weapon. In today's post, "password protected file" is that weapon. Password-protected files are intended to share files securely with others. They can be documents, PDFs, ZIP files etc. They simply prompt for a password when opened. But attackers intelligently use it as an attack vector to bypass detection. Let's see how... 𝗔𝘁𝘁𝗮𝗰𝗸 𝗙𝗹𝗼𝘄: 1) Attacker creates & sends a password protected malware file as an email attachment. 2) Security tools can't analyze them as automated scanning fails (since file is password locked). 3) Victim opens the file that's disguised as legit doc (often as invoice). 4) Victim assumes that since its sensitive file it might have been password protected. Notices the password mentioned in the same email body. Enters it. 5) Victim now opens the files inside > Ransomware or malware gets executed on the device. Thus, attackers bypass the email/network gateway security and reach the device very cunningly. Instead of an attachment, a common trend these days is to use password protected Dropbox or Google Drive file link to achieve the same. 𝗛𝗼𝘄 𝗰𝗮𝗻 𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗔𝗻𝗮𝗹𝘆𝘀𝘁𝘀 𝗰𝗼𝘂𝗻𝘁𝗲𝗿 𝘁𝗵𝗶𝘀? --> Depending on your company requirements, consider blocking or quarantining emails with password-protected attachments. (With the current enterprise secure sharing options available, users should not be relying on password protected files anyway). --> A few email security vendors do support scanning of password protected files if the password is present in the mail body. Turn on these features for SOC team's visibility. --> To tackle these attacks, evaluate what dynamic preventative security controls at web browser and end point level are present. i.e. what controls do you have if the file redirects the user to a malicious site or attempts to install malware? --> Educate the users about these scenarios. Tell them that password protected files are suspicious. Tell them that if the password is listed in the same email, it's even more suspicious. If you enjoyed this or learned something, follow me at Rohit Tamma for more in future! #ransomware #incidentresponse #cybersecurity #informationsecurity #cyberattack #threatdetection

  • View profile for Dan Oshinsky

    Growing loyal audiences and driving revenue via newsletters • Working with newsrooms, non-profits, and indie writers • Want more out of your newsletter strategy? Let’s chat.

    8,609 followers

    I remember the first time a spambot attacked one of the sign-up pages at BuzzFeed. At first, we didn’t realize what was happening. We were looking at our email lists and saw that a ton of new subscribers were signing up for our newsletters that day — exciting! But then we looked a little closer. Almost all of the subscribers were from the same domain, yahoo.co.uk, which seemed odd. And then we looked even closer: The sign-ups were coming in so quickly — dozens of new yahoo.co.uk emails every minute — that there was no way the email addresses were submitted by actual humans. That’s when we realized that something was seriously wrong. But we didn’t realize how much trouble we were in. We were the victims of a spambot, which had been crawling the web looking for a form like ours. These bots are usually looking for forums with a comment section where they can drop a link to a page where someone can buy something, like pharmaceutical drugs. The bots don’t always realize that they’ve found a newsletter sign-up form — not a comment section. And if lots of bots end up on your list, it can cause serious deliverability issues. So what can you do about them?

    1) You can use a third-party tool, like Kickbox, to verify email addresses before adding them to your list.
    2) You can use CAPTCHA, like we eventually did at BuzzFeed, to shut down bot activity on key forms.
    3) You can set up a honeypot, a hidden field only a bot will fill out, and suppress any email address that completes it.
    4) You can use double opt-in to require an extra confirmation before an address is added to the list. (A minimal sketch of this step follows the post.)

    Your strategy might even involve multiple steps — many teams use CAPTCHA and double opt-in, for instance. Every newsletter should have a game plan for keeping its list clean. I’ve got more ideas here (https://lnkd.in/g89f2553) about how to build out the right strategy for your newsletter.

    –––

    📷 Below is a screenshot of the BuzzFeed newsletter page. The CAPTCHA logo in the bottom right corner — three overlapping arrows of different colors — indicates that the form is secured by CAPTCHA.
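    A minimal sketch of the double opt-in step, assuming Python and the standard library: a signed, time-limited confirmation token is emailed to the address, and the subscriber is only added when the token comes back valid. The token layout, TTL, and signing key are assumptions, not BuzzFeed's implementation.

    ```python
    # Hypothetical double opt-in helper: issue a signed confirmation token at
    # sign-up, subscribe the address only after the token is confirmed.
    import base64
    import hashlib
    import hmac
    import time

    SECRET = b"replace-with-a-real-secret"  # assumed signing key

    def make_token(email: str, ttl_seconds: int = 86400) -> str:
        expires = int(time.time()) + ttl_seconds
        payload = f"{email}|{expires}".encode()
        sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        return base64.urlsafe_b64encode(payload).decode() + "." + sig

    def confirm_token(token: str) -> str | None:
        """Return the email address if the token is valid and unexpired."""
        try:
            payload_b64, sig = token.rsplit(".", 1)
            payload = base64.urlsafe_b64decode(payload_b64.encode())
        except Exception:
            return None
        expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return None
        email, expires = payload.decode().rsplit("|", 1)
        if time.time() > int(expires):
            return None
        return email
    ```

    The sign-up form only stores the address as pending and emails a link containing make_token(email); the address is actually subscribed when confirm_token() returns it.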

  • View profile for David Hartstein

    Making nonprofit websites easier | Co-Founder @ Wired Impact | Neurodiversity advocate | Social good nerd

    2,258 followers

    Spammy form submissions spiked to 4,530 PER DAY for our nonprofit clients. Here’s how we cut it by 99.8%, giving them back some time (and sanity). We’ve always had anti-spam tools in place. But the bots were getting better at slipping through the cracks. So we added two new fields to website forms. Both are hidden from visitors, so they don’t impact the form submission process.

    1. Honeypot
    A honeypot is a hidden field that’s designed solely to bait spam bots into filling it out. Since visitors can’t see it, they’ll never complete it. That way, if this custom honeypot is filled out, we know it was a bot and we can flag it as spam.

    2. Time Trap
    This field checks how much time passed from when the form was loaded to when it was submitted. Spam bots are fast. Humans aren’t. If a form’s submitted in under two seconds, it’s probably not from a real person.

    If someone somehow does trigger this system, they’ll see a message telling them they were flagged as spam. When a submission makes it through these first two checks, it gets routed into the anti-spam systems we previously had in place to make sure it’s clean before hitting our clients’ inboxes. Spam wasn’t impacting all of our clients equally. But some were getting hit in waves. Thanks to the technical wizardry of the one and only Jonathan Goldford, we're down from 4,530 spammy messages per day to a much more manageable 11! Which means more time for nonprofits to focus on work that moves their mission forward.
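    A minimal server-side sketch of the two checks above, assuming Python; the honeypot field name and the hidden render-timestamp field are assumptions, not the Wired Impact implementation.

    ```python
    # Hypothetical server-side spam gate combining a honeypot and a time trap.
    import time

    MIN_SECONDS_TO_SUBMIT = 2  # humans rarely finish a form this fast

    def is_spam(form: dict) -> bool:
        # Honeypot: hidden with CSS, so a human never fills it in. Any value
        # here means a bot auto-completed every field it found.
        if form.get("website_url", "").strip():  # assumed honeypot field name
            return True
        # Time trap: the form embeds its render timestamp in a hidden field;
        # compare it with the time of submission.
        rendered_raw = form.get("rendered_at")
        if rendered_raw is None:
            return True  # field missing: the form wasn't rendered by us
        try:
            rendered_at = float(rendered_raw)
        except ValueError:
            return True
        return time.time() - rendered_at < MIN_SECONDS_TO_SUBMIT
    ```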

  • View profile for John George

    Hacking Voice AI 👨💻

    2,226 followers

    After seeing multiple devs lose hundreds to voice AI form spam, here's a breakdown of effective mitigations by use case:

    For authenticated users:
    ‣ Rate-limit per session and implement temporary account suspension for abuse
    ‣ Escalate unusual activity patterns to manual review/support contact
    ‣ This works because you have persistent identity to enforce consequences

    For anonymous public forms (the harder problem):
    ‣ Use systems that generate single-use verification tokens confirming human interaction
    ‣ Modern reCAPTCHA operates invisibly across pages, analysing comprehensive behavioural profiles: mouse trajectories, keystroke timing, Canvas/WebGL fingerprints, scrolling patterns, device characteristics, and Google account signals
    ‣ When it determines you're human, it issues a time-limited verification token (valid for 2 minutes, single use only)
    ‣ Your API validates this token server-side with Google before processing the request (see the sketch after this post)
    ‣ This creates per-request proof-of-humanity without requiring traditional session management

    Universal protections:
    ‣ Hard spending caps and call duration limits
    ‣ IP-based rate limiting and geographic restrictions by country/area code
    ‣ Integration with fraud detection services

    Advanced verification:
    ‣ SMS confirmation to validate phone ownership before calling
    ‣ ⚠️ Critical: This creates SMS bombing attack vectors, so apply rate limiting and CAPTCHA protection to SMS endpoints too

    The fundamental vulnerability: Many voice AI implementations expose API credentials directly in browser dev tools. That makes ALL other protections worthless, since attackers can bypass your frontend entirely and call the APIs directly. The endpoint that triggers the call is the one that must be protected.

    The uncomfortable truth: perfect security for truly open services doesn't exist. You can only make abuse expensive and annoying enough to deter most attackers.

    #VoiceAI #WebSecurity #BotProtection
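    A minimal sketch of the server-side token check described above, assuming Python with the requests library and Google's public siteverify endpoint; the start_call wrapper, the form field names, and the 0.5 score threshold are assumptions, not the post author's implementation.

    ```python
    # Hypothetical gate in front of the endpoint that actually triggers a call:
    # verify the reCAPTCHA token with Google before doing anything billable.
    import requests

    RECAPTCHA_SECRET = "server-side-secret"  # never shipped to the browser
    VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

    def token_is_human(token: str, remote_ip: str | None = None) -> bool:
        resp = requests.post(
            VERIFY_URL,
            data={"secret": RECAPTCHA_SECRET, "response": token, "remoteip": remote_ip},
            timeout=5,
        )
        result = resp.json()
        # reCAPTCHA v3 adds a 0.0-1.0 score; tokens are single-use and expire
        # after roughly two minutes, so verify at request time, server-side.
        return bool(result.get("success")) and result.get("score", 1.0) >= 0.5

    def start_call(form: dict, remote_ip: str) -> dict:
        if not token_is_human(form.get("g-recaptcha-response", ""), remote_ip):
            return {"status": 403, "error": "failed human verification"}
        # ...then rate-limit by IP, enforce spend caps, and place the call...
        return {"status": 200}
    ```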

  • View profile for Mackenzie Jackson

    Developer and Security Advocate @ Aikido Security

    17,609 followers

    𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝘂𝘀𝗲𝗱 𝘁𝗼 𝗯𝗲 “𝗶𝗳 𝘁𝗵𝗶𝘀 𝘁𝗵𝗲𝗻 𝘁𝗵𝗮𝘁.” 𝗡𝗼𝘄 𝗶𝘁’𝘀 “𝗺𝗮𝘆𝗯𝗲 𝘁𝗵𝗶𝘀, 𝗽𝗼𝘀𝘀𝗶𝗯𝗹𝘆 𝘁𝗵𝗮𝘁, 𝗴𝗼𝗼𝗱 𝗹𝘂𝗰𝗸.” 🥲

    One of the biggest takeaways from my conversation with Steve Giguere is the rapid move from 𝘥𝘦𝘵𝘦𝘳𝘮𝘪𝘯𝘪𝘴𝘵𝘪𝘤 𝘵𝘰 𝘯𝘰𝘯-𝘥𝘦𝘵𝘦𝘳𝘮𝘪𝘯𝘪𝘴𝘵𝘪𝘤 𝘴𝘦𝘤𝘶𝘳𝘪𝘵𝘺.

    𝗦𝗼 𝘄𝗵𝗮𝘁 𝗱𝗼𝗲𝘀 𝘁𝗵𝗮𝘁 𝗲𝘃𝗲𝗻 𝗺𝗲𝗮𝗻? Deterministic security is how most tools have always worked. You write a set of rules; if a rule fails, you have an incident. 𝘐𝘵’𝘴 𝘢 𝘤𝘭𝘦𝘢𝘳 𝘱𝘢𝘴𝘴 𝘰𝘳 𝘧𝘢𝘪𝘭 𝘴𝘺𝘴𝘵𝘦𝘮. Sure, there’s nuance, but that model has dominated since the first SAST rules were carved in stone.

    𝗘𝗻𝘁𝗲𝗿 𝗔𝗜. 𝗦𝘂𝗱𝗱𝗲𝗻𝗹𝘆 𝘄𝗲 𝗮𝗿𝗲 𝗶𝗻 𝗮 𝗻𝗼𝗻-𝗱𝗲𝘁𝗲𝗿𝗺𝗶𝗻𝗶𝘀𝘁𝗶𝗰 𝘄𝗼𝗿𝗹𝗱. Applications are being powered by models where users interact with data, services, even your beloved MCP servers (buzzword unlocked ✅) using natural language. And natural language is messy. Inputs and outputs are unpredictable. Try writing a neat little ruleset for that. The game changes here. Old-school rule-based security is still relevant and necessary, but against AI-native apps it will lose most of its teeth. (A tiny illustration of the difference follows this post.)

    Still not convinced? Look at the endless cat-and-mouse battle over prompt injection.
    𝘜𝘴𝘦𝘳 𝘧𝘪𝘯𝘥𝘴 𝘢 𝘱𝘳𝘰𝘮𝘱𝘵 𝘪𝘯𝘫𝘦𝘤𝘵𝘪𝘰𝘯
    𝘍𝘳𝘰𝘯𝘵𝘪𝘦𝘳 𝘮𝘰𝘥𝘦𝘭 𝘣𝘭𝘰𝘤𝘬𝘴 𝘪𝘵
    𝘜𝘴𝘦𝘳 𝘧𝘪𝘯𝘥𝘴 𝘢𝘯𝘰𝘵𝘩𝘦𝘳 𝘰𝘯𝘦
    𝘍𝘳𝘰𝘯𝘵𝘪𝘦𝘳 𝘣𝘭𝘰𝘤𝘬𝘴 𝘢𝘨𝘢𝘪𝘯
    𝘙𝘦𝘱𝘦𝘢𝘵 𝘧𝘰𝘳𝘦𝘷𝘦𝘳

    Check out my full conversation with Steve on the future of security and what we can do about it. Link in comments 👇

    Pro tip... apparently using an LLM to detect malicious inputs is not a viable solution.... Lakera

    #AIsecurity #applicationSecurity #CyberSecurity #generativeAI #FutureOfSecurity
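    A tiny illustration of the contrast, assuming Python; both checks are deliberately simplistic hypotheticals, not anything from the conversation. The point is the shape of the answer: the rule returns a hard pass/fail, while the AI-era check returns a confidence score you have to pick a threshold for and accept errors on.

    ```python
    # Deterministic: the rule either matches or it doesn't.
    import re

    def deterministic_check(query: str) -> bool:
        """Fail the request if it looks like a destructive SQL statement."""
        return not re.search(r";\s*DROP\s+TABLE", query, re.IGNORECASE)

    # Non-deterministic: `classifier` stands in for any model that estimates
    # P(text is a prompt injection). The output is a probability, not a verdict.
    def probabilistic_check(prompt: str, classifier, threshold: float = 0.8) -> tuple[bool, float]:
        score = classifier(prompt)          # e.g. 0.0 to 1.0
        return score < threshold, score     # (allow?, confidence it's malicious)
    ```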

  • View profile for Michel Coene

    Partner, DFIR, Threat Hunting & Threat Intelligence at NVISO

    3,222 followers

    Phishing emails are increasingly using SVG (Scalable Vector Graphics) attachments to avoid detection by security software. SVG files can display graphics, embed HTML, and execute JavaScript, which makes them useful for phishing attacks. These attachments are often used to present phishing forms or to masquerade as official documents, tricking users into downloading malware. MalwareHunterTeam has reported a rise in the use of SVG files in phishing campaigns. Because they are text (XML) rather than rendered pixels, SVG files often bypass security detection tools. Since SVG attachments are rare in legitimate emails, they should be treated cautiously unless expected.

    This screenshot shows an SVG phishing sample (altered by NVISO) displaying a "no-reply" Wikipedia email address. When a victim receives this SVG attachment, it includes their own email address. Upon opening, the SVG mimics a blurred Excel spreadsheet with a green phishing form overlaid on top. The Wikipedia logo is fetched via the legitimate Clearbit logo service (through an HTTPS request to logo[.]clearbit[.]com, which can be detected). This entices the victim to enter their credentials to see the full spreadsheet. When the victim enters their password and clicks the "View Document" button, the credentials are sent to an attacker-controlled web server.

    #phishing #security #detection #awareness
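    A minimal content-inspection sketch in the spirit of the detection points above, assuming Python's standard library: it parses an SVG and reports the features this kind of sample abuses, such as script blocks, foreignObject HTML, inline event handlers, and external HTTP(S) references like the Clearbit logo fetch. Illustrative only, not NVISO's detection logic; prefer a hardened XML parser (for example defusedxml) on untrusted input.

    ```python
    # Hypothetical SVG attachment inspector: list suspicious indicators.
    import re
    import xml.etree.ElementTree as ET

    SUSPICIOUS_TAGS = {"script", "foreignObject"}
    EXTERNAL_REF = re.compile(r"https?://\S+", re.IGNORECASE)

    def svg_indicators(svg_bytes: bytes) -> list[str]:
        hits = []
        root = ET.fromstring(svg_bytes)
        for elem in root.iter():
            tag = elem.tag.split("}")[-1]          # strip the XML namespace
            if tag in SUSPICIOUS_TAGS:
                hits.append(f"element:<{tag}>")
            for name, value in elem.attrib.items():
                attr = name.split("}")[-1]
                if attr.lower().startswith("on"):  # onload=, onclick=, ...
                    hits.append(f"event-handler:{attr}")
                match = EXTERNAL_REF.search(value)
                if match:
                    hits.append(f"external-ref:{match.group(0)}")
        return hits
    ```

    Anything returned here on an unexpected inbound SVG is a good reason to quarantine the message for a closer look.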

  • View profile for Fakhreddin Mirzoiev

    Agentic AI engineer, data engineer, group data product manager, generative AI, agentic AI, machine learning engineer, researcher, prompt engineer.

    2,230 followers

    Why would you want to use an ML model instead of a rule-based solution? Using a Machine Learning (ML) model instead of a rule-based solution makes sense when the problem space is complex, dynamic, or difficult to define with explicit rules. Here’s a breakdown of when and why ML is preferable:

    🔹 1. Data Complexity & Patterns
    Rule-based systems rely on manually defined logic (IF-THEN rules). ML models learn patterns from data, even if those patterns are nonlinear, subtle, or high-dimensional.
    ✅ Use ML when: You can’t easily define the logic (e.g., spam detection, image recognition), or the rules would be too numerous or fragile.

    🔹 2. Adaptability
    Rule-based systems are static—they don’t evolve unless a human updates them. ML models can adapt to new data through retraining.
    ✅ Use ML when: The environment or data changes over time (e.g., recommendation systems, fraud detection).

    🔹 3. Scalability
    As complexity grows, rule-based logic becomes unmanageable and hard to maintain. ML can scale to large datasets and generalize beyond seen examples.
    ✅ Use ML when: You’re dealing with millions of inputs, such as user behaviors, product reviews, or medical records.

    🔹 4. Uncertainty and Probabilistic Output
    Rule-based systems give binary decisions. ML provides probabilities and confidence scores, allowing softer, more nuanced decisions.
    ✅ Use ML when: You want ranked results (e.g., search engines) or need to quantify uncertainty.

    🔹 5. Cost of Manual Rule Writing
    Rules need domain experts and constant updates. ML can reduce the need for expert-crafted logic by letting the data speak.
    ✅ Use ML when: Manually encoding domain knowledge is too time-consuming or expensive.

    🔸 When Rule-Based Might Still Be Better
    - Simple, well-defined logic with low variability.
    - High interpretability is required (e.g., legal compliance).
    - Lack of sufficient labeled data to train an ML model.

    🔹 Real-World Examples (rule-based vs. ML-based)
    - Spam detection: "if the subject contains 'free money'" vs. a model trained on patterns in email content, sender, and behavior.
    - Loan approval: hard-coded thresholds vs. a predictive model using hundreds of features.
    - Face recognition: nearly impossible to hand-code vs. a CNN trained on face images.
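    A toy sketch of the spam-detection row above, assuming Python with scikit-learn and a made-up six-message dataset; it exists only to show the shape of the two approaches: the rule gives a binary answer and breaks on rewording, while the model returns a probability learned from examples.

    ```python
    # Hypothetical rule vs. model comparison on a toy spam dataset.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def rule_based_is_spam(subject: str) -> bool:
        """Deterministic rule: exact phrase match, pass or fail."""
        return "free money" in subject.lower()

    # Tiny invented training set, purely for illustration.
    subjects = ["free money inside", "claim your prize now", "you won a gift card",
                "team meeting notes", "quarterly report attached", "lunch tomorrow?"]
    labels = [1, 1, 1, 0, 0, 0]  # 1 = spam, 0 = legitimate

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(subjects, labels)

    msg = "fr33 money for you"
    print(rule_based_is_spam(msg))            # False: the rule misses the obfuscation
    print(model.predict_proba([msg])[0][1])   # a spam probability, not a hard verdict
    ```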

  • View profile for YUVRAJ BADGOTI

    Threat Researcher at izoologic

    34,139 followers

    🚨 Phishing Alert: Threat Actors are Getting Creative with SVG Attachments! 🚨

    Cybercriminals are constantly innovating to bypass detection, and their latest trick? Using SVG (Scalable Vector Graphics) attachments in phishing emails. Here's why this matters:

    🌐 Traditional Images (JPG/PNG): These are pixel-based grids, easy to scan for malicious content.
    🖌️ SVG Files: Instead of pixels, they use lines, shapes, and text defined by mathematical formulas. This makes them lightweight, scalable, and harder for traditional email filters to analyze effectively.

    ⚠️ How They're Exploited:
    1️⃣ Embedding phishing forms directly in SVG files.
    2️⃣ Using SVGs to deliver malicious payloads while avoiding detection.

    🔒 What Can You Do?
    ✅ Be cautious of unexpected email attachments, especially SVG files.
    ✅ Train employees on identifying phishing attempts.
    ✅ Deploy advanced email security solutions that analyze SVG file content.

    Phishing campaigns are evolving, and staying informed is your first line of defense! 💡 Have you encountered this technique? Share your thoughts and let's discuss how to combat it!

    #CyberSecurity #PhishingAwareness #SVGFiles #ThreatIntel #StaySafeOnline
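    A minimal gateway-side check in line with the last recommendation, assuming Python's standard library and an .eml message as input: because SVG attachments are uncommon in legitimate mail, simply routing them for deeper analysis is a cheap first filter. The routing labels are assumptions.

    ```python
    # Hypothetical first-pass filter: flag inbound mail carrying SVG attachments.
    from email import policy
    from email.parser import BytesParser

    def has_svg_attachment(eml_bytes: bytes) -> bool:
        msg = BytesParser(policy=policy.default).parsebytes(eml_bytes)
        for part in msg.iter_attachments():
            filename = (part.get_filename() or "").lower()
            if part.get_content_type() == "image/svg+xml" or filename.endswith((".svg", ".svgz")):
                return True
        return False

    def route(eml_bytes: bytes) -> str:
        # SVG attachments rarely appear in legitimate mail, so hold them for
        # content analysis instead of delivering straight to the inbox.
        return "hold-for-analysis" if has_svg_attachment(eml_bytes) else "deliver"
    ```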

  • View profile for Aatish M.

    Founder & CEO | Making Data Discovery & Protection Simple for Security Teams (SaaS, Cloud, Gen-AI, On-Prem) | ex-Amazon

    8,522 followers

    Gen AI Just Made Your DLP Even More Useless

    Most DLP solutions were built for files and emails. But guess what? That’s not where data is leaking anymore. Think about it.
    🔹 Employees paste sensitive data into ChatGPT.
    🔹 They upload financial reports to AI transcription tools.
    🔹 AI-generated content bypasses traditional regex-based detection.

    Your legacy DLP won’t save you from Gen AI data leaks. So what’s your plan? Here's the actionable insight:

    Understand the New Threat Landscape
    → Traditional DLP is outdated.
    ↳ It's built for files and emails, not AI.
    → Gen AI tools are where data leaks now.
    ↳ These tools are becoming part of everyday workflows.

    Identify Where Your Data Lives
    → Map out all the platforms where sensitive data might be shared.
    → Think beyond emails and files.
    ↳ Consider AI tools, chat platforms, and SaaS applications.

    Incorporate Advanced Detection Techniques
    → Traditional regex patterns miss AI-generated content.
    → Use machine learning models for better detection.
    ↳ They adapt to new data patterns.

    Choose a Comprehensive Solution
    → Opt for a DLP solution that covers SaaS, Cloud, and Gen AI.
    → Ensure it offers real-time protection and remediation.
    ↳ Look for features like blocking, redaction, and alerting.

    Data security isn’t static. It evolves just like the threats you face. Stay ahead by being proactive, not reactive. What's your next move to safeguard against Gen AI data leaks?
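    To make the regex point concrete, here is a minimal baseline sketch in Python: pattern-based redaction of obvious identifiers before text is sent to an external Gen AI tool. The patterns and placeholder labels are assumptions, and the post's argument is precisely that this kind of matching misses paraphrased or AI-generated content, which is where ML-based classifiers come in.

    ```python
    # Hypothetical baseline: redact obvious identifiers before text leaves
    # for an external AI tool. Regexes like these are the floor, not the ceiling.
    import re

    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def redact(text: str) -> str:
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[REDACTED-{label}]", text)
        return text

    print(redact("Ping jane.doe@example.com, card 4111 1111 1111 1111."))
    ```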

  • View profile for Gerty T.

    Architecting Cloud Security | M365 Expert | Cyber Resilience for Builders & Doers

    2,876 followers

    Too often, organizations invest heavily in firewalls, endpoint security, and threat detection—yet overlook a critical flaw in their environment... Inconsistent mail flow rules. These rules govern how emails move through your system, but without proper oversight, they can quickly turn into a security risk.

    Common issues we find during audits include:
    - Overlapping rules that create unnecessary complexity
    - Whitelisted senders/domains that no longer need access
    - Unmonitored rule changes that open up security gaps

    When mail flow rules aren’t properly managed, it’s like leaving the back door open while reinforcing the front.

    The Business Risk? Inconsistent or outdated mail flow rules expose your organization to:
    1. Data breaches via unmonitored email traffic
    2. Phishing attacks that slip through poorly configured rules
    3. Operational inefficiencies, with IT teams spending valuable time troubleshooting preventable issues

    A proactive approach is essential:
    1. Regular audits to eliminate redundancies and reduce exposure.
    2. Consolidation of mail flow rules into clear, high-level policies that are manageable and secure.
    3. Real-time monitoring through your SIEM to alert you of any unauthorized changes.

    The payoff? Stronger security, reduced complexity, and better control across your email system. This isn’t just a tech issue—it’s about protecting your business from preventable risks and avoiding costly breaches or compliance failures.

    When was the last time you audited your mail flow rules? If it’s been a while, now’s the time to reassess before they become a liability.
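    A minimal audit sketch for the first proactive step, assuming Python and that you have already exported your Exchange Online mail flow (transport) rules to JSON (for example via Get-TransportRule | ConvertTo-Json in Exchange Online PowerShell); the property names below follow that export but should be treated as assumptions to verify against your own data.

    ```python
    # Hypothetical reviewer for an exported list of mail flow rules: surface
    # disabled leftovers, overlapping sender allowlists, and spam-filter bypasses.
    import json
    from collections import Counter

    def audit_rules(path: str) -> None:
        with open(path) as fh:
            rules = json.load(fh)

        # Disabled rules that were never removed add noise to every review.
        for rule in rules:
            if rule.get("State") == "Disabled":
                print(f"[stale] disabled rule still present: {rule.get('Name')}")

        # The same sender domain allowlisted by several rules is a common overlap.
        domain_counts = Counter(
            domain.lower()
            for rule in rules
            for domain in (rule.get("SenderDomainIs") or [])
        )
        for domain, count in domain_counts.items():
            if count > 1:
                print(f"[overlap] {domain} appears in {count} separate rules")

        # Rules that bypass spam filtering deserve a named owner and a review date.
        for rule in rules:
            if rule.get("SetSCL") == -1:
                print(f"[review] spam filtering bypassed by: {rule.get('Name')}")
    ```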
