AdvisorDefense: The Silent Persistence of BEC - When Expelling the Attacker Isn’t the End

Business Email Compromise (BEC) remains one of the most devastating cyber threats to organizations worldwide. While many assume that kicking a threat actor out of their systems ends the attack, a recent Invictus Incident Response case proves otherwise. Sometimes, attackers persist even after being expelled.

The Attack: A Sophisticated Adversary-in-the-Middle Tactic
The attack began with a well-crafted phishing email disguised as a Dropbox invoice notification. The recipient, believing it to be legitimate, clicked the ‘View on Dropbox’ button and landed on a fake Dropbox login page. Here’s where the real trouble started:
✅ Credentials Captured – The victim entered their login details.
✅ MFA Compromised – The attacker also captured an MFA code, allowing them to bypass additional security layers.
✅ Persistence Achieved – With access to the email account, the attacker configured eM Client, a third-party email application, enabling them to maintain control even after passwords were reset.
✅ Forwarding Rules Set Up – To further maintain access, they created email forwarding rules, ensuring they could continue monitoring inbox activity unnoticed.
The victim eventually caught on. After 3 weeks, IT stepped in to reset passwords, remove forwarding rules, revoke active sessions, and uninstall eM Client. The attacker was expelled, or so they thought!

The Attack Didn’t End There…
Days later, the attacker leveraged the victim’s email identity in new ways:
🚨 Created a Dropbox account using the victim’s email to send fraudulent invoices to the victim’s contacts.
🚨 Set up a WeTransfer account with the victim’s details to distribute more malicious emails.
🚨 Continued the scam, exploiting the trust associated with the victim’s email.

Key Lessons: BEC Attacks Go Beyond the Inbox
1️⃣ MFA Alone Isn’t Enough – Many assume that MFA stops BEC attacks, but attackers are evolving. Adversary-in-the-middle (AiTM) tactics allow them to steal both credentials and MFA codes in real time.
2️⃣ Expelling an Attacker Doesn't Always Mean the End – Even after revoking access, attackers can reuse stolen identities elsewhere to continue fraud.
3️⃣ Continuous Monitoring – Check for newly created accounts using corporate email domains and implement dark web monitoring to detect compromised credentials.

How to Protect Your Organization from BEC Attacks
🔒 Adopt phishing-resistant MFA solutions.
🔒 Use Conditional Access & Impossible Travel Policies to detect anomalous login activity.
🔒 Regularly review third-party email applications connected to business accounts to spot unauthorized apps (see the sketch after this post).
🔒 Enable DMARC to prevent domain spoofing.
🔒 Educate employees on phishing techniques.

Attackers Are Persistent — Your Defense Should Be Too!

#Cybersecurity #BEC #EmailSecurity #ThreatIntelligence #Microsoft365Security
https://lnkd.in/eNZcDd4X
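To make the "review third-party apps and forwarding rules" advice concrete, here is a minimal sketch that lists inbox rules with forward or redirect actions through the Microsoft Graph messageRules endpoint. Token acquisition, the MailboxSettings.Read permission grant, and the user identifier are assumptions to adapt to your own tenant and tooling.

```python
# Minimal sketch: flag inbox rules that forward or redirect mail via Microsoft Graph.
# Assumes a token already granted MailboxSettings.Read (Read.All for other mailboxes);
# fields follow the Graph "messageRule" resource (displayName, isEnabled, actions).
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def suspicious_forwarding_rules(access_token, user):
    """Return inbox rules for `user` whose actions forward or redirect mail."""
    headers = {"Authorization": f"Bearer {access_token}"}
    url = f"{GRAPH}/users/{user}/mailFolders/inbox/messageRules"
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()
    flagged = []
    for rule in resp.json().get("value", []):
        actions = rule.get("actions") or {}
        if any(actions.get(k) for k in ("forwardTo", "redirectTo", "forwardAsAttachmentTo")):
            flagged.append({
                "name": rule.get("displayName"),
                "enabled": rule.get("isEnabled"),
                "actions": actions,
            })
    return flagged

# Example with a hypothetical mailbox:
# for rule in suspicious_forwarding_rules(token, "victim@example.com"):
#     print(rule)
```

Running this periodically (or after any suspected compromise) gives a quick second opinion on what the admin console shows, independent of the victim's client settings.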
Cybersecurity Incident Response Plans
Explore top LinkedIn content from expert professionals.
-
-
‘A recent Wall Street Journal report has updated that account. The report cites sources “familiar with the matter” in claiming that the number of compromised email accounts is in the hundreds of thousands, and that at least two more high-level officials were among those breached by the cyber espionage campaign: assistant secretary of state for East Asia Daniel Kritenbrink, and Ambassador to China Nicholas Burns.

The cyber espionage campaign began with the Chinese hackers somehow getting their hands on a Microsoft signing key, which was then used to forge authentication tokens to slip into email accounts via Outlook.com and Outlook OWA. At least 25 organizations were thought to be impacted, including an unspecified number of federal agencies. The Commerce and State Departments were confirmed to be hit by the breach…

…Microsoft said that a “flaw in code” was what led to the theft of the key that enabled the cyber espionage campaign, but cybersecurity professionals have noted that the attack is also something that can readily be spotted if an included Microsoft logging feature is enabled. The trouble is, that feature is only available at a higher paid tier of its Purview Audit service that not all of the government agencies are subscribed to. This immediately led to government calls to make this feature freely available to all customers. Microsoft and CISA have since agreed to an expansion of the company’s cloud logging capability, making it available to a broader range of customers for free in an initiative that will roll out “over the coming months.”’

https://lnkd.in/gqhs-x8y
-
“Why is AI making some security teams more vulnerable? The answer has nothing to do with code.”

Last year, a client asked me to “infuse AI” into their threat detection. Within weeks, alerts tripled—but so did burnout. Analysts grew numb to the noise, missing a real breach buried in automated false positives. The irony? Their shiny AI tool worked perfectly.

AI isn’t a cybersecurity savior—it’s a force multiplier for human bias.
-> Trained on historical data? It inherits past blindspots (like ignoring novel attack patterns).
-> Tuned for speed? It prioritizes loud threats over subtle ones (think ransomware over data exfiltration).
The most advanced SOCs now treat AI like a scalpel, not a sledgehammer: augmenting intuition, not replacing it.

Gartner’s 2024 report claims 73% of breaches involved AI-driven tools. Dig deeper, and you’ll find 89% of those failures traced back to misconfigured human workflows—not model accuracy. Example: A Fortune 500 firm blocked 100% of phishing emails… while attackers pivoted to API exploits the AI never monitored.

Before deploying any AI security tool, ask: “What will my team stop paying attention to?” Then:
1. Map its alerts to your actual risk profile (not vendor hype).
2. Reserve AI for repetitive tasks (log analysis) vs. high-stakes decisions (incident response).
3. Force a weekly “false positive audit” to retrain both models and analysts.

AI won’t hack itself. The real vulnerability sits between the keyboard and the chair—but that’s fixable.
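As one way to run the weekly "false positive audit" in step 3, the sketch below tallies analyst verdicts per detection rule from exported triage records. The record format and field names (rule, verdict) are assumptions; most case-management or SOAR exports can be mapped to something similar.

```python
# Minimal sketch of a weekly false-positive audit: given exported triage records
# (hypothetical format: rule name plus analyst verdict), report which detection
# rules generate mostly noise so they can be tuned or retired.
from collections import Counter

def false_positive_report(triage_records, min_alerts=20):
    """Return (rule, false-positive rate, alert count) tuples, noisiest rules first."""
    totals, fps = Counter(), Counter()
    for rec in triage_records:
        totals[rec["rule"]] += 1
        if rec["verdict"] == "false_positive":
            fps[rec["rule"]] += 1
    report = [
        (rule, fps[rule] / count, count)
        for rule, count in totals.items()
        if count >= min_alerts  # ignore rules with too few alerts to judge fairly
    ]
    return sorted(report, key=lambda r: r[1], reverse=True)

# Example with made-up data:
# records = [{"rule": "impossible_travel", "verdict": "false_positive"}, ...]
# for rule, rate, n in false_positive_report(records):
#     print(f"{rule}: {rate:.0%} FP over {n} alerts")
```

The output is a conversation starter for the audit meeting, not an automatic kill list: a rule with a high false-positive rate may still catch the one incident that matters.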
-
UK Ministry of Defence Mail Servers Left Critically Exposed for Years: A Catastrophic Security Oversight.

While the Defence Secretary hailed yesterday as the “first day of accountability” following a highly publicised email breach, the reality tells a much darker story. The notion that a single email error triggered a global security failure is possibly dangerously misleading.

According to research shared today with both the UK Home Office and the UK Ministry of Defence, the Ministry of Defence’s critical Mail Exchange (MX) servers—handling all inbound and outbound email traffic—have been catastrophically insecure and exposed since at least February 2021. This scandal is far from an isolated human error. This is systemic negligence of the most basic cybersecurity principles.

For over four years, long before the 'rogue email' was sent in 2022, UK Ministry of Defence communications—potentially including military strategies, intelligence, personnel data, and international correspondence—have been traversing insecure infrastructure, easily interceptable by hostile states or cybercriminals. The UK Ministry of Defence have unsurprisingly suffered several cyber incidents over the last several years.

Rather than owning this critical exposure, a single, unnamed individual has seemingly been scapegoated to divert attention from this continued catastrophic institutional failure. This is not accountability; it’s damage control and damage limitation. Meanwhile, the vulnerable MX servers remain exposed, perpetuating unacceptable risk.

The implications are staggering: operational compromise, intelligence leaks, and reputational damage costing the UK taxpayer tens of billions. The real story here, and the real threat, isn’t a rogue email—it’s the ongoing failure to secure the nation's most sensitive digital communications.
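The post does not publish the specific weaknesses, but one basic hygiene check anyone can run against their own mail domain is whether the published MX hosts advertise STARTTLS. A minimal sketch, assuming the third-party dnspython package and authorization to probe the domain in question:

```python
# Minimal sketch: check whether a domain's MX hosts advertise STARTTLS -- one basic
# hygiene signal, not a full assessment of the exposure described in the post.
# Requires `dnspython`; run only against domains you are authorized to test.
import smtplib
import dns.resolver

def mx_starttls_report(domain):
    """Map each MX host for `domain` to True/False for STARTTLS, or an error string."""
    report = {}
    for record in dns.resolver.resolve(domain, "MX"):
        host = str(record.exchange).rstrip(".")
        try:
            with smtplib.SMTP(host, 25, timeout=10) as smtp:
                smtp.ehlo()
                report[host] = smtp.has_extn("starttls")
        except (OSError, smtplib.SMTPException) as exc:
            report[host] = f"unreachable: {exc}"
    return report

# Example with a hypothetical domain:
# print(mx_starttls_report("example.com"))
```

A passing result here proves very little on its own; a failing one is the kind of externally visible gap the post argues should never persist for years.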
-
Exploring the post-compromise phase of Business Email Compromise (BEC) attacks - with key focus on email forwarding rules as a persistence mechanism.

@kj_ninja25 provides a technical deep-dive into:
• Common attacker techniques: Creating rules that filter for keywords like "invoice" and "payment"
• Implementation methods: Both manual (Outlook settings) and programmatic (Graph API)
• Detection strategies: Leveraging Microsoft Defender XDR alerts and custom KQL queries
• Key indicators: Watch for "New-InboxRule" and "Set-InboxRule" operations

Practical guidance for defenders without unnecessary alarmism. Worth reading if you're responsible for Microsoft 365 security monitoring.
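For teams that want a rough equivalent of the keyword-focused detection above outside Defender XDR, the sketch below scans an exported unified audit log for New-InboxRule and Set-InboxRule operations whose parameters mention billing-themed keywords. It assumes a JSON-lines export with Operation, UserId, and Parameters fields (the exact schema varies by export method) and is a starting point, not a substitute for the KQL queries in the original post.

```python
# Minimal sketch: scan an exported Microsoft 365 unified audit log (JSON lines) for
# inbox-rule operations whose parameters contain keywords attackers commonly filter on.
# The export format is an assumption -- adjust field names to your actual records.
import json

KEYWORDS = ("invoice", "payment")
RULE_OPS = {"New-InboxRule", "Set-InboxRule"}

def suspicious_rule_events(path):
    """Return audit records for rule operations that reference the keywords."""
    hits = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("Operation") not in RULE_OPS:
                continue
            params = {p.get("Name"): str(p.get("Value", ""))
                      for p in record.get("Parameters", [])}
            blob = " ".join(params.values()).lower()
            if any(word in blob for word in KEYWORDS):
                hits.append({"user": record.get("UserId"),
                             "operation": record["Operation"],
                             "parameters": params})
    return hits

# Example: print(suspicious_rule_events("audit_export.jsonl"))
```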
-
𝗢𝗳𝗳𝗶𝗰𝗲 𝗼𝗳 𝘁𝗵𝗲 𝗖𝗼𝗺𝗽𝘁𝗿𝗼𝗹𝗹𝗲𝗿 𝗼𝗳 𝘁𝗵𝗲 𝗖𝘂𝗿𝗿𝗲𝗻𝗰𝘆 (𝗢𝗖𝗖) suffered a recent cloud email breach that highlighted critical vulnerabilities in email security and access management, with broader implications for all federally regulated institutions.

𝚂̲𝚞̲𝚖̲𝚖̲𝚊̲𝚛̲𝚢̲ ̲𝚘̲𝚏̲ ̲𝚝̲𝚑̲𝚎̲ ̲𝙾̲𝙲̲𝙲̲ ̲𝙱̲𝚛̲𝚎̲𝚊̲𝚌̲𝚑̲
An attacker gained unauthorized access to a privileged administrative email account within the Microsoft environment. The breach went undetected for 8 months, during which sensitive government communications were silently exfiltrated. More than 150K email messages were compromised, affecting around 100 officials. The incident exposed critical shortcomings in access control enforcement, monitoring, and response protocols.

𝙺̲𝚎̲𝚢̲ ̲𝙵̲𝚊̲𝚒̲𝚕̲𝚞̲𝚛̲𝚎̲𝚜̲ ̲𝙸̲𝚍̲𝚎̲𝚗̲𝚝̲𝚒̲𝚏̲𝚒̲𝚎̲𝚍̲
1. Overprivileged Access – An administrative account with wide mailbox visibility was compromised, facilitating prolonged data exfiltration.
2. Delayed Detection – Anomalous behavior went unnoticed for months, raising concerns about the efficacy of real-time monitoring and alerting.
3. Stale and Unlocked Service Accounts – There were no policies in place for password rotation, inactivity lockout, or login attempt lockout for service accounts, making them vulnerable to brute-force or credential stuffing attacks.
4. Unaddressed Internal Warnings – Known risks flagged in prior audits related to email and access security had not been remediated in time.
5. Insufficient Conditional Access Policy Enforcement – The compromised account, linked to Azure, bypassed MFA and geo restrictions due to a poorly enforced conditional access framework. VPN usage further masked malicious activity.

𝙻̲𝚎̲𝚜̲𝚜̲𝚘̲𝚗̲𝚜̲ ̲𝚕̲𝚎̲𝚊̲𝚛̲𝚗̲𝚎̲𝚍̲
1. Enforce Microsoft Conditional Access Policies – Ensure all accounts, including service accounts, are subject to robust Conditional Access, MFA, and geo-restrictions.
2. Tighten Access Control – Limit and monitor privileges of administrative and service accounts; apply just-in-time access models.
3. Audit and Harden Service Accounts – Eliminate hardcoded credentials, enforce regular password rotation, enable account lockouts after failed login attempts, and set inactivity thresholds (a sketch for spotting dormant accounts follows this post).
4. Strengthen Detection – Invest in behavioral analytics, adaptive authentication, and cloud-native threat detection tools.
5. Review and Limit Privileges – Conduct a review of privileged accounts and implement RBAC and JIT access where possible.
6. Ensure Compliance with Secure Baselines – Adopt secure baseline configurations like those in DHS CISA BOD 25-01 - Secure Cloud Baseline [SCuBA] (as stated in the OCC response).

The 𝗢𝗖𝗖 𝗯𝗿𝗲𝗮𝗰𝗵 is a cautionary tale—reactive controls alone are insufficient in today’s environment. Proactive hardening of identity, access, and cloud email infrastructure must be a top priority.
https://lnkd.in/ef_4DQ3V
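To illustrate lesson 3, here is a minimal sketch that flags accounts with no recorded sign-in for a configurable number of days using the Microsoft Graph signInActivity property. The token (granted AuditLog.Read.All and Directory.Read.All) and the 90-day threshold are assumptions; dormant accounts surfaced this way still need manual review before any lockout or removal.

```python
# Minimal sketch: list accounts with no sign-in for N days via the Graph
# signInActivity property, following @odata.nextLink for pagination.
from datetime import datetime, timedelta, timezone
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def stale_accounts(access_token, max_idle_days=90):
    """Return accounts whose last recorded sign-in is older than the cutoff (or missing)."""
    headers = {"Authorization": f"Bearer {access_token}"}
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_idle_days)
    url = f"{GRAPH}/users?$select=displayName,userPrincipalName,signInActivity"
    stale = []
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        for user in payload.get("value", []):
            last = (user.get("signInActivity") or {}).get("lastSignInDateTime")
            last_dt = datetime.fromisoformat(last.replace("Z", "+00:00")) if last else None
            if last_dt is None or last_dt < cutoff:
                stale.append({"upn": user.get("userPrincipalName"), "lastSignIn": last})
        url = payload.get("@odata.nextLink")  # keep paging until exhausted
    return stale
```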
-
Navigating the Complexities of BEC Investigations: A Practical Guide

Business Email Compromise (BEC) remains one of the most financially devastating cyber threats, relying heavily on social engineering and deception rather than malware. For cybersecurity professionals, a robust and practical approach to investigating these incidents is crucial. A recent blog post from Eric J., SecOps Engineer at Prophet Security, titled "Investigating Business Email Compromise (BEC): A Practical Approach," offers invaluable insights into this process.

Key takeaways and actionable advice include:
✅ Start with the Original Email: Always retrieve the original email to preserve vital metadata for forensic analysis. Forwarded messages often strip away critical information.
✅ Deep Dive into Email Headers: Examine authentication artifacts like SPF, DKIM, and DMARC to verify sender legitimacy.
✅ Investigate Sender Identity & Intent: Look for spoofed, compromised, or lookalike domains, including sophisticated homograph attacks.
✅ Analyze Behavioral Signals: Scrutinize authentication logs for unusual logins, malicious inbox rules, or other anomalous activities.
✅ Cross-Check Business Processes: Identify deviations from established financial or operational norms.
✅ Assess Lateral Impact: Determine the extent of mailbox exposure and potential lateral movement within your environment.
✅ Implement Swift Containment & Remediation: Immediate password resets, token revocation, disabling malicious inbox rules, and reporting to authorities are critical.
✅ Prioritize Prevention: Enforce MFA, block external email forwarding, deploy DMARC, and conduct regular security awareness training.

Understanding these steps can significantly enhance your team's ability to detect, investigate, and mitigate BEC attacks effectively.
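As a small companion to the header-analysis steps above, this sketch uses Python's standard email library to pull the authentication and identity headers from a saved original message. Interpreting the results (for example, whether dmarc=pass aligns with the visible From domain) remains an analyst task, and the file name is hypothetical.

```python
# Minimal sketch: parse a saved original message (.eml) and surface the
# authentication and identity headers an analyst typically reviews first.
from email import policy
from email.parser import BytesParser

INTERESTING = ("From", "Reply-To", "Return-Path", "Received-SPF",
               "Authentication-Results", "DKIM-Signature")

def auth_summary(eml_path):
    """Return a mapping of header name -> list of values found in the message."""
    with open(eml_path, "rb") as fh:
        msg = BytesParser(policy=policy.default).parse(fh)
    return {name: msg.get_all(name, []) for name in INTERESTING}

# Example with a hypothetical file:
# for header, values in auth_summary("suspicious_invoice.eml").items():
#     print(header, "->", values)
```

Working from the original .eml rather than a forwarded copy matters precisely because forwarding rewrites or drops most of these headers.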
-
You can spend $150K+ on security tools and still get breached badly. I just saw it happen.

“We just investigated a 2-month-long breach. All the alerts were there. No one was watching.”

Let that sink in.

The client had invested in all the big names:
Darktrace
Microsoft Defender
Azure AD
CrowdStrike

These tools cost well over $150,000 a year — and they worked. They detected unusual logins. They flagged suspicious behavior. They triggered alerts on data exfiltration. But no one saw them. There was no SOC — no team to monitor, investigate, escalate, or respond.

The result?
The attacker stayed inside for two full months
Email accounts were compromised
Sensitive data was exfiltrated
Not a single alert was actioned

Here’s the uncomfortable truth: Security tools without a SOC are like CCTV cameras with no one watching the feed. It doesn’t matter how advanced your tech stack is — if no one’s responding, it’s just noise. And in this case, $150K+ worth of noise.

A SOC isn’t a “nice to have.” It’s essential. It’s the operational core of cybersecurity. Without it, you’re not secure — you’re just lucky.

It’s the difference between:
Stopping a breach in minutes
vs.
Discovering it months too late

If your organization is spending six figures on security tools, but has no SOC — you don’t have a cybersecurity strategy. You have a very expensive blind spot.

Ask yourself: are we truly secure… or just fortunate nothing's happened yet?

#DigitalForensics #DFIR #ThreatHunting #MalwareAnalysis #PacketAnalysis #NetworkForensics #MemoryForensics #LogAnalysis #SIEMTools #EndpointDetection #EDR #XDR #SecurityAnalytics #InvestigationTools #CyberInvestigation
-
Alert fatigue isn’t about too many alerts - it’s about bad data.

Most alerts are triggered by unstructured, noisy, and misaligned telemetry data that was never meant to support detection. This overwhelms analysts, delays response, and lets threats through. Nearly 70% of security professionals admit to ignoring alerts due to fatigue (Ponemon Institute).

SIEMs and XDRs don’t generate signal - they match patterns. Feed them noise and they flood you with irrelevant alerts, and security teams are paying the price. It’s time to stop blaming the analyst and start fixing the pipeline.

Most alert fatigue write-ups focus on SOC workflows: triage better, automate more, throw some ML at it. But those are band-aids. Until we fix the pipeline, the fatigue will remain.

A modern SOC doesn’t need more alerts. It needs smarter data pipelines that are built to:

- Consolidate, preprocess, and normalize data, so that your tools aren’t reconciling a dozen formats on the fly. When logs from endpoints, identity systems, and cloud services speak the same language, correlation becomes intelligence—not noise. That failed login from a workstation means something different when it's paired with a privilege escalation and a large outbound transfer. You don’t catch that unless the pipeline is unified and context-aware.

- Drop what doesn’t matter. A log that doesn’t support a detection, investigation, or response decision doesn’t belong in the SIEM. Route it to cold storage, summarize it, or don’t collect it at all. Most environments are filled with verbose, duplicative, or irrelevant logs that generate alerts no one asked for.

- Use threat intelligence strategically—not universally. Pulling in every IP from a threat feed doesn't help unless that feed aligns with your risk surface. Contextual TI means tagging what matters to you, not just what’s noisy globally. Your DNS logs don’t need to explode every time an off-the-shelf IOC list gets updated.

- Apply meaningful prioritization frameworks at ingestion. Don’t wait for analysts to triage alerts—start triaging them in the pipeline. Align event severity to frameworks like MITRE ATT&CK or your own critical asset map. An alert from a privileged system running in production isn't the same as one from a dev box. Your pipeline should know that.

You don’t fix alert fatigue by muting rules - you fix it by sending only data that supports detection or response. Fix the data. The alerts will fix themselves.
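A minimal sketch of what "fixing the pipeline" can look like in code: normalize events from different sources into one shape, drop event types that never feed a detection, and tag severity from a critical-asset map before anything reaches the SIEM. The field names, event types, and asset list are assumptions; substitute your own schema (ECS, OCSF, or otherwise) and inventory.

```python
# Minimal sketch of pre-SIEM pipeline hygiene: normalize, filter, and prioritize
# events at ingestion. All names below are illustrative placeholders.
DROP_EVENT_TYPES = {"heartbeat", "debug", "dns_noerror"}   # never used in detections
CRITICAL_ASSETS = {"dc01", "payroll-db", "vpn-gw"}         # from the asset inventory

def normalize(event, source):
    """Map a raw event from `source` into one common shape."""
    if source == "endpoint":
        return {"host": event.get("hostname"), "type": event.get("event_type"),
                "user": event.get("user"), "raw": event}
    if source == "identity":
        return {"host": event.get("device"), "type": event.get("activity"),
                "user": event.get("upn"), "raw": event}
    return {"host": event.get("host"), "type": event.get("type"),
            "user": None, "raw": event}

def enrich_and_filter(event, source):
    """Return an enriched event for the SIEM, or None to route it elsewhere."""
    norm = normalize(event, source)
    if norm["type"] in DROP_EVENT_TYPES:
        return None  # send to cold storage or summarize instead of alerting on it
    norm["severity"] = "high" if (norm["host"] or "").lower() in CRITICAL_ASSETS else "low"
    return norm
```

The point is not this particular code but where it runs: upstream of the SIEM, so the rules downstream only ever see data that can support a decision.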
-
Equifax lost $1.4 Billion due to Alert Management Failures

One contributing factor: alert fatigue and overwhelming notifications. When your security monitoring generates thousands of alerts daily, real threats get lost in the noise. The breach lasted months because legitimate security alerts were buried among false positives.

This isn't just about Equifax. Alert fatigue occurs when an excessive number of alerts are generated by monitoring systems, or when alerts are irrelevant or unhelpful, leading to a diminished ability to see critical issues.

Security teams become numb to notifications. Critical warnings blend into background noise. Real incidents look identical to false alarms.

The Equifax breach serves as a cautionary tale about alert management failures. When everything is urgent, nothing is urgent. When everything is an alert, nothing gets attention.

Your security monitoring system is only as good as your ability to distinguish signal from noise. 95% false positives mean 5% real threats. But which 5%?

Most security analysts spend a third of their workday investigating false alarms. That's not security work. That's data archaeology. The tools meant to protect you are preventing protection.

Alert fatigue doesn't just reduce efficiency. It creates security vulnerabilities. When your monitoring system trains people to ignore alerts, you've created the perfect camouflage for real attacks.

How do you maintain security vigilance in an environment of constant false alarms?