AI in Cybersecurity

Explore top LinkedIn content from expert professionals.

  • View profile for María Luisa Redondo Velázquez

    IT Cybersecurity Director | Tecnology Executive | Security Strategy and Digital transformation - Security Architecture & Operations | Cloud Expertise | Malware Analysis, TH and Threat Intelligence | Board Advisor

    8,687 followers

    📛 CVE 2025 32711 is a turning point Last week, we saw the first confirmed zero click prompt injection breach against a production AI assistant. No malware. No links to click. No user interaction. Just a cleverly crafted email quietly triggering Microsoft 365 Copilot to leak sensitive org data as part of its intended behavior. Here’s how it worked: • The attacker sent a benign-looking email or calendar invite • Copilot ingested it automatically as background context • Hidden inside was markdown-crafted prompt injection • Copilot responded by appending internal data into an external URL owned by the attacker • All of this happened without the user ever opening the email This is CVE 2025 32711 (EchoLeak). Severity 9.3 Let that sink in. The AI assistant did exactly what it was designed to do. It read context, summarized, assisted. But with no guardrails on trust boundaries, it blended attacker inputs with internal memory. This wasn’t a user mistake. It wasn’t a phishing scam. It was a design flaw in the AI data pipeline itself. 🧠 The Novelty What makes this different from prior prompt injection? 1. Zero click. No action by the user. Sitting in the inbox was enough 2. Silent execution. No visible output or alerts. Invisible to the user and the SOC 3. Trusted context abuse. The assistant couldn’t distinguish between hostile inputs and safe memory 4. No sandboxing. Context ingestion, generation, and network response occurred in the same flow This wasn’t just bad prompt filtering. It was the AI behaving correctly in a poorly defined system. 🔐 Implications For CISOs, architects, and Copilot owners - read this twice. → You must assume all inputs are hostile, including passive ones → Enforce strict context segmentation. Copilot shouldn’t ingest emails, chats, docs in the same pass → Treat prompt handling as a security boundary, not just UX → Monitor agent output channels like you would outbound APIs → Require your vendors to disclose what their AI sees and what triggers it 🧭 Final Thought The next wave of breaches won’t look like malware or phishing. They will look like AI tools doing exactly what they were trained to do but in systems that never imagined a threat could come from within a calendar invite. Patch if you must. But fix your AI architecture before the next CVE hits.

  • Imagine receiving what looks like a routine business email. You never even open it. Within minutes, your organisation’s most sensitive data is being silently transmitted to attackers. This isn’t science fiction. It happened with EchoLeak. AIM Security’s research team discovered the first zero-click AI vulnerability, targeting Microsoft 365 Copilot. The attack is elegant and terrifying: a single malicious email can trick Copilot into automatically exfiltrating email histories, SharePoint documents, Teams conversations, and calendar data. No user interaction required. No suspicious links to click. The AI agent does all the work for the attacker. Here’s what caught my attention as a security professional: The researchers bypassed Microsoft’s security filters using conversational prompt injection – disguising malicious instructions as normal business communications. They exploited markdown formatting quirks that Microsoft’s filters missed. Then they used browser behaviour to automatically trigger data theft when Copilot generated responses. Microsoft took five months to patch this (CVE-2025-32711). That timeline tells you everything about how deep this architectural flaw runs. The broader implication: this isn’t a Microsoft problem, it’s an AI ecosystem problem. Any AI agent that processes untrusted inputs alongside internal data faces similar risks. For Australian enterprises racing to deploy AI tools, EchoLeak exposes a critical blind spot. We’re securing the AI like it’s traditional software, but AI agents require fundamentally different security approaches. The researchers call it “LLM Scope Violation” – when AI systems can’t distinguish between trusted instructions and untrusted data. It’s a new vulnerability class that existing frameworks don’t adequately address. Three immediate actions for security leaders: • Implement granular access controls for AI systems • Deploy advanced prompt injection detection beyond keyword blocking • Consider excluding external communications from AI data retrieval EchoLeak proves that theoretical AI risks have materialised into practical attack vectors. The question isn’t whether similar vulnerabilities exist in other platforms – it’s when they’ll be discovered. #AISecurity #CyberSecurity #Microsoft365 #EnterpriseAI #InfoSec #Australia #TechLeadership https://lnkd.in/gNfxV3Nk

  • View profile for Michel Lieben 🧠

    Founder / CEO @ ColdIQ | Scale Outbound with AI & Tech 👉 coldiq.com

    61,415 followers

    I spent 300 hours researching 1,500+ software. Here's how to upgrade your tech stack with AI: 1️⃣ Use AI apps for building leads lists Your ability to put the message in front of the right person is the biggest predictor of your outreach campaign's success. Some AI applications make the process easier by helping you: - Build lead lists - Deeply research prospects - Segment & score prospect Examples include: - Relevance AI Prospect Researcher and Enrichment agents - Clay’s AI Agent and Integration with 100+ data platforms. - Exa’s AI Research & Data Sourcing Agent 2️⃣ Use AI to orchestrate complex workflows Running cold email campaigns used to be fragmented across several platforms and involved moving around .csv files from one tool to another. Workflow builders & AI agents now help you: - Build a list  - Score your leads - Find their Emails - Verify these Emails - Personalise your messaging with AI - And import to leads to a sales engagement platform … all that from a single tool. Examples for these tools include Clay, Relevance AI, Common Room & Unify. 3️⃣ Use AI to improve your deliverability A message that doesn’t get read, doesn’t get replied to. Every year, deliverability is getting harder to crack. Thankfully, a few sending platforms are leveraging AI to help with: - Secondary domains & mailboxes setup - Automated Spintaxxing - Email warm-ups Examples include: Instantly.ai, lemlist & Woodpecker.co. 4️⃣ Use AI to help close your deals We’re still far from seeing AI successfully replacing humans in Google Meet. But the AI bots are already listening to these sales conversations. They already help by: - Taking notes from convos - Surfacing insights from individual conversations - Surfacing aggregate insights from MANY conversations For example, one thing I like to do is ask Attention what our prospects have asked most in our latest 100+ sales conversations. Once I know that, I can address these points in our messaging. P.S: Any unheard-of use case you’ve seen with AI applied to go-to-market?

  • View profile for Himanshu Jindal

    CEH V11 🏆 || CCSK V.4 🏆 || AZ-900 🏆 || SC-900 🏆 || SC-100 🏆 || LogRhythm & Splunk & QRadar 🕵🏼♂️|| EDR 🔐 || Mimecast ✉️ || OneLogin (IAM) 🔑 || AI Security ֎ || Incident Response 🚨

    5,482 followers

    SIEM Use Cases for Email Exchange 📧 In the world of cybersecurity, email exchanges are a crucial battleground. Here are some SIEM use cases that play a vital role in fortifying your organization's email security: 👉Top 10 External Communicators: Identify the top users sending emails to external domains. Understanding this communication flow helps monitor external interactions effectively. 👉Email Activity Insights: Keep an eye on the top 10 email receivers and senders within your organization. This insight aids in understanding communication patterns and potential anomalies. 👉Data Leakage Identification: Utilize SIEM to detect data leakage through email channels. Ensure that sensitive information doesn't fall into the wrong hands. 👉Large File Monitoring: Track and manage large files sent via email. This helps in controlling data transfer sizes and ensuring compliance with security policies. 👉Malicious/Suspicious Attachments: Enhance your security posture by identifying and addressing emails with malicious or suspicious attachments promptly. 👉After-Hours Email Monitoring: Monitor emails going out from your company domain to other domains after office hours. This helps in identifying potential security risks during non-business hours. 👉Individual Email Bandwidth: Keep track of high email bandwidth utilization by individual users. Unusual spikes may indicate security threats or abnormal activities. 👉Undelivered Messages Detection: Detect undelivered messages promptly. This ensures that critical communications are not missed and addresses potential delivery issues. 👉Mailbox Security Incidents: Identify unauthorized access, such as mailbox access by another user or a user sending a message as another user. Strengthen your email security by detecting and responding to such incidents. 👉Login Anomalies: Detect users logging into mailboxes that are not their primary accounts. Unusual login patterns may signal compromised accounts. 👉Auto Redirected Mails: Stay vigilant for auto-redirected emails. Detect and prevent unauthorized forwarding of emails. 👉Internal Email Insights: Identify the top 10 users sending emails internally. This helps in understanding internal communication dynamics. 👉SMTP Gateway Monitoring: Monitor SMTP gateways for sudden spikes in incoming emails. Rapid increases may indicate potential security threats or attacks. 👉Rejected Mails Analysis: Keep an eye on a high number of rejected emails from a single "from" address. This helps in identifying and mitigating potential spam or phishing attempts. Utilize these SIEM use cases to strengthen your email security strategy and create a robust defense against evolving cyber threats.

  • View profile for Gude Venkata Chaithanya

    12k+ Linkedin | Cyber Security Enthusiast 🔐 | Networking 💻 | Aspiring SOC Analyst 👨💻 | Passionate About Blue Teaming & Threat Hunting 🛡️ | Helping Students Break into Cyber🚀 | Sharing Tech Insights on LinkedIn 📢

    12,306 followers

    🛡️ SOC Project: Phishing Email Detection Using Splunk 🚨 In the fight against cyber threats, email remains one of the most exploited vectors — and phishing is often the attacker’s first step. 🎯 As part of a hands-on SOC (Security Operations Center) project, I developed a phishing detection system using Splunk, targeting suspicious email content and attachments. 🔍 Key Highlights: ✅ Parsed email gateway logs (Exchange, Proofpoint) ✅ Detected phishing patterns using SPL (e.g., subject="*password*", attachment="*.exe") ✅ Created visual dashboards: 🚨 Suspicious Emails by Sender 📎 Suspicious Attachments (.exe) 📬 Phishing Email Subjects ✅ Integrated tools like VirusTotal, URLScan, and EmailRep for deeper investigation 💡 Demonstrated Skills: • SIEM log analysis • Email forensics • Regex-based detection • Threat hunting & reporting • Dashboarding with Splunk and Python (optional) 📊 This project not only strengthened my threat detection skills, but also taught me the value of proactive email defense in enterprise environments. 🔗 Want to see the full dashboard or walkthrough? Drop a comment or DM me. #CyberSecurity #SOCAnalyst #Splunk #PhishingDetection #SIEM #ThreatHunting #EmailSecurity #IncidentResponse #SOC #InfoSec #CyberDefense #SecurityOperations #SplunkDashboards #MalwareAnalysis #SecurityMonitoring #PhishingEmails #SOCProjects #SecurityTools #SecurityResearch #SecurityAnalytics #BlueTeam #Regex #SOCWorkflows #ThreatIntelligence #SecurityEngineer #EmailGateway #CyberSkills #NetworkSecurity #SOCPlaybook #PythonSecurity #SIEMTools #SOCTraining #SOCExperience #CybersecurityAwareness #DigitalForensics #SIEMUseCases #SecurityUseCases #SecurityAlerting #CyberThreats #MaliciousAttachments #SplunkSPL #SecurityDashboards #EmailThreats #InfosecCommunity #EndpointSecurity #MalwarePrevention #SplunkSecurity #ResumeProjects #EmailInvestigation #URLAnalysis #SecurityUseCaseDesign

  • View profile for Prashant Kumar

    CEH | SOC Lead | Endpoint Security | Kaspersky | TrendMicro |SOC | Incident Response | SIEM | IBM QRadar | SOAR | Resilient | Vulnerability Management | Qualys

    23,484 followers

    What is Email Phishing Analysis? When a suspicious email is reported to security team, what analysis will you perform as a SOC Analyst:- 1. Sender and Domain Analysis -Verify the Sender's Email ID and Domain. -Check the domain reputation using tools like: VirusTotal MXToolbox IPVoid -Analyze domain details: Registration date Owner information 2. Subject Line Analysis -Examine the subject line to determine the intent of the email: Phishing Social engineering Promotional content 3. Email Body Analysis -Look for Indicators of Compromise (IOCs), such as: Urgency Tactics: Example: "Reset your account within an hour, or it will be disabled." Phishing URLs: Embedded URLs (e.g., within an "unsubscribe" button) designed to mislead users. -Check the reputation of such URLs using trusted tools. Attachments: Analyze suspicious attachments in a sandbox to detect malicious behavior. Avoid uploading attachments to public repositories like VirusTotal to prevent attackers from detecting the investigation and potentially bypassing detection mechanisms. 4. Email Header Analysis -Obtain the email header from the email properties. Perform header analysis: Use MXToolbox: Select "Header Analysis." Paste the header and submit for a detailed report. Verify SPF, DKIM, and DMARC statuses. 5. SPF, DKIM, and DMARC Verification SPF (Sender Policy Framework) -Authentication protocol specifying which IP addresses are authorized to send emails for a domain. -SPF Alignment: If the "From" field matches the "Return-Path" field, SPF alignment passes; otherwise, it fails. -SPF Authentication: If the sender's IP is authorized to send on behalf of the domain, SPF authentication passes; otherwise, it fails. DKIM (DomainKeys Identified Mail) -Uses a digital signature to verify the sender’s domain and ensure email integrity. -DKIM Alignment: If the "DKIM Signature" domain matches the "From" domain, DKIM alignment passes; otherwise, it fails. -DKIM Authentication: If the DKIM signature is invalid, the email may have been modified during transit. DMARC (Domain-based Message Authentication, Reporting & Conformance) Builds on SPF and DKIM. -DMARC Policies: None: If SPF and DKIM both pass, the email is delivered to the inbox. Quarantine: If either SPF or DKIM fails, the email goes to the spam/junk folder. Reject: If both SPF and DKIM fail, the email is dropped/rejected. 6. Mail Gateway Analysis Review fields like: From To Return-Path Subject Line Message ID Verify how many users received the email from the same domain/email ID. Export email details for documentation. 7. Reporting and Mitigation Document: Analysis details Findings IOCs (Indicators of Compromise) GTI (Global Threat Intelligence) details Share the findings with relevant teams. Coordinate with Network/IT/Admin teams to: Block the malicious email, domain, IP, and hash.

  • View profile for Srini Kasturi

    CXO / NED / SMCR

    6,417 followers

    “Have your agent speak to my agent.” Coming soon to a workplace near you: - Calls by agents answered by agents. - Emails written and sent by agents read and responded to by agents. On the surface, this sounds like efficiency heaven — machines handling the noise so humans can focus on the signal. But beneath it lies a very real danger. When communication chains become machine-to-machine, we’re not just talking about faster workflows — we’re talking about new attack surfaces. The Risk Traditional phishing relies on human error: a misplaced click, a fake invoice, a spoofed email. With AI agents in the loop, the game changes: Prompt Injection: malicious actors embed hidden instructions inside messages, documents, or even data feeds. If an agent reads them, it may execute actions outside its intended scope. Agent Manipulation: a cleverly crafted request could trick one agent into leaking data, initiating transactions, or escalating privileges — and another agent may obediently carry out the chain reaction. Amplified Scale: unlike humans, agents don’t get tired, suspicious, or distracted. If compromised, they can be manipulated consistently, at speed, and at scale. This isn’t phishing as we know it. It’s phishing 2.0 — machine-to-machine deception, invisible to most of us until damage is already done. Staying Safe Organisations will need to rethink security in an agent-driven world: Guardrails & Sandboxing: ensure agents operate within strictly defined boundaries — never with unconstrained access. Input Validation: treat every external input (email, attachment, call transcript) as potentially hostile, even if it “looks” routine. Audit & Transparency: require logs, explanations, and human-visible checkpoints before sensitive actions. Zero-Trust Mindset: don’t assume a message from an “agent” is safe just because it came from a trusted domain. The future will be “agent-to-agent.” The challenge is to make sure it’s not “attacker-to-agent.” Because when your agent speaks to mine, we need to be confident they’re not both being played.

  • View profile for 🦾Eric Nowoslawski

    Founder Growth Engine X | Clay Enterprise Partner

    47,819 followers

    Smartlead and Instantly.ai integrating inbox spam tests has changed how we are monitoring email deliverability at Growth Engine X. Here’s an overview of what we are doing. First, for those that don’t know, an inbox placement test is when you send a test email to a group of emails that will report back to you where the email landing. Primary, Promotions, or Spam? Now it’s not perfect because your spam filter learns from what emails you mark as spam and obviously these inboxes have never marked anything as spam. So you should know that but I still find these tests useful enough to run now that they are integrated with the platforms. We are now running a test daily at 11 pm EST on all active campaigns and getting the inbox placements. Then, every Tuesday and Friday we will use an internal API to call to list out all inboxes that landed in spam, remove them from campaigns, and tag them so we don’t use them again. We always keep extra inboxes warming for our customers so we will push in the extra fresh inboxes as we remove the ones landing in spam. Everything can be done automatically except selecting the inboxes we will use to replace the ones in spam which I’m not sure is even worth automating. Hopefully, this gives something to think about for those that also don’t use open tracking and need a way to track email deliverability in their cold email campaigns!

  • View profile for Francis Odum

    Founder @ Software Analyst Cybersecurity Research

    28,228 followers

    One of the core themes I'm tracking closely (starting next month) is understanding the best solutions for preventing data exfiltration and the role that security for AI/LLMs companies will play in solving this issue for enterprises. I'm interested in seeing how the AI security category inflects this year in helping organizations prevent data leakage relative to other areas like data security (specifically data loss prevention (DLP)), which I wrote about last month. Let's explore the relationship between data security (DLP-focused) vendors relative to security for AI vendors for a moment. While there are many AI security vendors, I find it interesting to see what Prompt Security has built in and around preventing data leakage. The rise of ChatGPT and Microsoft 365 Copilot continues to transform how enterprises work—but it’s also exposing them to new data risks that legacy Data Loss Prevention (DLP) solutions weren’t built to handle. We've seen GenAI introduce dynamic risks around: - Shadow AI: Undetected tools used by employees. - Prompt Injection: Malicious manipulation of AI outputs. - Sensitive data leaks: Unintentional data exposure during AI interactions. What I'm seeing is that AI security companies like Prompt Security or others are managing this risk for organizations better in Gen-AI enterprise stack. Unlike legacy DLP / Data security vendors, they are showing better promise at: 1) Redacting sensitive data in real-time before it reaches GenAI tools. For example, we see better detection capabilities from pattern matching to contextual AI-based detection: for instance, DLPs like Zscaler can detect a social security number, but companies like Prompt can detect a corporate document with intellectual property better. 2) Better at detecting unauthorized AI tool usage (Shadow AI) across M365 AI tools, Github co-pilots and many more 3) Better at preventing AI-specific attacks like prompt injections. 4) These companies are able to surface educational popups so that employees or users are aware of when they're using an AI site or have violated the company AI policy 5) Full observability of AI usage and ensuring compliance. In general, AI security startups like prompt security (and a few others too) are showing they can dynamically adapt to the fluid, unstructured nature of data as it deals with GenAI interactions and take actions as needed with an agent or extension. In 2025, as more organizations embrace GenAI to stay competitive, data security is top of mind / foundational, so it'll be interesting to see how GenAI startups vs legacy DLP / data security vendors interact in this market. This is a trend to watch and I'll be uncovering this theme closely later next month!

Explore categories