𝗜𝗻𝗱𝗶𝗿𝗲𝗰𝘁 𝗣𝗿𝗼𝗺𝗽𝘁 𝗜𝗻𝗷𝗲𝗰𝘁𝗶𝗼𝗻 𝗵𝗮𝘀 𝗯𝗲𝗰𝗼𝗺𝗲 𝗼𝗻𝗲 𝗼𝗳 𝘁𝗵𝗲 𝗺𝗼𝘀𝘁 𝗰𝗼𝗺𝗺𝗼𝗻 𝗮𝘁𝘁𝗮𝗰𝗸 𝗽𝗮𝘁𝘁𝗲𝗿𝗻𝘀 𝘄𝗲 𝘀𝗲𝗲 𝗮𝗰𝗿𝗼𝘀𝘀 𝗿𝗲𝗮𝗹 𝗔𝗜 𝗱𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁𝘀. The reason is simple. These attacks enter through the places teams rarely look. Hidden instructions sit inside the data your AI consumes every day. Webpages. PDFs. Emails. #MCP metadata. #RAG documents. Memory stores. Code comments. Once the model reads the poisoned content, the instructions blend into its context and shape behavior without any user interaction. Here is what the lifecycle actually looks like: 1️⃣ 𝗣𝗼𝗶𝘀𝗼𝗻 𝘁𝗵𝗲 𝘀𝗼𝘂𝗿𝗰𝗲 2️⃣ 𝗔𝗜 𝗶𝗻𝗴𝗲𝘀𝘁𝘀 𝘁𝗵𝗲 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 3️⃣ 𝗜𝗻𝘀𝘁𝗿𝘂𝗰𝘁𝗶𝗼𝗻𝘀 𝗮𝗰𝘁𝗶𝘃𝗮𝘁𝗲 4️⃣ 𝗧𝗵𝗲 𝗺𝗼𝗱𝗲𝗹 𝘁𝗿𝗶𝗴𝗴𝗲𝗿𝘀 𝗵𝗮𝗿𝗺𝗳𝘂𝗹 𝗯𝗲𝗵𝗮𝘃𝗶𝗼𝗿 We have published a full breakdown of how these attacks unfold in practice, why #agentic systems amplify the impact, and which architectural controls help reduce the risk. If you are building or securing #GenAI applications, this is a pattern worth understanding early. 🔗 𝗟𝗶𝗻𝗸 𝘁𝗼 𝘁𝗵𝗲 𝗳𝘂𝗹𝗹 𝗮𝗿𝘁𝗶𝗰𝗹𝗲 𝗶𝗻 𝘁𝗵𝗲 𝗰𝗼𝗺𝗺𝗲𝗻𝘁 𝗯𝗲𝗹𝗼𝘄 👉 𝘐𝘯𝘥𝘪𝘳𝘦𝘤𝘵 𝘗𝘳𝘰𝘮𝘱𝘵 𝘐𝘯𝘫𝘦𝘤𝘵𝘪𝘰𝘯: 𝘛𝘩𝘦 𝘏𝘪𝘥𝘥𝘦𝘯 𝘛𝘩𝘳𝘦𝘢𝘵 𝘉𝘳𝘦𝘢𝘬𝘪𝘯𝘨 𝘔𝘰𝘥𝘦𝘳𝘯 𝘈𝘐 𝘚𝘺𝘴𝘵𝘦𝘮𝘴 👉
Lakera
Software Development
Customers rely on Lakera for real-time security that doesn’t slow down their GenAI applications.
About us
Lakera is the world’s leading real-time GenAI security company. Customers rely on the Lakera AI Security Platform for security that doesn’t slow down their AI applications. To accelerate secure adoption of AI, the company created Gandalf, an educational platform, where more than one million users have learned about AI security. Lakera uses AI to continuously evolve defenses, so customers can stay ahead of emerging threats. Join us to shape the future of intelligent computing: www.lakera.ai/careers
- Website
-
https://lakera.ai
External link for Lakera
- Industry
- Software Development
- Company size
- 11-50 employees
- Headquarters
- San Francisco
- Type
- Privately Held
- Founded
- 2021
- Specialties
- llm, GenAI, AI security, machine learning, and artificial intelligence
Locations
-
Primary
Get directions
San Francisco, US
Employees at Lakera
Updates
-
𝗧𝗵𝗲 𝗠𝗼𝗱𝗲𝗹 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗣𝗿𝗼𝘁𝗼𝗰𝗼𝗹 𝗶𝘀 𝗷𝘂𝘀𝘁 𝗮𝗯𝗼𝘂𝘁 𝘁𝗼 𝗴𝗲𝘁 𝗮 𝗺𝗮𝗷𝗼𝗿 𝘂𝗽𝗴𝗿𝗮𝗱𝗲. #MCP has quietly become the wiring behind modern #agentic systems, and with the new spec landing on 𝗡𝗼𝘃𝗲𝗺𝗯𝗲𝗿 𝟮𝟱, the protocol finally steps into real enterprise territory. #Lakera’s own Steve Giguere just published a crisp breakdown of 𝘄𝗵𝗮𝘁’𝘀 𝗰𝗵𝗮𝗻𝗴𝗲𝗱, 𝘄𝗵𝘆 𝗶𝘁 𝗺𝗮𝘁𝘁𝗲𝗿𝘀, and 𝗵𝗼𝘄 𝘁𝗵𝗲𝘀𝗲 𝘂𝗽𝗱𝗮𝘁𝗲𝘀 𝗿𝗲𝘀𝗵𝗮𝗽𝗲 𝘁𝗵𝗲 𝘀𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗺𝗼𝗱𝗲𝗹 for anyone building or defending #AIagents. ⚙️🤖 If MCP powers your workflows (or soon will) this is the piece you’ll want to read. 👉 𝘞𝘩𝘢𝘵 𝘵𝘩𝘦 𝘕𝘦𝘸 𝘔𝘊𝘗 𝘚𝘱𝘦𝘤𝘪𝘧𝘪𝘤𝘢𝘵𝘪𝘰𝘯 𝘔𝘦𝘢𝘯𝘴 𝘵𝘰 𝘠𝘰𝘶, 𝘢𝘯𝘥 𝘠𝘰𝘶𝘳 𝘈𝘨𝘦𝘯𝘵𝘴 👉 https://lnkd.in/dpcN_5Y7
-
We’re joining the Zürich AI Meetup next Friday 🎙️ Catch Max Mathys on stage with insights from #Gandalf and Gandalf: Agent Breaker. Max will walk through what thousands of real attacks reveal about agentic systems today, the techniques that break them most often, and what this means for the state of AI security in 2025 🔍 If you want a clear picture of where #GenAI risks are heading and what we’re uncovering through Agent Breaker, don’t miss this session. #AI #Security #Agents #Zürich #Meetup #Lakera
First speaker for Fri, Nov 28 🎙️ We’re excited to welcome Max Mathys (Lakera) with: Agent Security and Gandalf — Insights from the World’s Largest Red Team Max will share hard data from Gandalf’s massive prompt-injection challenge: the most effective attack patterns, where agentic/LLM systems actually fail in practice, and what it really takes to secure GenAI beyond traditional appsec. RSVP & details: https://zurichai.club Stay tuned for the next speakers 📣 #ZurichAI #AI #Security #Agents #Zürich #Meetup
-
𝗢𝗻𝗲 𝗼𝗳 𝘁𝗵𝗲 𝗰𝗹𝗲𝗮𝗿𝗲𝘀𝘁 𝗶𝗻𝘀𝗶𝗴𝗵𝘁𝘀 𝗳𝗿𝗼𝗺 𝘁𝗵𝗲 𝗻𝗲𝘄 𝗕𝗮𝗰𝗸𝗯𝗼𝗻𝗲 𝗕𝗿𝗲𝗮𝗸𝗲𝗿 𝗕𝗲𝗻𝗰𝗵𝗺𝗮𝗿𝗸 𝘀𝘂𝗿𝗽𝗿𝗶𝘀𝗲𝗱 𝗲𝘃𝗲𝗻 𝘂𝘀. 🧩 Models that reason step by step are harder to break. When we evaluated 31 popular LLMs using threat snapshots built from 194,000 real human attack attempts, a consistent pattern emerged: LLMs that “think out loud” were about 𝟭𝟱% 𝗹𝗲𝘀𝘀 𝘃𝘂𝗹𝗻𝗲𝗿𝗮𝗯𝗹𝗲 to injection-based attacks. 𝗪𝗵𝘆? Reasoning gives models a brief moment to evaluate malicious context instead of acting on it immediately. That single pause changes how they handle adversarial pressure, and makes exploitation noticeably harder. As AI agents take on more autonomy, this matters. The way a model reasons shapes how it behaves under attack, not just how well it performs on tasks. The full analysis is here, including how different backbones responded under real adversarial conditions: 👉 https://lnkd.in/dZ38Tpjp
-
🌐 𝗪𝗵𝗲𝗻 𝗮𝗴𝗲𝗻𝘁𝘀 𝗯𝗿𝗼𝘄𝘀𝗲, 𝘁𝗵𝗲 𝘄𝗲𝗯 𝘀𝘁𝗼𝗽𝘀 𝗯𝗲𝗶𝗻𝗴 𝗽𝗮𝘀𝘀𝗶𝘃𝗲 𝗶𝗻𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻. 𝗜𝘁 𝘁𝘂𝗿𝗻𝘀 𝗶𝗻𝘁𝗼 𝗮𝗰𝘁𝗶𝗼𝗻𝗮𝗯𝗹𝗲 𝗰𝗼𝗻𝘁𝗲𝘅𝘁 𝗶𝗻𝘀𝗶𝗱𝗲 𝘁𝗵𝗲𝗶𝗿 𝗿𝗲𝗮𝘀𝗼𝗻𝗶𝗻𝗴 𝗽𝗶𝗽𝗲𝗹𝗶𝗻𝗲. In 𝗣𝗮𝗿𝘁 𝟮 of our 𝗔𝗴𝗲𝗻𝘁𝗶𝗰 𝗔𝗜 𝗧𝗵𝗿𝗲𝗮𝘁𝘀 series, we look at what happens when an agent pulls content from a live webpage and treats that content as trusted context. Across browsing-enabled systems, the same pattern keeps emerging: 🔷 Webpages supply structured data that models treat as instructions 🔷 Hidden content in HTML or images slips into the reasoning chain 🔷 A single poisoned asset can redirect an agent toward attacker-defined actions In 𝗧𝗿𝗶𝗽𝗽𝘆 𝗣𝗹𝗮𝗻𝗻𝗲𝗿, our 𝘎𝘢𝘯𝘥𝘢𝘭𝘧: 𝘈𝘨𝘦𝘯𝘵 𝘉𝘳𝘦𝘢𝘬𝘦𝘳 challenge, this plays out in a clean, controlled environment. A travel assistant fetches an itinerary from the web and unknowingly republishes 𝗮 𝗺𝗮𝗹𝗶𝗰𝗶𝗼𝘂𝘀 𝗹𝗶𝗻𝗸 𝗽𝗹𝗮𝗻𝘁𝗲𝗱 𝗶𝗻𝘀𝗶𝗱𝗲 𝗮 𝘄𝗲𝗯𝗽𝗮𝗴𝗲. No confrontation. No attacker in the loop. The agent simply follows what it sees. Real systems show the same behaviour. Browsing agents ingest 𝗛𝗧𝗠𝗟, 𝘀𝗰𝗿𝗶𝗽𝘁𝘀, 𝗮𝗹𝘁 𝘁𝗲𝘅𝘁, 𝗦𝗩𝗚 𝗺𝗲𝘁𝗮𝗱𝗮𝘁𝗮, even 𝗘𝗫𝗜𝗙 𝗳𝗶𝗲𝗹𝗱𝘀, and then reason over that data as if it were part of the user’s request. Once retrieval becomes interpretation, the line between “𝘷𝘪𝘦𝘸𝘪𝘯𝘨” and “𝘦𝘹𝘦𝘤𝘶𝘵𝘪𝘯𝘨” gets very thin. 𝗣𝗮𝗿𝘁 𝟮 walks through how these browsing pathways turn into injection surfaces, how attackers are already exploiting them, and why runtime guardrails are becoming a prerequisite for safe autonomy. 👉 𝗟𝗶𝗻𝗸 𝗶𝗻 𝘁𝗵𝗲 𝗳𝗶𝗿𝘀𝘁 𝗰𝗼𝗺𝗺𝗲𝗻𝘁. #GenAI #AISecurity #AgenticAI #CyberSecurity #AppSec #RedTeam #LLM #Lakera #GandalfAgentBreaker
-
-
Lakera reposted this
🤝 Two leaders in AI security just joined forces. Check Point's CloudGuard WAF + Lakera now delivers best-in-class prevention for GenAI apps, APIs, and agents — blocking prompt injection, data leakage, and abuse in real time! If you're building with AI, this is the upgrade your security stack has been waiting for. Explore how this changes the security game 👇 https://lnkd.in/gKP5BCW7 #CyberSecurity #GenAI
-
-
#𝗢𝗪𝗔𝗦𝗣 𝗚𝗹𝗼𝗯𝗮𝗹 #𝗔𝗽𝗽𝗦𝗲𝗰 𝗗𝗖 𝟮𝟬𝟮𝟱 𝘄𝗿𝗮𝗽𝗽𝗲𝗱 𝗹𝗮𝘀𝘁 𝘄𝗲𝗲𝗸, 𝗮𝗻𝗱 𝘁𝗵𝗲 𝗰𝗼𝗻𝘃𝗲𝗿𝘀𝗮𝘁𝗶𝗼𝗻𝘀 𝗮𝗰𝗿𝗼𝘀𝘀 𝘁𝗵𝗲 𝗰𝗼𝗻𝗳𝗲𝗿𝗲𝗻𝗰𝗲 𝗳𝗹𝗼𝗼𝗿 𝗺𝗮𝗱𝗲 𝗼𝗻𝗲 𝘁𝗵𝗶𝗻𝗴 𝗰𝗹𝗲𝗮𝗿: 𝗔𝗜 𝘀𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗵𝗮𝘀 𝗺𝗼𝘃𝗲𝗱 𝘁𝗼 𝘁𝗵𝗲 𝗰𝗲𝗻𝘁𝗲𝗿 𝗼𝗳 𝘁𝗵𝗲 𝗔𝗽𝗽𝗦𝗲𝗰 𝗰𝗼𝗺𝗺𝘂𝗻𝗶𝘁𝘆. 🔐 If you could not make it to Washington, we have a first-hand recap from our own Steve Giguere, who spent the week in the 𝗕𝗿𝗲𝗮𝗸𝗲𝗿 𝗧𝗿𝗮𝗰𝗸, together with Hassan Hemeid. Steve's write-up captures the themes that kept coming up again and again. These included real incidents, agentic behavior in production, the shift toward offensive AI testing and what the community is preparing for next. You will also find a few photos and a short cyberpunk-style trailer Steve shot on site to give you a feel for the atmosphere. 🎥 Read the full field report here 👉 https://lnkd.in/dTaFz8YQ
-
𝗣𝗿𝗶𝘃𝗮𝗰𝘆 𝗶𝘀𝗻’𝘁 𝘁𝗵𝗲 𝗯𝗶𝗴𝗴𝗲𝘀𝘁 𝗚𝗲𝗻𝗔𝗜 𝗰𝗼𝗻𝗰𝗲𝗿𝗻 𝗮𝗻𝘆𝗺𝗼𝗿𝗲 🔍 Last year, 𝟳𝟯% of organizations named privacy as their top risk. This year the number dropped to 𝟰𝟲%. The frontier has shifted. Teams are still thinking about privacy, but they’re increasingly worried about what comes 𝘢𝘧𝘵𝘦𝘳 privacy: ⚠️ Adversarial misuse 🤖 Agentic failures 🛠️ Offensive AI capabilities 🔗 Multi-agent cascades and unpredictable chains of actions In practice, defenses are moving from compliance to confrontation. Instead of asking “𝘈𝘳𝘦 𝘸𝘦 𝘩𝘢𝘯𝘥𝘭𝘪𝘯𝘨 𝘥𝘢𝘵𝘢 𝘤𝘰𝘳𝘳𝘦𝘤𝘵𝘭𝘺?”, teams are asking “𝘞𝘩𝘢𝘵 𝘤𝘢𝘯 𝘵𝘩𝘪𝘴 𝘮𝘰𝘥𝘦𝘭 𝘣𝘦 𝘮𝘢𝘯𝘪𝘱𝘶𝘭𝘢𝘵𝘦𝘥 𝘵𝘰 𝘥𝘰?” 📘 The 𝗟𝗮𝗸𝗲𝗿𝗮 𝟮𝟬𝟮𝟱 𝗚𝗲𝗻𝗔𝗜 𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗥𝗲𝗮𝗱𝗶𝗻𝗲𝘀𝘀 𝗥𝗲𝗽𝗼𝗿𝘁 tracks this shift across roles, company sizes, and adoption stages and shows how quickly the risk landscape is evolving. 🔗 𝗥𝗲𝗮𝗱 𝘁𝗵𝗲 𝗳𝘂𝗹𝗹 𝗿𝗲𝗽𝗼𝗿𝘁: https://lnkd.in/dzFe28_U #GenAI #AIsecurity #CyberSecurity #LLMs #Lakera #RedTeam
-
-
🚨 𝗣𝗮𝗿𝘁 𝟮 𝗼𝗳 𝗼𝘂𝗿 𝗔𝗴𝗲𝗻𝘁𝗶𝗰 𝗔𝗜 𝗧𝗵𝗿𝗲𝗮𝘁𝘀 𝘀𝗲𝗿𝗶𝗲𝘀 𝗶𝘀 𝗹𝗶𝘃𝗲. 𝗧𝗵𝗶𝘀 𝗰𝗵𝗮𝗽𝘁𝗲𝗿 𝗴𝗼𝗲𝘀 𝘀𝘁𝗿𝗮𝗶𝗴𝗵𝘁 𝗮𝘁 𝗼𝗻𝗲 𝗼𝗳 𝘁𝗵𝗲 𝗯𝗶𝗴𝗴𝗲𝘀𝘁 𝘄𝗲𝗮𝗸 𝘀𝗽𝗼𝘁𝘀 𝗶𝗻 𝗺𝗼𝗱𝗲𝗿𝗻 𝗮𝗴𝗲𝗻𝘁𝘀: 𝗼𝘃𝗲𝗿-𝗽𝗿𝗶𝘃𝗶𝗹𝗲𝗴𝗲𝗱 𝘁𝗼𝗼𝗹𝘀. 🚨 Across real deployments, the same pattern shows up again and again: 🔷 Agents inherit broad capabilities they were never meant to use 🔷 Tools stay connected long after the feature that needed them is gone 🔷 A single function call can trigger actions with real operational impact In 𝗧𝗵𝗶𝗻𝗴𝘂𝗹𝗮𝗿𝗶𝘁𝘆, our 𝗚𝗮𝗻𝗱𝗮𝗹𝗳: 𝗔𝗴𝗲𝗻𝘁 𝗕𝗿𝗲𝗮𝗸𝗲𝗿 challenge, this dynamic is on full display. A simple shopping assistant turns out to have access to ordering, refunding, emailing, and internal inventory tools. Once players start testing its boundaries, the assistant reveals just how much freedom it has. We see similar exposures across MCP-based IDEs, workflow agents, and early browsing-enabled assistants. When tool permissions are wide open, normal reasoning becomes a path into high-impact actions. Scope drift compounds, and the agent’s capability map starts to look more like an attack surface. 𝗣𝗮𝗿𝘁 𝟮 breaks down 𝘄𝗵𝘆 this happens, 𝗵𝗼𝘄 these permissions stack up beneath the surface, and 𝘄𝗵𝗮𝘁 recent incidents from Lakera’s research reveal about the broader trend. 👉 𝗟𝗶𝗻𝗸 𝗶𝗻 𝘁𝗵𝗲 𝗳𝗶𝗿𝘀𝘁 𝗰𝗼𝗺𝗺𝗲𝗻𝘁. #GenAI #AISecurity #AgenticAI #RedTeam #CyberSecurity #AppSec #LLM #Lakera
-
-
𝗧𝗵𝗿𝗼𝘄𝗯𝗮𝗰𝗸 𝘁𝗼 #𝗢𝗪𝗔𝗦𝗣 𝗚𝗹𝗼𝗯𝗮𝗹 𝗔𝗽𝗽𝗦𝗲𝗰 𝗗𝗖 𝟮𝟬𝟮𝟱 🛡️ Last week’s event made one thing clear: AI security has taken center stage in the AppSec community. From Daniel Miessler 🛡️’s keynote on the rise of AI-powered “skills” to a packed Breaker Track where nearly every talk explored GenAI threats, one question kept coming up: 𝘩𝘰𝘸 𝘥𝘰 𝘸𝘦 𝘴𝘦𝘤𝘶𝘳𝘦 𝘸𝘩𝘢𝘵 𝘸𝘦’𝘷𝘦 𝘣𝘶𝘪𝘭𝘵? Lakera was proud to sponsor both the keynote and the Breaker Track, and even prouder to see 𝗚𝗮𝗻𝗱𝗮𝗹𝗳 and 𝗔𝗴𝗲𝗻𝘁 𝗕𝗿𝗲𝗮𝗸𝗲𝗿 pop up in multiple sessions (plus a surprise shout-out from Jason Haddix 🔥). 𝗛𝗶𝗴𝗵𝗹𝗶𝗴𝗵𝘁 𝗼𝗳 𝘁𝗵𝗲 𝘄𝗲𝗲𝗸: Hearing people across sessions and after-hours chats talk about AI red teaming, guardrails, and real-world model security as must-have capabilities for today, not ideas for the future. Thanks to everyone who stopped by, shared thoughts, or joined the discussion. AI security doesn’t feel like a side topic anymore, it’s 𝘵𝘩𝘦 topic. #OWASP #AIsecurity #AppSec #RedTeam #GenAI #Lakera