Your Most Dangerous Dependency Isn’t in Your SBOM, It’s Human
There’s no patch for a human exploit chain.

Your Most Dangerous Dependency Isn’t in Your SBOM, It’s Human

We’re obsessed with securing our software supply chains. We invest millions in scanning tools, Software Bill of Materials (SBOMs), and zero-trust architectures, chasing every CVE as if it were a ticking bomb. But the real threat isn’t hiding in your dependencies or your code. It’s sitting in your chair, sipping coffee, and trusting AI agents to do the heavy lifting. Recent AI vulnerabilities, such as the GitHub MCP exploits and Microsoft 365 Copilot’s EchoLeak, prove it: our blind spot isn’t technical; it’s human.

Picture this: you’re a CTO, and your team relies on AI coding assistants like Claude Desktop or Copilot, wired into GitHub via the Model Context Protocol (MCP). You’ve given these agents access to your repositories, public and private, because they’re supposed to make your developers faster, brighter, better. Then, one day, a malicious GitHub issue lands in a public repo. It’s innocuous, just a comment. But when your AI agent scans it, it’s game over. That single issue tricks the agent into leaking sensitive data, proprietary code, salary details, and even your following product roadmap straight into a public pull request for the world to see. No malware, no stolen credentials, just a cleverly crafted prompt exploiting what researchers call “toxic agent flows.”

This isn’t a hypothetical. It’s real. Security teams at Invariant Labs and others have demonstrated how MCP’s architecture, used by thousands of organizations, is vulnerable to prompt injection attacks . An attacker doesn’t need to hack your server or break your encryption. They need to understand how humans delegate trust to AI. And we’re delegating way too much. Similarly, the EchoLeak vulnerability in Microsoft 365 Copilot allows attackers to hijack the AI with a single email, requiring no user interaction, and expose sensitive data across enterprises.

The Lethal Trifecta: Why Humans Are the Weak Link

These vulnerabilities expose what has been dubbed the “lethal trifecta” of AI risks: privileged access to private data, exposure to untrusted inputs, and the ability to exfiltrate sensitive information. Sound technical? It’s not. It’s a human problem dressed up in tech jargon. We grant AI agents broad permissions because we’re busy. We assume they’ll “just work” because intelligent people at Anthropic or Microsoft build them. We skip the fine print on OAuth scopes because who has time to audit every token?

The result? A single malicious prompt can bypass all your safeguards. Invariant Labs demonstrated this by crafting a GitHub issue that fooled Claude 4 Opus, a model praised for its safety, into spilling private repo data (demo: https://github.com/advisories/GHSA-m4qw-j7mx-qv6h): no fancy jailbreaking required, just a few lines of text exploiting the AI’s eagerness to help. Copilot’s EchoLeak is even worse, turning a routine email into a data breach. And here’s the kicker: these aren’t bugs in the code. There are architectural flaws in how we integrate AI into our workflows, compounded by our tendency to trust first and verify later.

I’ve seen this play out in boardrooms. A CISO proudly showcases their Software Bill of Materials (SBOM), detailing every library and its corresponding version. But ask them how their AI agents are configured, and you get blank stares. We’re so focused on patching software that we forget to patch our behavior. Humans are the ones who approve “always allow” policies, who skip confirmation prompts, and who assume AI will catch malicious intent. We’re the dependency nobody’s auditing.

Reframing the Problem: It’s Not About AI, It’s About Us

Let’s flip the script. The danger isn’t that AI is flawed—it’s that we treat it like a magic bullet. We need to reframe AI as a tool, not a teammate. Tools don’t think. They don’t judge. They do what they’re told, and when they’re told to leak data by a cleverly disguised prompt, they comply. The GitHub MCP exploits, like those detailed by Repello-AI’s demo or Simon Willison’s analysis (read: https://simonwillison.net/2025/May/26/github-mcp-exploited/), show how easily this happens. A public repo becomes a Trojan horse, and our overreliance on AI opens the gates. EchoLeak takes it further, proving even enterprise-grade AI can be weaponized with minimal effort (more: https://t.co/51uLXunp8X).

This isn’t just a GitHub or Microsoft problem. Similar vulnerabilities have been identified in other platforms, such as Langflow, where attackers can remotely execute malicious code with ease (details: https://t.co/FhBb4rN34w). The pattern is clear: wherever AI agents interact with untrusted data, humans are the ones who fail to establish boundaries. We’re the ones who don’t enforce least-privilege access, who don’t audit tool calls, who don’t question why an AI needs access to every repo in the first place.

What You Can Do Today (No Budget Required)

You don’t need a million-dollar security overhaul to resolve this issue. The solution starts with human discipline, not tech. Here’s how you, as a leader, can plug the gap:

  1. Enforce Granular Permissions: Prevent AI agents from having blanket access to your repositories. Use fine-grained GitHub tokens that limit them to specific repos—public only, if possible. If your agent doesn’t need private data, don’t give it the keys.
  2. Mandate Human Oversight: Require developers to approve every AI tool call, particularly those involving external data, such as GitHub issues. Yes, it slows things down. That’s the point. Confirmation fatigue is real, but so is data leakage.
  3. Audit Your AI Workflows: Ask your team: Which agents are running? What do they touch? Who approved their permissions? If you can’t answer these, you’re flying blind. Tools like Invariant’s MCP-scan can help, but a simple spreadsheet works too.
  4. Train Your People, Not Just Your Models: Your developers aren’t security experts, but they need to understand what prompt injection is and why it’s a security risk. A 30-minute lunch-and-learn could save you millions in breach costs.
  5. Assume Breach: Operate as if every public input is malicious. This mindset shift, treating every GitHub issue, every email, every comment as a potential attack vector, will force you to tighten controls.

These steps aren’t sexy. They won’t win you a keynote at RSA. But they’ll keep your company’s secrets out of public pull requests and prevent your AI from becoming an attacker’s puppet.

The Bigger Picture: Trust Is Your Real Vulnerability

The GitHub MCP and EchoLeak sagas aren’t just about one protocol or one platform. They’re a wake-up call for every executive who’s betting on AI to drive efficiency. AI is transformative, but it’s only as secure as the humans who design and implement it. We can’t keep treating it like a black box, hoping the vendors have our backs. They don’t. Even GitHub’s researchers admit that there is no easy fix for MCP’s architectural flaws. The responsibility lies with us and with you.

So, next time you’re reviewing your SBOM or signing off on a security budget, ask yourself: Am I auditing my team’s trust in AI? Am I questioning our workflows, our permissions, our assumptions? Because the most dangerous dependency in your stack isn’t a library or a server. It’s the human tendency to trust too much, too soon.

Let’s stop chasing CVEs and start chasing accountability. Our companies and our customers deserve it.

Thomas Ryan

Founder, Board Member, Security Advisor, Keynote Speaker

5mo
Thomas Ryan

Founder, Board Member, Security Advisor, Keynote Speaker

5mo

🤫 I've been quietly sketching out an FBOM, but for humans. If you're the kind of person who sees the security risks in your software and your snacks… Ping me. You may be more vulnerable than your endpoint detection system is aware of. 💥

Bob Carver

CEO Cybersecurity Boardroom ™ | CISSP, CISM, M.S. Top Cybersecurity Voice

5mo

Well stated Thomas Ryan!

Like
Reply

To view or add a comment, sign in

More articles by Thomas Ryan

Explore content categories