Given Enough AIs, All Bugs Are Shallow
We're in the best era ever for finding security vulnerabilities quickly and at scale. Linus's Law, Eric Raymond's famous dictum about open source software, states that "given enough eyeballs, all bugs are shallow": if enough people look at the code, someone will spot the problems. AI is creating a similar dynamic. With enough AI tools scanning for vulnerabilities, we can find them all. The real question is whether the good guys or the bad guys find them first.
Five to ten years ago, we struggled to staff AppSec teams. Now AI has unlocked capacity that helps defenders find vulnerabilities faster, and that shift is fundamentally restructuring how we approach security across the software development lifecycle (SDLC).
Robots Never Get Tired (and Earn while You Sleep)
XBOW's ascent to the top of HackerOne's US leaderboard marked a watershed moment for application security. In just 90 days, this autonomous AI penetration tester submitted over 1,060 vulnerabilities, surpassing thousands of human researchers. Unlike a lot of unskilled AI slop, these findings weren't theoretical. Bug bounty program participants resolved 130 critical vulnerabilities, with 303 more triaged and awaiting resolution.
What makes XBOW's achievement particularly significant isn't the volume of discoveries, but the economies of scale. The system operates autonomously, requires no sleep, and scales across thousands of targets simultaneously. While human researchers cherry-pick high-value targets, AI systems can methodically test entire attack surfaces.
HackerOne reports that 70% of security researchers now use AI tools to enhance their hunting capabilities, creating what CEO Kara Sprague calls "bionic hackers." Bug bounty programs using AI increased 270%, with autonomous agents submitting 560+ valid reports in 2025 alone. The platform paid out $81 million in bounties over the past year, a 13% increase driven largely by AI-augmented discovery.
Known vulnerabilities that once required skilled security researchers to exploit are now discoverable at machine scale and speed. If external researchers are using AI to find vulnerabilities at scale, security teams defending their own codebases need to adopt these same capabilities, or risk falling behind adversaries who already have.
Scaling Threat Modeling via AI
JPMorgan Chase's release of its AI Threat Modeling Co-Pilot research demonstrates how enterprise application security teams are already deploying AI to address velocity constraints. Its Auspex system, detailed in an arXiv paper, captures threat modeling tradecraft in specialized prompts that guide AI through system decomposition, threat identification, and mitigation strategies that developers can self-serve.
Rather than relying on generic models, Auspex combines generative AI with expert frameworks, industry best practices, and JPMorgan’s institutional knowledge. This valuable context is encoded directly into the prompts that drive AI analysis through a technique called "tradecraft prompting." The system processes architecture diagrams and textual descriptions, then chains specialized prompts to produce threat matrices specifying scenarios, types, security categorizations, and potential mitigations.
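The core pattern behind tradecraft prompting, chaining stage-specific expert prompts so that each stage's output becomes the next stage's input, is easy to picture. Here is a minimal sketch of that pattern in Python; the prompts, the model name, and the helper functions are illustrative assumptions, not the actual Auspex implementation.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """One chained step: a specialized 'tradecraft' prompt applied to prior output."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

# Each stage encodes expert guidance for one step of the analysis (illustrative prompts).
DECOMPOSE = ("You are a threat modeling expert. Decompose the system below into "
             "components, trust boundaries, data flows, and external dependencies.")
THREATS = ("Given this system decomposition, enumerate threats per component using "
           "STRIDE categories, and note the attack path for each.")
MITIGATIONS = ("For each threat, propose concrete mitigations and the security "
               "category it addresses. Return the result as a threat matrix.")

def build_threat_model(architecture_description: str) -> str:
    # Chain the prompts: decomposition feeds threat identification, which feeds mitigations.
    decomposition = call_llm(DECOMPOSE, architecture_description)
    threats = call_llm(THREATS, decomposition)
    return call_llm(MITIGATIONS, threats)
```

The point of the chain is that institutional knowledge lives in the prompts themselves, so a developer can self-serve a first-pass threat matrix and a security architect reviews the output rather than producing it from scratch.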
Traditional threat modeling can take weeks or months to complete, even for systems representing immediate risk. AI-driven approaches, such as the one JPMorgan employs, collapse this timeline to minutes while improving the quality of human analysis.
Emerging AI use cases illustrated by XBOW and Auspex offer AppSec teams the force multiplication to operate at the scale modern development demands.
Redeploying Security Talent
Application security teams face a fundamental constraint: people. The traditional security model consumes enormous resources during development while providing limited coverage. Security teams cannot scale linearly and keep up with development velocity. Code review backlogs grow, security debt accumulates, and critical vulnerabilities slip into production because humans remain bottlenecks in the SDLC.
AI changes this equation. Security teams can now systematically redeploy resources away from manual, repetitive security activities toward building security-engineered solutions that integrate AI directly into developer workflows. There are multiple AI-driven strategies that can help a modern AppSec team scale efficiently:
- Build Queryable Security Intelligence: Ingest every security bug, vulnerability report, and incident into structured data stores that support semantic search. This transforms historical security findings into embeddings that enable AI systems to identify similar patterns across codebases. When a new vulnerability class emerges, your AI can instantly query whether analogous issues exist elsewhere in your environment (see the first sketch after this list).
- Fine-Tune Models for Your Environment: Rather than relying on generic commercial tools, your AppSec team should leverage RAG (Retrieval-Augmented Generation) approaches to augment LLMs with security anti-patterns and architectural standards specific to your organization. Recent research demonstrates that combining static analyzers like PMD and Checkstyle with fine-tuned LLMs significantly improves code review accuracy while reducing false positives.
- Integrate AI into Your Developer Toolchains: Security findings delivered days or weeks after code is written create friction and demand that developers do more context switching. Instead, embed AI-powered analysis directly into your IDEs, CI/CD pipelines, and pull request workflows. Developers will receive real-time security guidance as they write code, not after they've moved on to the next feature.
- Apply AI to Threat Modeling at Scale: Following JPMorgan's lead, implement AI-powered threat modeling that can analyze every new system design, API specification, and infrastructure change. The goal isn't perfection; it's comprehensive coverage. It's better to have AI-generated threat models for 100% of your systems than expert-reviewed models for 10%.
- Leverage AI to Improve Your Static Application Security Testing (SAST): Traditional SAST tools generate high volumes of false positives that desensitize developers and create triage overhead. AI can dramatically improve the accuracy of these tools by understanding code context, analyzing data flows, and identifying real vulnerabilities that pattern-matching tools miss. LLMs can help identify memory leaks, buffer overflows, and logic errors that formal verification and traditional static analyzers overlook (see the second sketch after this list).
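To make the first item above concrete, here is a minimal sketch of semantic search over historical findings using the open-source sentence-transformers library. The model name and the sample findings are illustrative, and a production system would persist the embeddings in a vector database rather than in memory.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Historical findings would normally be pulled from your bug tracker or vuln management system.
findings = [
    "SSRF in image-proxy service: unvalidated URL fetched server-side",
    "IDOR in billing API: invoice IDs enumerable without authorization check",
    "XXE in legacy XML report importer",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model works
finding_vectors = model.encode(findings, normalize_embeddings=True)

def similar_findings(new_report: str, top_k: int = 3):
    """Return the historical findings most similar to a newly reported vulnerability."""
    query = model.encode([new_report], normalize_embeddings=True)
    scores = (finding_vectors @ query.T).ravel()  # cosine similarity (vectors are normalized)
    ranked = np.argsort(scores)[::-1][:top_k]
    return [(findings[i], float(scores[i])) for i in ranked]

print(similar_findings("server-side request forgery via webhook URL parameter"))
```

The similarity scores surface candidates for a human to review; they rank likely recurrences of a vulnerability class, they don't confirm exploitability.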
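And for the last item, a minimal sketch of LLM-assisted SAST triage: each finding is sent to a model together with the code the scanner flagged, and the model returns a verdict and rationale for a human to review. The rule ID, snippet, prompt, and model name are made-up examples, not any vendor's shipped implementation.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

TRIAGE_PROMPT = (
    "You are an application security engineer reviewing a SAST finding. "
    "Given the finding and the surrounding code, decide whether it is exploitable. "
    "Answer with a verdict (true positive / likely false positive), a one-line "
    "rationale, and a suggested fix if it is a true positive."
)

def triage_finding(rule_id: str, message: str, code_snippet: str) -> str:
    """Ask an LLM to triage a single SAST finding using its code context."""
    finding = f"Rule: {rule_id}\nMessage: {message}\n\nCode:\n{code_snippet}"
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": TRIAGE_PROMPT},
            {"role": "user", "content": finding},
        ],
    )
    return response.choices[0].message.content

# Example: a tainted-data finding with the code the scanner flagged (illustrative rule ID).
print(triage_finding(
    "python.db.avoid-raw-sql",
    "Possible SQL injection via string-formatted query",
    'db.execute(f"SELECT * FROM users WHERE id = {user_id}")',
))
```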
Rebuilding the Modern Product Security Team
The composition of AppSec teams is changing to accommodate the growing need to combine human expertise with AI efficiency. While AI will not replace the value of security expertise, teams need to prioritize new kinds of work and capabilities across every role:
- Security Data Engineers: Their new priority is to build and maintain the infrastructure that feeds AI systems. This includes ingesting vulnerability data, creating embeddings, managing vector databases, and ensuring AI models have access to current, relevant security intelligence.
- AI Security Specialists: Security teams now need specialists who understand both security and machine learning. They will fine-tune models, optimize prompts, evaluate AI outputs, and continuously improve system accuracy. These specialists bridge the gap between security requirements and AI capabilities.
- Security Platform Engineers: Platform engineers must now focus on integration and automation. They should embed AI capabilities into developer tools, build feedback loops that improve models over time, and ensure AI-generated findings surface at the right moment in the development process.
- Security Architects: Their role will shift toward oversight, handling novel threat scenarios, and making judgment calls on complex architectural decisions that AI systems cannot yet navigate independently.
Shifting Talent Landscape
Application security teams stand at an inflection point. The traditional model of hiring more security engineers to manually review more code cannot match the velocity of modern software development. AI provides the scale needed to secure software at the pace it's being built.
This transformation requires deliberate action. Security leaders must redeploy resources, rebuild processes, and retrain teams. Organizations that successfully navigate this transition will dramatically improve their security posture while reducing costs and accelerating development velocity.
Those that don't will find themselves defending against AI-powered attacks with manual processes, falling further behind as the gap between attacker and defender capabilities widens.
The technology exists today. The research has been published. What remains is execution. It’s time to build the infrastructure, train the models, and integrate AI security capabilities into every phase of the SDLC. Go build!