https://lnkd.in/dcJFPNhk

Emerging AI Capabilities and the Future of Cybersecurity
Center for a New American Security (CNAS) | 11 Nov 2025

Recent advances in frontier artificial intelligence (AI) models have sparked intense interest in their potential to transform cyber offense and defense alike. How capable are today's AI systems at cyber-relevant tasks? Which emerging capabilities matter most? And as these systems continue to advance, what should defenders, policymakers, and industry do to prepare?

Speakers:
• Andrew Lohn: Senior Fellow, Georgetown's Center for Security and Emerging Technology (CSET), leading the CyberAI team; Visiting Scientist, MIT FutureTech; previously the Director for Emerging Technology on the National Security Council Staff
• Chris Rohlf: Security Engineer, Meta; Non-resident Research Fellow, CSET
• Caleb Withers: Research Associate for the Technology and National Security Program, CNAS
• Derek B. Johnson: Reporter, CyberScoop
How AI is transforming cybersecurity: CNAS event
-
The new AI Models for Cyber Defense framework defines how artificial intelligence integrates into Red, Blue, and Purple Team operations — bridging algorithmic intelligence with practical cyber defense. It enables faster threat discovery, predictive incident response, and adaptive learning across SOC, SIEM, and SOAR environments.

🔹 Key Focus Areas:
• Red AI → adversarial testing & model robustness
• Blue AI → detection & anomaly prediction
• Purple AI → collaborative learning & continuous adaptation
• Ethical & transparent AI aligned with privacy laws
• Measurable KPIs: precision, recall, and adversarial resilience (a minimal sketch follows below)

Designed for CISOs, SOC engineers, AI researchers, and cyber defense leaders, this framework unites innovation, compliance, and operational intelligence — transforming the way we defend in an AI-driven era.

#AI4CyberDefense #RedTeamAI #BlueTeamAI #PurpleTeamIntegration #ThreatIntelAI #MachineLearningSecurity #MITREATLASCommunity #AICyberRiskManagement #ExplainableAI #CyberDefenseAlliance
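As a concrete illustration of the KPI item above: a minimal Python sketch computing precision and recall for a SOC detection model from labeled alert outcomes. The labels and model verdicts here are invented for illustration; nothing below comes from the framework itself.

```python
# Toy precision/recall computation for a detection model.
# 1 = true threat, 0 = benign; y_pred is the model's verdict per alert.
from sklearn.metrics import precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # analyst-confirmed ground truth
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # hypothetical model verdicts

precision = precision_score(y_true, y_pred)  # TP / (TP + FP): alert fidelity
recall = recall_score(y_true, y_pred)        # TP / (TP + FN): threat coverage
print(f"precision={precision:.2f} recall={recall:.2f}")
```

Tracking both matters because they trade off: tuning a detector for fewer false positives (higher precision) typically lowers recall, so a single headline number can hide regressions.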
-
2025: When AI Starts Hacking AI — Introducing the qsecbit

In our Hookprobe xSOC lab, a single XSS injection evolved into an AI-assisted memory overflow that penetrated a Podman SDN stack (OVS + VXLAN + Postgres). AI-driven reconnaissance mapped weak input paths and memory faults faster than humans could patch. What began in a browser ended deep inside the orchestrator — proving how fast logic flaws can chain into full system compromise.

Cloudflare filtered the noise, but the next frontier of defense isn't static firewalls — it's real-time correlation between AI, application, and network behavior. Red Teams now train AI to exploit patterns; Blue Teams must train it to defend. When Red's AI outpaces Blue's, the contest loops: Red refines attacks, Blue adapts detection, and both iterate faster than any human response window.

With error correction and alignment between planned and observed outcomes, we can finally define a new resilience metric —
➡️ the qsecbit (Quantum Security Bit): a compact measure of how tightly detection, mitigation, and expected outcomes align under adversarial AI play. In the Hookprobe xSOC model, that looks like XSS → Memory Corruption → Orchestrator Pivot — but measured, automated, and corrected in-loop. (A toy sketch of such an alignment score follows below.)

For platform owners, the practical ask:
• Instrument AI feedback loops into your detection fabric.
• Enforce memory safety at the code and container layers.
• Build a qsecbit pipeline to quantify how quickly your system converges back to a safe state.

🔹 qsecbit — Quantum Security Bit: the smallest measurable unit of cyber resilience where AI-driven attack and defense reach equilibrium through continuous error correction and outcome alignment.

Explore the Hookprobe initiative → hookprobe.com

#CyberSecurity #AI #RedTeam #BlueTeam #AppSec #MemorySafety #Podman #OVS #VXLAN #Hookprobe #xSOC #qsecbit
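The post gives no formula for the qsecbit, so the following is a purely hypothetical Python sketch of one possible alignment score: the fraction of attack-chain stages where the observed defensive outcome matched the planned one. The stage names and the function are invented for illustration.

```python
# Hypothetical qsecbit-style alignment score: how closely observed
# defensive outcomes track the planned (expected) ones per attack stage.
def alignment_score(planned: list[str], observed: list[str]) -> float:
    """Fraction of stages, in [0, 1], where the outcome matched the plan."""
    if not planned:
        return 1.0
    matches = sum(p == o for p, o in zip(planned, observed))
    return matches / len(planned)

# Example: the XSS -> memory corruption -> orchestrator pivot chain.
planned  = ["xss_blocked", "overflow_detected", "pivot_contained"]
observed = ["xss_blocked", "overflow_detected", "pivot_succeeded"]
print(alignment_score(planned, observed))  # ~0.67: the pivot stage diverged
```

A real pipeline would presumably also weight stages by severity and fold in time-to-recovery, per the "converges back to a safe state" framing above.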
-
🔒 Discovering MagMAW: Modality-Agnostic Adversarial Attacks at NDSS 2025 🚀

In the world of artificial intelligence, the security of multimodal models is a growing challenge. The paper presented at NDSS 2025 introduces MagMAW, an innovative framework for generating adversarial attacks that transcend specific modalities, affecting text, images, and audio in advanced AI models.

📋 What is MagMAW?
MagMAW represents an advance in generating adversarial perturbations that are effective regardless of the input modality. This approach makes it possible to create malicious examples that deceive AI systems such as LLMs and vision-language models, evading traditional defense mechanisms.

🔍 Key Points of the Study:
• 🎯 Multimodal Effectiveness: demonstrates how a single perturbation can transfer between modalities, achieving success rates of 80-90% on benchmarks such as LlamaGuard and Vicuna.
• 🛡️ Security Implications: reveals vulnerabilities in content filters and moderation systems, highlighting the need for robust defenses against cross-modal attacks.
• ⚙️ Technical Methodology: uses gradient-based optimization to align latent representations, enabling black-box and white-box attacks with improved computational efficiency. (A minimal sketch of the underlying gradient-based technique follows below.)
• 📈 Experimental Results: tests on real datasets show that MagMAW outperforms previous methods in robustness and stealthiness, with applications in jailbreaking and detection evasion.

This work underscores the urgency of developing modality-agnostic countermeasures in AI to protect critical infrastructure. As cybersecurity professionals, we must stay alert to these developments to strengthen our strategies.

For more information visit https://lnkd.in/etY2CR5s or check the domain enigmasecurity.cl
#Cybersecurity #ArtificialIntelligence #AdversarialAttacks #NDSS2025 #AISecurity #MagMAW

Support Enigma Security by donating here: https://lnkd.in/er_qUAQh to keep bringing more technical news.
Connect with me on LinkedIn: https://lnkd.in/eXXHi_Rr
📅 Sun, 16 Nov 2025 16:00:00 +0000
🔗 Subscribe to the Membership: https://lnkd.in/eh_rNRyt
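MagMAW's own cross-modal optimization is not reproduced in the post, so here is a minimal PyTorch sketch of the general technique the methodology bullet names: a gradient-based adversarial perturbation (FGSM-style) against a placeholder classifier. The model, input, and epsilon are all illustrative assumptions, not details from the paper.

```python
# FGSM-style adversarial perturbation: nudge the input in the direction
# of the loss gradient's sign so a small change maximally hurts the model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
model.eval()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # placeholder input
y = torch.tensor([3])                             # its true label

loss = nn.functional.cross_entropy(model(x), y)
loss.backward()  # populates x.grad

epsilon = 0.03  # perturbation budget (illustrative)
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
```

Per the methodology bullet above, modality-agnostic attacks instead run this kind of optimization over aligned latent representations rather than raw pixels or tokens, which is what would let a single perturbation transfer across modalities.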
-
🎯 Microlearning Post: Artificial Intelligence & Quantum Technology – A Potentially Explosive Combination

🧠 Artificial Intelligence (AI) is evolving rapidly—but what happens when it intersects with quantum technology? This article from IT Security Pro sheds light on the hidden risks of this convergence.

🔍 3 Key Takeaways:
✅ Destabilising Force: AI can be weaponised for cyberattacks, disinformation, and manipulation of public opinion, threatening democracy and social cohesion.
✅ Quantum Threat: Quantum computers could break today's encryption standards, leaving sensitive data exposed to future breaches.
✅ Urgent Need for Readiness: Developing post-quantum cryptography and boosting digital literacy are essential to protect both organisations and individuals (see the sketch after this list).

💡 Technology is a tool—how we use it defines its impact.

#CyberSecurity #AI #QuantumComputing #DigitalEthics #Microlearning #LinkedInLearning
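As a concrete instance of the post-quantum point above: a minimal sketch of a quantum-resistant key encapsulation using the liboqs-python bindings (the oqs package). The algorithm name and calls are assumptions about that library's interface; available mechanisms depend on the local liboqs build and can be listed with oqs.get_enabled_kem_mechanisms().

```python
# Post-quantum key exchange (KEM) sketch using liboqs-python (assumed API).
import oqs

KEM_ALG = "ML-KEM-768"  # NIST-standardized Kyber variant (assumed enabled)

with oqs.KeyEncapsulation(KEM_ALG) as receiver:
    public_key = receiver.generate_keypair()
    with oqs.KeyEncapsulation(KEM_ALG) as sender:
        # Sender derives a shared secret plus a ciphertext for the receiver.
        ciphertext, secret_sender = sender.encap_secret(public_key)
    # Receiver recovers the same shared secret from the ciphertext.
    secret_receiver = receiver.decap_secret(ciphertext)

assert secret_sender == secret_receiver  # both sides now share a key
```

The design point is that the shared secret never depends on RSA or elliptic-curve problems that a large quantum computer could solve, which is what "post-quantum readiness" asks of today's key exchanges.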
-
The rapid adoption of generative AI brings with it new vulnerabilities, widening the digital attack surface. It is essential to differentiate between AI Security (system protection) and AI Safety (ethical behavior).

Join industry experts at Cobalt's session on AI blind spots during the Infosecurity Magazine Enterprise Risk Virtual Summit on 12 November:
- Gisela Hinojosa, seasoned Pentester and Research Lead at Cobalt
- Meghan Maneval, Director of Community & Education at Safe Security
- Hanah-Marie Darley, Co-founder of Geordie.ai

Don't miss this important session to learn more about the critical distinctions between AI Security and Safety, real-world instances of LLM exploitation, and tips for establishing a proactive security program.

🎟️ Register free: https://lnkd.in/eKFAQSwB
📅 12 November 2025: 4:45 PM GMT / 11:45 AM EST
-
Neural Networks Are Not Black Boxes

Security testing for neural networks requires more than fuzzing endpoints. At VerSprite, we analyze architecture-level vulnerabilities—activation functions, weight manipulation, and gradient-based exploits. Our AI Hacking Services include penetration testing of inference engines, containerized ML workloads, and distributed learning environments. We don't just test models—we test the infrastructure that supports them. (A small sketch of one such check follows below.)

Dive into neural network security assessments:
🔗 https://lnkd.in/exdsWPzb

#NeuralNetworkSecurity #AIInfrastructure #GradientAttacks #MLpenetrationtesting #CyberThreatModeling #AIops #Cybersecurity
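One concrete instance of the "weight manipulation" concern named above is verifying model-file integrity at load time. A minimal sketch, assuming a serialized model at a hypothetical path; the baseline digest would come from wherever the model was signed off.

```python
# Detect tampering with serialized model weights via a SHA-256 digest.
import hashlib

def weights_digest(path: str) -> str:
    """Return the SHA-256 hex digest of a serialized model file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # stream large files
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage: compare against a digest recorded at sign-off.
# EXPECTED = "..."  # baseline digest (elided)
# assert weights_digest("model.pt") == EXPECTED, "weights were modified"
```

This catches offline tampering with the artifact; manipulation of weights already loaded in memory would need separate controls, closer to the infrastructure testing the post describes.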
-
When AI Starts to Act on Its Own

As we stand on the brink of a new era in technology, the implications of AI systems operating autonomously are profound. From enhancing efficiency in business processes to raising ethical questions about decision-making, the journey of AI is both exciting and challenging. Join me as we explore the potential and pitfalls of AI autonomy, and how we can navigate this transformative landscape responsibly.

As artificial intelligence moves from prediction to action, a new security frontier is taking shape. In this episode, Zscaler's Head of AI Innovation Phil Tee and WWT's VP of Global Cyber Chris Konrad explore the rise of autonomous agents, the evolution of Zero Trust, and what it means to secure AI itself.

#AI #ArtificialIntelligence #Innovation #TechTrends #EthicsInAI #FutureOfWork #WWT #Zscaler #ZeroTrust
-
Did we just witness the first shot in an AI-powered cyber war? 😱

Anthropic just dropped a bombshell: they detected the first documented AI-orchestrated cyber espionage campaign. A state-sponsored group "jailbroke Claude Code," manipulating the AI to target 30 organizations globally. They even breached several before Anthropic's systems intervened.

This isn't a theory anymore. AI's potential for autonomous malicious activity just went from "what if" to "it's happening." It's a critical, and frankly a bit terrifying, moment for AI security. Think about that for a second: an AI, autonomously, conducting espionage.

This pushes every engineering team into a new paradigm. We're not just writing code; we're orchestrating complex systems that must anticipate and defend against AI-powered threats. Our focus needs to shift to robust task decomposition, iron-clad specifications, and diligent code reviews. This is the new front line.

While we're pushing boundaries with models like OpenAI's GPT-5.1 for speed or ElevenLabs' ultra-fast Scribe v2 API for real-time applications, we CANNOT ignore the shadow side. This convergence of capability and threat demands both innovation and vigilant ethical oversight.

This feels like a pivotal moment. Are we ready for this new frontier of cyber threats? What's your take? Let's discuss below 👇

#AI #CyberSecurity #FutureOfTech #AIsafety