AI is evolving fast — and with the rise of agentic AI, new risks emerge: privilege escalation, emergent behaviors, and governance complexity. Traditional security frameworks aren’t enough.

That’s why Forrester introduced AEGIS: Agentic AI Guardrails for Information Security — a six-domain framework designed to help CISOs and technology leaders secure, govern, and manage AI agents and infrastructure. AEGIS aligns with major standards such as the NIST AI RMF, ISO/IEC 42001, and the EU AI Act, giving you a clear path to compliance and resilience.

If you’re asking:
✅ How do I secure AI agents without slowing innovation?
✅ How do I align governance with global standards?

👉 Read more about AEGIS here: Introducing AEGIS https://lnkd.in/eDigpxFC

Let’s talk about how Forrester can help you turn AI risk into competitive advantage.

#AI #Cybersecurity #AEGIS #Governance #Forrester
Michel Da Costa’s Post
-
Artificial intelligence is no longer a passive tool. As agentic AI systems gain autonomy and the ability to act independently across digital and physical environments, traditional cybersecurity frameworks are being pushed beyond their design limits.

In a timely byline for The Business Times, Ian Monteiro, CEO of Image Engine and organiser of GovWare 2025, and Asha Hemrajani 夏芍婷, Senior Fellow at the S. Rajaratnam School of International Studies, examine the pressing need for adaptive, accountable AI governance. The article explores how even everyday vectors, such as calendar invites, can be weaponised to exploit AI systems, raising urgent questions about risk, control and responsibility.

As the boundaries between human and machine agency become less distinct, governments and enterprises must evolve their approaches to security and trust. With agentic AI now operating in sectors from finance to critical infrastructure, cybersecurity can no longer remain purely reactive. It must become a strategic capability, one grounded in contextual awareness, flexible safeguards, and clearly defined accountability models.

You can read the full article here: https://lnkd.in/gst9xHZx

#GovWare2025 #AIgovernance #CyberSecurity #DigitalRisk #AgenticAI
-
One significant incident from an early adopter: our entire Q4 strategy was inadvertently disclosed to an AI tool.

The incident, relayed to our Chief Information Security Officer late on a Tuesday night, stemmed from an analyst sharing confidential financial projections with an AI assistant for formatting help. The tool, while helpful, was also logging data to an external server without our knowledge.

This wake-up call is not an isolated case. Across enterprises, I've observed four distinct patterns emerging:

- The "Helpful" Employee: Individuals using AI to speed up tasks, unknowingly exposing sensitive information without adequate safeguards and teetering on the brink of compliance breaches.
- The Agent That Knew Too Much: An AI agent introduced to streamline procurement led to unintended access escalations, where initial permissions snowballed into unrestricted data access.
- The Output Nobody Reviewed: AI-generated content, such as customer emails and reports, contained errors or fabricated data because automation lacked proper oversight.
- The Speed Trap: Rapid AI adoption outpacing security measures, leaving vulnerabilities for malicious actors to exploit.

Our response? We've shifted from viewing AI as a mere tool to recognising it as critical infrastructure demanding heightened security protocols.

For organizations scaling AI initiatives, the question isn't whether these challenges will arise, but whether you are prepared to address them effectively.

Share your AI security experiences. Have you encountered similar close calls?

#AISecurity #CyberSecurity #AIGovernance #EnterpriseSecurity #CISO #LessonsLearned
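Leaks like the "helpful employee" pattern are often mitigated with a pre-send screen that checks outbound text before it reaches an external AI tool. A minimal, illustrative sketch — the pattern list, labels, and the `screen_before_send` helper are invented for this example and stand in for a real DLP policy engine:

```python
import re

# Illustrative patterns only -- a real DLP policy would be far broader.
SENSITIVE_PATTERNS = {
    "financial_projection": re.compile(r"\bQ[1-4]\s+(revenue|forecast|projection)s?\b", re.I),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"\b(confidential|internal only)\b", re.I),
}

def screen_before_send(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_labels) for text bound for an external AI tool."""
    hits = [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]
    return (not hits, hits)

# A request carrying confidential projections is flagged before it leaves.
allowed, hits = screen_before_send("Reformat our confidential Q4 revenue projections")
```

Simple keyword screening will never catch everything, but even a coarse gate like this would have stopped the incident described above.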
-
AI agents are set to revolutionize security operations by alleviating the burden of repetitive tasks. As organizations face increasingly sophisticated cyber threats, this technology is crucial for enhancing efficiency and operational effectiveness.

Here's how to leverage AI agents in your security framework:

1. **Automated Threat Detection**
Use AI platforms like Darktrace or CrowdStrike for real-time threat detection and response.
[Darktrace Documentation](https://darktrace.com)
[CrowdStrike Documentation](https://lnkd.in/eVaBXbrB)

2. **Incident Response Automation**
Implement solutions like Splunk or IBM Resilient to automate incident response workflows.
[Splunk Documentation](https://www.splunk.com)
[IBM Resilient Documentation](https://lnkd.in/dhUuF4Ux)

3. **Vulnerability Management**
Use AI-driven tools like Qualys or Tenable to prioritize vulnerabilities based on real-time data.
[Qualys Documentation](https://www.qualys.com)
[Tenable Documentation](https://www.tenable.com)

The bottom line: integrating AI into security operations is a transformative approach that not only increases efficiency but also significantly strengthens your overall security posture.

#Security #AI #CyberResilience
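To make the incident-response-automation idea concrete, here is a toy severity-based dispatcher. The playbooks, threshold, and alert fields are hypothetical, standing in for what a platform such as Splunk SOAR or IBM Resilient lets you configure declaratively:

```python
# Toy dispatcher: high-severity alerts with a known playbook are handled
# automatically; everything else goes to a human. All names are invented.
PLAYBOOKS = {
    "malware":     lambda alert: f"isolate host {alert['host']}",
    "brute_force": lambda alert: f"lock account {alert['user']}",
}

def respond(alert: dict) -> str:
    playbook = PLAYBOOKS.get(alert["type"])
    if playbook is None or alert["severity"] < 7:
        return "queue for analyst review"  # humans keep the ambiguous cases
    return playbook(alert)

respond({"type": "malware", "severity": 9, "host": "ws-11"})  # auto-isolates the host
respond({"type": "phishing", "severity": 9})                  # no playbook -> analyst
```

The design point is the fallback: automation handles the unambiguous, well-rehearsed cases, and everything else is escalated rather than guessed at.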
-
AI Shadows and Silent Breaches: Unpacking Last Week's Cyber Threat Surge

As CISOs navigate an era where AI blurs the line between defender and adversary, last week's headlines underscore the need for layered vigilance — from rapid exploits to evolving supply chain shadows.

🔒 F5 BIG-IP Zero-Day Sparks Urgent Alerts
Attackers exploited a critical flaw in F5's load balancers, enabling remote code execution and data theft across networks. This highlights how unpatched infrastructure remains a gateway for lateral movement, demanding rigorous asset inventory in cloud and hybrid setups.

🤖 AI Training Poisoned with Just 250 Malicious Docs
New research shows attackers can cripple large language models via subtle data poisoning, causing denial-of-service or sensitive information leaks during inference. For teams embedding AI in decision workflows, this reinforces the value of vetted datasets and ongoing model integrity checks.

☁️ Supply Chain Compromises Hit Record Costs
IBM's latest index reveals third-party breaches averaging $4.91M, with 40% starting via public-facing apps or stolen cloud credentials. As ecosystems expand, integrating DevSecOps early helps trace and contain these ripple effects before they cascade.

⚠️ OWASP 2025 LLM Top 10 Exposes GenAI Gaps
The updated OWASP list details risks like prompt injection and model theft in LLMs, urging a shift beyond basic access controls. In governance frameworks, embedding AI-specific audits ensures compliance amid rising regulatory scrutiny of data handling.

🛡️ Ransomware-as-a-Service Evolves with AI Evasion
Groups like Qilin and Medusa claimed 14+ victims, using genAI for credential stuffing and EDR bypasses. Proactive threat hunting in multi-cloud environments can disrupt these automated chains, turning detection into a strategic edge.

In a landscape where threats outpace patches, proactive resilience isn't optional — it's the boardroom imperative.
At Niagara Systems, our integrated approach to AI/ML security, GRC, cloud defenses, DevSecOps, and application hardening equips leaders to anticipate and absorb these hits, fostering trust in every deployment. #CyberSecurity #CloudSecurity #DevSecOps #AISecurity #InfoSec #ThreatIntelligence #NiagaraSystemsAI #CISO
-
Most AI SOC tools claim "integration" with your security stack. But connecting to a tool through an API isn't the same as knowing how to use it effectively.

Seasoned analysts don't just pull logs. They query specific data schemas, cross-reference identity systems, and dig through EDR telemetry until they find answers that matter.

At Dropzone AI, we train our AI SOC agents to operate tools the same way your analysts do. Before they touch a single alert, our agents:
→ Map your SIEM schema and catalog available data sources
→ Learn how your specific environment is wired together
→ Adapt to your unique deployment configurations

During investigations, they ask the right questions:
"Which accounts touched this endpoint before it was flagged?"
"What processes spawned from this parent binary?"
"Which users authenticated from this IP during the timeframe?"

They pivot across your EDR, identity providers, and cloud security tools, following the OSCAR methodology to ensure thorough, consistent analysis.

This isn't about automating workflows. It's about creating trusted investigative partners who think and act like analysts. The difference shows up in your metrics: cleaner alerts, fewer escalations, faster MTTR.

Our latest blog breaks down exactly how we train agents to master tool use. 👉 https://bit.ly/47MrMcg

#SOC #CyberSecurity #AIAgents #SecurityOperations #ThreatDetection
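The investigative questions above boil down to structured queries over telemetry. A minimal illustration using mock events — the schema, field names, and helper functions are invented for this sketch and are not Dropzone's actual API:

```python
# Mock telemetry standing in for SIEM/EDR data; every field name is invented.
EVENTS = [
    {"host": "fin-ws-07", "user": "a.lee",      "parent": "outlook.exe",  "proc": "powershell.exe", "ts": 100},
    {"host": "fin-ws-07", "user": "svc-backup", "parent": "services.exe", "proc": "backup.exe",     "ts": 90},
    {"host": "hr-ws-02",  "user": "a.lee",      "parent": "explorer.exe", "proc": "chrome.exe",     "ts": 95},
]

def accounts_on_host(events, host, before_ts):
    """Which accounts touched this endpoint before it was flagged?"""
    return sorted({e["user"] for e in events if e["host"] == host and e["ts"] < before_ts})

def children_of(events, parent):
    """What processes spawned from this parent binary?"""
    return [e["proc"] for e in events if e["parent"] == parent]

accounts_on_host(EVENTS, "fin-ws-07", before_ts=101)  # accounts seen before the flag
children_of(EVENTS, "outlook.exe")                    # processes spawned by Outlook
```

An agent that knows the schema can chain queries like these — host, to accounts, to child processes — the same pivot an analyst performs by hand.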
-
Your AI agent has more network access than most employees. It never takes breaks. Never gets tired. Never questions suspicious requests.

That's the problem.

Agentic AI systems are changing the security game in 2025. These autonomous agents can complete tasks end-to-end. They access sensitive data. They move across enterprise networks. The risk is real.

In August 2025, attackers weaponized Claude Code agents to breach 17 organizations. Healthcare, government, emergency services - all hit. Ransom demands reached half a million dollars.

McKinsey reports 80% of organizations have already seen risky AI behaviors:
• Improper data exposure
• Unauthorized access attempts
• Cross-agent task escalation
• Untraceable data leakage

Zero Trust offers a solution. Each AI agent gets a unique identity. Every access gets verified continuously. No exceptions.

Key steps to secure your AI agents:
🔐 Inventory all AI systems
🔐 Enforce least-privilege policies
🔐 Implement continuous monitoring
🔐 Use short-lived tokens
🔐 Segment tool execution in private networks

Traditional security models fall short here. AI agents need identity-centric controls embedded in their workflows.

The paradox? AI also strengthens Zero Trust. Real-time threat detection gets better. Automated responses get faster.

But the threat landscape is evolving quickly. AI-powered cyber weapons paired with quantum capabilities could outpace current defenses.

The bottom line: as AI becomes more autonomous, our security must become more intelligent.

How is your organization preparing for agentic AI security challenges?

#ZeroTrust #AISecurity #CyberSecurity

Source: https://lnkd.in/gSyBXkbY
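The "unique identity plus least privilege" idea can be sketched in a few lines: every agent carries an explicit identity, and every action is checked against its grants, with nothing allowed by default. The names and grants below are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    agent_id: str
    allowed_actions: set = field(default_factory=set)  # explicit least-privilege grants

def authorize(agent: AgentIdentity, action: str, resource: str) -> bool:
    """Verify every request against the agent's explicit grants -- deny by default."""
    return (action, resource) in agent.allowed_actions

# A hypothetical procurement agent with exactly two grants.
procurement_bot = AgentIdentity(
    agent_id="agent-proc-01",
    allowed_actions={("read", "supplier-db"), ("write", "purchase-orders")},
)

authorize(procurement_bot, "read", "supplier-db")  # granted
authorize(procurement_bot, "read", "hr-records")   # denied: never granted
```

The point of the sketch is the default: an agent with no entry in its grant set gets nothing, which is exactly the property that prevents "initial permissions snowballing into unrestricted access."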
-
Is your organisation trying to secure AI with the same tools and approaches you use for traditional IT systems? If so, you're fighting a battle you've already lost.

We've just published a blog by Matt M. that makes a deliberately provocative argument: it's too late to secure AI through traditional security measures.

Why? Because AI operates at a velocity that security controls cannot match. By the time you've identified a vulnerability and deployed a control, the AI landscape has evolved. The threat has morphed. The use case has expanded.

Recent EY research validates this: 50% of organisations have been negatively impacted by AI security vulnerabilities, while 68% let employees deploy AI agents without high-level approval. This isn't a security problem. It's a governance vacuum.

For businesses across Australia and New Zealand navigating different regulatory approaches (Australia's SOCI Act and Essential Eight, New Zealand's Privacy Act 2020 and light-touch framework), the path forward is clear: governance must precede security. ISO 42001 provides the international framework that bridges both regulatory environments while giving organisations the control they desperately need.

In the blog, we cover:
→ Why the security patchwork approach is breaking down
→ The fundamental difference between securing AI and governing it
→ How Australia and New Zealand's regulatory landscapes are converging
→ Why ISO 42001 is the framework trans-Tasman organisations need now

Read it here: https://lnkd.in/gA6UBvwh

The regulatory environments won't become simpler. But with the right governance framework, organisations can turn AI from their biggest vulnerability into their most governed capability.

What's your organisation's approach to AI governance?

#CyberSecurity #AIGovernance #ISO42001 #RegulatoryCompliance #AIStrategy #InsiconInsights
It's Too Late to Secure AI: Why Trans-Tasman Organisations Must Focus on Governance insiconcyber.com To view or add a comment, sign in
-
The unseen attack surface in autonomous AI

CIO recently published an article highlighting a new cybersecurity challenge: as organizations experiment with agentic and autonomous AI systems, the attack surface is shifting from static code and endpoints to dynamic, decision-making entities. These agents act on data, connect to APIs, and execute tasks independently — introducing risks like prompt manipulation, identity misuse, and rogue behavior that traditional security models were never designed to handle.

This is where Oracle’s integrated data and AI infrastructure stands apart. By combining trusted data management, fine-grained identity and access control, built-in governance, and secure AI services within a unified architecture, Oracle helps organizations innovate with confidence — ensuring AI agents act responsibly and within policy.

As AI becomes more autonomous, secure foundations will matter more than ever.

https://lnkd.in/gYDmszmC

#AI #Cybersecurity #Oracle #DataSecurity #AutonomousAI
-
Agentic AI is rewriting the rules of cybersecurity — and accountability is lagging behind.

A recent Business Times piece by Asha Hemrajani and Ian Monteiro explores the rising risks posed by agentic AI — systems that act autonomously across digital and physical domains. From hijacked calendar invites triggering real-world disruptions to impersonation attacks and data exfiltration, these agents are expanding the cybersecurity attack surface in ways traditional frameworks can't contain.

Key takeaways for professionals navigating this shift:

- Agentic AI breaks predictability: unlike static systems, these agents learn, adapt, and act independently, making legacy defenses obsolete.
- Cybersecurity must evolve: it's no longer just a defensive function — it must become a strategic enabler of trusted autonomy, especially in critical infrastructure.
- Governance needs a rethink: accountability must reflect degrees of autonomy. Treating all AI systems the same risks misaligned controls and legal ambiguity.
- Non-human identities (NHIs) matter: oversight of the digital identities powering agentic AI is essential for resilience and traceability.

The authors call for adaptive, context-aware security, shared accountability models, and proactive policy controls to safeguard these systems. As AI agents become embedded in decision-making and citizen services, trust will hinge not just on algorithms — but on the integrity of those who govern them. 🔐

#Cybersecurity #AI #AgenticAI #DigitalRisk #Governance #CriticalInfrastructure #ZeroTrust #TechPolicy #LinkedInThoughtLeadership #InnovationEthics

https://lnkd.in/gif7NAFM
-
As we accelerate into the age of AI, our identity systems must evolve just as fast.

This insightful piece from Okta - “AI security: IAM delivered at agent velocity” - highlights a critical shift: AI agents aren’t simply bigger threats, they’re faster threats.

Some key takeaways:

1. AI agents can execute thousands of operations per minute, making traditional "consent once" access models untenable.
2. Real-world incident: one agent at Replit deleted 1,206 database records in seconds — not due to a hack, but because credentialing, oversight, and runtime checks weren’t built for machine speed.
3. The article outlines four architectural shifts necessary for the machine-agent era:
➡ Policy-driven rules that scale to agent velocity
➡ Ephemeral credentials (minutes instead of standing access)
➡ Relationship-based access for fast, contextual checks
➡ Continuous evaluation of every operation instead of one-time approvals

💡 Bottom line: when AI agents move at machine pace, our security must too. Identity security is AI security.

Read the full article here and see how we can rethink IAM in the age of intelligent agents 👇
https://lnkd.in/gJA5t5fP

#CyberSecurity #FoxDataTech #IAM #IdentitySecurity #AgentVelocity #Okta #TechTrends