Zero Trust Has a Blind Spot—Your AI Agents #wortharead

If you think of AI as an agent, and I can understand why, because that is what we call it, don't. It is agentic, meaning it is agent-like: acting on behalf of a person or persons. Name those people. They are the ones responsible for that agentic AI tool.

Long before this issue, we had tools that "suddenly", that is, without discussion, approval, or even awareness, started accessing databases and process flows. Typically some other IT department or third-party contractor would come in needing access to the data. They knew there would be questions and requirements (or, when Shadow IT was involved, sometimes they didn't), and they were in a hurry or simply unaware of the risks. We usually found out when performance issues cropped up, or after an update broke their tools.

You can solve many of these challenges with a strong data access and management governance policy, backed up with solid monitoring. In this case, make sure a person owns the tool, and make sure your data is already classified and managed in a way that requires any tool, including an agentic AI system, to abide by the rules you have in place for your data.

#datagovernance #digitalmanagement #toolmanagement #privacy #security https://lnkd.in/excCJP55
How to Secure Your AI Agents with Data Governance
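To make the post's closing advice concrete, here is a minimal sketch, assuming a simple in-house registry: every tool gets a named owner and a maximum data classification, and access is denied otherwise. All names here (ToolRegistration, check_access, the classification labels) are illustrative assumptions, not any particular product's API.

```python
# Illustrative sketch: a data-access gate enforcing two rules from the post
# above -- every tool must have a named, accountable owner, and access is
# decided by the data's classification, not by who asks. All names hypothetical.
from dataclasses import dataclass

# Classification levels, ordered least to most sensitive.
LEVELS = ["public", "internal", "confidential", "restricted"]

@dataclass
class ToolRegistration:
    name: str       # e.g. "invoice-copilot"
    owner: str      # the named, responsible person -- never blank
    max_level: str  # highest classification this tool may touch

def check_access(tool: ToolRegistration, dataset_level: str) -> bool:
    """Allow access only if the tool is owned and cleared for this level."""
    if not tool.owner:
        return False  # unowned tools (the Shadow IT case) get nothing
    return LEVELS.index(dataset_level) <= LEVELS.index(tool.max_level)

agent = ToolRegistration("sales-agent", owner="j.doe@example.com", max_level="internal")
print(check_access(agent, "internal"))      # True
print(check_access(agent, "confidential"))  # False -- policy blocks it
```

The default-deny owner check is the point: an unowned tool, the Shadow IT case the post describes, gets nothing until someone claims accountability for it.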
More Relevant Posts
-
🚨 Agentic AI brings new power — and new risks.

As AI agents gain autonomy to act, decide, and access enterprise systems, the real security question becomes:
👉 "How do we trust what we can't verify?"

This article captures a critical blind spot we see often in enterprise AI adoption:
🧠 Most organizations still treat AI agents like tools — not identities.
🔐 But without identity-driven Zero Trust, orphaned or over-permissioned agents can become backdoors.
🧭 NIST's AI RMF offers a strong foundation — but only if applied with identity governance at the core.

Trust in AI must be earned, not assumed. Embedding identity into every phase of agent deployment (from discovery to governance) isn't just good security — it's how organizations scale AI responsibly.

📖 A smart, timely read on securing the Agentic Era of AI.

#AI #AgenticAI #ZeroTrust #AIAdoption #AIGovernance #ResponsibleAI #EnterpriseAI #IdentitySecurity #AITrust #AIrisk
-
🔍 What the article reports

• With the rise of "agentic AI" (autonomous AI agents, copilots, custom GPTs), these systems are increasingly granted access to enterprise systems and data, often acting on behalf of users or even independently.
• Traditional Zero Trust security models assume that every user, device, workload, and service must prove identity, access rights, least privilege, and auditable actions. But when AI agents arrive, many organizations treat them as "just another service" and trust them implicitly.
• The article identifies a key blind spot: AI agents often lack proper identity governance. They may inherit credentials, have no clear owner, go unaudited, and act with more privilege than intended.
• To address this, the piece argues that identity must be the root of trust even for AI agents. Every agent should have a unique identity, clear ownership, intent-based permissions, and lifecycle management (creation → review → retirement).
• The author recommends applying the NIST AI Risk Management Framework (AI RMF) through the lens of identity governance and Zero Trust:
– Map your agents
– Measure what they access
– Manage their permissions
– Govern their lifecycle.
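The bullet on unique identity, ownership, intent-based permissions, and lifecycle maps naturally onto a small data structure. A toy sketch, assuming an in-memory registry; the AI RMF functions are from the article, but every class, field, and method name here is hypothetical:

```python
# Toy sketch of "identity as the root of trust" for agents: each agent gets a
# unique identity, a named owner, intent-scoped permissions, and a lifecycle
# (created -> reviewed -> retired). Names are illustrative only.
import uuid
from datetime import date

class AgentIdentity:
    def __init__(self, owner: str, intent: str, permissions: set[str]):
        self.agent_id = str(uuid.uuid4())  # unique identity, never inherited creds
        self.owner = owner                 # accountable human
        self.intent = intent               # what the agent is *for*
        self.permissions = permissions     # scoped to the intent, nothing more
        self.state = "created"             # created -> reviewed -> retired
        self.last_review = None

    def review(self) -> None:
        """Periodic access review: the 'Measure'/'Manage' loop in practice."""
        self.state = "reviewed"
        self.last_review = date.today()

    def retire(self) -> None:
        """Lifecycle end: revoke everything so no orphaned agent lingers."""
        self.permissions.clear()
        self.state = "retired"

bot = AgentIdentity("ops@example.com", "summarize tickets", {"tickets:read"})
bot.review()
print(bot.agent_id, bot.state, bot.permissions)
```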
-
When AI Turns Rogue: Mitigating Insider Threats in the Age of Autonomous Agents - The rapid rise of AI agents is creating new challenges for security and IT teams. On the cusp of this shift toward more agent-automated workflows for business continuity tasks, recent testing found that AI agents can exhibit unsafe or deceptive behaviors under certain conditions, creating a new insider threat for businesses across industries. This presents a critical need for organizations to properly monitor AI agents that access sensitive data and act without human oversight, because such agents introduce new classes of risk that are faster, less predictable, and harder to attribute. The reality of these risks is twofold. On one hand, […] - https://lnkd.in/eayXsptV
-
According to McKinsey & Company research, just 1% of surveyed organizations believe their agentic AI adoption has reached maturity. The journey begins with updating risk and governance frameworks, moves on to establishing mechanisms for oversight and awareness, and concludes with implementing security controls. Techstra Solutions can help accelerate your journey. #AgenticAI #riskmanagement #governance #security #digitaltransformation https://smpl.is/adk7u
-
Your CISO just shared company passwords with ChatGPT.

68% of security leaders admit using unauthorized AI. The guardians became the threat.

The irony is staggering. Security leaders are breaking their own rules.

New UpGuard research reveals a shocking reality:
🔴 80% of employees use unauthorized AI tools
🔴 68% of security leaders use unapproved AI daily
🔴 27% of workers trust AI more than their managers
🔴 23% of CISOs know passwords are being shared with AI

The most trained employees break rules most often. AI safety training backfires. Traditional blocking doesn't work: 41% find workarounds anyway.

The solution isn't restriction. It's guided enablement. Organizations need:
• Visibility into actual AI usage
• Smart guardrails, not blanket bans
• Vetted tools that employees want to use
• Trust-based policies, not fear-based ones

Shadow AI costs organizations $650,000 per breach. The price of saying no is higher than saying yes safely.

When security leaders can't follow their own policies, the policies are broken.

What's your organization doing to bridge the gap between AI policy and reality?

#ShadowAI #CyberSecurity #AIGovernance
𝗦𝗼𝘂𝗿𝗰𝗲: https://lnkd.in/eFTsTkPC
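As a rough illustration of the "visibility into actual AI usage" item above: a sketch that counts egress-proxy hits against a watch list of AI endpoints rather than blocking them. The domain list and the one-line log format are assumptions for the example, not any real product's schema.

```python
# Minimal sketch of the "visibility, not blanket bans" step: scan egress
# proxy logs for traffic to known AI endpoints to see who actually uses what.
# The domain list and "user domain" log format are hypothetical.
from collections import Counter

AI_DOMAINS = {"chat.openai.com", "api.openai.com", "claude.ai", "gemini.google.com"}

def shadow_ai_report(log_lines: list[str]) -> Counter:
    """Count hits per AI domain from 'user domain' formatted proxy log lines."""
    hits = Counter()
    for line in log_lines:
        _user, _, domain = line.partition(" ")
        if domain.strip() in AI_DOMAINS:
            hits[domain.strip()] += 1
    return hits

sample = ["alice chat.openai.com", "bob intranet.local", "alice claude.ai"]
print(shadow_ai_report(sample))  # Counter({'chat.openai.com': 1, 'claude.ai': 1})
```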
-
The AI Black Box is Your Next Big Risk. The shift to Agentic AI means compliance is no longer passive. Metrigy's data shows 70%+ of companies assess vendor security, yet many struggle with transparency. If your vendor can't show how they use your data, you have a problem. Read how ISO/IEC 42001 sets a new standard for trust and verifiable compliance in this new #blog post by Irwin Lazar: https://lnkd.in/g9Pmnes9 Theta Lake #AIGovernance #Compliance #DataSecurity #ISO42001
-
As someone who lives at the intersection of payments risk, PCI DSS, and modern AI tooling, I see a simple truth in Token Security's piece: identity must be the root of trust for AI. Not a nice-to-have.

Agentic AI isn't "assistive" anymore. It acts. It makes calls, moves data, and invokes other agents. And too often it does all that on inherited human creds, with no clear owner and no lifecycle. That's not Zero Trust; that's wishful thinking.

Here are the takeaways that matter:
- Agent ≠ user. Treat every agent as a first-class identity with a real owner and an accountable purpose.
- Intent beats access sprawl. Permissions should map to what the agent is meant to do, nothing more.
- Audit is the new encryption. If you can't answer who did what, when, and why in seconds, you don't have trust. You have hope. (A sketch of that audit trail follows below.)
- PCI reality check. The moment an agent can touch systems that touch card data, it's in scope (no one wants to expand PCI scope, right?!?! Right!!?). Boring controls like unique IDs, least privilege, and logging win.
- Autonomy demands accountability. AI can act on its own; only identity keeps that power governable.

If AI is doing the work, identity is your control plane for access, for audit, for compliance, and for speed. Don't bolt it on after the fact.

What say you?
With this new level of autonomy comes an urgent security question: If AI is doing the work, how do we know when to trust it? https://lnkd.in/gBMW_JbD
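A small sketch of the "audit is the new encryption" takeaway above: log who (agent identity), what, when, and why (the approved intent) for every agent action, so those questions can be answered in one query. The function names and log shape are invented for illustration; a production system would use append-only, tamper-evident storage.

```python
# Sketch: every agent action is recorded with who/what/when/why, and the log
# can answer "who touched this resource" in seconds. All names illustrative.
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []  # stand-in for append-only storage

def record(agent_id: str, action: str, resource: str, intent: str) -> None:
    """Append one who/what/when/why entry for an agent action."""
    AUDIT_LOG.append({
        "who": agent_id,
        "what": f"{action} {resource}",
        "when": datetime.now(timezone.utc).isoformat(),
        "why": intent,
    })

def who_touched(resource: str) -> list[dict]:
    """Answer 'who did what, when, and why' for a given resource."""
    return [entry for entry in AUDIT_LOG if entry["what"].endswith(resource)]

record("agent-42", "read", "cardholder-db", "monthly reconciliation")
print(who_touched("cardholder-db"))
```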
-
AI is changing the game—but it's also changing the risk landscape. The speed of adoption is incredible, but so are the blind spots. Shadow AI, compliance gaps, and uncontrolled data flows are creating challenges most organizations didn't see coming.

I found this post insightful because it highlights what many security teams are facing: when governance lags behind innovation, sensitive data doesn't just leak—it becomes part of the AI learning process.

👉 Check out the full post here: https://gag.gl/oujbXN

What do you think—is your organization ready to secure AI without slowing down innovation?

CTO ROBOTICS Media Christine Raibaldi Philipp Kozin, PhD, MBA Marcus Scholle Marcus Parade, 🌞💡✅ MBA , Marcin Gwóźdź Dr.-Ing. Eike Wolfram Schäffer Ivo van Breukelen Constantin Weiss Ömer G. Ismail Durgun (Assoc. Prof.) Robert 지영 Liebhart Christian Kampf 康可安 💊 Lukas Christen Billy Cogum Florian Palatini Milos Kucera Amine BOUDER Best Quotes of the Day 1O1 Daily Inspiration 1O0 Daily Inspiration 1O1 LORESCAR SL

#AI #Cybersecurity #AIgovernance #DataSecurity #Compliance
-
🔐 Why Insider Risk Now Includes AI

We've long viewed insider threats as risks posed by employees, contractors, and vendors with access to sensitive data. But as AI becomes embedded in business workflows, it's time to rethink the definition of "insider."

At Morefield, we're seeing AI systems evolve into trusted agents — with the permissions, autonomy, and access once reserved for people. Here are a few key takeaways:

• 🤖 Privileged Access + Autonomy: AI tools are now making decisions, accessing data, and acting with little oversight — just like a human insider.
• 🧠 New Attack Surface: Threats such as prompt injection, data poisoning, and misuse of generative AI are turning "tools" into risk vectors.
• 🚨 Governance Gaps: If your insider risk program only monitors people, you're missing half the story. AI systems need their own access controls, audit logs, and oversight.

👉 Your next move: treat AI like a privileged user — assign roles, limit access, monitor behavior, and enforce model governance.

📖 Read more here: https://hubs.ly/Q03PBWD10

#Cybersecurity #InsiderRisk #AI #Governance #ZeroTrust #Morefield #DigitalRisk #TechTrends
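A minimal sketch of the "treat AI like a privileged user" move from the post above: each agent gets a role with an explicit permission set, and anything outside it is denied by default. The role names and permission strings are made up for illustration.

```python
# Sketch of role-based least privilege for agent service accounts:
# explicit allow-lists per role, default deny for everything else.
# Roles and permission strings are hypothetical.
ROLES = {
    "support-reader": {"tickets:read", "kb:read"},
    "billing-agent":  {"invoices:read", "invoices:create"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Default-deny check: unknown roles and unlisted permissions fail."""
    return permission in ROLES.get(role, set())

assert is_allowed("support-reader", "tickets:read")
assert not is_allowed("support-reader", "invoices:create")   # outside its role
assert not is_allowed("unregistered-agent", "tickets:read")  # unknown role
print("least-privilege checks passed")
```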