How to Secure Your AI Agents with Data Governance

Frank Gilbert | Senior Tech Executive & Cyberpsychologist | Uniting Technology Leadership, Business Strategy & Human Psychology for today's complex challenges.

Zero Trust Has a Blind Spot: Your AI Agents #wortharead

First, if you think of AI as an agent, and I can understand why, because that is what we call it, don't think of it as an agent. It is agentic, meaning it is agent-like: it acts on behalf of a person or persons. Name those people. They are the ones responsible for that agentic AI tool.

Long before this issue, we had tools that "suddenly" started accessing databases and process flows, that is, without discussion, approval, or even awareness. Typically, another IT department or a third-party contractor would come in and need access to the data. They knew there would be questions and requirements (or sometimes did not know, when Shadow IT was involved), and they were in a hurry or simply unaware of the risks. We usually found out when performance issues cropped up, or after an update broke their tools.

You can solve many of these challenges with a strong data access and management governance policy, backed up with solid monitoring. Make sure a person owns the tool, and make sure your data is already classified and managed in a way that requires any tool, including an agentic AI system, to abide by the rules you have in place for your data. A minimal sketch of that check follows below.

#datagovernance #digitalmanagement #toolmanagement #privacy #security https://lnkd.in/excCJP55
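To make the two rules concrete, here is a minimal, hypothetical sketch in Python of the kind of access check such a governance policy implies: no named owner, no access, and no reads above the tool's clearance. The classification labels, the Tool structure, the may_access function, and the owner name are illustrative assumptions, not any specific product's API.

```python
from dataclasses import dataclass

# Illustrative classification levels, ordered least to most sensitive (assumed labels).
CLASSIFICATION_RANK = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

@dataclass
class Tool:
    """Any tool that touches data, including an agentic AI system."""
    name: str
    owner: str | None  # the named, responsible person (None = unowned / Shadow IT)
    clearance: str     # highest classification this tool may read

def may_access(tool: Tool, dataset_classification: str) -> bool:
    """Enforce both rules from the post: a named owner is required,
    and the tool may only read data at or below its clearance."""
    if not tool.owner:
        return False  # no responsible person, no access
    return CLASSIFICATION_RANK[tool.clearance] >= CLASSIFICATION_RANK[dataset_classification]

# Example: an agentic AI assistant with a named owner and 'internal' clearance.
agent = Tool(name="report-bot", owner="Jane Smith", clearance="internal")
print(may_access(agent, "internal"))    # True: owned, within clearance
print(may_access(agent, "restricted"))  # False: above its clearance
print(may_access(Tool("shadow-tool", None, "public"), "public"))  # False: no owner
```

The same check belongs in your monitoring path: log every decision, not just the denials, so "sudden" access by a new tool shows up before the performance problems do.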
