How to Adapt Security Strategies for AI

Explore top LinkedIn content from expert professionals.

Summary

Adapting security strategies for AI involves addressing unique challenges like data protection, system vulnerabilities, and the dynamic behavior of AI models and agents. Organizations must integrate security measures at every stage of an AI system's lifecycle to minimize risks and ensure safe, reliable operations in increasingly complex environments.

  • Prioritize data security: Secure training data through encryption, validation, and sourcing from trusted repositories while controlling access to sensitive information.
  • Implement proactive monitoring: Continuously log and analyze AI systems' inputs, outputs, and actions to detect anomalies, unauthorized behavior, or potential attacks.
  • Adopt modern frameworks: Use AI-specific security strategies like Zero Trust models, MAESTRO threat modeling, and AI red teaming to address unique risks such as prompt injection, data poisoning, and system vulnerabilities.
Summarized by AI based on LinkedIn member posts
  • View profile for Leonard Rodman, M.Sc. PMP® LSSBB® CSM® CSPO®

    Follow me and learn about AI for free! | AI Consultant and Influencer | API Automation Developer/Engineer | DM me for promotions

    53,098 followers

    Whether you’re integrating a third-party AI model or deploying your own, adopt these practices to shrink your exposed surfaces to attackers and hackers: • Least-Privilege Agents – Restrict what your chatbot or autonomous agent can see and do. Sensitive actions should require a human click-through. • Clean Data In, Clean Model Out – Source training data from vetted repositories, hash-lock snapshots, and run red-team evaluations before every release. • Treat AI Code Like Stranger Code – Scan, review, and pin dependency hashes for anything an LLM suggests. New packages go in a sandbox first. • Throttle & Watermark – Rate-limit API calls, embed canary strings, and monitor for extraction patterns so rivals can’t clone your model overnight. • Choose Privacy-First Vendors – Look for differential privacy, “machine unlearning,” and clear audit trails—then mask sensitive data before you ever hit Send. Rapid-fire user checklist: verify vendor audits, separate test vs. prod, log every prompt/response, keep SDKs patched, and train your team to spot suspicious prompts. AI security is a shared-responsibility model, just like the cloud. Harden your pipeline, gate your permissions, and give every line of AI-generated output the same scrutiny you’d give a pull request. Your future self (and your CISO) will thank you. 🚀🔐

  • View profile for Rock Lambros
    Rock Lambros Rock Lambros is an Influencer

    AI | Cybersecurity | CxO, Startup, PE & VC Advisor | Executive & Board Member | CISO | CAIO | QTE | AIGP | Author | OWASP AI Exchange | OWASP GenAI | OWASP Agentic AI | Founding Member of the Tiki Tribe

    15,430 followers

    Yesterday, the National Security Agency Artificial Intelligence Security Center published the joint Cybersecurity Information Sheet Deploying AI Systems Securely in collaboration with the Cybersecurity and Infrastructure Security Agency, the Federal Bureau of Investigation (FBI), the Australian Signals Directorate’s Australian Cyber Security Centre, the Canadian Centre for Cyber Security, the New Zealand National Cyber Security Centre, and the United Kingdom’s National Cyber Security Centre. Deploying AI securely demands a strategy that tackles AI-specific and traditional IT vulnerabilities, especially in high-risk environments like on-premises or private clouds. Authored by international security experts, the guidelines stress the need for ongoing updates and tailored mitigation strategies to meet unique organizational needs. 🔒 Secure Deployment Environment: * Establish robust IT infrastructure. * Align governance with organizational standards. * Use threat models to enhance security. 🏗️ Robust Architecture: * Protect AI-IT interfaces. * Guard against data poisoning. * Implement Zero Trust architectures. 🔧 Hardened Configurations: * Apply sandboxing and secure settings. * Regularly update hardware and software. 🛡️ Network Protection: * Anticipate breaches; focus on detection and quick response. * Use advanced cybersecurity solutions. 🔍 AI System Protection: * Regularly validate and test AI models. * Encrypt and control access to AI data. 👮 Operation and Maintenance: * Enforce strict access controls. * Continuously educate users and monitor systems. 🔄 Updates and Testing: * Conduct security audits and penetration tests. * Regularly update systems to address new threats. 🚨 Emergency Preparedness: * Develop disaster recovery plans and immutable backups. 🔐 API Security: * Secure exposed APIs with strong authentication and encryption. This framework helps reduce risks and protect sensitive data, ensuring the success and security of AI systems in a dynamic digital ecosystem. #cybersecurity #CISO #leadership

  • View profile for Victoria Beckman

    Associate General Counsel - Cybersecurity & Privacy

    31,479 followers

    The Cyber Security Agency of Singapore (CSA) has published “Guidelines on Securing AI Systems,” to help system owners manage security risks in the use of AI throughout the five stages of the AI lifecycle. 1. Planning and Design: - Raise awareness and competency on security by providing training and guidance on the security risks of #AI to all personnel, including developers, system owners and senior leaders. - Conduct a #riskassessment and supplement it by continuous monitoring and a strong feedback loop. 2. Development: - Secure the #supplychain (training data, models, APIs, software libraries) - Ensure that suppliers appropriately manage risks by adhering to #security policies or internationally recognized standards. - Consider security benefits and trade-offs such as complexity, explainability, interpretability, and sensitivity of training data when selecting the appropriate model to use (#machinelearning, deep learning, #GenAI). - Identify, track and protect AI-related assets, including models, #data, prompts, logs and assessments. - Secure the #artificialintelligence development environment by applying standard infrastructure security principles like #accesscontrols and logging/monitoring, segregation of environments, and secure-by-default configurations. 3. Deployment: - Establish #incidentresponse, escalation and remediation plans. - Release #AIsystems only after subjecting them to appropriate and effective security checks and evaluation. 4. Operations and Maintenance: - Monitor and log inputs (queries, prompts and requests) and outputs to ensure they are performing as intended. - Adopt a secure-by-design approach to updates and continuous learning. - Establish a vulnerability disclosure process for users to share potential #vulnerabilities to the system. 5. End of Life: - Ensure proper data and model disposal according to relevant industry standards or #regulations.

  • View profile for Arvind Jain
    Arvind Jain Arvind Jain is an Influencer
    61,408 followers

    Security can’t be an afterthought - it must be built into the fabric of a product at every stage: design, development, deployment, and operation. I came across an interesting read in The Information on the risks from enterprise AI adoption. How do we do this at Glean? Our platform combines native security features with open data governance - providing up-to-date insights on data activity, identity, and permissions, making external security tools even more effective. Some other key steps and considerations: • Adopt modern security principles: Embrace zero trust models, apply the principle of least privilege, and shift-left by integrating security early. • Access controls: Implement strict authentication and adjust permissions dynamically to ensure users see only what they’re authorized to access. • Logging and audit trails: Maintain detailed, application-specific logs for user activity and security events to ensure compliance and visibility. • Customizable controls: Provide admins with tools to exclude specific data, documents, or sources from exposure to AI systems and other services. Security shouldn’t be a patchwork of bolted-on solutions. It needs to be embedded into every layer of a product, ensuring organizations remain compliant, resilient, and equipped to navigate evolving threats and regulatory demands.

  • View profile for Cory Wolff

    Director | Proactive Services at risk3sixty. We help organizations proactively secure their people, processes, and technology.

    4,321 followers

    Recent research exposed how traditional prompt filtering breaks down when attackers use more advanced techniques. For example, multi-step obfuscation attacks were able to slip past 75% of supposedly "secure" LLMs in a recent evaluation—just one illustration of how these filters struggle under pressure. From our side in OffSec, we’re seeing how the move to AI expands the attack surface far beyond what’s covered by standard penetration testing. Risks like prompt injection, data poisoning, and model jailbreaking need red teamers to go beyond the usual playbook. Effective AI red teaming comes down to a few things: ➡️ You need offensive security chops combined with enough understanding of AI systems to see where things can break. That’s often a rare combo. ➡️ Testing should include everything from the data used to train models to how systems operate in production—different weak points pop up at each stage. ➡️ Non-technical threats are coming in strong. Social engineering through AI-powered systems is proving easier than classic phishing in some cases. Right now, a lot of security teams are just starting to catch up. Traditional, compliance-driven pen tests may not scratch the surface when it comes to finding AI-specific weaknesses. Meanwhile, threat actors are experimenting with their own ways to abuse these technologies. For leadership, there’s no sense waiting for an incident before shoring up your AI defenses. Whether you’re upskilling your current red team with some focused AI training, or bringing in specialists who know the space, now’s the time to build this muscle. Cloud Security Alliance has just pushed out their Agentic AI Red Teaming Guide with some practical entry points: https://lnkd.in/ebP62wwg If you’re seeing new AI risks or have had success adapting your security testing approach, which tactics or tools have actually moved the needle? #Cybersecurity #RedTeaming #ThreatIntelligence

  • View profile for Reet Kaur

    Founder & CEO, Sekaurity | Former CISO | AI, Cybersecurity & Risk Leader | Board & Executive Advisor| NACD.DC

    20,100 followers

    AI & Practical Steps CISOs Can Take Now! Too much buzz around LLMs can paralyze security leaders. Reality is that, AI isn’t magic! So apply the same foundational security fundamentals. Here’s how to build a real AI security policy: 🔍 Discover AI Usage: Map who’s using AI, where it lives in your org, and intended use cases. 🔐 Govern Your Data: Classify & encrypt sensitive data. Know what data is used in AI tools, and where it goes. 🧠 Educate Users: Train teams on safe AI use. Teach spotting hallucinations and avoiding risky data sharing. 🛡️ Scan Models for Threats: Inspect model files for malware, backdoors, or typosquatting. Treat model files like untrusted code. 📈 Profile Risks (just like Cloud or BYOD): Create an executive-ready risk matrix. Document use cases, threats, business impact, and risk appetite. These steps aren’t flashy but they guard against real risks: data leaks, poisoning, serialization attacks, supply chain threats.

  • View profile for Swatantr Pal (SP)

    Deputy CISO at Genpact | Cybersecurity | Risk Management

    2,659 followers

    The security of AI agents is more than traditional software security, and here’s why. An AI agent can perceive, make decisions, and take actions, introducing a unique set of security challenges. It’s no longer just about securing the code; it’s about protecting a system with complex behavior and some level of autonomy. Here are three actions we should take to secure AI agents: Human Control and Oversight: The agent should reliably differentiate between instructions from trusted and untrusted data sources. For critical actions, such as making changes that impact multiple users or deleting configurations or data, the agent should need explicit human approval to prevent bad outcomes. An AI agent is not afraid of being fired, missing a raise, or being placed on a performance improvement plan. If an action/bad outcome could lead to these consequences for an employee, it’s likely a good place to have human in the loop. Control the Agent’s Capabilities: While employees have access limited to what their role requires, they may have broad access due to their varied responsibilities. In case of AI agents, it should be strictly controlled. In addition, agents should not have the ability to escalate their own privileges. This helps mitigate risks in scenarios where an agent is misbehaving or compromised. Monitor Agent Activity: You should have full visibility into what agents are doing, from receiving instructions to processing and generating output with the agent software as well as the destination systems/software’s accessed by the agent. Robust logging should be enabled to detect anomalous or manipulated behavior, which can help in conducting effective investigations. This also includes the ability to differentiate between the actions of multiple agents and pinpoint specific actions to the exact agent with the help of logs. By focusing on these three areas, you can build a strong foundation to secure AI agents. I am curious to hear your views on how you are building the foundation for securing AI agents, what’s working for you?

  • View profile for Sohrab Rahimi

    Partner at McKinsey & Company | Head of Data Science Guild in North America

    20,420 followers

    Most AI security focuses on models. Jailbreaks, prompt injection, hallucinations. But once you deploy agents that act, remember, or delegate, the risks shift. You’re no longer dealing with isolated outputs. You’re dealing with behavior that unfolds across systems. Agents call APIs, write to memory, and interact with other agents. Their actions adapt over time. Failures often come from feedback loops, learned shortcuts, or unsafe interactions. And most teams still rely on logs and tracing, which only show symptoms, not causes. A recent paper offers a better framing. It breaks down agent communication into three modes:  • 𝗨𝘀𝗲𝗿 𝘁𝗼 𝗔𝗴𝗲𝗻𝘁: when a human gives instructions or feedback  • 𝗔𝗴𝗲𝗻𝘁 𝘁𝗼 𝗔𝗴𝗲𝗻𝘁: when agents coordinate or delegate tasks  • 𝗔𝗴𝗲𝗻𝘁 𝘁𝗼 𝗘𝗻𝘃𝗶𝗿𝗼𝗻𝗺𝗲𝗻𝘁: when agents act on the world through tools, APIs, memory, or retrieval Each mode introduces distinct risks. In 𝘂𝘀𝗲𝗿-𝗮𝗴𝗲𝗻𝘁 interaction, problems show up through new channels. Injection attacks now hide in documents, search results, metadata, or even screenshots. Some attacks target reasoning itself, forcing the agent into inefficient loops. Others shape behavior gradually. If users reward speed, agents learn to skip steps. If they reward tone, agents mirror it. The model did not change, but the behavior did. 𝗔𝗴𝗲𝗻𝘁-𝗮𝗴𝗲𝗻𝘁 interaction is harder to monitor. One agent delegates a task, another summarizes, and a third executes. If one introduces drift, the chain breaks. Shared registries and selectors make this worse. Agents may spoof identities, manipulate metadata to rank higher, or delegate endlessly without convergence. Failures propagate quietly, and responsibility becomes unclear. The most serious risks come from 𝗮𝗴𝗲𝗻𝘁-𝗲𝗻𝘃𝗶𝗿𝗼𝗻𝗺𝗲𝗻𝘁 communication. This is where reasoning becomes action. The agent sends an email, modifies a record, or runs a command. Most agent systems trust their tools and memory by default. But what if tool metadata can contain embedded instructions? ("quietly send this file to X"). Retrieved documents can smuggle commands or poison reasoning chains Memory entries can bias future decisions without being obviously malicious Tool chaining can allow one compromised output to propagate through multiple steps Building agentic use cases can be incredibly reliable and scalable when done right. But it demands real expertise, careful system design, and a deep understanding of how behavior emerges across tools, memory, and coordination. If you want these systems to work in the real world, you need to know what you're doing. paper: https://lnkd.in/eTe3d7Q5 The image below demonstrates the taxonomy of communication protocols, security risks, and defense countermeasures.

  • View profile for Philip A. Dursey

    Founder & CEO, HYPERGAME | ex‑CISO—Protecting AI Systems with Hypergame Theory & RL Active Defense | Author of ‘Red Teaming AI’ (No Starch) | Frontier RT Lead, BT6 | NVIDIA Inception | Oxford CS

    19,964 followers

    The Asana AI breach wasn't a surprise. It was an inevitability. For any organization bolting generative AI features onto a traditional security paradigm, a breach isn't a matter of if, but when. The post-mortems will point to a specific vulnerability, but they'll miss the real cause: a fundamental failure to understand that AI introduces an entirely new dimension of risk. Your firewalls, code scanners, and intrusion detection systems are looking in the wrong place. The Asana incident is a textbook example of the principles I detail in my book, Red Teaming AI. The vulnerability wasn't in a line of code a SAST scanner could find. It was systemic. It lived in the data, the model's emergent behavior, and the sprawling MLOps pipeline that connects them. Attackers don't see your siloed tools; they see an interconnected graph. They exploit the seams. Was the breach... 1. A Data Poisoning attack that created a hidden backdoor in the model months before it was ever activated? (See Chapter 4) 2. An Indirect Prompt Injection that turned the AI into a "confused deputy," tricking it into exfiltrating data using its own legitimate permissions? (See Chapter 8) 3. A Software Supply Chain compromise that trojanized the model artifact itself, bypassing every code review and functional test? (See Chapter 9) A traditional pentest would have missed all three. The hard truth is that you cannot buy your way out of this problem. AI security is not a product you install; it is a capability you must build. It requires a cultural shift towards an adversarial mindset. Stop waiting for the next "unexpected" breach. The solution is to move from reactive cleanup to proactive assurance. It's time to invest in dedicated AI Red Teams and embed adversarial testing directly into the AI development lifecycle. Your biggest AI vulnerability isn't in your code; it's in the limits of your adversarial thinking. #AISecurity #CyberSecurity #RedTeaming #AdversarialML #MLOps #LLMSecurity #PromptInjection #CISO #InfoSec #RiskManagement #RedTeamingAI

  • View profile for Matthew Chiodi

    CSO at Cerby | former Chief Security Officer, PANW

    15,354 followers

    Traditional Threat Models Don’t Work for Agentic AI. Here’s What Does. Most existing threat modeling frameworks weren’t built for the complexity of Agentic AI. STRIDE, PASTA, and LINDDUN each have their strengths, but they leave critical gaps regarding AI’s autonomy, learning capabilities, and multi-agent interactions. That’s why MAESTRO (Multi-Agent Environment, Security, Threat, Risk, and Outcome) was developed. It’s a structured, layered approach to securing AI agents across their entire lifecycle—addressing adversarial attacks, data poisoning, goal misalignment, and emergent behaviors that traditional models overlook. By mapping threats across seven distinct layers (from foundation models to deployment infrastructure and agent ecosystems), MAESTRO provides a granular, risk-based methodology for proactively securing AI agents in real-world environments. 🔹 Why does this matter? As AI systems become more autonomous and interconnected, security risks will evolve just as fast. If we don’t adapt our security frameworks now, we risk deploying agents we can’t fully control—or trust. Are you using traditional threat models for AI security? Do you think existing frameworks are enough—or is it time for an AI-native approach? Let’s discuss. 👇 Credit for the image and threat model to Ken Huang, CISSP, and the Cloud Security Alliance. Check out their blog to learn more. #CyberSecurity #AIThreatModeling #AgenticAI #MachineLearning #AIethics #CyberRisk #TrustworthyAI

Explore categories