From the course: Securing the AI/ML Development Lifecycle: A Practical Guide to Secure AI Engineering
AI and ML security challenges
- [Instructor] Things like speed to market, features, and price are important to making a product successful. But the truth is that it's hard to retain customers if a product puts their health, safety, or finances at risk. This means security is important anytime a company develops a product. Risks are highest when new technologies first come to market. For example, in the early 1950s, the aerospace company De Havilland triumphantly introduced a new technological marvel, the world's first commercial jetliner. They called it the Comet 1, a sleek, modern, luxury travel experience the likes of which the world had never seen. While everything seemed fine on the surface, there were some very serious hidden problems. The Comet 1 used a square window design that looked impressive but introduced structural flaws. A subsequent investigation found that critical fatigue cracks formed around these windows and could cause catastrophic failure. The impact on De Havilland was immense. Passengers lost confidence and stopped booking flights, the company's reputation sank, and a complete investigation and redesign of the Comet was required to determine what had happened and how it could be addressed. Ultimately, De Havilland did fix the issues and introduced a new model, the Comet 4, but by then the early lead had been lost. New competitors like Boeing and Douglas had entered the market, and De Havilland was overtaken just a few years later. These same factors apply to AI applications as well. For example, can you imagine an AI triage assistant in a hospital that misclassifies patient symptoms? How about an industrial predictive maintenance system whose failure leads to the release of environmental contaminants? What about a failure in an aviation decision support or navigation system? 
Granted, I've cherry-picked some pretty extreme examples here, where security or operational failures directly impact life and safety, but even when the stakes are lower, failures can damage a company's brand or put its market position at risk. Consider an AI portfolio balancing tool that loses a customer's savings, or the loss of customer trust when personal data gets leaked. Even customer service, seemingly a safe use case, can have negative impacts. For example, in February 2024, a large language model, or LLM, system used by Air Canada misinformed a customer about the company's bereavement policy. In a civil resolution, the airline was ordered to pay restitution to that customer as a direct result of the misinformation the chatbot supplied. This was covered extensively in the press, representing a major brand impact for Air Canada. To be reliable, security and resilience need to be architectural criteria for AI products, just as they are for any other product. They need to be baked in from first principles, not bolted on after something goes wrong. This doesn't just mean protecting systems from hackers. It also means designing systems that can't be manipulated, misused, or tampered with in the first place, systems that are instead trustworthy by design. Now that you've seen why security is so important for AI, let's take a high-level look at how you can address it within your pipelines. As we do, pay attention to how your own organization's pipelines work and how you can build a case for security in them.