From the course: Where to Start with AI and Business Strategy with Chris McKay

How can you proactively address ethical dilemmas in AI?

- Chris, how can organizations proactively address potential ethical dilemmas during the AI development process?

- The first thing I would say is that ethics is so important. We could have an entire podcast series on ethics in AI. When it comes to ethics, I think of AI responsibility. You'll hear a lot of companies talk about responsible AI, and it's the same thing. There are four big considerations I think companies need to look at to ensure they're on the right side not just of the law, but of history: compliance and regulation, data privacy and security, risk assessment and mitigation, and ethical considerations such as bias.

The regulatory landscape is changing dramatically right now. We saw a lot of leadership within Europe on the AI Act, which went into effect this week, actually. And in the U.S., you're seeing the Voluntary AI Framework put forward by the Biden Administration that so many companies are getting behind, which is amazing. You're also seeing a lot more conversations around which government regulatory body oversees AI models when they're released. What about when AI models mess up? Who is accountable? What oversight needs to be put in place? I am all for innovation, but I am also keenly aware that we have made mistakes in the past by not regulating certain industries, like social media, and we waited too long to ask the important questions. My goal in investing in and promoting AI literacy is that I believe a more literate population is going to ask better questions, and we will ultimately contribute to the discourse around AI when we are more educated on the challenges, the issues, the good things and the bad things.

Data privacy and security is a major piece of the puzzle. We have seen so many security breaches, and we're seeing the rise of deepfakes. AI accelerates so many of the challenges around cybersecurity. What I often say to businesses is, be very deliberate about the data you collect. Just because you can collect something doesn't mean you should. I get that data is the foundation of a lot of the tools and AI technologies we're utilizing, but be transparent with your users about what you're collecting, and get their consent, because that trust is going to be important at so many different layers.

When it comes to making decisions, if you're building tools or using AI, it's also going to be important to be transparent about how decisions are being made. Are you relying on the AI 100%? Is there a human in the loop? Being thoughtful about how you handle decision making matters as well.

And then there's risk. We had things like the CrowdStrike incident a few weeks ago that completely made a mess of so many different industries, and that comes down to risk assessment and mitigation. We often talk about a risk assessment matrix, where we look at things that are high likelihood versus low likelihood and high impact versus low impact. It's not that you stop paying attention to the other risks, but you'll be able to better understand which risks are critical versus which are frequent or minor.
So prepare for things like this. AI models often get jailbroken, and we're at a period when things are changing rapidly. You roll out a technology, and then, because you're tracking the news and staying up to date, you realize the model has a flaw and you need to put something in place. It's going to be important that you have thought about mitigation measures and how you can respond to risk. We have seen big companies like Google run into issues with their image models and have to pull them. OpenAI had concerns around some of its audio features and had to pull them. So you also need to be aware that, as you operate in this AI landscape, you will need to react very quickly to any issues that pop up. Again, taking the time in the strategic planning phase to think these scenarios through and ask the questions is going to be important.
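
As a rough illustration of the risk assessment matrix described above, here is a minimal Python sketch. It is not from the course: the 1-to-3 scoring scale, the category thresholds, and the example risk entries are all hypothetical assumptions, chosen only to show how likelihood and impact can be combined so that critical risks surface first.

    from dataclasses import dataclass

    @dataclass
    class Risk:
        name: str
        likelihood: int  # 1 (low) to 3 (high); hypothetical scale
        impact: int      # 1 (low) to 3 (high); hypothetical scale

        @property
        def score(self) -> int:
            # Simple priority score: likelihood times impact
            return self.likelihood * self.impact

        @property
        def category(self) -> str:
            # Buckets named after the transcript: critical, frequent, minor.
            # Thresholds here are illustrative assumptions.
            if self.likelihood >= 2 and self.impact >= 2:
                return "critical"   # likely and damaging: mitigate first
            if self.likelihood >= 2:
                return "frequent"   # likely but lower impact
            return "minor"          # unlikely and/or low impact

    # Hypothetical example risks, not from the course
    risks = [
        Risk("model jailbreak", likelihood=3, impact=3),
        Risk("biased output in edge cases", likelihood=2, impact=3),
        Risk("vendor API deprecation", likelihood=1, impact=2),
    ]

    # Highest-priority risks print first
    for risk in sorted(risks, key=lambda r: r.score, reverse=True):
        print(f"{risk.name}: {risk.category} (score {risk.score})")

Sorting by the combined score is one simple way to order a mitigation backlog; in practice, teams often weight impact more heavily than likelihood, or handle low-likelihood, high-impact risks in a separate bucket.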
