From the course: Artificial Intelligence and Business Strategy

Decision-Making by Machines: Moral Dilemmas

- AI is a general-purpose technology whose applications cut across all aspects of life and work, and the capabilities of AI are growing rapidly. Two conclusions emerge from these basic observations. First, over time, we humans will let AI augment many of our decisions. Second, and more interestingly, we will let AI act autonomously in a growing number of contexts without any real-time input from us. Look at the use of AI to screen job applications submitted online. Most HR departments let AI do the screening autonomously, without any real-time input from recruiters. Look at Google's use of AI to cool its data centers. Initially, Google used AI to recommend desired actions. Today, it lets AI manage the control systems autonomously. Look also at completely AI-driven hedge funds. Fund managers specify the broad parameters, but then let AI make buy/sell decisions without any real-time input. Projecting ahead to the next few years, AI will clearly become capable of making many more decisions better than humans. In many of these contexts, especially those dealing with serious outcomes, society will be forced to wrestle with moral dilemmas. Without a systematic and thorough resolution of these dilemmas, we will have either chaos or a slowdown in the deployment of AI technologies. Take the case of autonomy in cars. From keeping the car within its lane to self-parking, cars are steadily becoming more autonomous. They're also becoming safer, yet every time an autopilot causes a fatality, there is a media outcry. In contrast, we hardly ever read about the 36,000 deaths in the U.S. alone each year from accidents caused by human drivers. Why? The answer lies in research findings that society, being human, seems willing to accept human frailties much more readily than errors attributable to machines. Now, imagine that sometime during this decade, cars become fully autonomous and 100% safe, in the sense of never making a mistake. Purely in terms of saving lives, it is obvious that people should then prefer such cars over driving themselves, yet the moral dilemmas will not go away. They may even become more difficult. In a variant of the well-known trolley problem, assume that it is 2027 and you are riding alone in a 100% safe autonomous car. Suddenly, heavy boxes drop from the truck in front of you. Should the car keep driving and hit the boxes, almost certainly causing great danger to you; swerve to the right and hit a minivan, endangering five people; or swerve to the left and hit a sedan, endangering only two people? What is morally right? Would the answer be any different if you are 25 and all seven of these other people are in their 80s? What if the two people in the sedan are in their 20s, but the five people in the minivan are in their 80s? Whatever choice a human driver made in any of these contexts, we would accept it as just an unplanned, spontaneous human reaction, but if the choice is made by a machine, we will treat it as the result of a design decision, deliberately choosing one person's injury over another's. So, if you are one of the engineers designing the car, what heuristics would you build into the AI? As CEO, what guidance would you provide? This is not just an engineering problem; it's a moral question that requires input not only from lawyers and philosophers, but also from legislators and regulators acting on behalf of society. Think about the applications of AI in your organization. What moral dilemmas do your company's leaders face today, and what moral dilemmas will they face five years from now?
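To make the engineering question concrete, here is a minimal, hypothetical Python sketch. The course does not prescribe any heuristic; the Option class, the harm counts, and the two rules below are illustrative assumptions only. It simply shows that two plausible-sounding heuristics applied to the 2027 scenario above reach different answers, which is exactly why this is a design decision rather than a neutral technical choice.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    people_endangered: int   # how many people this maneuver puts at risk
    includes_rider: bool     # whether the car's own rider is among them

# The three choices described in the 2027 scenario above.
options = [
    Option("keep lane, hit the boxes", 1, includes_rider=True),
    Option("swerve right into the minivan", 5, includes_rider=False),
    Option("swerve left into the sedan", 2, includes_rider=False),
]

def minimize_total_harm(opts):
    """Pick whichever maneuver endangers the fewest people overall."""
    return min(opts, key=lambda o: o.people_endangered)

def protect_rider_first(opts):
    """Never endanger the car's own rider; then minimize harm to others."""
    return min((o for o in opts if not o.includes_rider),
               key=lambda o: o.people_endangered)

print(minimize_total_harm(options).name)   # -> keep lane, hit the boxes
print(protect_rider_first(options).name)   # -> swerve left into the sedan
```

The two rules disagree: minimizing total harm sacrifices the rider, while protecting the rider shifts the risk onto the sedan's occupants. Whichever rule ships in the product encodes a moral stance, which is why the transcript argues the choice needs input from lawyers, philosophers, legislators, and regulators, not engineers alone.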
