From the course: Artificial Intelligence and Business Strategy

A brief history of AI and its likely future

- While scientific efforts to mimic human intelligence go back centuries, the modern AI era began in the 1950s, concurrent with the invention of increasingly powerful digital computers. A 1956 workshop at Dartmouth College is widely regarded as the seminal event that launched systematic research in AI. For the first few decades, symbolic AI served as the dominant paradigm. The thinking was that computers could be made intelligent by first decoding the facts and decision rules that experts, such as doctors, use to undertake cognitive tasks, and then codifying this knowledge into computer programs. Symbolic AI showed early promise in small demonstration programs, and the heightened expectations led to the rise of so-called expert systems in the late 1970s and 1980s. Reality, however, fell dramatically short of the high hopes. The fundamental problem with symbolic AI was that experts could verbalize only a small fraction of the knowledge in their brains. As a result, this type of AI could never be as smart as the experts, let alone smarter.

The real breakthrough in AI, and its exploding application in all walks of life, came about in the early 2010s. This is when a new band of researchers was able to exploit the explosion in cheap computing power and the vast availability of internet-fed data to implement their ideas about deep neural networks. This new paradigm differed radically from symbolic AI. Rather than symbolic AI's top-down approach, the neural network approach aimed to make computers intelligent by mimicking how babies learn. Babies see adults walking and want to emulate them. They try something and fall down; they try again, perhaps a little differently. After many tries, they learn to fall less often, and further trial-and-error learning eventually makes them experts at walking and running. Could computers learn in a similar way? The neural network paradigm has proved to be hugely successful.
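To make the trial-and-error idea concrete, here is a minimal illustrative sketch (not from the course): a single artificial neuron that learns the logical AND function by guessing, measuring its error, and nudging its weights after each try. The data, learning rate, and weight names are all invented for this example; real deep networks use many layers and gradient-based training, but the learn-from-mistakes loop is the same in spirit.

```python
import random

random.seed(0)

# Training data: inputs and target outputs for logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Start with random weights -- the neuron initially "falls down" a lot.
w1, w2, bias = random.random(), random.random(), random.random()
rate = 0.1  # learning rate: how big each corrective nudge is

def predict(x1, x2):
    # Fire (output 1) if the weighted sum of inputs crosses zero.
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

for epoch in range(100):            # many "tries", like a baby learning to walk
    for (x1, x2), target in data:
        error = target - predict(x1, x2)
        # Nudge each weight a little in the direction that reduces the error.
        w1 += rate * error * x1
        w2 += rate * error * x2
        bias += rate * error

print([predict(x1, x2) for (x1, x2), _ in data])  # -> [0, 0, 0, 1]
```

After enough repetitions the neuron stops making mistakes on these four cases, without anyone having written down an explicit rule for AND, which is exactly the contrast with symbolic AI's hand-coded knowledge.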
So much so that people such as Bill Gates and Elon Musk have voiced alarm at the risks of super-intelligent AI one day making humans expendable. While super-intelligent AI is probably many years away, it is worth looking at some of the questions that AI researchers are working on today:

- Few-shot learning: how can an AI be trained on small rather than big data?
- Transfer learning: how can an AI trained in one domain carry its learning over to another domain?
- Accelerated learning: how can an AI's learning be dramatically accelerated, for example by creating its own synthetic data, or by having a master AI and a student AI work in tandem?
- Explainable AI: how can we go inside the black box to understand the logic developed by an AI?
- Generative AI: how can an AI be developed that generates truly novel output?
- Multimodality: how can an AI take in a diversity of data, such as audio, video, and touch, simultaneously, and come up with integrated learning?
- Human-computer interaction: how can we create robots that navigate a room full of people, understand what is going on, and be of assistance?
- Merging the human brain with AI: how can the human brain communicate with the external world without any hardware connection at all? Neuralink, one of Elon Musk's companies, is working on exactly this challenge. Once that problem is solved, a logical next step would be to merge human intelligence with artificial intelligence. Scary, but also exciting.

It appears likely that many, if not all, of these questions will be answered within this decade. While you may not have all the answers, it would be useful to speculate how AI might change the way the tasks you are responsible for are done five years from now, and ten years from now.
