Explaining Humans to AI - the new paradigm of Collaborative Intelligence
Humans and machines can enhance each other’s strengths. Source: Harvard Business Review

Artificial Intelligence has made stunning leaps in the past year. Algorithms are now doing things — like designing drugs, writing wedding vows, negotiating deals, creating illustrations, composing music — that have always been the sole prerogative of humans.

AI is seen as both a threat and a boon. Yet the advent of sophisticated AI raises another big question that’s received far less attention: How does this change our sense of what it means to be Human? In the face of ever more intelligent machines, are we still… well, special?

“Humanity has always seen itself as unique in the universe,” says Benoît Monin, a professor of Organizational Behavior at Stanford Graduate School of Business. “When the contrast was to animals, we pointed to our use of language, reason, and logic as defining traits. So what happens when the phone in your pocket is suddenly better than you at these things?”

If our sense of identity is threatened by AI, will we change our criteria for what it means to be human? Or should we seek to be better understood by AI while, in parallel, we seek to better understand how AI works (Explainable AI)?

We’re told that AI neural networks ‘learn’ the way humans do, and that Neural Networks were designed to mimic the Human Brain.

Neural nets are typically trained by “supervised learning”: they’re presented with many examples of an input and the desired output, and the connection weights are gradually adjusted until the network “learns” to produce the desired output.

To learn a language task, a neural net may be presented with a sentence one word at a time, and will slowly learn to predict the next word in the sequence.
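
As a concrete illustration, here is a minimal sketch of that kind of supervised training loop, assuming PyTorch is available; the toy corpus, model size, and hyperparameters are illustrative only, not a real language model:

```python
# A minimal sketch of supervised next-word training, assuming PyTorch.
import torch
import torch.nn as nn

corpus = "the cat sat on the mat".split()
vocab = {w: i for i, w in enumerate(sorted(set(corpus)))}

# Supervised pairs: each word is the input, the following word the target.
inputs = torch.tensor([vocab[w] for w in corpus[:-1]])
targets = torch.tensor([vocab[w] for w in corpus[1:]])

model = nn.Sequential(
    nn.Embedding(len(vocab), 16),  # map word ids to vectors
    nn.Linear(16, len(vocab)),     # score every word in the vocabulary
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(100):
    optimizer.zero_grad()
    logits = model(inputs)           # predict the next word for each input
    loss = loss_fn(logits, targets)  # error against the desired output
    loss.backward()                  # propagate errors back (see below)
    optimizer.step()                 # gradually adjust connection weights
```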

This is very different from how humans typically learn. Most human learning is “unsupervised”, which means we’re not explicitly told what the “right” response is for a given stimulus. We have to work this out ourselves.

For instance, children aren’t given instructions on how to speak, but learn this through a complex process of exposure to adult speech, imitation, and feedback.

Neural Networks can learn in ways we can’t.

An even more fundamental difference concerns the way Neural Nets learn. In order to match up a stimulus with a desired response, neural nets use an algorithm called “backpropagation” to pass errors backward through the network, allowing the weights to be adjusted in just the right way.
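
To see what adjusting the weights “in just the right way” means, here is a minimal sketch of backpropagation on a single neuron in plain Python; the stimulus, target, and learning rate are illustrative values:

```python
# A minimal sketch of backpropagation on a single neuron, in plain Python.
x, target = 0.5, 1.0   # one training example: stimulus and desired response
w, b = 0.1, 0.0        # connection weight and bias, to be learned
lr = 0.5               # learning rate

for step in range(50):
    y = w * x + b        # forward pass: the network's current response
    error = y - target   # how far the output is from the desired output
    # Backward pass: the chain rule tells each weight its share of the error.
    grad_w = error * x   # gradient of 0.5 * error**2 with respect to w
    grad_b = error       # gradient of 0.5 * error**2 with respect to b
    w -= lr * grad_w     # adjust the weights "in just the right way"
    b -= lr * grad_b
```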

However, it’s widely recognized by neuroscientists that backpropagation can’t be implemented in the brain, as it would require external signals that just don’t exist. Some researchers have proposed that variations of backpropagation could be used by the brain, but so far there is no evidence that human brains use such learning methods.

So if we are ever to have a future where Humans and AI interact, work together, and even co-exist, then we need to think about how both Humans and AI can better explain themselves to each other.

This is the foundation for Collaborative Intelligence - a synergistic and mutually beneficial confluence of Human Intelligence and Artificial Intelligence.

Why Explaining Humans to AI Matters

The concept of Explainable AI (XAI) has gained significant traction over the past few years, focusing on making AI Systems' decision-making processes transparent and understandable to humans. However, the inverse of this relationship—explaining Humans to AI—remains relatively unexplored. This approach emphasizes equipping AI systems with a deep and nuanced understanding of human behavior, psychology, values, emotions, and contexts. It is an essential advancement to ensure AI interacts with humans more effectively, ethically, and empathetically in the era of Collaborative Intelligence.

As AI Systems become increasingly integrated into our daily lives, from healthcare and finance to social media and autonomous vehicles, the need for these systems to understand human behavior, emotions, and motivations becomes paramount. Without a deep comprehension of human nature, AI risks making decisions that, while logically sound, may be emotionally or ethically inappropriate.

Consider, for instance, an AI System tasked with scheduling meetings. Without understanding the nuances of human social dynamics, it might schedule back-to-back meetings without breaks, failing to account for the human need for rest and mental preparation. Or consider an AI-driven customer service chatbot that fails to recognize the emotional state of a frustrated customer, potentially exacerbating the situation.
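
To make the scheduling example concrete, here is a minimal sketch of a scheduler that encodes one human constraint, a break between meetings; the function, interval format, and 15-minute buffer are illustrative assumptions, not a real calendar API:

```python
# A minimal sketch of a scheduler that encodes one human constraint: a
# break between meetings. The 15-minute buffer is an illustrative value.
from datetime import timedelta

BREAK = timedelta(minutes=15)  # assumed human need: rest between meetings

def next_slot(busy, duration, day_start):
    """Earliest start time that leaves a break around existing meetings."""
    cursor = day_start
    for start, end in sorted(busy):
        if cursor + duration + BREAK <= start:
            return cursor                  # fits, with a break to spare
        cursor = max(cursor, end + BREAK)  # resume after meeting plus break
    return cursor
```

A purely logic-driven scheduler would simply drop the BREAK term; adding it is a small instance of explaining a human need to the system.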

The Importance of Explaining Humans to AI

Enhanced Interaction and Usability

Contextual Understanding: AI Systems that understand Human behavior and context can provide more relevant and personalized responses. For instance, Virtual Assistants that grasp cultural nuances and personal preferences can interact more naturally with users. Consider a Virtual Assistant that recognizes the cultural significance of certain holidays or customs; it can offer more appropriate suggestions and responses.

Emotional Intelligence: Understanding Human emotions allows AI to respond empathetically, enhancing user satisfaction and trust. This capability is crucial in applications like mental health support, customer service, and social robots. For example, an AI Therapist that can detect signs of distress in a patient's voice can respond with empathy, offering comfort and appropriate interventions.

Ethical and Fair Decision-Making

Bias Mitigation: By understanding the complexities of human societies and the systemic biases that exist, AI Systems can be designed to mitigate rather than perpetuate these biases. For instance, an AI System used in hiring that understands the historical biases against certain groups can be programmed to counteract these biases, promoting fairer outcomes.
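
As an illustration of one common mitigation step, here is a minimal sketch of reweighting training examples so that each group contributes equal total weight; the record format is an assumption, and real hiring systems require far more careful fairness auditing than this:

```python
# A minimal sketch of one bias-mitigation step: reweighting training
# examples so every group contributes equal total weight.
from collections import Counter

def group_weights(records, group_key="group"):
    """Weight each record inversely to its group's frequency."""
    counts = Counter(r[group_key] for r in records)
    total, n_groups = len(records), len(counts)
    # Each group receives total/n_groups of the mass, split among members.
    return [total / (n_groups * counts[r[group_key]]) for r in records]

# An underrepresented group gets proportionally larger per-record weights.
weights = group_weights([{"group": "A"}, {"group": "A"}, {"group": "B"}])
# -> [0.75, 0.75, 1.5]
```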

Value Alignment: AI Systems need to understand and align with human values to make ethical decisions. This requires a deep understanding of diverse human perspectives and moral frameworks. For example, in healthcare, an AI System that understands the value of patient autonomy and informed consent can make decisions that respect these principles.

Improved AI Autonomy

Adaptive Learning: AI Systems that can interpret human intentions and adapt to changing human behaviors are better suited for autonomous tasks, from self-driving cars to AI-driven healthcare diagnostics. For instance, a self-driving car that understands pedestrian behavior can predict and respond to unexpected actions more effectively.

Safety and Trust: Autonomous Systems that understand human behavior can predict and respond to human actions more accurately, ensuring safer interactions in shared environments. For example, Industrial Robots working alongside humans can be designed to anticipate human movements, reducing the risk of accidents.

How to Explain Humans to AI

Explaining Humans to AI requires a multidisciplinary approach, incorporating insights from psychology, sociology, anthropology, philosophy, and education. Here are some strategies to achieve this:

  1. Human-AI Dialogue: Design Conversational Interfaces that allow humans to share their thoughts, feelings, and experiences with AI systems. This enables AI to learn about Human emotions, values, and behaviors.
  2. Human-Centered Data: Collect and annotate data that reflects human behavior, emotions, and social interactions. This data can be used to train AI Systems to recognize and understand human patterns (see the sketch after this list).
  3. Cognitive Architectures: Develop AI frameworks that incorporate human cognitive and emotional models. This enables AI systems to simulate human thought processes and decision-making.
  4. Social Learning: Enable AI systems to learn from humans through observation, imitation, and reinforcement. This allows AI to develop social skills and understand human norms.
  5. Multidisciplinary Collaboration: Encourage researchers from diverse fields to collaborate with AI developers. This ensures that AI systems are designed with a deep understanding of human complexities.
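
As an example of strategy 2, here is a minimal sketch of what a human-centered data record might look like: an interaction annotated with emotion and context rather than words alone. The schema and labels are illustrative assumptions, not a standard annotation format:

```python
# A minimal sketch of a human-centered data record: an interaction
# annotated with emotion and context, not just the bare words.
from dataclasses import dataclass

@dataclass
class AnnotatedInteraction:
    utterance: str      # what the person actually said
    emotion: str        # annotated emotional state
    context: str        # situational context of the exchange
    cultural_note: str  # relevant cultural or social nuance, if any

example = AnnotatedInteraction(
    utterance="Fine, whatever works.",
    emotion="resigned",                         # the words alone read neutral
    context="third reschedule of one meeting",  # context reveals frustration
    cultural_note="indirect expression of disagreement",
)
```

Training on records like these, rather than on bare text, is what lets a system connect surface language to the human state behind it.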

Key Aspects of Human Explainability

  1. Emotional Intelligence: One of the most critical aspects of human behavior that AI needs to understand is emotions. This includes not just recognizing emotional states, but understanding their causes, how they influence decision-making, and how they can be appropriately responded to. Teaching AI about the complexity of human emotions could lead to more empathetic and emotionally intelligent systems.
  2. Cultural Context: Humans are deeply influenced by their cultural backgrounds, which shape their values, behaviors, and communication styles. Explaining cultural diversity and its implications to AI could help in developing systems that are more culturally sensitive and adaptable to different societal norms.
  3. Social Dynamics: Human interactions are governed by complex social rules and dynamics. Teaching AI about concepts like social hierarchies, group dynamics, and interpersonal relationships could improve its ability to navigate social situations and facilitate more natural human-AI interactions.
  4. Ethical Frameworks: Human decision-making is often guided by ethical considerations. Explaining various ethical frameworks, moral philosophies, and the concept of human rights to AI could help in developing systems that make more ethically sound decisions.
  5. Cognitive Biases and Irrationality: Humans are not always rational actors. We are subject to numerous cognitive biases and often make decisions based on emotions rather than logic. Teaching AI about these aspects of Human cognition could help it better predict and understand Human behavior.
  6. Creativity and Aesthetics: Human creativity and our appreciation for aesthetics are complex phenomena that AI systems often struggle to replicate or understand. Explaining the human creative process and our perception of beauty could lead to AI systems that generate more appealing and emotionally resonant content.
  7. Human Development and Learning: Understanding how Humans grow, learn, and develop over time is crucial for AI systems that interact with humans of different ages or are involved in education.
  8. Physical and Physiological Needs: Humans have basic physical needs that significantly influence our behavior. AI systems need to understand these needs to make decisions that account for human well-being.

Benefits of Explaining Humans to AI

Explaining Humans to AI offers numerous benefits, including:

  1. Improved Collaboration: By understanding Human values and behaviors, AI Systems can collaborate more effectively with humans, leading to better decision-making and outcomes.
  2. Enhanced Empathy: AI Systems that understand human emotions and empathy can develop more supportive and caring interactions, improving human well-being.
  3. Increased Trust: When AI Systems demonstrate an understanding of human values and principles, humans are more likely to trust AI, leading to increased adoption and acceptance.
  4. More Effective Communication: By understanding Human communication styles and preferences, AI Systems can communicate more effectively, reducing misunderstandings and errors.
  5. Value Alignment: By aligning with Human values and ethics, AI Systems can reduce the risk of harmful or unethical behavior.

Challenges in Implementing Human Explainability

While the concept of explaining Humans to AI is promising, it comes with significant challenges:

  1. Complexity and Variability: Human behavior is incredibly complex and varies greatly between individuals and cultures. Codifying this complexity into a form that AI can understand and apply is a monumental task.
  2. Subjectivity: Many aspects of Human experience, such as emotions and aesthetic preferences, are subjective and difficult to quantify.
  3. Ethical Concerns: There are ethical considerations in deciding what aspects of Human nature should be explained to AI and how this knowledge might be used.
  4. Dynamic Nature: Human societies and cultures are not static; they evolve over time. AI systems would need to be updated regularly to reflect these changes.
  5. Bias and Representation: Care must be taken to ensure that the Human nature being explained to AI is representative of all humanity, not just a subset of the population.

Practical Applications

Healthcare

Patient Support: AI Systems that understand patient emotions and concerns can provide better support and improve adherence to treatment plans. For example, a virtual Health Assistant that detects anxiety in a patient's voice can offer calming advice and reminders about medication.

Personalized Medicine: Understanding patient lifestyles and preferences helps in designing personalized medical interventions. An AI System that considers a patient's daily routines and preferences can recommend lifestyle changes that are more likely to be adopted.

Customer Service

Empathetic Responses: AI chatbots that understand customer emotions can resolve issues more effectively and enhance customer satisfaction. For instance, a customer service bot that detects frustration can escalate the issue to a human representative promptly.
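
As a minimal sketch of that escalation rule, here is a crude keyword-based frustration detector; the word list and routing labels are illustrative, and a real system would use a trained emotion model rather than keywords:

```python
# A minimal sketch of an escalation rule: a crude keyword-based
# frustration detector with illustrative words and routing labels.
FRUSTRATION_WORDS = {"angry", "ridiculous", "unacceptable", "frustrated"}

def route(message: str) -> str:
    text = message.lower()
    if any(word in text for word in FRUSTRATION_WORDS):
        return "human_agent"  # signs of frustration: escalate promptly
    return "chatbot"          # otherwise the bot continues

print(route("This is ridiculous, I have been waiting a week"))  # human_agent
```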

Proactive Assistance: By anticipating customer needs based on past behavior, AI Systems can offer proactive solutions. For example, an AI System that recognizes patterns in customer inquiries can preemptively offer solutions to common problems.

Education

Adaptive Learning Platforms: Educational AI systems that understand student emotions and learning styles can offer tailored educational experiences. For instance, an AI tutor that recognizes a student's confusion can provide additional explanations and resources.

Mentorship and Support: AI mentors can provide personalized guidance and emotional support to students. An AI system that understands a student's goals and challenges can offer targeted advice and encouragement.

Social Robots

Companion Robots: Robots that understand and respond to Human emotions can offer companionship and support to the elderly and those in need of social interaction. For example, a robot that detects loneliness in an elderly person can initiate conversations and activities to provide comfort.

Interactive Toys: AI-powered toys that understand children’s emotions can provide engaging and supportive interactions. For instance, a toy that recognizes a child's boredom can suggest new activities to keep them engaged.

Conclusion

The concept of Human Explainability represents a paradigm shift in our approach to AI development. Rather than focusing solely on making AI understandable to humans, it emphasizes the importance of making humans understandable to AI. This bidirectional approach to Explainability could be key to developing AI Systems that are not just intelligent, but also empathetic, ethically aware, and truly beneficial to humanity.

As we continue to advance in AI technology, it's crucial that we invest time and resources into explaining the complexities of Human nature to these systems. Only by doing so can we hope to create AI that not only mimics Human Intelligence but also understands and respects the nuances of Human experience.

The journey towards Human Explainability is likely to be long and challenging, requiring collaboration between AI researchers, psychologists, anthropologists, ethicists, and many other disciplines. However, the potential benefits - more Human-centric AI, improved Human-AI collaboration, and possibly even a deeper understanding of ourselves - make it a journey worth undertaking.

As we stand on the brink of an AI-driven future, ensuring that these powerful systems truly understand us may be one of the most important tasks we face. The concept of Human Explainability offers a promising path forward in this crucial endeavor.

