
Explainable Human-AI Interaction [electronic resource] : A Planning Perspective / by Sarath Sreedharan, Anagha Kulkarni, Subbarao Kambhampati.

By: Sreedharan, Sarath [author.].
Contributor(s): Kulkarni, Anagha [author.] | Kambhampati, Subbarao [author.] | SpringerLink (Online service).
Material type: Book
Series: Synthesis Lectures on Artificial Intelligence and Machine Learning
Publisher: Cham : Springer International Publishing : Imprint: Springer, 2022
Edition: 1st ed. 2022
Description: XX, 164 p. online resource
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783031037672
Subject(s): Artificial intelligence | Machine learning | Neural networks (Computer science) | Artificial Intelligence | Machine Learning | Mathematical Models of Cognitive Processes and Neural Networks
Additional physical formats: Printed edition: No title; Printed edition: No title; Printed edition: No title
DDC classification: 006.3
Online resources: Click here to access online
Contents:
Preface -- Acknowledgments -- Introduction -- Measures of Interpretability -- Explicable Behavior Generation -- Legible Behavior -- Explanation as Model Reconciliation -- Acquiring Mental Models for Explanations -- Balancing Communication and Behavior -- Explaining in the Presence of Vocabulary Mismatch -- Obfuscatory Behavior and Deceptive Communication -- Applications -- Conclusion -- Bibliography -- Authors' Biographies -- Index.
In: Springer Nature eBook
Summary: From its inception, artificial intelligence (AI) has had a rather ambivalent relationship with humans, swinging between their augmentation and their replacement. Now, as AI technologies enter our everyday lives at an ever-increasing pace, there is a greater need for AI systems to work synergistically with humans. One critical requirement for such synergistic human‒AI interaction is that the AI systems' behavior be explainable to the humans in the loop. To do this effectively, AI agents need to go beyond planning with their own models of the world and take into account the mental model of the human in the loop. At a minimum, AI agents need approximations of the human's task and goal models, as well as the human's model of the AI agent's task and goal models. The former will guide the agent to anticipate and manage the needs, desires, and attention of the humans in the loop, and the latter allows it to act in ways that are interpretable to humans (by conforming to their mental models of it) and to be ready to provide customized explanations when needed. The authors draw from several years of research in their lab to discuss how an AI agent can use these mental models either to conform to human expectations or to change those expectations through explanatory communication. While the focus of the book is on cooperative scenarios, it also covers how the same mental models can be used for obfuscation and deception. The book also describes several real-world application systems for collaborative decision-making that are based on the framework and techniques developed here. Although primarily driven by the authors' own research in these areas, every chapter provides ample connections to relevant research from the wider literature. The technical topics covered in the book are self-contained and accessible to readers with a basic background in AI.
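The central idea named in the contents, explanation as model reconciliation, can be sketched in a few lines: the agent justifies its plan by communicating a smallest set of differences between its own model and the human's mental model of it. The sketch below is a toy illustration under simplified assumptions, not code from the book; the set-of-facts model representation and the names minimal_explanation and plan_is_valid are hypothetical stand-ins for the formal machinery the authors develop.

    # Illustrative sketch of explanation as model reconciliation.
    # Models are abstracted as frozensets of "model facts" (e.g., action
    # preconditions the human may or may not know about); the real book
    # works with full planning models. All names here are hypothetical.
    from itertools import combinations

    def minimal_explanation(agent_model, human_model, plan_is_valid):
        """Return a smallest set of agent-model facts to communicate so
        that the agent's plan checks out in the human's updated model."""
        differences = sorted(agent_model - human_model)
        for k in range(len(differences) + 1):
            for subset in combinations(differences, k):
                updated = human_model | set(subset)
                if plan_is_valid(updated):
                    return set(subset)  # smallest reconciling explanation
        return None  # no model update alone can justify the plan

    # Toy usage: the human is missing one fact the plan depends on.
    agent = frozenset({"door_unlocked", "has_key", "fuel_full"})
    human = frozenset({"has_key", "fuel_full"})
    valid = lambda m: "door_unlocked" in m and "has_key" in m
    print(minimal_explanation(agent, human, valid))  # -> {'door_unlocked'}

Searching subsets in order of increasing size guarantees the returned explanation is minimal, which mirrors the book's emphasis on communicating only as much of the model difference as the human actually needs.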

