Shared Awareness, Adaptive Autonomy, and Trust Repair

The foundation of teamwork is well-calibrated mutual trust among team members. Mission success depends on detecting, avoiding, and rapidly repairing the degradation or loss of that trust. While there is an established scientific understanding of how trust repair is accomplished in human-only teams, trust repair in mixed teams of human operators and intelligent, autonomous agents (hereafter, “agents”) remains largely unexplored.

The overall goal of our research is to enable the trust needed for appropriate reliance and interdependency in teams composed of humans and robots. Such teams may be found in any application domain that requires coordinated joint activity by humans and intelligent agents, whether those agents are embedded in cyber-physical systems (e.g., air traffic control; dockyard logistics) or embodied in robots (e.g., robots for assisted living; a surgical assistant). We hypothesize, first, that establishing and maintaining trust depends upon the alignment of mental models, which lies at the core of shared awareness among team members, and second, that maintaining model alignment is integral to fluid changes in relative control authority (i.e., autonomy) as joint activity unfolds.

The objective of this research topic is to investigate and devise methods whereby an intelligent, autonomous agent can detect and repair a loss of trust by a human operator. A secondary objective is to investigate the role of adaptive adjustment of autonomy (including degradation of autonomy) in the event that human trust cannot be repaired by other means. These objectives require that we understand the nature of trust loss and the process by which trust may be restored. Because trust is a reciprocal relationship between two actors in a dyad, we must investigate the topic with respect to the very different natures of those actors: the relevant human psychology on one side, and the comparable mechanisms of an agent on the other. Our methodology reflects this duality.
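
To make the envisioned mechanism concrete, the sketch below shows one way such a detect-repair-degrade loop could be organized: a running estimate of operator trust is updated from observable proxies, repair is attempted when the estimate falls below a calibrated band, and autonomy is degraded as a fallback when repair fails. This is a minimal illustration under our own assumptions; the names and signals used here (TrustAwareAgent, AutonomyLevel, override and acceptance rates) are hypothetical and do not describe a published model.

```python
from enum import IntEnum


class AutonomyLevel(IntEnum):
    """Hypothetical discrete levels of relative control authority."""
    MANUAL = 0          # operator has full control
    ADVISORY = 1        # agent only recommends; operator executes
    SUPERVISED = 2      # agent acts, but the operator may veto
    FULL_AUTONOMY = 3   # agent acts independently and reports afterward


class TrustAwareAgent:
    """Minimal sketch of an agent that adapts its autonomy to estimated trust.

    The trust estimate is assumed to come from observable proxies (e.g., how
    often the operator overrides the agent or accepts its recommendations);
    how to infer trust reliably is itself part of the research question.
    """

    def __init__(self, low=0.4, high=0.8):
        self.trust_estimate = 0.7              # running estimate in [0, 1]
        self.low, self.high = low, high        # calibrated trust band
        self.level = AutonomyLevel.SUPERVISED  # starting control authority

    def update_trust(self, override_rate, acceptance_rate):
        # Toy estimator: overrides erode the estimate, accepted advice restores it.
        delta = 0.1 * (acceptance_rate - override_rate)
        self.trust_estimate = min(1.0, max(0.0, self.trust_estimate + delta))

    def attempt_repair(self):
        # Placeholder for repair strategies such as explanation of behavior,
        # acknowledgment of error, or demonstration of competence.
        return False

    def step(self):
        if self.trust_estimate < self.low:
            if not self.attempt_repair():
                # Repair failed: hand control authority back to the operator.
                self.level = AutonomyLevel(max(self.level - 1, AutonomyLevel.MANUAL))
        elif self.trust_estimate > self.high and self.level < AutonomyLevel.FULL_AUTONOMY:
            # Trust has recovered: cautiously restore authority one level at a time.
            self.level = AutonomyLevel(self.level + 1)


if __name__ == "__main__":
    agent = TrustAwareAgent()
    for _ in range(7):                     # sustained overriding erodes trust
        agent.update_trust(override_rate=0.6, acceptance_rate=0.1)
    agent.step()
    print(agent.level)                     # AutonomyLevel.ADVISORY
```

In this framing, degradation of autonomy is not a failure mode but a deliberate hand-back of control authority: by becoming more predictable and more closely supervised, the agent gives the operator room to rebuild a calibrated level of trust.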

We expect the results of the research to be, foremost, a solid contribution to the theoretical understanding of trust between humans and intelligent, autonomous agents. Second, success will yield an integrated model of the theory that is conducive to maturation, can be specialized for various systems and scenarios, and can inform the design and deployment of human-centered, adaptive autonomous agents. The key technical advance targeted is the ability of an autonomous agent to reason about, and dynamically adapt to, loss of human operator trust, thereby helping to maintain appropriate interdependence and assure positive outcomes.

Atkinson, D.J., Clancey, W.J., and Clark, M. (2014). Shared Awareness, Autonomy and Trust in Human-Robot Teamwork. In Artificial Intelligence for Human-Robot Interaction: Papers from the 2014 AAAI Fall Symposium (Technical Report FS-14-01). Menlo Park, CA: AAAI Press.

Link to author pre-print

Link to presentation
