Human Social Interface for Autonomous Agents

Our central hypothesis is that the cognitive and affective nature of human interpersonal trust provides useful guidance for the design and development of autonomous agents that engender appropriate human-machine reliance and interdependency, specifically, via correct understanding and use of what we term the “human social interface.” We proceed based on the results of a body of research that concludes humans are predisposed to anthropomorphize machine behavior and, indeed, cannot avoid doing so. Historically, this has resulted in human errors of judgment, inappropriate expectations, poorly calibrated trust and inappropriate reliance. The idea behind this research is that human anthropomorphism of machines is a feature, not a bug.

In a departure from traditional psychological studies of human social interaction, we have adopted an engineering ethic: Let’s build it. In particular, we conceive of social interaction in terms of an engineering interface between two systems. A good interface specification details at least the following (see the sketch after this list):

  • Assumptions about the systems on either side of the interface, 
  • The collection of communicative signals that may be transmitted across the interface,
  • The channels that carry communicative signals,
  • The protocols governing how these systems use signals, individually and in combination, and under what conditions, and
  • How the state of the systems on either side of the interface is modulated as a result of bi-directional communication.
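
To make this concrete, the elements above could be captured as simple data structures. The Python sketch below is one minimal, hypothetical way to do so; all class names, fields and example values are illustrative assumptions, not content taken from any particular specification.

    from dataclasses import dataclass, field
    from enum import Enum, auto

    class Channel(Enum):
        """A channel that can carry communicative signals across the interface."""
        SPEECH = auto()
        TEXT = auto()
        GESTURE = auto()
        GRAPHICAL_DISPLAY = auto()

    @dataclass
    class Signal:
        """A single communicative signal that may be transmitted across the interface."""
        name: str
        channel: Channel
        meaning: str  # the state or intent the signal is meant to convey

    @dataclass
    class Protocol:
        """When and how signals are used, individually or in combination."""
        name: str
        condition: str         # the situation in which this protocol applies
        signals: list[Signal]  # signals used together under this protocol

    @dataclass
    class SocialInterfaceSpec:
        """Engineering-style specification of a human-machine social interface."""
        assumptions: list[str]      # assumptions about the systems on either side
        signals: list[Signal]
        protocols: list[Protocol]
        # How each side's state is updated as a result of bi-directional
        # communication, recorded here simply as named update rules.
        state_updates: dict[str, str] = field(default_factory=dict)

    # Hypothetical instantiation, for illustration only.
    progress = Signal("progress_report", Channel.TEXT, "agent reports task progress")
    spec = SocialInterfaceSpec(
        assumptions=["the human operator can attend to the text channel"],
        signals=[progress],
        protocols=[Protocol("periodic_update", "while a delegated task is active", [progress])],
        state_updates={"human": "revise beliefs about task progress",
                       "agent": "note that the update was delivered"},
    )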

Link: Examples of Signals, Channels and Protocols for Social Interface Engineering Specifications

We construe the “state of the human system” as a structure of beliefs, dispositions and intentions; those of specific interest with respect to trust include causal factors, attitudes and evaluations centered on other agents, situations, goals and tasks. The first portion of our exploratory research aimed to elicit and describe these factors specifically with respect to intelligent autonomous systems.
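
As a rough illustration of such a structure, the Python sketch below records trust-relevant beliefs, dispositions, intentions and agent-directed attitudes; the field names and numeric ranges are assumptions for exposition, not findings of the elicitation study.

    from dataclasses import dataclass, field

    @dataclass
    class TrustAttitudes:
        """Trust-relevant attitudes and evaluations held about one specific agent."""
        perceived_competence: float = 0.5      # 0..1: how capable the agent appears
        perceived_predictability: float = 0.5  # 0..1: how consistent its behavior appears
        perceived_benevolence: float = 0.5     # 0..1: whether it seems to act in the human's interest

    @dataclass
    class HumanState:
        """Beliefs, dispositions and intentions relevant to trusting an autonomous system."""
        beliefs: dict[str, str] = field(default_factory=dict)         # e.g. about the situation, goal or task
        dispositions: dict[str, float] = field(default_factory=dict)  # e.g. general propensity to trust
        intentions: list[str] = field(default_factory=list)           # e.g. "rely on the agent for task X"
        attitudes: dict[str, TrustAttitudes] = field(default_factory=dict)  # keyed by agent identifier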

Our claim is that the specific qualities of intelligent, autonomous systems (both overt and inferred) may be functionally equivalent, for purposes of trust, to analogous human qualities when those qualities:

  • Are well defined and accurately measured,
  • Are appropriately communicated or otherwise “portrayed” in a manner that is compliant with human social interaction, and
  • Evoke and exercise appropriate human cognitive and affective evaluation processes.

This enables more accurate human assessment of an agent and leads to better-calibrated trust and reliance.
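
One toy way to make “calibrated” concrete (a simplification for illustration, not a measure proposed in the paper cited below): trust is well calibrated when the human’s perceived reliability of the agent tracks the agent’s measured reliability.

    def calibration_error(perceived_reliability: float, measured_reliability: float) -> float:
        """Gap between how reliable the agent is perceived to be and how reliable
        it actually is; smaller values indicate better-calibrated trust."""
        return abs(perceived_reliability - measured_reliability)

    # Over-trust:  perceived 0.9 vs. measured 0.6 -> error 0.3 (risk of over-reliance)
    # Under-trust: perceived 0.3 vs. measured 0.8 -> error 0.5 (risk of disuse, i.e. under-reliance)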

Atkinson, D. J., and Clark, M. (2013). Autonomous Agents and Human Interpersonal Trust: Can We Engineer a Human-Machine Social Interface for Trust? In Trust and Autonomous Systems: Papers from the 2013 AAAI Spring Symposium (Technical Report SS-13-07). Menlo Park, CA: AAAI Press.

Link to 2013 AAAI SS author pre-print

Link to 2013 AAAI SS presentation

Link to Tulane lecture presentation
