WELCOME
My research targets applications of intelligent, autonomous agents as partners to humans in joint activity, with an emphasis on dynamic interdependency between humans and agents, shared awareness, smooth exchange of control, and variable autonomy. Topics in human-machine trust and social interaction figure prominently. I am also interested in emerging fault modes and cyber-security vulnerabilities of intelligent, autonomous systems.
The activities of the AtkinsonLab center on the conflict that arises between two major research goals:
- Give machines ever greater intelligence and autonomy
- Maintain control of those machines
The premise of our research is that we need a new kind of relationship with increasingly intelligent, autonomous machines that allows for greater insight, coordination, initiative, and thoughtful delegation. In other words, we must begin to think of intelligent machines as partners rather than tools. A key challenge for effective teamwork with machines is bilateral human-machine trust: how it is established, maintained, modified by situations, and changed by experience. Without trust, the full potential of human-machine joint work cannot be achieved.
From a systems engineering point of view, the purpose of trust in a multi-agent system composed of human and machine elements is to achieve optimal overall performance via appropriate interdependency, mutual reliance, and appropriate exchange of initiative and control between the human and machine cognitive components of the overall system.
MOTIVATION
Increasingly intelligent, autonomous agents are becoming cognitive entities. The technical trends in artificial intelligence point toward convergence of the characteristics listed below. Each of these is changing the nature of human-machine interaction and how we think of and use artificial intelligence now and in the years to come. It is imperative that we design, create, and use these foundational technologies with forethought about their potential impact on work and society. My stance is to focus on technology for human augmentation, not substitution, and to do so with eyes open for the day when intelligent agents become our near-peers.
Consider:
- Intelligent agents have goals that we give them, and goals of their own.
- Intelligent agents have beliefs about themselves, the world, and others.
- Intelligent agents sense and interpret the world, reason in many different ways about their beliefs, and act purposefully to achieve goals.
- Intelligent agents learn and adapt.
- Intelligent agents interact with humans, perform significant actions in the world, communicate in a variety of ways, and are beginning to act socially.
DEFINITIONS
Autonomy, Agents, Trust, and more
RECENT RESEARCH TOPICS