Cybersecurity and Autonomous Systems

Intelligent, autonomous machines give rise to novel fault modes not seen in other types of automation. These fault modes, in turn, open new vectors for cyber-attack, with potential consequences ranging from subversion and degraded behavior to outright failure of the autonomous system.

In some cases, the maladaptive behavior and other symptoms of these fault modes may resemble those found in humans. The term “psychopathology” is applied to fault modes of the human mind, but as yet we have no equivalent field of study for intelligent, autonomous machines. Such a field is needed to document and explain the symptoms of faults unique to intelligent systems, whether they occur under nominal conditions or as the result of a deliberate outside attack.

By analyzing algorithms, architectures, and what can go wrong with autonomous machines, we may:

  • gain insight into mechanisms of intelligence; 
  • learn how to design out, work around or otherwise mitigate these new failure modes; 
  • identify potential new cyber-security risks; 
  • increase the trustworthiness of machine intelligence.

Vigilance and attention management mechanisms are identified as specific areas of risk (see the illustrative sketch below).
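
To make the attention-management risk concrete, here is a minimal, hypothetical sketch of an attention-starvation attack. It is not a model from the paper: the AttentionManager class, the salience values, and the stimulus names are all illustrative assumptions. The point is only that an allocator which always services the most salient stimuli can be made to neglect a vigilance-critical task by flooding it with decoys.

    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Stimulus:
        neg_salience: float            # negated so heapq pops the most salient first
        name: str = field(compare=False)

    class AttentionManager:
        """Toy attention allocator (illustrative assumption, not the paper's model):
        each cycle, service only the BUDGET most salient stimuli; everything
        unserviced is simply forgotten."""
        BUDGET = 3

        def __init__(self):
            self.pending = []

        def perceive(self, name, salience):
            heapq.heappush(self.pending, Stimulus(-salience, name))

        def cycle(self):
            # Pop the BUDGET most salient stimuli, then discard the rest.
            serviced = [heapq.heappop(self.pending).name
                        for _ in range(min(self.BUDGET, len(self.pending)))]
            self.pending.clear()       # no carry-over: a crude model of lapsed vigilance
            return serviced

    def run(attack):
        am = AttentionManager()
        starved = 0
        for _ in range(10):
            am.perceive("safety-monitor", 0.8)   # the vigilance-critical task
            am.perceive("navigation", 0.6)
            am.perceive("telemetry", 0.4)
            if attack:
                # Adversary floods slightly-more-salient decoys every cycle.
                for i in range(AttentionManager.BUDGET + 2):
                    am.perceive(f"decoy-{i}", 0.9)
            if "safety-monitor" not in am.cycle():
                starved += 1            # the monitor got no attention this cycle
        return starved

    print("starved cycles, no attack:", run(False))   # -> 0
    print("starved cycles, attack:   ", run(True))    # -> 10

Without the attack the safety monitor is serviced every cycle; under the attack it is never serviced. Reserving a slice of the attention budget for vigilance tasks is one way such a failure mode might be designed out or mitigated.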

The publication of this research led to Atkinson's participation in a UNIDIR Expert Meeting on “Autonomous Weapons, Cybersecurity, and AI,” convened by the United Nations Institute for Disarmament Research (UNIDIR) in Geneva in November 2015. (See “The Weaponization of Increasingly Autonomous Technologies.”)

Atkinson, David J. 2015. Emerging Cyber-Security Issues of Autonomy and the Psychopathology of Intelligent Machines. In Foundations of Autonomy and Its (Cyber) Threats: Papers from the 2015 AAAI Spring Symposium. Technical Report SS-15-01. Menlo Park, CA: AAAI Press.

Link to author pre-print 

Link to symposium presentation
