While Asimov’s Three Laws of Robotics are a lovely literary reference point, they fall short of addressing autonomy and agency within AI systems.
In this article, I propose four precepts, distilled from extensive research and dialogue, that offer more comprehensive guidance for designers of human-machine teams.
Precepts for Human-Machine Teaming
1. Autonomy is a relationship, not a system property
Machine autonomy is not a simple system function; it embodies a dynamic relationship between humans and machines. Rather than a static attribute that can be manufactured, it is a privilege granted to machines by people when machines demonstrate the ability to operate without direct supervision or control. This recognition is particularly vital in environments where human intervention is impractical, such as remote or hazardous settings.
Current conceptualisations of autonomy often oversimplify its complexity by categorising it into discrete levels. However, autonomy is better understood as a multifaceted construct, encompassing a range of capabilities like initiative, independence, and the ability to gracefully hand off control. These are all designable features that contribute to the overall autonomy of a machine.
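To make the contrast with discrete levels concrete, here is a minimal Python sketch that treats autonomy as a profile of separately designable capabilities, with the privilege of unsupervised operation granted by people rather than asserted by the system. All names and thresholds are illustrative assumptions, not an established framework.

```python
from dataclasses import dataclass

@dataclass
class AutonomyProfile:
    """Autonomy as a bundle of designable capabilities, not one level.

    All fields are illustrative: scores in [0, 1] that a design team
    might assess through testing and operational experience.
    """
    initiative: float        # proposes useful actions unprompted
    independence: float      # operates without direct supervision
    graceful_handoff: float  # returns control safely and on time

def people_grant_autonomy(profile: AutonomyProfile, bar: float = 0.8) -> bool:
    """Autonomy is a privilege granted by people: the machine earns
    unsupervised operation only when its weakest capability clears
    the bar the human team has set (0.8 here is arbitrary)."""
    weakest = min(profile.initiative, profile.independence, profile.graceful_handoff)
    return weakest >= bar

# Example: strong independence cannot compensate for a poor handoff.
rover = AutonomyProfile(initiative=0.9, independence=0.95, graceful_handoff=0.5)
print(people_grant_autonomy(rover))  # False: supervision still required
```

Modelling autonomy this way makes each capability a distinct design target, rather than collapsing them into a single level that hides which capability is actually deficient.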
2. Teaming is inherently social
Teaming is inherently a social activity, encompassing the ability of individuals to collaborate effectively, whether with other humans or with machines. As machines demonstrate the capacity to share goals and adapt to dynamic environments, and as research suggests that people can form successful teams with robots, it becomes increasingly clear that the design of person-machine interactions must account for experiential, social, and cognitive-affective factors.
Failure to consider these elements in research and design efforts may impede the development of successful teaming dynamics, limiting machines to utilitarian tools rather than realising the full potential of collaborative person-machine teams.
3. Human vulnerability is immutable
Human vulnerability is an immutable aspect of our existence, underscoring the need for careful consideration in the design of human-autonomy interactions. Autonomous systems lack personal stake or motivation, so judgement is best entrusted to those who are inherently vulnerable, who possess a sense of responsibility, and who are directly affected by its consequences: people. People bear ultimate accountability for system performance, and so must be able to intervene, alter course, adjust levels of autonomy, and assume manual control of automation when necessary.
The successful handover of decision-making authority from machines to people involves considerations such as timing, the adequacy of the information provided, and the receptiveness of the individual. Machines must not escalate issues in a manner that sets people up for failure, whether by providing insufficient time or information for resolution or by involving individuals who are unavailable to receive the handoff. Designers must therefore consider the full spectrum of activities in which the person might be engaged and must recognise the interconnectedness of human and machine activities within this collaborative framework: neither party operates in isolation.
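As a sketch of what "not setting people up for failure" might mean in practice, the following Python fragment gates an escalation on the person's time, information, and availability. The field names and thresholds are assumptions made for illustration, not a prescribed protocol.

```python
from dataclasses import dataclass

@dataclass
class HandoffRequest:
    seconds_until_deadline: float  # time the person has to decide and act
    context_attached: bool         # was adequate information provided?
    operator_available: bool       # is someone actually there to receive it?
    operator_workload: float       # rough 0..1 estimate of current engagement

MIN_DECISION_TIME = 30.0  # illustrative floor; task-dependent in practice
MAX_WORKLOAD = 0.7        # above this, the person is too busy to take over

def handoff_is_fair(req: HandoffRequest) -> bool:
    """Escalate only when the person has the time, information, and
    availability to succeed; otherwise the machine should degrade
    gracefully (pause, retreat to a safe state) rather than hand off."""
    return (req.operator_available
            and req.context_attached
            and req.seconds_until_deadline >= MIN_DECISION_TIME
            and req.operator_workload <= MAX_WORKLOAD)

# Example: plenty of time, but the operator is saturated by another task.
print(handoff_is_fair(HandoffRequest(120.0, True, True, 0.9)))  # False
```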
4. Trust is learned, trustworthiness is earned
Trust is a dynamic process that evolves over time. It is inherently context-dependent, varying with learned cues of reliability and success at specific tasks; individuals may trust a particular entity for a specific task in one context but not in another. Providing insight into when, how, or why the system might fail can therefore, somewhat paradoxically, enhance trust in autonomy, particularly when the system's reliability fluctuates.
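One way to picture this is to track trust per task and context and update it from observed outcomes, so the same machine can be trusted for one task in one setting and not in another. The sketch below uses simple success/failure counts under a Beta-style prior; this is an illustrative assumption, not a standard or validated trust model.

```python
from collections import defaultdict

class ContextualTrust:
    def __init__(self):
        # Uninformative prior: one pseudo-success and one pseudo-failure
        # per (task, context) pair, so unseen contexts start at 0.5.
        self.counts = defaultdict(lambda: [1, 1])  # [successes, failures]

    def observe(self, task: str, context: str, success: bool) -> None:
        s, f = self.counts[(task, context)]
        self.counts[(task, context)] = [s + int(success), f + int(not success)]

    def trust(self, task: str, context: str) -> float:
        s, f = self.counts[(task, context)]
        return s / (s + f)  # expected reliability under the Beta prior

# Example: the same machine, trusted for one task in one context only.
t = ContextualTrust()
for _ in range(9):
    t.observe("navigation", "open terrain", success=True)
t.observe("navigation", "dense fog", success=False)
print(round(t.trust("navigation", "open terrain"), 2))  # 0.91
print(round(t.trust("navigation", "dense fog"), 2))     # 0.33
```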
Autonomous systems are engineered to accomplish tasks aligned with human goals. However, unlike humans, who adhere to rules partly to avoid harm to themselves, a machine's lack of susceptibility to environmental hazards raises questions about rule adherence. Designers must consider whether machines should follow human-like rules to enhance predictability and trust, or whether they should simply get the job done. Similarly, individuals within human-machine teams must be able to assess the machine's status and evaluate the risks that machine error poses to their own safety. Designers must also consider fostering trust through experiential learning across diverse contexts and scenarios.
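The rule-adherence question above can be framed as a scoring choice inside the machine's planner: optimise for the task alone, or also penalise violations of human-legible rules so that behaviour stays predictable to teammates. The function below is a hypothetical illustration of that trade-off; the weights and names are assumptions, not any particular system's design.

```python
def plan_score(action_cost: float, rule_violations: int,
               follow_human_rules: bool, violation_penalty: float = 10.0) -> float:
    """Lower is better. With follow_human_rules=True the machine plays
    by the rules even when breaking them would be cheaper, trading raw
    efficiency for the predictability that supports teammate trust."""
    penalty = violation_penalty * rule_violations if follow_human_rules else 0.0
    return action_cost + penalty

# A shortcut that breaks two rules: cheaper only if rules are ignored.
print(plan_score(5.0, 2, follow_human_rules=False))  # 5.0
print(plan_score(5.0, 2, follow_human_rules=True))   # 25.0
```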
Trust, or lack of trust, in the machine component of a human-machine team also affects trust in the other individuals on the team. If the machine fails, humans begin to question themselves and their teammates, in addition to losing trust in the broken machine. Designers must incorporate methods the team can use to rebuild trust in one another as well as to repair trust in the machine.