The case for minimising anthropomorphism in AI systems

In the past, I would have recommended incorporating anthropomorphic features as a way to enhance attributions of agency and foster trust and reliance behaviour in human-machine teams. However, my perspective has shifted: minimising anthropomorphism is now essential in the design of, and interaction with, AI systems that use natural language processing and conversational user interfaces.

Lost in Translation

The translation of physical capabilities (e.g., walking or grasping) from humans to machines is relatively mechanistic and straightforward. In contrast, the salient features of interpersonal interaction are frequently lost in translation from human-human teams to human-machine teams. Sensitivity to vulnerability and the recognition of personhood are often absent from machine interactions, as are essential elements of human teamwork such as empathy, intuitive understanding, and the ability to negotiate and resolve conflicts. These qualities are crucial for building trust and effective collaboration in human teams, yet they are lacking in human-machine teams.

Anthropomorphic features, such as voice production, simulated eye contact, and conversational turn-taking, are often retained in machines. However, because these features only superficially emulate human qualities, they can foster inappropriate expectations of machine capabilities and mistaken attributions of agency.

Our tendency to attribute excessive agency to machines, coupled with automation bias (our tendency to view computers as impartial arbiters of truth), has complicated discussions of reliability, safety, and ethics. Together, these biases have transformed conversations about machine reliability and safety into existential debates about whether machines possess some form of consciousness or agency beyond their programmed functions.

The Ghost in the Machine

In the spirit of the philosophical perspectives offered by Locke, Singer, and Strawson, machines that currently lack genuine self-awareness and subjective experiences cannot be considered persons. Despite their ability to distinguish humans from other species, these machines do not possess the capacity to recognise or exhibit personhood. Even the most advanced AI systems today fall short of exhibiting true rationality and autonomy and do not possess independent moral agency. They remain, fundamentally, tools created and controlled by people.

I embrace the concept of artificial general intelligence (AGI); I’m not a speciesist when it comes to self-awareness. Following the perspective of Daniel Dennett, who ascribes consciousness to any system capable of performing the requisite functions, regardless of whether its substrate is biological or artificial, I believe the potential for AGI will merit thoughtful consideration in the future. Presently, however, machines are not sentient beings that require rights, protection from harm, or other ethical considerations.

The direct comparison of humans with technology constitutes a false equivalence, suggesting that humans and machines can be treated similarly. However, a person is not merely a tool or cog in a system, providing inputs that lead to outputs via an interface. Furthermore, interactions between teams of people cannot be directly equated with interactions between people and machines. 
