The case for minimising anthropomorphism in AI systems

I used to advocate incorporating anthropomorphic features into user interfaces and human-AI interactions as a way to enhance attributions of agency and foster trust and reliance behaviour in human-machine teams.

Recently, however, my perspective has shifted: I now think it’s essential to minimise anthropomorphism in the design of interactions with AI systems that use natural language processing and conversational user interfaces. Here’s why:

Lost in Translation

Unlike physical capabilities (e.g., walking or grasping), the salient features of interpersonal interactions are frequently lost in translation from human-human teams to human-machine teams.

The essential elements of human teamwork, such as recognition of personhood, sensitivity to vulnerability, empathy, intuitive understanding, and the ability to negotiate and resolve conflict, are typically lost. These qualities underpin trust and effective collaboration in human teams, but they are very difficult to recreate and so are often lacking in human-machine teams.

Anthropomorphic features, such as voice production, simulated eye contact, and conversational turn-taking, are often recreated in machines. However, because these features only superficially emulate human qualities, they can create inappropriate expectations of machine capabilities and mistaken attributions of agency.
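To make the design implication concrete, here is a minimal, hypothetical sketch of what minimising anthropomorphic cues might look like at the interaction layer. Everything in it (the InteractionStyle class, the rewrite patterns, the apply_style function) is my own illustration, not an established API:

```python
from dataclasses import dataclass
import re

@dataclass
class InteractionStyle:
    """Design-time switches for anthropomorphic cues in a conversational UI."""
    first_person_voice: bool = False      # allow "I think..." phrasing
    simulated_emotions: bool = False      # allow "I'm delighted to help!"
    disclose_machine_status: bool = True  # remind users they are talking to software

# Illustrative rewrites that replace self-referential, agentive phrasing
# with neutral, system-referential phrasing.
FIRST_PERSON_REWRITES = [
    (re.compile(r"\bI think\b"), "The analysis suggests"),
    (re.compile(r"\bI believe\b"), "The available data indicates"),
    (re.compile(r"\bI'm happy to help\b"), "This system can assist"),
]

def apply_style(response: str, style: InteractionStyle) -> str:
    """Post-process a generated reply according to the chosen interaction style."""
    if not style.first_person_voice:
        for pattern, replacement in FIRST_PERSON_REWRITES:
            response = pattern.sub(replacement, response)
    if style.disclose_machine_status:
        response += " [Automated response]"
    return response

print(apply_style("I think the report is ready.", InteractionStyle()))
# -> "The analysis suggests the report is ready. [Automated response]"
```

The point is not the string substitutions themselves but the design stance they embody: anthropomorphic cues become explicit, defaulted-off design decisions rather than unexamined defaults.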

Coupled with automation bias, our tendency to view computers as impartial arbiters of truth, this habit of attributing excessive agency to machines has complicated discussions around reliability, safety, and ethics.

It appears to me that these biases have transformed conversations about machine reliability and safety into existential debates about whether machines possess some form of consciousness or agency beyond their programmed functions.

The Ghost in the Machine

I enjoy a good debate, so in the spirit of the philosophical perspectives offered by Locke and Singer: in my opinion, machines that currently lack genuine self-awareness and subjective experience cannot be considered persons. Although some can distinguish humans from other species, these machines do not possess the capacity to recognise or exhibit personhood. Even the most advanced AI systems today fall short of true rationality and autonomy, and they do not possess independent moral agency. They remain, fundamentally, tools created and controlled by people.

Similarly, a person is not merely a tool or a cog in a system, providing inputs that lead to outputs via an interface. The direct comparison of humans with technology is a false equivalence: it suggests that humans and machines can be treated similarly, but interactions between people and machines cannot be directly equated with interactions between people. For an analysis of the divergence between human teams and human-machine teams, see my research notes here.

I’m not a speciesist when it comes to self-awareness. Following the perspective of Daniel Dennett, who ascribes consciousness to any system capable of performing the requisite functions, regardless of whether the substrate is biological or artificial, the potential for AGI may merit thoughtful consideration in the future. Presently, however, machines are not sentient beings that require rights, protection from harm, or other ethical considerations. So let’s be mindful of our natural cognitive biases and minimise anthropomorphism for the time being, so that we can focus our attention on the more urgent work of designing reliability and safety into our AI systems.
