Dressing Up AI: Using Role-play to Explore Human-AI Collaboration

With Halloween approaching, you might be thinking of a costume—perhaps a retro robot with blinking lights and a cardboard head? But while you’re deciding on your outfit, I’m putting on my own AI costume in the name of UX research!

Imagine a group of cybersecurity professionals and human factors experts all pretending to be AI agents in a serious role-play experiment. Sounds a bit like trick-or-treating in the office, right? But beneath the surface of this playful approach lies a novel way of getting to the meat and bones of human-AI collaboration. 

Ok, that’s enough of the Halloween references!

Recently, I used role-play workshops to explore how humans and AI can work together. I invited cybersecurity and machine learning experts to take on the persona of AI agents. Participants then worked through simulated real-world cybersecurity scenarios in which they, as the human analyst, collaborated with an AI agent to deal with the situation.

Once people got over their initial inertia, the role-play method helped uncover insights that traditional lab-based experiments often miss: how people responded to the AI’s limitations, what information allowed them to trust the agent, which tasks they preferred to lead on or delegate and when they wanted to take responsibility back, and where providing additional contextual information proved useful or distracting. Participants were also able to communicate naturally, without being constrained by a designer’s view of what a user interface should look like.

Another advantage of role-play is that it encouraged creative thinking about AI’s limitations. For example, when cybersecurity experts played the AI agents, participants could see how varying levels of AI autonomy and human control affected trust and collaboration. These insights allowed us to tweak the AI systems to align more closely with human expectations and decision-making processes. When machine learning experts embodied the AI roles, by contrast, they were forced to think from the machine’s perspective, revealing practical challenges such as the agent’s inability to consider external context and the potential to confuse participants with too much detail about the AI model. These insights directly informed better AI design strategies, such as incorporating contextual cues and creating more adaptable AI models.

The role-playing workshops helped uncover usability challenges, trust issues and gaps between user expectations and AI behaviours early on in the design process, before we’d committed to developing technical prototypes. 

In role-play workshops, designers can experiment with varying levels of AI autonomy, observe user responses and refine interactions and decision-making mechanisms accordingly. Role-play also allowed us to simulate complex, context-rich situations. I found it particularly useful to have people play the role from the machine’s, the task or scenario’s and the expert user’s perspective, ensuring that we captured a wide range of challenges. Our aim is to build AI systems that are more adaptable, intuitive and effective when deployed in real-world settings.

So, if you’re serious about improving AI collaboration, perhaps it’s time to put on your retro robot suit and step into the shoes (or circuits) of an AI agent. Don’t worry; dressing up is optional!
