Strategies to Enhance Human-Machine Team Performance in the Face of System Failures

As a psychologist specialising in Human-Machine Teaming (HMT) with AI and autonomous systems, I focus on one of the most important but under-researched aspects of HMT: the impact of system failures on team performance.

When systems fail, the stakes are high, so it is important to understand how the rest of the team reacts, recovers and maintains performance in these moments of uncertainty. 

System failures may reveal design gaps that are not always apparent during regular use. By analysing how disruptions impact people’s cognitive load, trust and decision-making, we can help design systems that are more user-centred and intuitive, and training scenarios that help teams build familiarity and trust, ensuring that HMTs are effective even in challenging situations. 

My research aims to ensure that in the most high-pressure environments, HMTs remain effective, adaptable and resilient, even in the face of unexpected disruptions.

Why System Disruptions Matter in Human-Machine Teams

System disruptions are common in complex, dynamic environments. Whether due to technical failures, partial system shutdowns, or even adversarial attacks, these disruptions can cripple a team’s ability to function effectively. Key predictors of HMT performance, such as team cohesion, workload, and trust in automation, are particularly vulnerable when systems fail. Increased workload can overwhelm human operators, while misaligned trust dynamics may lead them to either over-rely on faulty systems or under-utilise reliable ones. Poor team cohesion during these times may exacerbate confusion and delays, reducing overall team efficiency.

Mitigating the Impact of System Failures

So, how can we minimise the damage caused by system disruptions? My research has identified several design strategies that can help HMTs weather these challenges:

1. Shared Mental Models

Shared mental models refer to the common understanding that team members (both human and machine) have about each other’s goals, roles, tasks, and performance. These models allow team members to predict each other’s actions, coordinate effectively, and adapt quickly to unexpected changes or failures.

When disruptions occur, accurate shared mental models help maintain team cohesion and performance by enabling quicker recovery. For example, if human operators understand the system’s “error boundaries”—the range within which the system is likely to make mistakes—they are better equipped to anticipate potential failures and respond appropriately. This shared understanding of the system’s performance is critical in reducing confusion and improving decision-making during system failures. Ensuring that each team member understands their goals, roles, and operational processes is fundamental.
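For designers who want a concrete starting point, a system’s error boundaries could be exposed as simple, machine-readable ranges that the interface surfaces to operators. The sketch below is purely illustrative; the parameter names and thresholds are hypothetical and not drawn from any particular system.

```python
from dataclasses import dataclass

@dataclass
class ErrorBoundary:
    """Operating range within which the system's outputs are considered reliable.
    Hypothetical illustration: parameter names and thresholds are invented."""
    parameter: str
    lower: float
    upper: float

def outside_boundaries(readings: dict, boundaries: list) -> list:
    """Return the boundaries that current readings violate, so operators can
    anticipate where the system is likely to make mistakes."""
    violated = []
    for b in boundaries:
        value = readings.get(b.parameter)
        if value is None or not (b.lower <= value <= b.upper):
            violated.append(b)
    return violated

# Example: a (hypothetical) sensor-fusion module that is only reliable in good visibility
boundaries = [ErrorBoundary("visibility_km", 1.0, float("inf")),
              ErrorBoundary("target_speed_mps", 0.0, 40.0)]
for b in outside_boundaries({"visibility_km": 0.4, "target_speed_mps": 22.0}, boundaries):
    print(f"Caution: operating outside the system's reliable range for {b.parameter}")
```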

2. System Transparency 

Transparency involves providing clear, accessible information about the system’s status, limitations, and decision-making processes. When disruptions occur, transparent systems help human operators understand what is happening, which reduces cognitive load and helps them make informed decisions. This clarity prevents misuse (over-reliance) or disuse (under-reliance) of automation by ensuring that users can trust the system appropriately. Transparency also helps operators maintain situational awareness, which is key to quick recovery from failures.
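As one illustration of what transparent status information might look like in practice, the sketch below packages a system’s mode, self-assessed confidence, known limitations and a plain-language rationale into a single report for the operator. The structure and field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TransparencyReport:
    """Snapshot of what the automation is doing and why (illustrative fields only)."""
    mode: str                      # e.g. "autonomous", "degraded", "manual fallback"
    confidence: float              # self-assessed confidence in current outputs, 0-1
    limitations: List[str] = field(default_factory=list)
    rationale: str = ""            # plain-language reason for the latest decision

def render_for_operator(report: TransparencyReport) -> str:
    """Produce a short, plain-language status line to reduce operator cognitive load."""
    limits = "; ".join(report.limitations) or "none reported"
    return (f"Mode: {report.mode} | Confidence: {report.confidence:.0%} | "
            f"Known limitations: {limits} | Why: {report.rationale}")

print(render_for_operator(TransparencyReport(
    mode="degraded",
    confidence=0.55,
    limitations=["GPS signal lost", "relying on dead reckoning"],
    rationale="Switched to backup navigation after losing satellite fix")))
```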

3. Effective Communication Protocols

During system failures, communication breakdowns can lead to confusion, delayed responses, and operational errors. The research suggests that establishing robust communication protocols—such as shared alerts, real-time status updates, and role-based communication flows—ensures that information is disseminated effectively and promptly. This reduces confusion, keeps team members aligned, and allows for faster, coordinated recovery from disruptions. Automated alerts and context-rich notifications can guide users on the nature of the disruption and recommend corrective actions, which helps maintain team cohesion and performance during these critical moments.
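To make the idea of role-based communication flows concrete, here is a minimal, hypothetical sketch of an alert router that delivers context-rich disruption notifications, including a recommended corrective action, to the roles that need them. None of the names refer to a real protocol or library.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class DisruptionAlert:
    """Context-rich notification about a disruption (illustrative structure)."""
    source: str              # which subsystem failed
    severity: str            # e.g. "advisory", "caution", "warning"
    description: str
    recommended_action: str

class AlertRouter:
    """Route alerts to subscribers by role so each team member receives what they need."""
    def __init__(self):
        self._subscribers: Dict[str, List[Callable[[DisruptionAlert], None]]] = {}

    def subscribe(self, role: str, handler: Callable[[DisruptionAlert], None]) -> None:
        self._subscribers.setdefault(role, []).append(handler)

    def publish(self, roles: List[str], alert: DisruptionAlert) -> None:
        for role in roles:
            for handler in self._subscribers.get(role, []):
                handler(alert)

router = AlertRouter()
router.subscribe("pilot", lambda a: print(
    f"[PILOT] {a.severity.upper()}: {a.description} -> {a.recommended_action}"))
router.subscribe("mission_commander", lambda a: print(
    f"[CMD] {a.source} degraded: {a.description}"))

router.publish(["pilot", "mission_commander"], DisruptionAlert(
    source="terrain-following radar",
    severity="warning",
    description="Radar returns intermittent below 500 ft",
    recommended_action="Climb to 1,000 ft and cross-check with visual navigation"))
```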

4. Trust Calibration

Trust calibration—both in the system and within the team—is important during system disruptions. Trust in automation can fluctuate with the system’s performance, especially when failures occur. Proper trust calibration is essential to avoid two common issues: over-reliance on a malfunctioning system and under-reliance on a reliable one. Operators need clear information about system limitations so they can adjust their trust accordingly and make better decisions during disruptions. This ensures that trust is aligned with the system’s actual capabilities rather than with previous perceptions.

System disruptions can also affect interpersonal trust within the team. For example, when a system fails, operators may wrongly assign blame to other human team members, leading to decreased cohesion and collaboration. Addressing this requires designing systems that provide contextually rich feedback to clarify where the failure occurred and guide users in adjusting their trust in both the automation and their teammates. This balanced trust calibration may enhance overall team performance, especially under challenging conditions.
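As a rough illustration of trust calibration, the hypothetical sketch below tracks the system’s observed reliability with a simple moving average and flags when an operator’s stated trust drifts well above (risk of misuse) or below (risk of disuse) that reliability. It is a toy update rule, not a validated trust model.

```python
class TrustCalibrator:
    """Track observed system reliability so reported trust can be compared with
    the system's actual performance (illustrative, not a validated model)."""
    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha                 # weight given to the most recent outcome
        self.observed_reliability = 0.5    # neutral starting estimate

    def record_outcome(self, success: bool) -> None:
        # Exponentially weighted moving average over recent outcomes
        self.observed_reliability = ((1 - self.alpha) * self.observed_reliability
                                     + self.alpha * (1.0 if success else 0.0))

    def calibration_advice(self, operator_trust: float) -> str:
        gap = operator_trust - self.observed_reliability
        if gap > 0.2:
            return "Trust exceeds observed reliability: risk of over-reliance (misuse)."
        if gap < -0.2:
            return "Trust lags observed reliability: risk of under-reliance (disuse)."
        return "Trust appears well calibrated to current system performance."

calibrator = TrustCalibrator()
for outcome in [True, True, False, False, False]:   # system degrading after a disruption
    calibrator.record_outcome(outcome)
print(calibrator.calibration_advice(operator_trust=0.8))
```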

The Designer’s Role in Building Resilient Teams

Designers play a critical role in enhancing the resilience of human-machine teams. By proactively addressing potential system failures and integrating features like training scenarios, contextual awareness and shared mental models, designers can create systems that help HMTs maintain performance even under the most challenging conditions.

The key takeaway? The future of HMTs is not just about improving technology; it’s about supporting the human team members and fostering human-machine collaboration that can adapt to and recover from disruptions, ensuring that teams can perform effectively in the face of uncertainty.

Want to learn more about enhancing human-machine team performance? Please reach out to continue the conversation about resilient HMTs that perform effectively in high-stakes environments.
