The UK Ministry of Defence has recently published Part One of JSP 936, a directive focused on dependable AI in Defence, which I read with interest.
I imagined myself in the shoes of the Responsible AI Senior Officer (RAISO), the person identified in JSP 936 who will ensure that AI projects align with strategic objectives, ethical standards and governance requirements. What information and assurance would I need to sign off the risk of new AI solutions? How might Human Factors (HF) experts help to provide the detailed, human-centric focus necessary to effectively and safely integrate AI solutions into people’s workflows and operational settings?
While JSP 936 provides high-level ethical and technical guidelines for defence technologists, it does not go into enough detail to turn AI policy into practical, human-ready solutions.
So, if you are a RAISO, then this post is for you.
Human Centricity
JSP 936’s emphasis on human-centricity aims to ensure that military personnel can confidently use AI systems in high-pressure situations. HF experts can help by offering deep insights into human behaviour, cognitive processes and the situational contexts that shape interactions between people and AI systems. HF expertise goes beyond the basics, exploring the complexities of human-AI collaboration, such as role changes and the evolution of trust. This ensures AI systems complement human capabilities, avoiding friction, reducing cognitive overload and minimising errors. By addressing these aspects, HF professionals can help make AI solutions safe, compliant and useful in real-world defence scenarios.
Reliability & Understanding
While JSP 936 stresses the dependability and reliability of AI capabilities, HF experts can build on traditional technical evaluations by conducting usability testing, simulations and human-in-the-loop assessments to uncover and address issues at every stage of development. When transitioning AI into operational use, HF practitioners can create targeted training programmes that empower defence personnel to interact with AI systems and maximise their potential. Our user-focused approach enhances system performance and encourages widespread adoption.
Governance
Although JSP 936 outlines the need for governance and assurance, HF experts bring specialised skills to create adaptive systems that evolve based on user feedback and changing operational requirements. This means establishing processes for continuous monitoring, evaluation, and adaptation, ensuring AI systems stay effective and relevant while addressing unexpected human-AI interaction challenges.
Risk Identification & Mitigation
HF experts also explicitly tackle human-specific risks such as cognitive fatigue, decision-making errors and variability in responses under stress. By applying human factors methods and techniques, we can help to identify, mitigate, and adapt AI systems to account for these risks, promoting safe and effective human performance even in high-stakes defence environments.
Integration
AI technologies cannot function in a vacuum; they must integrate into existing operational frameworks and workflows. Human factors experts bring an in-depth understanding of human behaviour and team dynamics across different operational contexts within defence settings. We collaborate with technologists to design AI solutions that fit smoothly into established processes, minimising disruption and maximising effectiveness.
HF expertise delivers the depth, human-centric focus, and context-driven strategies needed to make AI solutions genuinely effective, trustworthy and fully integrated into defence operations.
Please reach out if you’d like to discuss how human factors experts can help bridge the gap between policy and practical application, ensuring AI technologies are technically sound and human-ready.