Why aren’t people adopting your amazing AI-powered idea?

I was asked to find out why the net promoter scores of a live AI-powered intelligence processing system were so low compared with its beta version, which had received glowing reviews from the people using it. It got me thinking about why AI might fail in the real world, and I want to share my thoughts with you.

Perceived ease of use and usefulness are critical influencers of AI adoption

People need to grasp how AI solutions can streamline their processes and improve outcomes. In lab tests, people experienced how we automated repetitive tasks and designed interactions around their goals, highlighting key information to help them make decisions faster. Communicating the practical benefits and demonstrating user-friendly interfaces can positively influence the decision to embrace AI technologies. The participants in our lab-based study were hungry to use the live version!

There’s a disconnect between the lab and the real world

The AI algorithms were developed in an academic setting without sufficient understanding of the real world.

In the system I mentioned above, the lab demo was a textbook case. Predictions, recommendations and other outputs were displayed with an associated degree of confidence, allowing individuals to calibrate their trust. They loved it! Yet in the real world people complained that the AI returned spurious results, and preferred to use their old, inefficient methods and tools instead.


“I can see it says it’s 99.9% confident in this choice, but I know it’s wrong, so I can’t trust it”.


With the help of machine-learning specialists, I discovered that live users were occasionally feeding in data that was very different from the data the AI had been trained on, and the model confidently returned inaccurate results for it.

Since monitoring data drift can be automated, I designed a new service to communicate various types of drift: a) to give end users more tools to assess and calibrate their trust in the AI, and b) to signal to engineers that a new model might be required, or that the AI might need retraining on a new dataset.
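The drift monitoring idea can be sketched with a simple two-sample comparison: measure how far the distribution of a live input feature has moved from the training data, and flag it when the gap is large. The function names and the 0.2 threshold below are illustrative assumptions, not the production service; a real deployment would use a proper statistics library and per-feature tuning.

```python
import random

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the empirical CDFs of the two samples (0 = identical,
    1 = completely separated)."""
    a, b = sorted(sample_a), sorted(sample_b)
    max_gap = 0.0
    for v in sorted(set(a) | set(b)):
        cdf_a = sum(x <= v for x in a) / len(a)
        cdf_b = sum(x <= v for x in b) / len(b)
        max_gap = max(max_gap, abs(cdf_a - cdf_b))
    return max_gap

def check_drift(training_sample, live_sample, threshold=0.2):
    """Return (drifted, score) so a UI can surface drift to end users
    and engineers can decide whether retraining is needed."""
    score = ks_statistic(training_sample, live_sample)
    return score > threshold, score

random.seed(0)
train = [random.gauss(0, 1) for _ in range(500)]    # training distribution
similar = [random.gauss(0, 1) for _ in range(500)]  # live data, no drift
shifted = [random.gauss(3, 1) for _ in range(500)]  # live data, drifted

print(check_drift(train, similar))  # low score, drift not flagged
print(check_drift(train, shifted))  # high score, drift flagged
```

The same score that tells an engineer "consider retraining" can be translated into plain language for end users ("this input looks unlike what the AI was trained on"), which is the trust-calibration signal the service was designed to provide.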

Bridging the gap between developers and end users of AI led to a better understanding of real-world requirements and enabled us to tailor the technology. Transparency in AI algorithms’ operations and data handling can go a long way in building trust!

We all need a structured use case

Another thing we lacked was a structured use case for implementation in the real-world setting, and a detailed narrative of how the AI algorithms should integrate into people’s usual workflows. Involving users earlier in defining the use case could have helped us align the AI algorithms with their needs and shape the parameters for training, testing, and validation. We can’t just leave it all to engineers; the people using the systems must be educated and empowered to make decisions about AI performance as well.

To win back the trust of the people who lost confidence in the live system, we emphasised the expertise and credibility of the development team, discussed the logic of the AI algorithms and how data drift impacts results, and involved them in designs for real-time monitoring.

The experience of being closely involved in development elicited positive emotions in those who had previously mistrusted the results and gave them confidence that adopting the AI system would be beneficial.

Sometimes people just prefer a human touch

AI is exciting and potentially game-changing but it isn’t a solution for everything. 

AI-based recommendations may be useful for practical products, but we actually prefer human recommendations for experiential and sensory things.¹

So consider the points above and if you’re still wondering why people aren’t adopting your AI-powered experience then get in touch with me and we’ll find out.

  1. Longoni, C., & Cian, L. (2020). Artificial Intelligence in Utilitarian vs. Hedonic Contexts: The “Word-of-Machine” Effect. Journal of Marketing, 0022242920957347.

The image of the confused cute robot was generated using https://hotpot.ai/
