Inclusion in AI research

[Image: the digital divide]

Lately, my conscience has been whispering that in a democratic society, decisions about AI must involve everybody. 

So, I was drawn to the Joseph Rowntree Foundation's (JRF) recently launched work stream 'AI for public good' [1] and attended their "The Power of AI Narratives" webinar to explore the narratives surrounding AI.

Three things made me think about inclusivity in my own practice: data ownership, the digital divide, and decisions about AI.

Data owned by tech companies

JRF researchers argued that data is at the heart of the issue with AI. While individuals may own their personal data, the valuable insights arising from connections between people remain outside individual ownership, and tech companies claim and exploit this shared data for their own benefit. JRF also described the practice, employed even by global enterprises, of using low-paid workers in "digital sweatshops" to process data and train AI models, which raises concerns about labour exploitation and brings to mind disturbing echoes of old colonial practices.

Digital divide excluding those who might be harmed most by AI

JRF researchers explained that, in an increasingly digitised world, the "digital divide" needs to be recognised. A significant portion of the population feels excluded from the potential advantages of AI and left behind, particularly those who may actually be harmed by the integration of AI into public services. People with low digital literacy, older people, disabled people, and young people, often held back by fear and low confidence, may find themselves at a disadvantage in a digitally driven world.

I found this troubling! I believe that AI must enhance human experiences, not make them better for some and worse for others.

Decisions about AI made by technologists

The researchers at JRF explained that decisions about AI are primarily made by directors, designers and developers of AI rather than through democratic processes and public input. 

As digital transformation accelerates, the individuals and communities who are not included may be at risk of being marginalised or exploited.

As a Chartered Psychologist, I'm committed to upholding ethical and lawful codes of conduct that prioritise protection of the public. My own research practice is people-focussed: I always consider multiple viewpoints, values, circumstances and mindsets, and work closely with different groups of people to co-create solutions that address their specific needs and concerns.

However, for me the webinar raised several questions: 

  • Are all technologists similarly bound by strict ethical codes? 
  • Do all tech companies engage in high quality user research?
  • Are people asking why AI is being adopted, and who stands to benefit?

I also asked myself whether I’m truly doing enough to ensure that my research covers all people impacted by the technology and whether solutions really empower people with varying levels of digital literacy, physical and cognitive abilities, and technological access.

My takeaway from the webinar is that it's vital for us to navigate this period without inadvertently widening the digital divide or engaging in harmful practices. While I'm enthusiastic about ambitious technology visions, I'm equally committed to maintaining an ethical balance.

I may not have all the answers, but I firmly believe in striving for an equitable distribution of AI's benefits. So, I'm eager to explore ways to make my own professional practice more inclusive, and keen to join ethics committees that facilitate democratic decision-making about AI's societal impact. Please get in touch!

 

  [1] https://www.jrf.org.uk/ai-for-public-good/ai-and-the-power-of-narratives
