Me, Myself & AI

Sparked by interest in ChatGPT, I’ve been using Large Language Models (LLMs) since January 2023. So far I’ve not been able to automate myself out of any part of the research process.

Human-AI teaming is my passion. I’m intrigued and excited by the claims made by AI-powered tools for UX research, like:

  • “We’ll do your research analysis for you! No need to analyse or tag your data.”
  • “AI has no bias! Reduce bias in your research!”

But in my experience LLMs cannot do my research analysis for me and may actually accentuate systemic bias. Here’s an account of my experience of using LLMs to enhance my UX practice:

As my co-writer

My PhD supervisor once commented that I wrote my draft thesis in Helen-ese: explaining complex concepts in sentences with convoluted structures. That’s how my brain worked at the time, more a smorgasbord of ideas than an expertly curated menu of actionable insights.

Over the last 20 years I’ve improved my writing for other people, but if LLMs had been around in my early career they would have vastly reduced the time my supervisor spent proof-reading and asking questions. LLMs don’t tend to write ambitious sentences with lots of nested clauses that should probably be bullet points or separate paragraphs. They help me with clarity and legibility.

However, LLMs are designed to speak like experts without being experts themselves, so I have to be mindful of blindly accepting suggestions. More than once, an LLM’s rewrite has introduced confident but incorrect statements into my paragraphs that I then had to remove. And highlighting an LLM’s mistakes doesn’t actually make it any less likely to repeat them next time round.

So, expect to proof-read your LLM’s suggestions, write your own transitions, remove redundancies, redo introductions and conclusions, and add your own personality.

As my co-researcher

As a kid, I loved the computer on the Starship Enterprise helping science officers solve problems. That’s how I feel when I’m interacting with ChatGPT. Prompting and refining. Just like Dr Beverly Crusher would do when analysing an alien sample. 

LLMs can help with knowledge gathering and understanding by summarising long papers, but I’ve been caught out a few times when ChatGPT has left out key concepts or points that I would have identified as important. As the expert, I know what to highlight and what to leave out far better than a model that may not be familiar with the subject.

Now, instead of asking it to summarise a whole paper, I might ask for the author’s perspective on a topic, or for the main insights on several topics I’ve gleaned from the abstract. I find I don’t have to be as exact with rules and language as I would when searching a database, because LLMs will include implied or associated terms too. I’ve got better results this way.
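
To make that concrete, here’s a minimal sketch of the kind of targeted query I mean, written with the official OpenAI Python client. The model name, file name and topics are my own illustrative placeholders, not recommendations:

    import os
    from openai import OpenAI  # official OpenAI Python client (v1.x)

    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    paper_text = open("paper.txt").read()  # hypothetical file holding the paper

    # Rather than "summarise this paper", steer the model towards the
    # topics I already know matter from reading the abstract myself.
    prompt = (
        "From the paper below, give the author's perspective on "
        "(1) remote usability testing and (2) participant recruitment. "
        "Quote or closely paraphrase the author, and say 'not discussed' "
        "if a topic is absent.\n\n" + paper_text
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model will do
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)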

To help me understand end users

There is no substitute for direct observation and research to help identify and understand human factors. 

To give me a head start when I’m unfamiliar with a work domain, I’ve used LLMs to streamline my process. Instead of spending precious session time asking users to clarify the environment they work in, the language and tools they use, and other details of what they do, LLMs can articulate what is already collectively known about basic role archetypes.
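
As an illustration, this is the kind of warm-up briefing I’d ask for before a first session. The role and wording here are hypothetical examples, sketched with the same OpenAI Python client as above:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Ask for the collectively known basics of a role archetype up front,
    # so interview time isn't spent on them.
    briefing = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": (
                "Describe the typical working environment, common tools and "
                "everyday jargon of a hospital ward pharmacist. "  # hypothetical role
                "Stick to widely known basics, and flag anything that "
                "varies a lot between organisations."
            ),
        }],
    )
    print(briefing.choices[0].message.content)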

I can then focus on uncovering their individual experiences, unconscious needs, intangible problems and exploring interesting opportunities and solutions with end users.

To play the role of an end user

With some prompt engineering, ChatGPT has acted like an end user, helping me refine my probe questions, work out conversational flows and fill in gaps I might otherwise have missed when defining my research protocols. ChatGPT is good at coming up with alternative scenarios, questions and even answers.

I’ve had ChatGPT interview itself, playing both my role as behavioural scientist and the role of the research participant. So far I’ve not been able to completely replace myself as the interviewer/experimenter, but I’d really like to use LLMs as a force multiplier when conducting user interviews.
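
For the curious, here’s roughly how such a self-interview can be wired up: two conversation histories, one per role, each fed the other’s latest reply. The system prompts, domain and turn count are illustrative assumptions, not my production set-up:

    from openai import OpenAI

    client = OpenAI()

    def ask(system_prompt, history):
        """One conversational turn under a given persona."""
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "system", "content": system_prompt}] + history,
        )
        return response.choices[0].message.content

    interviewer_sys = ("You are a behavioural scientist running a user interview "
                       "about a shift-scheduling tool. Ask one open question at a time.")
    participant_sys = ("You are a dispatcher who uses the scheduling tool daily. "
                       "Answer naturally, with realistic frustrations.")

    interviewer_hist, participant_hist = [], []
    question = "To start, can you walk me through a typical shift?"

    for _ in range(3):  # a few turns is enough to test the conversational flow
        print("Q:", question)
        participant_hist.append({"role": "user", "content": question})
        answer = ask(participant_sys, participant_hist)
        participant_hist.append({"role": "assistant", "content": answer})
        print("A:", answer, "\n")

        # Feed the answer back to the interviewer persona for a follow-up.
        interviewer_hist.append({"role": "user", "content": answer})
        question = ask(interviewer_sys, interviewer_hist)
        interviewer_hist.append({"role": "assistant", "content": question})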

In my early experiments a colleague commented that he was caught off guard and found himself opening up to my AI interviewer, so time will tell.

To create user personas

I’ve asked LLMs to generate user personas based on roles and characteristics. On the surface they seem reasonable and capture the sort of things you’d expect. On reflection, though, the personas seemed stereotypical and potentially biased by gender and race.

ChatGPT couldn’t explain why it created the personas that way (much as we aren’t always aware of our own unconscious bias), but it did say it was up to me to ensure my data collection wasn’t perpetuating harmful social biases.
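
If you do generate personas this way, one partial mitigation is to write that responsibility straight into the prompt. A hypothetical sketch (the role is a placeholder, and this doesn’t fix the underlying training-data bias):

    from openai import OpenAI

    client = OpenAI()

    # Persona generation with an explicit instruction not to attach
    # demographic assumptions to the role.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": (
                "Create a user persona for a warehouse shift supervisor: "
                "goals, frustrations, tools and a typical day. Do not invent "
                "a name, gender, age or ethnicity, and avoid stereotyped "
                "traits; describe behaviour and context only."
            ),
        }],
    )
    print(response.choices[0].message.content)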

I’m not a huge fan of personas anyway; I prefer modes and mindsets, which can apply to anyone, and to create those I need real observations and data points from end users. The insights we don’t expect are the ones that prove most valuable to product and service designers.

For grounded theory

I don’t use ChatGPT on any of my protected primary sources, but when data protection isn’t an issue ChatGPT is brilliant for qualitative research.

It can process lengthy transcripts from workshop reports available on the web much quicker than I can read them. I’ve used it to pick out themes, problems, opportunities, sentiment and quotes from open source qualitative data, and then to count the number of times a particular theme is mentioned to get a feel for its importance. 

It will also tell you when the text doesn’t refer to a particular concept, which is useful in its own right: the absence of data where you’d expect data is itself data.
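
Sketched as code, that tagging step might look something like this. The transcript file and theme list are placeholders; asking for a count of zero is how I keep the absences visible:

    import json
    from openai import OpenAI

    client = OpenAI()

    transcript = open("workshop_transcript.txt").read()  # open, public data only
    themes = ["onboarding", "trust", "handover delays"]  # hypothetical themes

    prompt = (
        f"Tag the transcript below against these themes: {', '.join(themes)}. "
        "Return JSON with, for each theme, a mention count and up to three "
        "supporting quotes. If a theme is never mentioned, return a count "
        "of 0 so its absence is visible.\n\n" + transcript
    )

    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},  # machine-readable output
        messages=[{"role": "user", "content": prompt}],
    )
    print(json.dumps(json.loads(response.choices[0].message.content), indent=2))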

As I’ve said before, LLMs are not experts, but they make useful co-researchers for purposeful and theoretical sampling.

Support not substitution

Bear in mind that you are the expert. Many LLMs have not been trained on academic literature, and they won’t think for you. For supporting the lone researcher, though, they are really useful. I hope hearing about my experience was useful to you too. By the way, I wrote this story without help from ChatGPT. Can you tell?
