
Workplace Insights by Adrie van der Luijt
AI is both rubbish and brilliant at user research. It’s a walking contradiction that can either save your bacon or completely destroy your credibility. And most professionals aren’t talking honestly about this paradox.
I’ve been experimenting with AI in user research practice for the past year, and I’ve got some observations worth sharing. If you’re after a corporate puff piece about how AI will magically solve all your research problems, you’re in the wrong place.
AI makes things up. Constantly. In user research, where accuracy is your entire currency, this is problematic to say the least.
When I used AI to improve my portfolio website, one tool confidently wrote that I had started my career editing the Wall Street Journal. Impressive credentials, if only they were true! This isn’t rare; it’s the default behaviour. AI models are designed to produce plausible-sounding outputs, not factually correct ones.
The most dangerous aspect is how confidently wrong they can be. At least when a human colleague talks nonsense, there are usually some telltale signs: hesitation, qualification or the ability to admit uncertainty. AI in user research delivers falsehoods in the same authoritative tone as it delivers facts.
As a content designer who’s worked on websites from Discovery through to Go-live, I know the importance of evidence-based design decisions. The GOV.UK assessment process demands solid evidence from user research behind every design choice – and rightly so.
When I wrote the Universal Credit website, we avoided definitions except for one critical term: “partner”. In user research, we discovered something surprising: many people thought “Do you have a partner?” was asking about sexual orientation because straight people apparently often don’t use the word “partner”. Without that user research, we would have created ambiguous content that failed users at a crucial moment.
Similarly, at the Environment Agency, we replaced an unwieldy 350-column spreadsheet with an online application process for flood defence funding. Our initial assumption was that accuracy was paramount. User research revealed something entirely different. Applications were routinely amended throughout the year before final submission, and everyone understood that the initial figures were rough estimates.
Could AI in user research have uncovered these insights? Probably not independently. But it could have generated testing scenarios that treated the figures as estimates rather than final values, giving us more diverse hypotheses to test.
Despite its flaws, I’ve found specific contexts where AI in user research genuinely enhances the process:
First, AI in user research excels at processing vast amounts of unstructured data quickly. It can analyse hundreds of survey responses and identify initial patterns that I then verify manually.
Second, AI helps generate diverse testing scenarios when you’re too close to your own product. When you’ve been working on something for months, you develop blindspots. AI can suggest user journeys or edge cases you hadn’t considered.
Third, AI is surprisingly good at real-time note-taking during user interviews. I’ve experimented with having AI transcribe and summarise interviews as they happen, freeing me to focus entirely on the participant rather than frantically scribbling notes.
Perhaps most valuably, AI in user research can help track design decisions and their underlying evidence from user research. After hundreds of hours of user research sessions, keeping a clear record of which findings influenced which decisions becomes challenging. AI can help organise this evidence trail, making it easier to justify decisions during assessments.
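As a rough sketch of what that evidence trail might look like in practice, here is a minimal Python structure linking research findings to the decisions they support. Everything here (the class names, the "define-partner" example, the session labels) is hypothetical and illustrative, not a real tool or the author's actual system:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    session: str   # e.g. "Round 1, participant P3" (illustrative label)
    summary: str   # what the research session showed

@dataclass
class Decision:
    description: str
    evidence: list = field(default_factory=list)  # Findings behind this decision

class EvidenceTrail:
    """A toy evidence trail: which findings justified which design decisions."""

    def __init__(self):
        self.decisions = {}

    def record(self, decision_id, description):
        self.decisions[decision_id] = Decision(description)

    def link(self, decision_id, finding):
        self.decisions[decision_id].evidence.append(finding)

    def justify(self, decision_id):
        """Return the evidence summaries behind a decision, e.g. for an assessment."""
        return [f.summary for f in self.decisions[decision_id].evidence]

# Hypothetical example based on the 'partner' finding described above
trail = EvidenceTrail()
trail.record("define-partner", "Add a definition for the word 'partner'")
trail.link("define-partner", Finding(
    "Round 1, P3",
    "Read 'Do you have a partner?' as a question about sexual orientation"))
print(trail.justify("define-partner"))
```

The point of even a trivial structure like this is that the link between finding and decision is recorded at the moment it is made, rather than reconstructed from memory months later during an assessment. AI's role here is the bookkeeping, not the judgement.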
But here’s the crucial bit: in none of these scenarios am I fully delegating my professional judgement to the AI.
After much trial and error, I’ve developed a simple framework for using AI in user research that minimises the risks:
Use AI to augment your capabilities, not replace your expertise. The best results come when AI handles the mechanical aspects, freeing you to apply your human judgement and empathy.
Consider the ethical implications of using AI with your participants’ data. Have you obtained informed consent? Are you storing and processing data securely?
The honest truth is that AI will transform user research, but not in the way many tech evangelists predict. The value of skilled researchers won't diminish; it will increase.
What excites me most is how AI in user research might democratise certain aspects of research. Teams with limited resources might use AI to conduct preliminary research that would otherwise be impossible. But this will only work if we’re honest about what AI can and cannot do.
AI won’t replace researchers, but researchers who know how to use AI responsibly may well replace those who don’t.
The question isn’t whether to use AI in user research, but how to use it without compromising the integrity that makes our research valuable in the first place. Because at the end of the day, no AI can replace the human insight that recognises when people think you’re asking about their sexuality when you’re really asking about their household.