
Opening up to AI: How participants experience AI-moderated interviews

Mar 5, 2024

By Aaron Cannon


Let's face it, artificial intelligence in research is a polarizing topic.

This is especially true for AI-moderated research, where AI leads conversations with real people. We’ve spoken to hundreds of researchers, and the same questions come up over and over: how do study participants feel about being interviewed by AI? Do they like it or dislike it? Are they willing to share more than they would with alternative methods, or less? And when topics get sensitive, are they more or less willing to open up?

We decided to conduct a study to answer these exact questions. We engaged a third-party researcher to field the study using Outset. The objective? Determine how comfortable (or uncomfortable) respondents feel interacting with an AI moderator.

(Note - this is not an academic study. It’s a qualitative study to get a pulse on how people feel about interacting with AI, and the ‘why’ behind it.)

Below, we outline a few of the top-line findings. For the complete write-up, download the full report at the end of this page!


Here's what we did

  • 98 US-based participants, aged 24 to 70, recruited from the general population via a recruiting panel (Prolific.com).

  • Method: Participants took Outset AI interviews with Voice Response, where the AI asks a question in text and participants respond with their voice, which is then transcribed.

    • Outset’s AI system dynamically probes deeper with follow-up questions, asking 0, 1, or 2 follow-ups per answer depending on how thorough the answer is (see the sketch after this list).

    • Outset also served a handful of multiple-choice questions asking participants to rate their experience.

  • Interview structure

    1. Explore questions about participants’ financial and health goals to observe their willingness to share their stories and see how deep they’ll go

    2. Directly ask participants to reflect and rate their experience being interviewed by AI
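
For the curious, here is a minimal, self-contained sketch of what an adaptive probing loop like this could look like. It is not Outset’s actual implementation: the word-count thoroughness heuristic, the canned follow-up text, and the `ask` callable below are all placeholders for real language-model calls and the real voice interface.

```python
# Sketch of adaptive probing: ask a scripted question, then follow up
# 0, 1, or 2 times depending on how thorough the answer already is.
# Placeholder logic only; a real system would score depth and generate
# follow-ups with a language model.

MAX_PROBES = 2  # the study capped follow-ups at 2 per answer


def thorough_enough(answer: str) -> bool:
    # Placeholder heuristic: treat longer answers as more thorough.
    return len(answer.split()) >= 60


def generate_follow_up(question: str, answer: str) -> str:
    # Placeholder: a real system would generate a context-aware probe.
    return f'You mentioned: "{answer[:80]}..." Can you say more about that?'


def interview_question(question: str, ask) -> list[str]:
    """Ask one scripted question, probing up to MAX_PROBES times.

    `ask` is a callable that presents text to the participant and
    returns their transcribed voice response.
    """
    answers = [ask(question)]
    for _ in range(MAX_PROBES):
        if thorough_enough(answers[-1]):
            break  # answer is already detailed; stop probing
        answers.append(ask(generate_follow_up(question, answers[-1])))
    return answers
```

Capping follow-ups per answer is one way to balance depth against session length, a trade-off that comes up again in the repetition feedback discussed below.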


Here's what we found

1. Participants were increasingly open and forthcoming with the AI moderator as it probed deeper into initial answers.

Participants were very open about their life situations. They offered personal stories, specific tactics, and future plans. The responses read like the kind of conversational output you’d expect from a qualitative interview. Here’s an example of how one participant spoke about their health-related goals:

" I'm gonna be very very blunt here. My motivation is not dying young. I'm only 39 and being this obese will only shorten my lifespan. So that's it. I don't want to die young and leave my daughter without a mother. "

We also analyzed the total volume of response from each participant: their initial answer to the financial and health goals questions compared with what they said after the AI dynamically probed deeper. We found that the dynamic follow-ups led to a 3.5x increase in total response volume per participant (measured in total words transcribed). Of course, more volume doesn’t always mean more quality, but it was noteworthy that intelligent probing questions led to disproportionately more being shared: personal stories, specific goals and examples, and future plans.
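
The underlying arithmetic is simple word counting. Below is a minimal sketch of how such a lift could be computed; the transcript structure (a list of answer lists per participant, with the first item being the initial answer) is an assumption for illustration, not Outset’s data format or exact methodology.

```python
# Compare words in initial answers vs. total words after follow-ups.

def response_volume_lift(transcripts: list[list[str]]) -> float:
    """Return (total words incl. follow-ups) / (words in initial answers)."""
    initial = sum(len(answers[0].split()) for answers in transcripts)
    total = sum(len(a.split()) for answers in transcripts for a in answers)
    return total / initial


# Toy example: one participant whose follow-up answers add far more
# words than the initial answer alone.
transcripts = [
    ["I want to save more money this year.",
     "Specifically, I plan to budget monthly and cut subscriptions.",
     "Long term, I'd like a $5,000 travel fund and to pay down debt."],
]
print(f"{response_volume_lift(transcripts):.1f}x")
```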

Here’s a quick example that is emblematic of how participants responded to probing questions:

Initial answer: "I think my financial goals are to be able to live comfortably in 2024 where I don't have to worry about too much about bills, payments, and where I'll be able to have more excess cash or usable cash where I can do a lot more things and enjoy life."

Complete answer: "I think I plan to budget more, plan to write things down more, find exactly where I spent my money in terms of itemized, and then I plan to write what my goals are in a journal and make a plan to achieve those goals. I've thought about setting aside certain money for certain things, maybe like a travel fund, something for bill payments, and something else for like house payments, so basically categorize my spending habits, maybe in a spreadsheet. I think in terms of traveling, I'd like to set aside like $5,000 for traveling. And as for debts, I'd like to reduce my debt by $5,000. So two categories, travel fund and a debt fund, both $5,000 each."


2. Participants were largely comfortable being interviewed by AI, with some even forgetting it was AI leading the conversation.

When asked directly about their experience, over 90% agreed that 1) they were comfortable answering questions asked by AI, 2) they were open and honest when answering questions asked by AI, and 3) AI-moderated research on Outset is more engaging than a survey. See the table of multiple-choice answers below, and click into the full report for the qualitative ‘why’ behind each of these answers.


Expressed benefits

When asked about the benefits of this survey method, participants talked a lot about the engaging conversational style and the depth of dialogue (see graph below). They felt ‘listened to’ and found it far more engaging than ‘filling in bubbles on a survey’, since the interview itself was responding to them.

Note - this graph represents AI-generated themes from open-ended questions and probes.

Interestingly, many also referenced the lack of inhibition they felt conversing with an AI interviewer rather than a human.

  • " I think that AI is more open. So, if I was speaking with a human, I think I would be judged… "

  • " I think I would be a little bit more intimidated if somebody…happened to be on the other side of the survey. "


3. There weren’t many drawbacks, but when asked, participants did mention a few disadvantages due to the lack of human presence.

Repetition

Some participants mentioned that the session ultimately felt a bit more repetitive than they’d like. The AI clearly erred toward pressing for deeper answers on every question, even when a question had already been answered along the way.

No human feedback

A number of participants also pointed out that human feedback, whether explicit or implicit (e.g. facial expressions), can help guide an interviewee, and that feedback was notably absent here. They missed the cues that signal they’re giving the kind of responses the interviewer is looking for.

" The drawback is that I don't have a human to bounce off of. I can't see or judge whether they are understanding what I'm telling them. I can't tell whether you really understand what I'm saying, if you're getting the right ideas. With a human, I could go off with the way they look and sound, and they could give me confirmation, oh, I understand, things like that. "

Some participants clearly like the reassurance of live human feedback. However, researchers could argue that if a moderator gives a participant the impression that they’re on the right track, they’ve added some bias to the study.


Wrap up

This post was just a taste! Check out the full report below.

These findings are certainly encouraging for Outset. They show that our product can in fact deliver a better, deeper, and more comfortable participant experience than today’s best automated, scalable alternatives.

But these findings are also interesting at a more societal level. As we all try to figure out how AI and humans will (and should) interact, the results here offer a glimpse into some of the opportunities and questions ahead. Those of us building AI products often ask how we’ll ‘meet users where they are’ to make AI more approachable, but given the results of this study, I think we may vastly underrate how far along the average person already is in embracing (or at least tolerating) human/AI conversation.