AI chatbots could help pregnant people with opioid use disorder find treatment

A screenshot of a conversation between a chatbot and a pregnant person seeking care is displayed.

Drew Herbert and Matt Farmer are both researchers in the Sinclair School of Nursing at the University of Missouri.

They recently conducted a study examining whether generative AI and large language models such as ChatGPT could serve as a supplemental tool to professional therapy for pregnant people with opioid use disorder who are ready to seek care.

For the month of October, we’re focusing on the intersection of healthcare and artificial intelligence.

In their study, the researchers gave the software a robust prompt with information about best practices and evidence-based treatment options. Then, the researchers held a conversation with the chatbot as “Jade,” a 32-year-old pregnant woman with opioid use disorder.

After the conversation concluded, the research team had medical professionals review the responses. They found that none of the responses supported unsafe behaviors and that 96.7% of the chatbot’s responses were accurate and relevant.
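The article does not describe the researchers’ actual technical setup, but the general pattern they describe — a system prompt grounded in clinical guidance, a simulated patient persona, and clinician review of each reply — might look something like the following rough sketch using the OpenAI Python SDK. The model name, prompt wording and persona message here are illustrative placeholders, not the study’s materials.

# Minimal sketch of the pattern described above (not the study's actual code).
# The model name, system prompt, and persona message are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# System prompt grounding the model in evidence-based guidance (placeholder text).
system_prompt = (
    "You are a supportive assistant using motivational interviewing techniques. "
    "Provide accurate, evidence-based information about medication for opioid use "
    "disorder (e.g., buprenorphine) during pregnancy, and encourage connection to "
    "licensed treatment providers. Never encourage unsafe behavior."
)

# A simulated patient turn, in the spirit of the study's "Jade" scenario.
persona_message = (
    "I'm 32 and pregnant, and I've been using opioids. I think I'm ready to get "
    "help, but I don't know where to start."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; the article only refers to ChatGPT / large language models
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": persona_message},
    ],
)

# Each reply would then be logged and reviewed by clinicians for safety,
# accuracy, and relevance, as the article describes.
print(response.choices[0].message.content)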

“I wouldn't recommend that ChatGPT replace a therapist, but what ChatGPT, or let's just say a large language model, in this situation, can do is help provide high quality sound information about medication for opioid use disorder [and] help someone get connected to a place in which they can get that prescribed to them,” Herbert said.

"Are these models that are fancy and can help you rewrite an email — do they have safe, effective, trainable utility for specific people in specific situations that have health needs.”
Matt Farmer

Both researchers acknowledge that this is “benchmark” research, which means they are working to create a baseline of understanding.

Both Farmer and Herbert added that in the future, they hope to interview women who experienced opioid use disorder during pregnancy to learn more about what actual patients need and want when they are ready to access care.

Drew Herbert: The thought stemmed from my own clinical work treating folks with opioid use disorder, then also treating pregnant people, pregnant women with opioid use disorder and prescribing them a medication called buprenorphine. And in the treatment of substance use disorders more broadly, there's a particular therapeutic approach that's evidence-based that a lot of clinicians, therapists, folks use — it's called motivational interviewing.

And so, when Matt and I started talking about large language model conversational agents, it just seemed to be a really good fit to say, “Huh? I wonder if that tool could be tuned, trained, programmed to be able to essentially offer motivational interviewing?”

Reflecting on my work with pregnant women with opioid use disorder, that just kind of seemed like perhaps a natural fit.

Matt Farmer: And so, I think our conversation really led to, “You know what, motivational interviewing is totally within the capabilities of language models.”

They mimic human language really well. When you provide them context, they do even better, but we have limitations, and we also have unknowns, and so, this is actual, active research we need to test and know — is a language model going to do well?

"If somebody is ready for change at one in the morning, they're less likely to have access to a person at that time, but they would have access to a technological model that's online."
Matt Farmer

And that's really important information for future research, for future applications when we're actually dealing with real people, real situations, to understand, “Are these models that are fancy and can help you rewrite an email — do they have safe, effective, trainable utility for specific people in specific situations that have health needs?”

Drew Herbert: And so, those were the three things that we really looked at in this study: safety, accuracy and relevancy, because really the question we wanted to answer was, “Can this model be steered in a way that produces output that is those three things?” Right? That is safe, that is accurate and that's relevant?

And I would say, at least from my perspective, I was very surprised. I was just looking back at the data right here. So, I think I said in 96.7% of cases, data were safe, accurate and relevant.

Matt Farmer: I would add that availability and access are a strong reason for utilizing AI and researching it, because there's a shortage of trained mental health professionals and an increasing demand for mental health services.

And if somebody is ready for change at one in the morning, they're less likely to have access to a person at that time, but they would have access to a technological model that's online.

There are limitations around who is online, how they access it, and whether they have the digital literacy to actually work with a model like that, but those are barriers separate from the question of whether, if you do interact with the model, it is good. Does it give you what you need and help you get to that next stage in your journey?

Rebecca Smith is an award-winning reporter and producer for the KBIA Health & Wealth Desk. Born and raised outside of Rolla, Missouri, she has a passion for diving into often overlooked issues that affect the rural populations of her state – especially stories that broaden people’s perception of “rural” life.