The use of AI in hospitals in the United States is wide-ranging, with roughly two-thirds of U.S. hospitals using predictive algorithms. But did you know that only about 60% of those hospitals are testing these algorithms for accuracy, and fewer than half are testing them for bias?
So, The Checkup’s question is: How can hospitals’ use of AI affect patients?
Side Effects Public Media’s Community Engagement Specialist Lizzy McGrevy spoke with Ryan Levi about that. Levi is a producer with Tradeoffs, a health policy news organization.
This transcript has been edited for length, style and clarity.
Lizzy McGrevy: So what do we know about how and why hospitals are using AI?
Ryan Levi: What we're talking about right now really comes from some new research from Paige Nong at the University of Minnesota and others, and it's the first national look at how hospitals are using AI.
When we look at how hospitals are actually using these predictive models, this AI, the most common way is to predict health trajectories or risks for their patients in the hospital. So, like predicting the fall risk for someone when they're hospitalized, or predicting if someone's going to get sepsis.
The second most common way that they're using AI is to identify high-risk folks who have left the hospital and may need follow-up care. So, if someone is at risk for an infection, they want to be able to predict that risk and intervene early.
And the third most common way that hospitals are using AI is actually kind of administrative stuff — scheduling, billing, the more operational side of things. We're seeing a lot of AI used for that in hospitals as well.
McGrevy: Can you walk us through the bias concerns with AI use in hospitals?
Levi: So, these really date back to a famous paper from 2019 published by Ziad Obermeyer and some colleagues, and it found that there was this AI algorithm that was being used by a large health insurer to predict people's health needs.
This algorithm was biased. It had used past health costs as a predictor for people's future health needs. The problem with that is Black patients often spend less on health care because they have less access to it. And because the data used to train the algorithm was biased, the algorithm itself ended up being biased against Black patients. Those patients needed to be much sicker than white patients to get the same recommended care.
And we know from interviews that Paige Nong, the researcher, has done that a lot of hospitals are not scrutinizing their AI tools to make sure that these biases that we all know exist in our health care system aren't being perpetuated by these AI tools.
And so now, for the first time, we have national data showing that this isn't just an isolated concern, but actually a widespread issue across the country.
McGrevy: Okay, so if I happen to be a patient at a hospital that's using AI, should I be concerned about a misdiagnosis? Am I going to be seeing an AI, quote, unquote, provider instead of an actual human doctor? What are the risks here for patients?
Levi: So, even though AI use is pretty widespread, it still may not actually be totally clear or obvious to a lot of patients that AI is being used in their care. It's unlikely to be an AI provider, a robot coming in to take your blood pressure or anything like that.
And Paige Nong, the researcher from the University of Minnesota, says that — based on her research — she's also not that worried about clinical errors, you know, the misdiagnosis types of things. That's because hospitals have a lot of safeguards and protocols in place to avoid those, even before AI.
She's more worried about patients being harmed by those non-clinical uses I mentioned: the scheduling and the billing.
“[Patients’] appointments might be getting cut short, but they don't know why, or they might be having trouble with a bill and they can't get somebody on the phone, they might not know why,” Nong said. “It is possible that these kinds of predictive AI tools are shaping their experience of the healthcare system outside of the direct clinical care or outside of their direct relationship with their doctor.”
McGrevy: And so, are there enough guardrails to make sure that AI is used in a way that doesn't negatively impact some of these patients?
Levi: I mean, that's the big question, Lizzy.
Some of the big hospitals — like Mayo, Duke and the University of California, San Francisco — have pretty rigorous processes for vetting and testing their AI tools, and there are large industry groups trying to put together best practices that hospitals can follow.
But there's a lot of concern about whether smaller hospitals and clinics, particularly those that serve poorer and more marginalized communities, will have the resources to make sure that the AI they're using is safe and effective.
The Biden administration was very involved in pushing for more transparency and regulation in this space, but many people expect the Trump administration to take a lighter touch. Some folks think that will promote more innovation, but others are really worried that it could lead to patients being exposed to more bad and harmful AI.
McGrevy: Ryan, thank you so much for this information. To learn more, you can listen to the Tradeoffs episode at tradeoffs.org.
The Checkup by Side Effects Public Media is a regular audio segment on WFYI's daily podcast, WFYI News Now.
Side Effects Public Media is a health reporting collaboration based at WFYI in Indianapolis. We partner with NPR stations across the Midwest and surrounding areas — including KBIA and KCUR in Missouri, Iowa Public Radio, Ideastream in Ohio and WFPL in Kentucky.