Dr. Philip Payne is the chief health AI officer at the Center for Health AI, a collaboration between WashU Medicine and BJC Health System in St. Louis.
He is a co-author of “An Artificial Intelligence Code of Conduct for Health and Medicine: Essential Guidance for Aligned Action,” a unifying code of conduct that he said providers should consider when employing AI in healthcare processes and decisions.
Throughout October, we’re focusing on the intersection of healthcare and artificial intelligence.
Dr. Philip Payne: So, I think the key is the term human-centered, right? When we’re using AI in healthcare, what I always remind people is that success looks like this: we have restored the humanism in care.
Doctors and patients are having a real conversation. People have the information they need to make informed decisions. We’re focusing on, “How do we improve the quality, safety, and value of care?” These are all factors that have a real impact on human beings.
Now, behind the scenes, we have to be very thoughtful. How do we protect people’s privacy? How do we make sure that the people using the tools understand how the tools work, so that they can make a good choice about which outputs from the AI to use in their clinical decision-making and which to set aside?
One of the things that we did, very purposely, is that we didn’t say this is a checklist or a set of rules or even guidelines. We said that this was a code of conduct, and that was really important, because we realized that this space was changing so fast that what we really needed to do was encode a set of expected behaviors.
When you’re selecting, evaluating, or using AI at the point of care, you need to consider: what are the ethical implications of using, or not using, the AI?
How do we make sure that the AI is equitable, right? We know that if we use biased data, we will get biased results from an AI.
So, the first step is making sure we have the right data and we can measure sources of bias because you can't fix that problem if you can't measure it.
And so, those are all examples of behaviors, right? The behavior is knowing that that’s an important question to ask.
And so, I often tell people, when I look at that set of behaviors, I think the easiest way to imagine operationalizing it is this: what are the questions that we should ask and answer every single time we deploy one of these technologies, right?
And you could imagine, for equity, right? You know, do we have the right data? Is it representative of the patients that we serve? Are we measuring sources of bias? Do cost and access to the AI vary by geography or demographics?
I believe that rural health systems may see some of the greatest benefits from the deployment of AI, because it will provide supportive services to providers who don’t have the resources and the luxury of large numbers of specialists like we have in a big academic center in an urban setting, right?
And that means people can get their care closer to home and at the same quality as if they went to one of those urban centers, and that's where AI can help humans. But we need to think about that.
So, I think everyone should ask the question not, “Why are you using AI?” but, “Why not?” Because then you start saying, “Where is the evidence that tells us that AI, working in conjunction with humans, is better?”
And so, why not use AI to make sure that we’ve looked at that chest X-ray and determined whether there are any findings that are important to my long-term health as a result of having that scan done?
You know, why not use AI to make sure that the documentation that’s being produced during my clinical encounter is complete and helps the next provider who sees me better understand my health status and history?
And I think when we start asking, “Why not?” instead of “Why?” that’s a different framing that helps us think about where and how AI can have a positive impact.