I returned from last month’s 2023 WEDI National Conference with new perspectives on many fronts. This was the first conference hosted by WEDI that I have attended, and it was a well-oiled machine: the sessions ran smoothly and were quite relevant, the atmosphere was intimate and fostered communication, and the food was good. My hat is off to WEDI Director of Education & Events Michael McNutt and the WEDI staff for organizing the event and staying on top of all the presenters. Everything was incredibly well done!
I also gained perspective on a few themes that ran throughout the conference. One stood out in particular: using AI in healthcare.
How AI is Making Positive Impacts on Healthcare Today
The AI conversations were interesting and numerous. One presenter described the use of AI in a clinical setting. In his example, AI would notify the physician that a patient is at high risk for a given illness that is otherwise difficult to detect. The notification helps the physician pay closer attention to potential indicators they might not otherwise notice. What makes this more interesting is that there are few effective tests the physician could run to determine whether the patient did in fact have the illness. The AI arrives at its decision by processing statistical data about a population of patients, rendering an alert when someone has a high likelihood of having the illness. The value of the indicator is immense: by solving the problem of detection in a different way, it makes the physician more aware of illnesses that frequently go undetected.
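To make the idea concrete, here is a minimal sketch of what such a population-statistics alert can boil down to. This is purely illustrative; the risk factors, weights, and threshold below are hypothetical placeholders, not the presenter’s actual system or any real clinical model.

```python
# Hypothetical sketch of a population-statistics risk alert.
# Feature names, weights, and the threshold are illustrative only.
import math

# Weights a model might learn from population data (hypothetical)
WEIGHTS = {"age_over_60": 1.2, "family_history": 0.9, "abnormal_lab": 1.5}
BIAS = -3.0
ALERT_THRESHOLD = 0.30  # alert the physician above 30% estimated risk


def estimated_risk(patient: dict) -> float:
    """Logistic estimate of illness probability from binary risk factors."""
    score = BIAS + sum(w for f, w in WEIGHTS.items() if patient.get(f))
    return 1 / (1 + math.exp(-score))


def should_alert(patient: dict) -> bool:
    """Notify the physician only when estimated risk crosses the threshold."""
    return estimated_risk(patient) >= ALERT_THRESHOLD


patient = {"age_over_60": True, "family_history": True, "abnormal_lab": True}
print(round(estimated_risk(patient), 3), should_alert(patient))
```

The point of the sketch is the shape of the workflow, not the math: the model never diagnoses anything, it only flags elevated statistical risk so the physician knows to look more closely.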
AI can also help researchers develop new treatments. Researchers may be given a set of individuals who have a high statistical likelihood of having a particular illness, even though the researchers may not yet understand why. From that group, they may be able to establish patterns among the patients and work backward toward a more identifiable cause. Identifying the cause is a critical step toward identifying a new treatment. This kind of technology has never before been so readily available at the fingertips of researchers and physicians. It’s an exciting time to be in the industry.
How AI May Harm More Than Help
Perhaps even more interesting was the contrast with the potential harm that AI poses in the healthcare industry. The danger that most struck a chord with me was equity. Former National Coordinator Dr. Vindell Washington gave an informative presentation on AI and equity on behalf of Verily. One example he gave made me think about the challenge of equity differently. I couldn’t possibly recite what was said, or paraphrase it as eloquently as he did, but the essence of his point was the following:
Consider two individuals with different cultures and values. You would like to use an AI to motivate both of them to improve their health outcomes through lifestyle changes. One individual is motivated by authority, so the AI shows pictures of doctors on a stage, figures in suits and ties, and other images conveying authority, accompanied by text that speaks from a place of authority. The other individual is motivated far less by authority than by community, so the AI shows pictures of people sharing, people at church, and a family at a dinner table, with text that speaks to their desire to participate with their peers in a healthy community.
These are two very different sets of content the AI must generate, based on who it is communicating with, to fulfill its purpose equitably. Worse yet, this is one very small slice of the perspectives that must be considered when developing equitable AI software, especially software that will engage an entire population across many cultures, values, and norms. This made a big impact on me, not so much because this specific problem is hard (it really is), but because I realized it is just the tip of the iceberg of concerns that must be addressed if AI software is to become truly equitable while also fulfilling its potential.
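Even in its simplest possible form, the two-persona example amounts to content selection keyed on what motivates each individual. The sketch below is a hypothetical toy, assuming a made-up message library and profile labels, and it deliberately ignores the hard part: knowing which profile fits a person, and handling everyone who fits neither.

```python
# Hypothetical sketch of motivation-tailored messaging, loosely modeled
# on the two personas in Dr. Washington's example. The profiles and
# content below are illustrative placeholders, not a real system.
MESSAGE_LIBRARY = {
    "authority": {
        "images": ["physician_on_stage", "experts_in_suits"],
        "text": "Leading physicians recommend these lifestyle changes.",
    },
    "community": {
        "images": ["neighbors_sharing_meal", "family_dinner_table"],
        "text": "Join your community in building healthier habits together.",
    },
}


def tailor_message(motivation_profile: str) -> dict:
    """Select imagery and text matching what motivates this individual.

    Falls back to community-oriented content for unknown profiles --
    itself an equity decision that a real system would have to justify.
    """
    return MESSAGE_LIBRARY.get(motivation_profile, MESSAGE_LIBRARY["community"])


print(tailor_message("authority")["text"])
```

Notice how even this toy forces a choice about the fallback case; every branch like that is a place where equity can quietly be won or lost.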
The Pursuit of Equity in AI is Currently Inequitable
That’s when it hit me: the unfortunate reality is that equity sits on a continuum. Developers of AI need to consider every facet of what makes an AI equitable, and the more a business invests in research on equity measurement and tuning, the more equitable its AI will become. Therein lies the problem. A company like Verily, an offshoot of Google, has the resources to invest in this important capability and the capacity to hire expert staff like Dr. Washington to lead the charge. Meanwhile, a startup using AI to do innovative new things may not have the same resources to invest in equity research, yet it still has the means to roll out a new AI-enabled tool. As a result, the startup may not produce an AI tuned to consider all of the equity perspectives that a larger company’s AI does. The pursuit of equity in AI is inherently inequitable, because it depends on the resources a given company can invest.
Standards, Benchmarks, and Open Source Tools to the Rescue!
The good news is, I don’t think it needs to stay that way. We can treat equity as a competency that industries build in order to be successful, just like the standards and open source libraries developed to build competency in health IT and interoperability. The same can be done for building equitable solutions. As an industry, we are so early in building this competency that we don’t even have a benchmark for measuring how equitable a particular AI is. Measurement is the first step. Once a bar has been established, the components necessary to meet that bar can follow, with contributions from organizations across the industry, just as we see in interoperability. Such a collaboration would put equitable AI products within reach of small startups and large businesses alike: a sort of HL7 for equity, with source code, test kits, data sets, standards, and implementation guides. It’s a whole new genre that lies ahead of us.
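Since measurement is the first step, here is one hypothetical example of what a shared, open equity measurement could look like: comparing an AI’s alert rate across population groups (a simple demographic-parity check, which is only one of many possible fairness measures). The group labels and records are invented for illustration.

```python
# Hypothetical sketch of one equity measurement: comparing an AI's
# alert rate across population groups (a demographic-parity check).
# Group labels and records below are illustrative only.
from collections import defaultdict


def alert_rate_by_group(records):
    """records: iterable of (group_label, alerted) pairs.

    Returns each group's share of records that triggered an alert.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [alerts, total]
    for group, alerted in records:
        counts[group][0] += int(alerted)
        counts[group][1] += 1
    return {g: alerts / total for g, (alerts, total) in counts.items()}


def parity_gap(rates: dict) -> float:
    """Largest difference in alert rates between any two groups.

    A shared benchmark could require this gap to stay below some bar.
    """
    return max(rates.values()) - min(rates.values())


records = [("A", True), ("A", False), ("A", True), ("B", True), ("B", False)]
rates = alert_rate_by_group(records)
print(rates, round(parity_gap(rates), 3))
```

An open test kit built around checks like this, with agreed-upon data sets and thresholds, is exactly the kind of shared infrastructure that would let a startup measure what only a Verily can afford to research today.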
This topic is a bit outside my swim lane. Yet it made a big impact on me and is just one example of the thought-provoking topics and speakers that WEDI curated. I am grateful to have been part of the annual conference, and I look forward to continuing the conversation around AI.
This post was contributed by Kyle Neuman, Director of Trust Framework Development.