Artificial intelligence (AI) has the potential to revolutionize healthcare delivery, with applications in decision support, patient care, and disease management. From machine learning algorithms that read patient scans to natural language processing that searches unstructured data in electronic health records, AI can help clinicians work more efficiently while improving patient outcomes. However, AI can suffer from bias, which has striking implications for healthcare. The term "algorithmic bias" speaks to this problem.
Algorithmic bias is not a new problem and is not specific to AI. In fact, an algorithm is merely a series of steps: a recipe and an exercise plan are as much algorithms as a complex model. At the core of any health system challenge, including algorithmic bias, lies a question of values: what health outcomes are we trying to achieve, and for whom? The issue of bias being exhibited, perpetuated, or even amplified by AI algorithms is a growing concern within healthcare. Bias is usually defined as a difference in performance between subgroups on a predictive task. For example, an AI algorithm used to predict future risk of breast cancer may suffer from a performance gap in which Black patients are more likely to be incorrectly assigned as "low risk".
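To make that definition concrete, the sketch below measures a performance gap as a difference in false-negative rates between subgroups. All of the data is synthetic and the model outputs are hypothetical; this is a minimal illustration of the idea, not an audit of any real system.

```python
import numpy as np

def subgroup_false_negative_rates(y_true, y_pred, groups):
    """False-negative rate per subgroup: the share of patients who truly
    needed care (y_true == 1) but were scored "low risk" (y_pred == 0)."""
    rates = {}
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)
        if positives.sum() == 0:
            continue  # no true positives in this subgroup to evaluate
        rates[str(g)] = float(np.mean(y_pred[positives] == 0))
    return rates

# Synthetic data for illustration only (not real patient outcomes):
rng = np.random.default_rng(0)
n = 1000
y_true = rng.integers(0, 2, n)
groups = rng.choice(["group_a", "group_b"], n)

# A hypothetical model that misses true cases in group_b more often:
miss_rate = np.where(groups == "group_b", 0.4, 0.1)
misses = (y_true == 1) & (rng.random(n) < miss_rate)
y_pred = np.where(misses, 0, y_true)

print(subgroup_false_negative_rates(y_true, y_pred, groups))
# A large gap between the two rates is the performance gap the text describes.
```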
The use of AI and algorithmic decision-making systems in medicine is increasing, even though current regulation may be insufficient to detect harmful racial biases in these tools. Details about the tools' development are largely unknown to clinicians and the public, a lack of transparency that threatens to automate and worsen racism in the healthcare system. The FDA issued guidance significantly broadening the scope of the tools it plans to regulate, emphasizing that more must be done to combat bias and promote equity amid the growing number and increasing use of AI and algorithmic tools.
Bias particularly impacts disadvantaged populations, which can be subject to algorithmic predictions that are less accurate or that underestimate the need for care. Strategies for detecting and mitigating bias are therefore pivotal for creating AI technology that is generalizable and fair, and a recent study developed a new strategy to mitigate bias in surgical AI systems. In an industry such as manufacturing, algorithmic bias may cause nothing worse than inefficiency, but in healthcare it can have dangerous consequences. For example, a biased result from an AI-enabled computer vision system for radiology can lead to an incorrect diagnosis, posing a serious risk to patients.
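As one hedged illustration of what a mitigation strategy can look like (a generic post-processing sketch, not the method from the surgical-AI study mentioned above, and it assumes the model outputs a continuous risk score), per-group decision thresholds can be chosen so that no subgroup's false-negative rate drifts far from a shared target:

```python
import numpy as np

def equalize_fnr_thresholds(y_true, scores, groups, target_fnr=0.1):
    """Choose a per-group decision threshold so each subgroup's
    false-negative rate lands near a shared target."""
    thresholds = {}
    for g in np.unique(groups):
        pos_scores = scores[(groups == g) & (y_true == 1)]
        if len(pos_scores) == 0:
            continue  # cannot calibrate a group with no known positives
        # ~target_fnr of this group's true positives score below this value,
        # so using it as the cutoff yields roughly the target miss rate:
        thresholds[g] = np.quantile(pos_scores, target_fnr)
    return thresholds

def predict_high_risk(scores, groups, thresholds):
    """Apply each patient's group-specific threshold."""
    return np.array([s >= thresholds[g] for s, g in zip(scores, groups)],
                    dtype=int)
```

Post-processing like this only adjusts the final decision rule; it does not repair biased training data, which is why it is usually combined with upstream fixes such as more representative data collection.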
Implicit and contextual biases are causing incorrect diagnoses and care disparities, leading many healthcare organizations to look for solutions. Patients can typically tell if a provider has an implicit bias based on the provider's body language or word choice. If a patient picks up on that bias, it will damage their relationship with the provider; once that happens, they will either seek out a new provider or disengage from treatment altogether, keeping them from getting the care they need. Real examples of AI bias in healthcare often reflect the bias of the healthcare provider, because the AI model learns from the diagnoses the provider gives. Therefore, if bias plays a role in a healthcare professional's decision, it will be reflected in the AI model.
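The small simulation below illustrates that last point. The numbers are entirely hypothetical (synthetic patients and an assumed 30% under-diagnosis rate for one group): a model trained on provider labels that systematically under-diagnose one subgroup ends up flagging fewer true cases in that subgroup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, n)              # subgroup indicator (0 or 1)
severity = rng.normal(0.0, 1.0, n)         # true underlying need for care
needs_care = (severity > 0).astype(int)    # ground truth

# Hypothetical provider labels: true cases in group 1 go
# under-diagnosed 30% of the time.
labels = needs_care.copy()
under_diagnosed = (group == 1) & (needs_care == 1) & (rng.random(n) < 0.3)
labels[under_diagnosed] = 0

# Train on the biased labels, as a real system trained on
# historical diagnoses would be:
X = np.column_stack([severity, group])
model = LogisticRegression().fit(X, labels)
flagged = model.predict(X)

for g in (0, 1):
    true_cases = (group == g) & (needs_care == 1)
    print(f"group {g}: share of true cases flagged = "
          f"{flagged[true_cases].mean():.2f}")
# The model flags markedly fewer true cases in group 1,
# mirroring the bias baked into its training labels.
```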
In conclusion, AI bias in healthcare is a significant issue that needs to be addressed. Detecting and mitigating bias is pivotal to building AI technology that is generalizable and fair, and regulators such as the FDA are broadening their oversight of algorithmic tools accordingly. Just as important is confronting the implicit and contextual biases of providers themselves, which patients can perceive and which flow directly into the data that AI models learn from. Addressing bias at both levels is essential to preventing incorrect diagnoses and care disparities.