This study builds on prior work on algorithmic bias and on bias in healthcare. The use of AI-based diagnostic tools has been motivated by a global shortage of radiologists and by research showing that AI algorithms can match specialist performance, particularly in medical imaging. Yet AI-driven underdiagnosis has remained relatively unexplored.
The study finds that female patients, patients under 20 years old, Black patients, Hispanic patients, and patients of lower socioeconomic status (with Medicaid insurance as a proxy) receive higher rates of algorithmic underdiagnosis than other groups. These effects persist for intersectional subgroups, e.g. Black female patients. In other words, these groups are at higher risk of being falsely flagged as healthy by AI-based diagnostic tools, and therefore of receiving no clinical treatment.
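To make the metric concrete, here is a minimal sketch of the kind of per-subgroup audit this implies: underdiagnosis is read, as described above, as the fraction of patients who truly have a finding but whom the model flags as healthy. The dataframe column names (`sex`, `race`, `has_finding`, `pred_no_finding`) are hypothetical placeholders, not the paper's actual schema.

```python
import pandas as pd

def underdiagnosis_rate(df: pd.DataFrame, group_col: str,
                        label_col: str = "has_finding",
                        pred_col: str = "pred_no_finding") -> pd.Series:
    """Per-group underdiagnosis rate: among patients with a true finding
    (label_col == 1), the fraction the model labels as healthy (pred_col == 1)."""
    rates = {}
    for group, sub in df.groupby(group_col):
        sick = sub[sub[label_col] == 1]              # patients with at least one finding
        rates[group] = (sick[pred_col] == 1).mean() if len(sick) else float("nan")
    return pd.Series(rates, name="underdiagnosis_rate")

# Hypothetical usage, including an intersectional subgroup:
# df["sex_race"] = df["sex"] + "_" + df["race"]
# print(underdiagnosis_rate(df, "sex_race"))
```

Comparing these rates across groups (and across intersections of groups) is the kind of disparity check the study argues should precede deployment.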
These findings demonstrate a concrete way in which deployed algorithms can exacerbate existing systemic health inequities, particularly if there is no robust audit of performance disparities across subpopulations. Given the pace at which algorithms are moving from the lab to real-world deployment, regulators and policymakers have to seriously consider the ethical concerns around access to medical treatment for racialised, under-served subpopulations, and the effective and ethical deployment of these models.
See: Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations, in Nature Medicine.