“Shortcuts” Causing Bias in Radiology Artificial Intelligence: Causes, Evaluation, and Mitigation
dc.contributor.author | Banerjee, Imon | |
dc.contributor.author | Bhattacharjee, Kamanasish | |
dc.contributor.author | Burns, John L. | |
dc.contributor.author | Trivedi, Hari | |
dc.contributor.author | Purkayastha, Saptarshi | |
dc.contributor.author | Seyyed-Kalantari, Laleh | |
dc.contributor.author | Patel, Bhavik N. | |
dc.contributor.author | Rakesh, Shiradkar | |
dc.contributor.author | Judy, Gichoya | |
dc.contributor.department | Radiology and Imaging Sciences, School of Medicine | |
dc.date.accessioned | 2024-09-03T08:15:44Z | |
dc.date.available | 2024-09-03T08:15:44Z | |
dc.date.issued | 2023 | |
dc.description.abstract | Despite the expert-level performance of artificial intelligence (AI) models for various medical imaging tasks, real-world performance failures with disparate outputs for various subgroups limit the usefulness of AI in improving patients' lives. Many definitions of fairness have been proposed, along with discussions of the tensions that arise in choosing an appropriate metric for evaluating bias; for example, should one aim for individual or group fairness? One central observation is that AI models apply "shortcut learning," whereby spurious features on medical images (such as chest tubes and portable radiographic markers on intensive care unit chest radiography) are used for prediction instead of true pathology. Moreover, AI has been shown to have a remarkable ability to detect protected attributes such as age, sex, and race, while the same models demonstrate bias against historically underserved subgroups defined by those attributes in disease diagnosis. Therefore, an AI model may take shortcut predictions from these correlations and subsequently generate an outcome that is biased against certain subgroups even when protected attributes are not explicitly used as inputs into the model. As a result, these subgroups become nonprivileged subgroups. In this review, the authors discuss the various types of bias from shortcut learning that may occur at different phases of AI model development, including data bias, modeling bias, and inference bias. The authors then summarize various toolkits that can be used to evaluate and mitigate bias, noting that these have largely been applied to nonmedical domains and require more evaluation for medical AI. The authors next summarize current techniques for mitigating bias during preprocessing (data-centric solutions), model development (computational solutions), and postprocessing (recalibration of learning). Ongoing legal changes, under which the use of a biased model will be penalized, highlight the necessity of understanding, detecting, and mitigating biases from shortcut learning; this will require diverse research teams examining the whole AI pipeline. | |
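The abstract's point about evaluating disparate outputs across subgroups can be made concrete with a group-fairness check. The sketch below is not from the paper; all data, column names, and function names are hypothetical. It computes per-subgroup true-positive rates and their largest gap, an equal-opportunity-style metric of the kind the surveyed fairness toolkits report.

```python
import numpy as np

def tpr(y_true, y_pred):
    # True-positive rate (sensitivity) among cases whose true label is 1.
    positives = y_true == 1
    if not positives.any():
        return float("nan")
    return float(np.mean(y_pred[positives] == 1))

def equal_opportunity_gap(y_true, y_pred, group):
    # Largest pairwise difference in TPR across protected subgroups.
    rates = {g: tpr(y_true[group == g], y_pred[group == g])
             for g in np.unique(group)}
    vals = list(rates.values())
    return rates, max(vals) - min(vals)

# Hypothetical toy data standing in for model outputs on chest radiographs.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)      # ground-truth disease labels
y_pred = rng.integers(0, 2, size=500)      # binarized model predictions
group = rng.choice(["A", "B"], size=500)   # protected attribute, e.g., sex

rates, gap = equal_opportunity_gap(y_true, y_pred, group)
print("per-group TPR:", rates)
print("equal-opportunity gap:", gap)
```

In practice, a large gap flags disparate sensitivity across subgroups; general-purpose fairness toolkits such as Fairlearn and AIF360 (developed largely for nonmedical domains, as the review notes) provide this and related metrics out of the box.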
dc.eprint.version | Author's manuscript | |
dc.identifier.citation | Banerjee I, Bhattacharjee K, Burns JL, et al. "Shortcuts" Causing Bias in Radiology Artificial Intelligence: Causes, Evaluation, and Mitigation. J Am Coll Radiol. 2023;20(9):842-851. doi:10.1016/j.jacr.2023.06.025 | |
dc.identifier.uri | https://hdl.handle.net/1805/43059 | |
dc.language.iso | en_US | |
dc.publisher | Elsevier | |
dc.relation.isversionof | 10.1016/j.jacr.2023.06.025 | |
dc.relation.journal | Journal of the American College of Radiology | |
dc.rights | Publisher Policy | |
dc.source | PMC | |
dc.subject | Artificial intelligence | |
dc.subject | Machine learning | |
dc.subject | Radiography | |
dc.title | “Shortcuts” Causing Bias in Radiology Artificial Intelligence: Causes, Evaluation, and Mitigation | |
dc.type | Article | |