“Shortcuts” Causing Bias in Radiology Artificial Intelligence: Causes, Evaluation, and Mitigation

dc.contributor.author: Banerjee, Imon
dc.contributor.author: Bhattacharjee, Kamanasish
dc.contributor.author: Burns, John L.
dc.contributor.author: Trivedi, Hari
dc.contributor.author: Purkayastha, Saptarshi
dc.contributor.author: Seyyed-Kalantari, Laleh
dc.contributor.author: Patel, Bhavik N.
dc.contributor.author: Shiradkar, Rakesh
dc.contributor.author: Gichoya, Judy
dc.contributor.department: Radiology and Imaging Sciences, School of Medicine
dc.date.accessioned: 2024-09-03T08:15:44Z
dc.date.available: 2024-09-03T08:15:44Z
dc.date.issued: 2023
dc.description.abstract: Despite the expert-level performance of artificial intelligence (AI) models for various medical imaging tasks, real-world performance failures, with disparate outputs for various subgroups, limit the usefulness of AI in improving patients' lives. Many definitions of fairness have been proposed, with discussions of the tensions that arise in choosing an appropriate metric for evaluating bias; for example, should one aim for individual or group fairness? One central observation is that AI models apply "shortcut learning," whereby spurious features on medical images (such as chest tubes and portable radiographic markers on intensive care unit chest radiographs) are used for prediction instead of true pathology. Moreover, AI has been shown to have a remarkable ability to detect protected attributes such as age, sex, and race, while the same models demonstrate bias against historically underserved subgroups defined by age, sex, and race in disease diagnosis. Therefore, an AI model may take shortcut predictions from these correlations and subsequently generate an outcome that is biased against certain subgroups even when protected attributes are not explicitly used as inputs to the model; as a result, these subgroups become nonprivileged. In this review, the authors discuss the various types of bias from shortcut learning that may occur at different phases of AI model development, including data bias, modeling bias, and inference bias. The authors thereafter summarize various tool kits that can be used to evaluate and mitigate bias, noting that these have largely been applied to nonmedical domains and require more evaluation for medical AI. The authors then summarize current techniques for mitigating bias during preprocessing (data-centric solutions), model development (computational solutions), and postprocessing (recalibration of learning). Ongoing legal changes under which the use of a biased model will be penalized highlight the necessity of understanding, detecting, and mitigating biases from shortcut learning; doing so will require diverse research teams examining the whole AI pipeline.
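The group-fairness evaluation the abstract describes can be made concrete with a short numeric check. Below is a minimal sketch, not taken from the article, that measures one common group-fairness quantity: the gap in true-positive rate (sensitivity) between two patient subgroups. The arrays, the subgroup_tpr helper, and the 0/1 subgroup encoding are illustrative assumptions.

import numpy as np

def subgroup_tpr(y_true, y_pred, group, value):
    # True-positive rate (sensitivity) within the subgroup where group == value.
    mask = (group == value) & (y_true == 1)
    return y_pred[mask].mean() if mask.any() else float("nan")

# Hypothetical labels and predictions: 1 = disease present / predicted present.
y_true = np.array([1, 1, 0, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # 0/1 encode two demographic subgroups

gap = subgroup_tpr(y_true, y_pred, group, 0) - subgroup_tpr(y_true, y_pred, group, 1)
print(f"TPR gap (subgroup 0 minus subgroup 1): {gap:+.2f}")  # nonzero => unequal sensitivity

An analogous gap in false-positive rates completes the equalized-odds view; open-source toolkits such as AIF360 and Fairlearn implement these group metrics alongside pre-, in-, and postprocessing mitigations of the kind the abstract enumerates.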
dc.eprint.version: Author's manuscript
dc.identifier.citation: Banerjee I, Bhattacharjee K, Burns JL, et al. "Shortcuts" Causing Bias in Radiology Artificial Intelligence: Causes, Evaluation, and Mitigation. J Am Coll Radiol. 2023;20(9):842-851. doi:10.1016/j.jacr.2023.06.025
dc.identifier.uri: https://hdl.handle.net/1805/43059
dc.language.iso: en_US
dc.publisher: Elsevier
dc.relation.isversionof: 10.1016/j.jacr.2023.06.025
dc.relation.journal: Journal of the American College of Radiology
dc.rights: Publisher Policy
dc.source: PMC
dc.subject: Artificial intelligence
dc.subject: Machine learning
dc.subject: Radiography
dc.title: “Shortcuts” Causing Bias in Radiology Artificial Intelligence: Causes, Evaluation, and Mitigation
dc.type: Article
Files
Original bundle
Name: Banerjee2023Shortcuts-AAM.pdf
Size: 320.78 KB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 2.04 KB
Format: Item-specific license agreed upon at submission