Browsing by Author "Burns, John L."
Now showing 1 - 7 of 7
Item: AI recognition of patient race in medical imaging: a modelling study (Elsevier, 2022-06)
Authors: Gichoya, Judy Wawira; Banerjee, Imon; Bhimireddy, Ananth Reddy; Burns, John L.; Celi, Leo Anthony; Chen, Li-Ching; Correa, Ramon; Dullerud, Natalie; Ghassemi, Marzyeh; Huang, Shih-Cheng; Kuo, Po-Chih; Lungren, Matthew P.; Palmer, Lyle J.; Price, Brandon J.; Purkayastha, Saptarshi; Pyrros, Ayis T.; Oakden-Rayner, Lauren; Okechukwu, Chima; Seyyed-Kalantari, Laleh; Trivedi, Hari; Wang, Ryan; Zaiman, Zachary; Zhang, Haoran
Department: BioHealth Informatics, School of Informatics and Computing

Background: Previous studies in medical imaging have shown disparate abilities of artificial intelligence (AI) to detect a person's race, yet there is no known correlate of race on medical imaging that would be obvious to human experts interpreting the images. We aimed to conduct a comprehensive evaluation of the ability of AI to recognise a patient's racial identity from medical images.

Methods: Using private (Emory CXR, Emory Chest CT, Emory Cervical Spine, and Emory Mammogram) and public (MIMIC-CXR, CheXpert, National Lung Cancer Screening Trial, RSNA Pulmonary Embolism CT, and Digital Hand Atlas) datasets, we evaluated, first, the performance of deep learning models in detecting race from medical images, including the ability of these models to generalise to external environments and across multiple imaging modalities. Second, we assessed possible confounding by anatomic and phenotypic population features, both by testing the ability of these hypothesised confounders to detect race in isolation using regression models and by re-evaluating the deep learning models on datasets stratified by these hypothesised confounding variables. Last, by exploring the effect of image corruptions on model performance, we investigated the underlying mechanism by which AI models can recognise race.

Findings: We show that standard AI deep learning models can be trained to predict race from medical images with high performance across multiple imaging modalities, and that this performance was sustained under external validation conditions (x-ray imaging [area under the receiver operating characteristic curve (AUC) range 0·91-0·99], CT chest imaging [0·87-0·96], and mammography [0·81]). We also showed that this detection is not due to proxies or imaging-related surrogate covariates for race (eg, performance of possible confounders: body-mass index [AUC 0·55], disease distribution [0·61], and breast density [0·61]). Finally, we provide evidence that the ability of AI deep learning models persisted over all anatomical regions and frequency spectra of the images, suggesting that efforts to control this behaviour when it is undesirable will be challenging and demand further study.

Interpretation: The results from our study emphasise that the ability of AI deep learning models to predict self-reported race is itself not the issue of importance. However, our finding that AI can accurately predict self-reported race, even from corrupted, cropped, and noised medical images, often when clinical experts cannot, creates an enormous risk for all model deployments in medical imaging.

Funding: National Institute of Biomedical Imaging and Bioengineering, MIDRC grant of the National Institutes of Health, US National Science Foundation, National Library of Medicine of the National Institutes of Health, and Taiwan Ministry of Science and Technology.
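The evaluation described above is, at its core, a discrimination measurement: score each image with a trained classifier, compute the AUC for detecting self-reported race overall, and then recompute it within strata of a hypothesised confounder (body-mass index, disease distribution, breast density) to see whether the confounder alone can explain the signal. The following is a minimal sketch of that stratified evaluation; the dataframe columns, the synthetic scores, and the binning are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

def stratified_race_auc(df, score_col, label_col, stratum_col):
    """Overall AUC plus per-stratum AUCs for a race-detection score."""
    overall = roc_auc_score(df[label_col], df[score_col])
    per_stratum = {}
    for stratum, grp in df.groupby(stratum_col):
        if grp[label_col].nunique() == 2:  # AUC needs both classes present
            per_stratum[stratum] = roc_auc_score(grp[label_col], grp[score_col])
    return overall, per_stratum

# Illustrative usage with synthetic data (one row per image):
rng = np.random.default_rng(0)
demo = pd.DataFrame({
    "is_black": rng.integers(0, 2, 2000),                       # self-reported race label
    "bmi_bin": rng.choice(["<18.5", "18.5-25", "25-30", ">30"], 2000),
})
demo["race_score"] = demo["is_black"] * 1.5 + rng.normal(0, 1, 2000)  # stand-in model output
print(stratified_race_auc(demo, "race_score", "is_black", "bmi_bin"))
```

If the per-stratum AUCs remain far above 0·5, the stratifying variable cannot by itself account for the detection, which is the logic behind the confounder checks reported in the Findings.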
Item: Enhancing cancer prevention and survivorship care with a videoconferencing model for continuing education: a mixed-methods study to identify barriers and incentives to participation (Oxford University Press, 2022-02-12)
Authors: Milgrom, Zheng Z.; Severance, Tyler S.; Scanlon, Caitlin M.; Carson, Anyé T.; Janota, Andrea D.; Burns, John L.; Vik, Terry A.; Duwve, Joan M.; Dixon, Brian E.; Mendonca, Eneida A.
Department: Epidemiology, School of Public Health

Objective: To enhance cancer prevention and survivorship care by local health care providers, a school of public health introduced an innovative telelearning continuing education program using the Extension for Community Healthcare Outcomes (ECHO) model. In ECHO's hub-and-spoke structure, synchronous videoconferencing connects frontline health professionals at various locations ("spokes") with experts at the facilitation center ("hub"). Sessions include experts' didactic presentations and case discussions led by spoke-site participants. The objective of this study was to gain a better understanding of the reasons individuals choose or decline to participate in the Cancer ECHO program and to identify incentives and barriers to doing so.

Materials and methods: Study participants were recruited from the hub team, spoke-site participants, and providers who attended another ECHO program but not this one. Participants chose to take a survey or be interviewed. The Consolidated Framework for Implementation Research guided qualitative data coding and analysis.

Results: We conducted 22 semistructured interviews and collected 30 surveys. Incentives identified included the program's high-quality design, supportive learning climate, and access to information. Barriers included a lack of external incentives to participate and limited time available. Participants wanted more adaptability in program timing to fit providers' busy schedules.

Conclusion: Although the merits of the Cancer ECHO program were widely acknowledged, adaptations that facilitate participation and emphasize the program's benefits may help overcome barriers to attending. As the number of telelearning programs grows, the results of this study point to ways to expand participation and spread health benefits more widely.

Item: Evaluation of federated learning variations for COVID-19 diagnosis using chest radiographs from 42 US and European hospitals (Oxford University Press, 2022)
Authors: Peng, Le; Luo, Gaoxiang; Walker, Andrew; Zaiman, Zachary; Jones, Emma K.; Gupta, Hemant; Kersten, Kristopher; Burns, John L.; Harle, Christopher A.; Magoc, Tanja; Shickel, Benjamin; Steenburg, Scott D.; Loftus, Tyler; Melton, Genevieve B.; Wawira Gichoya, Judy; Sun, Ju; Tignanelli, Christopher J.
Department: Radiology and Imaging Sciences, School of Medicine

Objective: Federated learning (FL) allows multiple distributed data holders to collaboratively learn a shared model without data sharing. However, individual health system data are heterogeneous. "Personalized" FL variations have been developed to counter data heterogeneity, but few have been evaluated using real-world healthcare data. The purpose of this study is to investigate the performance of a single-site versus a 3-client federated model using a previously described Coronavirus Disease 2019 (COVID-19) diagnostic model. Additionally, to investigate the effect of system heterogeneity, we evaluate the performance of 4 FL variations.

Materials and methods: We leveraged an FL healthcare collaborative including data from 5 international healthcare systems (US and Europe) encompassing 42 hospitals. We implemented a COVID-19 computer vision diagnosis system using the Federated Averaging (FedAvg) algorithm implemented on Clara Train SDK 4.0. To study the effect of data heterogeneity, training data were pooled from 3 systems locally and federation was simulated. We compared a centralized (pooled) model, FedAvg, and 3 personalized FL variations (FedProx, FedBN, and FedAMP).

Results: We observed comparable model performance on internal validation (local model AUROC 0.94 vs FedAvg 0.95, P = .5) and improved model generalizability with the FedAvg model (P < .05). When investigating the effects of model heterogeneity, we observed poor performance with FedAvg on internal validation compared with the personalized FL algorithms, although FedAvg had better generalizability than the personalized FL algorithms. On average, FedBN had the best rank performance on internal and external validation.

Conclusion: FedAvg can significantly improve the generalization of the model compared with the personalized FL algorithms, but at the cost of poor internal validity. Personalized FL may offer an opportunity to develop algorithms that are both internally and externally valid.
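FedAvg, the baseline algorithm in the comparison above, aggregates client updates as a data-size-weighted average of model parameters each round; the personalized variants (FedProx, FedBN, FedAMP) differ mainly in how much each client keeps local. Below is a minimal, framework-agnostic sketch of one FedAvg aggregation step using NumPy arrays as stand-in parameters; the study itself ran on NVIDIA's Clara Train SDK 4.0, so this illustrates only the averaging rule, not the code used.

```python
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """Weighted average of per-client parameter dicts (one FedAvg round).

    client_params: list of dicts mapping layer name -> np.ndarray
    client_sizes:  list of local training-set sizes, used as weights
    """
    total = float(sum(client_sizes))
    weights = [n / total for n in client_sizes]
    aggregated = {}
    for name in client_params[0]:
        aggregated[name] = sum(w * params[name]
                               for w, params in zip(weights, client_params))
    return aggregated

# Three simulated clients (mirroring the 3-client experiment), toy parameters:
clients = [{"conv1": np.full((3, 3), v)} for v in (1.0, 2.0, 4.0)]
sizes = [100, 300, 600]
print(fedavg_aggregate(clients, sizes)["conv1"][0, 0])  # 0.1*1 + 0.3*2 + 0.6*4 = 3.1
```

FedBN, by contrast, would skip averaging for batch-normalization parameters so that each client retains its own normalization statistics, which is one way a "personalized" variant accommodates heterogeneous sites.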
Item: Just in Time Radiology Decision Support Using Real-time Data Feeds (SpringerLink, 2020-02)
Authors: Burns, John L.; Hasting, Dan; Gichoya, Judy W.; McKibben, Ben, III; Shea, Lindsey; Frank, Mark
Department: Radiology and Imaging Sciences, School of Medicine

Abstract: Ready access to relevant real-time information in medical imaging offers several potential benefits. Knowing both when important information will be available and that important information is available can facilitate optimization of workflow and management of time, and unexpected findings, as well as deficiencies in reporting and documentation, can be managed immediately. Herein, we present our experience developing and implementing a real-time, web-centric dashboard system for radiologists, clinicians, and support staff. The dashboards are driven by multi-sourced HL7 message streams that are monitored, analyzed, aggregated, and transformed into multiple real-time displays to improve operations within our department. We call this framework Pipeline. Ruby on Rails, JavaScript, HTML, and SQL serve as the foundations of the Pipeline application. HL7 messages are processed in real time by a Mirth interface engine, which posts exam data into SQL. Users visit the Ruby on Rails-based dashboards in a web browser on any device connected to our hospital network, and the dashboards automatically refresh every 30 seconds using JavaScript. The Pipeline application has been well received by clinicians and radiologists.
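The Pipeline framework described above is essentially a stream processor: HL7 messages arrive, the Mirth engine extracts exam fields into SQL, and the Rails dashboards poll that table on a 30-second timer. The sketch below imitates only the extraction step in plain Python with SQLite to make the data flow concrete; the sample message, the field positions chosen, and the table layout are illustrative assumptions, and the production system uses Mirth and its own schema rather than this code.

```python
import sqlite3

# A toy HL7 v2 order message (segments separated by carriage returns).
SAMPLE_MESSAGE = (
    "MSH|^~\\&|RIS|HOSP|PIPELINE|RAD|202001151030||ORM^O01|12345|P|2.3\r"
    "PID|1||MRN0001||DOE^JANE\r"
    "OBR|1|ORD98765||71020^CHEST XRAY 2 VIEWS|||202001151025"
)

def parse_exam(message):
    """Pull a few exam-tracking fields out of a pipe-delimited HL7 v2 message."""
    segments = {seg.split("|")[0]: seg.split("|") for seg in message.split("\r")}
    return {
        "order_id": segments["OBR"][2],        # OBR-2, placer order number
        "procedure_code": segments["OBR"][4],  # OBR-4, universal service identifier
        "mrn": segments["PID"][3],             # PID-3, patient identifier
        "message_time": segments["MSH"][6],    # MSH-7, message date/time
    }

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE exams (order_id TEXT, procedure_code TEXT, mrn TEXT, message_time TEXT)"
)
conn.execute(
    "INSERT INTO exams VALUES (:order_id, :procedure_code, :mrn, :message_time)",
    parse_exam(SAMPLE_MESSAGE),
)
print(conn.execute("SELECT * FROM exams").fetchone())
```

A dashboard backed by such a table only needs a periodic query to stay current, which is what the paper's browser-side 30-second refresh accomplishes.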
Item: Performance of a Chest Radiograph AI Diagnostic Tool for COVID-19: A Prospective Observational Study (Radiological Society of North America, 2022-06-01)
Authors: Sun, Ju; Peng, Le; Li, Taihui; Adila, Dyah; Zaiman, Zach; Melton-Meaux, Genevieve B.; Ingraham, Nicholas E.; Murray, Eric; Boley, Daniel; Switzer, Sean; Burns, John L.; Huang, Kun; Allen, Tadashi; Steenburg, Scott D.; Wawira Gichoya, Judy; Kummerfeld, Erich; Tignanelli, Christopher J.
Department: Radiology and Imaging Sciences, School of Medicine

Purpose: To conduct a prospective observational study across 12 U.S. hospitals to evaluate the real-time performance of an interpretable artificial intelligence (AI) model to detect COVID-19 on chest radiographs.

Materials and methods: A total of 95 363 chest radiographs were included in model training, external validation, and real-time validation. The model was deployed as a clinical decision support system, and performance was prospectively evaluated. There were 5335 total real-time predictions and a COVID-19 prevalence of 4.8% (258 of 5335). Model performance was assessed with receiver operating characteristic analysis, precision-recall curves, and F1 score. Logistic regression was used to evaluate the association of race and sex with AI model diagnostic accuracy. To compare model accuracy with the performance of board-certified radiologists, a third dataset of 1638 images was read independently by two radiologists.

Results: Participants positive for COVID-19 had higher COVID-19 diagnostic scores than participants negative for COVID-19 (median, 0.1 [IQR, 0.0-0.8] vs 0.0 [IQR, 0.0-0.1], respectively; P < .001). Real-time model performance was unchanged over 19 weeks of implementation (area under the receiver operating characteristic curve, 0.70; 95% CI: 0.66, 0.73). Model sensitivity was higher in men than in women (P = .01), whereas model specificity was higher in women (P = .001). Sensitivity was higher for Asian (P = .002) and Black (P = .046) participants compared with White participants. The COVID-19 AI diagnostic system had worse accuracy (63.5% correct) than the radiologists (radiologist 1: 67.8% correct, radiologist 2: 68.6% correct; McNemar P < .001 for both).

Conclusion: AI-based tools have not yet reached full diagnostic potential for COVID-19 and underperform compared with radiologist prediction.

Item: Reading Race: AI Recognises Patient's Racial Identity In Medical Images (arXiv, 2021)
Authors: Banerjee, Imon; Bhimireddy, Ananth Reddy; Burns, John L.; Celi, Leo Anthony; Chen, Li-Ching; Correa, Ramon; Dullerud, Natalie; Ghassemi, Marzyeh; Huang, Shih-Cheng; Kuo, Po-Chih; Lungren, Matthew P.; Palmer, Lyle; Price, Brandon J.; Purkayastha, Saptarshi; Pyrros, Ayis; Oakden-Rayner, Lauren; Okechukwu, Chima; Seyyed-Kalantari, Laleh; Trivedi, Hari; Wang, Ryan; Zaiman, Zachary; Zhang, Haoran; Gichoya, Judy W.
Department: BioHealth Informatics, School of Informatics and Computing

Background: In medical imaging, prior studies have demonstrated disparate AI performance by race, yet there is no known correlate of race on medical imaging that would be obvious to the human expert interpreting the images.

Methods: Using private and public datasets, we evaluate: A) performance quantification of deep learning models to detect race from medical images, including the ability of these models to generalize to external environments and across multiple imaging modalities; B) assessment of possible confounding anatomic and phenotypic population features, such as disease distribution and body habitus, as predictors of race; and C) investigation into the underlying mechanism by which AI models can recognize race.

Findings: Standard deep learning models can be trained to predict race from medical images with high performance across multiple imaging modalities. Our findings hold under external validation conditions, as well as when models are optimized to perform clinically motivated tasks. We demonstrate that this detection is not due to trivial proxies or imaging-related surrogate covariates for race, such as underlying disease distribution. Finally, we show that performance persists over all anatomical regions and frequency spectra of the images, suggesting that mitigation efforts will be challenging and demand further study.

Interpretation: We emphasize that a model's ability to predict self-reported race is itself not the issue of importance. However, our finding that AI can trivially predict self-reported race -- even from corrupted, cropped, and noised medical images -- in a setting where clinical experts cannot, creates an enormous risk for all model deployments in medical imaging: if an AI model secretly used its knowledge of self-reported race to misclassify all Black patients, radiologists would not be able to tell using the same data the model has access to.
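One of the probes behind the "frequency spectra" claim above is to corrupt the images (blur, crop, add noise, or keep only a band of spatial frequencies) and check whether race prediction survives. A minimal sketch of the frequency-filtering step is shown below using NumPy's FFT; the cutoff radius, the image size, and the idea of re-scoring the filtered images with a trained model are illustrative assumptions rather than the authors' exact protocol.

```python
import numpy as np

def frequency_filter(image, cutoff, keep="low"):
    """Keep only the low- or high-frequency content of a 2D grayscale image.

    cutoff is a radius in frequency space, measured in pixels from the
    zero-frequency (DC) component after fftshift.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    yy, xx = np.ogrid[:rows, :cols]
    dist = np.sqrt((yy - rows / 2) ** 2 + (xx - cols / 2) ** 2)
    mask = dist <= cutoff if keep == "low" else dist > cutoff
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(filtered)

# Illustrative use with a random stand-in "radiograph":
img = np.random.default_rng(0).random((224, 224))
low_pass = frequency_filter(img, cutoff=10, keep="low")    # coarse structure only
high_pass = frequency_filter(img, cutoff=10, keep="high")  # fine texture only
# In the study's design, both filtered versions would be scored by the trained
# race-detection model to see whether AUC persists after each corruption.
```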
Item: "Shortcuts" Causing Bias in Radiology Artificial Intelligence: Causes, Evaluation, and Mitigation (Elsevier, 2023)
Authors: Banerjee, Imon; Bhattacharjee, Kamanasish; Burns, John L.; Trivedi, Hari; Purkayastha, Saptarshi; Seyyed-Kalantari, Laleh; Patel, Bhavik N.; Shiradkar, Rakesh; Gichoya, Judy
Department: Radiology and Imaging Sciences, School of Medicine

Abstract: Despite the expert-level performance of artificial intelligence (AI) models for various medical imaging tasks, real-world performance failures with disparate outputs for various subgroups limit the usefulness of AI in improving patients' lives. Many definitions of fairness have been proposed, with discussions of the tensions that arise in choosing an appropriate evaluation metric: for example, should one aim for individual or group fairness? One central observation is that AI models apply "shortcut learning," whereby spurious features (such as chest tubes and portable radiographic markers on intensive care unit chest radiography) are used for prediction instead of true pathology. Moreover, AI has been shown to have a remarkable ability to detect protected attributes of age, sex, and race, while the same models demonstrate bias against historically underserved subgroups of age, sex, and race in disease diagnosis. An AI model may therefore take shortcut predictions from these correlations and generate an outcome that is biased toward certain subgroups even when protected attributes are not explicitly used as inputs to the model; as a result, the remaining subgroups become nonprivileged. In this review, the authors discuss the types of bias from shortcut learning that may occur at different phases of AI model development, including data bias, modeling bias, and inference bias. The authors then summarize tool kits that can be used to evaluate and mitigate bias, noting that these have largely been applied to nonmedical domains and require more evaluation for medical AI. Finally, the authors summarize current techniques for mitigating bias in preprocessing (data-centric solutions), during model development (computational solutions), and in postprocessing (recalibration of learning). Ongoing legal changes under which the use of a biased model will be penalized highlight the necessity of understanding, detecting, and mitigating biases from shortcut learning, and will require diverse research teams looking at the whole AI pipeline.
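The practical consequence of the review's argument is that bias from shortcut learning is often invisible in aggregate accuracy and shows up instead as gaps in error rates across protected subgroups. Below is a minimal subgroup-audit sketch in that spirit; the column names, toy data, and equalized-odds-style gap summary are illustrative choices, and the dedicated fairness tool kits the review surveys provide far more complete implementations.

```python
import pandas as pd

def subgroup_error_rates(df, group_col, label_col="label", pred_col="pred"):
    """Per-subgroup true-positive and false-positive rates for binary predictions."""
    rows = []
    for group, g in df.groupby(group_col):
        pos, neg = g[g[label_col] == 1], g[g[label_col] == 0]
        rows.append({
            group_col: group,
            "n": len(g),
            "tpr": (pos[pred_col] == 1).mean() if len(pos) else float("nan"),
            "fpr": (neg[pred_col] == 1).mean() if len(neg) else float("nan"),
        })
    return pd.DataFrame(rows)

def equalized_odds_gaps(report):
    """Largest between-group differences in TPR and FPR (0 means parity)."""
    return {"tpr_gap": report["tpr"].max() - report["tpr"].min(),
            "fpr_gap": report["fpr"].max() - report["fpr"].min()}

# Illustrative usage: one row per case, with ground truth and thresholded predictions.
df = pd.DataFrame({
    "race":  ["A", "A", "A", "B", "B", "B"],
    "label": [1, 0, 1, 1, 0, 1],
    "pred":  [1, 0, 1, 0, 1, 1],
})
report = subgroup_error_rates(df, group_col="race")
print(report)
print(equalized_odds_gaps(report))
```

A large TPR or FPR gap across subgroups despite similar overall accuracy is the signature of the shortcut-driven bias the review describes, and it is the kind of finding the preprocessing, in-training, and postprocessing mitigations are meant to reduce.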