Browsing by Author "Bhimireddy, Ananth Reddy"
Now showing 1 - 4 of 4
Item: AI recognition of patient race in medical imaging: a modelling study (Elsevier, 2022-06)
Authors: Gichoya, Judy Wawira; Banerjee, Imon; Bhimireddy, Ananth Reddy; Burns, John L.; Celi, Leo Anthony; Chen, Li-Ching; Correa, Ramon; Dullerud, Natalie; Ghassemi, Marzyeh; Huang, Shih-Cheng; Kuo, Po-Chih; Lungren, Matthew P.; Palmer, Lyle J.; Price, Brandon J.; Purkayastha, Saptarshi; Pyrros, Ayis T.; Oakden-Rayner, Lauren; Okechukwu, Chima; Seyyed-Kalantari, Laleh; Trivedi, Hari; Wang, Ryan; Zaiman, Zachary; Zhang, Haoran
Affiliation: BioHealth Informatics, School of Informatics and Computing
Background: Previous studies in medical imaging have shown disparate abilities of artificial intelligence (AI) to detect a person's race, yet there is no known correlation for race on medical imaging that would be obvious to human experts when interpreting the images. We aimed to conduct a comprehensive evaluation of the ability of AI to recognise a patient's racial identity from medical images.
Methods: Using private (Emory CXR, Emory Chest CT, Emory Cervical Spine, and Emory Mammogram) and public (MIMIC-CXR, CheXpert, National Lung Cancer Screening Trial, RSNA Pulmonary Embolism CT, and Digital Hand Atlas) datasets, we evaluated, first, the performance of deep learning models in detecting race from medical images, including the ability of these models to generalise to external environments and across multiple imaging modalities. Second, we assessed possible confounding of anatomic and phenotypic population features by assessing the ability of these hypothesised confounders to detect race in isolation using regression models, and by re-evaluating the deep learning models on datasets stratified by these hypothesised confounding variables. Last, by exploring the effect of image corruptions on model performance, we investigated the underlying mechanism by which AI models can recognise race.
Findings: In our study, we show that standard AI deep learning models can be trained to predict race from medical images with high performance across multiple imaging modalities, which was sustained under external validation conditions (x-ray imaging [area under the receiver operating characteristics curve (AUC) range 0·91-0·99], CT chest imaging [0·87-0·96], and mammography [0·81]). We also showed that this detection is not due to proxies or imaging-related surrogate covariates for race (eg, performance of possible confounders: body-mass index [AUC 0·55], disease distribution [0·61], and breast density [0·61]). Finally, we provide evidence that the ability of AI deep learning models persisted over all anatomical regions and frequency spectra of the images, suggesting that efforts to control this behaviour when it is undesirable will be challenging and demand further study.
Interpretation: The results from our study emphasise that the ability of AI deep learning models to predict self-reported race is itself not the issue of importance. However, our finding that AI can accurately predict self-reported race, even from corrupted, cropped, and noised medical images, often when clinical experts cannot, creates an enormous risk for all model deployments in medical imaging.
Funding: National Institute of Biomedical Imaging and Bioengineering, MIDRC grant of National Institutes of Health, US National Science Foundation, National Library of Medicine of the National Institutes of Health, and Taiwan Ministry of Science and Technology.

Item: Few-Shot Transfer Learning to improve Chest X-Ray pathology detection using limited triplets (arXiv, 2022-04)
Authors: Bhimireddy, Ananth Reddy; Burns, John Lee; Purkayastha, Saptarshi; Gichoya, Judy Wawira
Affiliation: BioHealth Informatics, School of Informatics and Computing
Abstract: Deep learning approaches applied to medical imaging have reached near-human or better-than-human performance on many diagnostic tasks. For instance, the CheXpert competition on detecting pathologies in chest x-rays has shown excellent multi-class classification performance. However, training and validating deep learning models require extensive collections of images and still produce false inferences, as identified by a human-in-the-loop. In this paper, we introduce a practical approach to improve the predictions of a pre-trained model through Few-Shot Learning (FSL). After training and validating a model, a small number of false inference images are collected to retrain the model using Image Triplets: a false positive or false negative, a true positive, and a true negative. The retrained FSL model produces considerable gains in performance with only a few epochs and few images. In addition, FSL opens rapid retraining opportunities for human-in-the-loop systems, where a radiologist can relabel false inferences, and the model can be quickly retrained. We compare our retrained model performance with existing FSL approaches in medical imaging that train and evaluate models at once.

Item: Multi-label natural language processing to identify diagnosis and procedure codes from MIMIC-III inpatient notes (arXiv, 2020)
Authors: Bhavani Singh, A. K.; Guntu, Mounika; Bhimireddy, Ananth Reddy; Gichoya, Judy W.; Purkayastha, Saptarshi
Affiliation: BioHealth Informatics, School of Informatics and Computing
Abstract: In the United States, administrative costs involving services for medical coding and billing account for 25%, or more than 200 billion dollars, of hospital spending. With the increasing number of patient records, manual assignment of codes is overwhelming, time-consuming, and error-prone, causing billing errors. Natural language processing can automate the extraction of codes/labels from unstructured clinical notes, which can aid human coders to save time, increase productivity, and verify medical coding errors. Our objective is to identify appropriate diagnosis and procedure codes from clinical notes by performing multi-label classification. We used de-identified data of critical care patients from the MIMIC-III database and subset the data to select the ten (top-10) and fifty (top-50) most common diagnoses and procedures, which cover 47.45% and 74.12% of all admissions, respectively. We implemented state-of-the-art Bidirectional Encoder Representations from Transformers (BERT) to fine-tune the language model on 80% of the data and validated on the remaining 20%. The model achieved an overall accuracy of 87.08%, an F1 score of 85.82%, and an AUC of 91.76% for top-10 codes. For the top-50 codes, our model achieved an overall accuracy of 93.76%, an F1 score of 92.24%, and an AUC of 91%. When compared to previously published research, our model outperforms prior work in predicting codes from the clinical text. We discuss approaches to generalise the knowledge discovery process of our MIMIC-BERT to other clinical notes. This can help human coders to save time and prevent backlogs and additional costs due to coding errors.

Item: Reading Race: AI Recognises Patient's Racial Identity In Medical Images (arXiv, 2021)
Authors: Banerjee, Imon; Bhimireddy, Ananth Reddy; Burns, John L.; Celi, Leo Anthony; Chen, Li-Ching; Correa, Ramon; Dullerud, Natalie; Ghassemi, Marzyeh; Huang, Shih-Cheng; Kuo, Po-Chih; Lungren, Matthew P.; Palmer, Lyle; Price, Brandon J.; Purkayastha, Saptarshi; Pyrros, Ayis; Oakden-Rayner, Luke; Okechukwu, Chima; Seyyed-Kalantari, Laleh; Trivedi, Hari; Wang, Ryan; Zaiman, Zachary; Zhang, Haoran; Gichoya, Judy W.
Affiliation: BioHealth Informatics, School of Informatics and Computing
Background: In medical imaging, prior studies have demonstrated disparate AI performance by race, yet there is no known correlation for race on medical imaging that would be obvious to the human expert interpreting the images.
Methods: Using private and public datasets, we evaluate: A) performance quantification of deep learning models to detect race from medical images, including the ability of these models to generalize to external environments and across multiple imaging modalities; B) assessment of possible confounding anatomic and phenotype population features, such as disease distribution and body habitus, as predictors of race; and C) investigation into the underlying mechanism by which AI models can recognize race.
Findings: Standard deep learning models can be trained to predict race from medical images with high performance across multiple imaging modalities. Our findings hold under external validation conditions, as well as when models are optimized to perform clinically motivated tasks. We demonstrate this detection is not due to trivial proxies or imaging-related surrogate covariates for race, such as underlying disease distribution. Finally, we show that performance persists over all anatomical regions and frequency spectra of the images, suggesting that mitigation efforts will be challenging and demand further study.
Interpretation: We emphasize that model ability to predict self-reported race is itself not the issue of importance. However, our finding that AI can trivially predict self-reported race -- even from corrupted, cropped, and noised medical images -- in a setting where clinical experts cannot, creates an enormous risk for all model deployments in medical imaging: if an AI model secretly used its knowledge of self-reported race to misclassify all Black patients, radiologists would not be able to tell using the same data the model has access to.
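The Image Triplets retraining described in the few-shot transfer learning item above groups each false inference with a true positive and a true negative. A minimal sketch of the standard margin-based triplet loss that such retraining typically minimises on embedding vectors (the function, margin value, and toy embeddings below are illustrative assumptions, not code from the paper):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Margin-based triplet loss on embedding vectors: penalise the model
    unless the anchor is closer to the positive than to the negative
    by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)  # anchor-to-positive distance
    d_neg = np.linalg.norm(anchor - negative)  # anchor-to-negative distance
    return max(0.0, d_pos - d_neg + margin)

# Toy triplet: a misclassified image (anchor, e.g. a false negative),
# a correctly detected case of the same pathology, and a true negative.
anchor = np.array([0.2, 0.9])
positive = np.array([0.3, 1.0])
negative = np.array([2.0, -1.0])
loss = triplet_loss(anchor, positive, negative)
```

Here the anchor already sits much closer to the positive than to the negative, so the hinge is inactive and the loss is zero; during retraining, triplets built from false inferences start with a positive loss that gradient updates drive down.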
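The race-detection items above report that model performance persists across the frequency spectra of the images, i.e. even when inputs are restricted to low- or high-frequency content. A minimal sketch of that kind of band filtering using NumPy's FFT (the function name, cutoff convention, and synthetic image are assumptions for illustration, not the papers' pipeline):

```python
import numpy as np

def frequency_filter(image, cutoff, mode="low"):
    """Keep only the low- or high-frequency content of a 2-D image.
    `cutoff` is a fraction (0..1] of the maximum spatial frequency."""
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    # Radial distance of each frequency bin from the spectrum centre,
    # normalised so the farthest bin sits at radius 1.0.
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h / 2, xx - w / 2)
    dist = dist / dist.max()
    mask = dist <= cutoff if mode == "low" else dist > cutoff
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(filtered)

# Synthetic 64x64 "image": a smooth gradient plus pixel noise.
rng = np.random.default_rng(0)
img = np.linspace(0.0, 1.0, 64)[None, :] * np.ones((64, 64))
img += 0.1 * rng.standard_normal((64, 64))

low = frequency_filter(img, cutoff=0.1, mode="low")
high = frequency_filter(img, cutoff=0.1, mode="high")
```

Because the two masks partition the spectrum, the low- and high-pass outputs sum back to the original image; the experimental question in the studies is whether a classifier's race prediction survives when it sees only one of these bands.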