Saptarshi Purkayastha
Risks and Opportunities of AI Recognition of Patient Race in Medical Imaging
Previous studies in medical imaging have shown disparate abilities of artificial intelligence (AI) to detect a person's race, yet there is no known correlation for race on medical imaging that would be obvious to human experts interpreting the images. Professor Purkayastha's recent work, published in The Lancet Digital Health, demonstrates that deep learning models can identify self-reported race with extremely high accuracy from medical images such as X-rays, MRIs, and CTs. This ability raises serious concerns among researchers, because such software might group patients, or influence their care, by factoring in race. The models remain accurate even on poor-quality or distorted images, and on images from which large portions have been deliberately cut out. Such categorizations could lead to inequities in health care delivery and recommendations, and human decision makers might not understand how or why the AI models are making them. Engineers, clinical researchers, and informaticians need to work together to identify how AI models come to have these superhuman capabilities.
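The pipeline behind this result is deliberately ordinary. Below is a minimal sketch, assuming PyTorch and torchvision, of the kind of standard fine-tuning these papers describe; the class count, batch, and training step are hypothetical stand-ins, not the authors' actual code.

```python
# Hypothetical sketch: fine-tune an off-the-shelf CNN to predict
# self-reported race from chest X-rays. Data here are random stand-ins;
# a real pipeline would load images and labels from a dataset such as
# MIMIC-CXR via a DataLoader.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # illustrative number of self-reported race categories

# Standard backbone; swap weights=None for DenseNet121_Weights.DEFAULT
# to start from ImageNet pretraining, as is common practice.
model = models.densenet121(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: 8 grayscale X-rays resized to 224x224 and replicated
# to 3 channels to match the backbone's expected input.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))

model.train()
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"one training step, loss = {loss.item():.3f}")
```

The striking finding is not this pipeline, which is entirely generic, but that such a generic pipeline reaches very high accuracy on a target with no known imaging correlate.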
Professor Purkayastha's translation of this research into potential ways to identify and mitigate the risks of deploying AI models in clinical practice, so that racial bias does not enter healthcare treatment, is another example of how IUPUI's faculty members are translating their research into practice.
Browsing Saptarshi Purkayastha by Subject "artificial intelligence"
Now showing 1 - 5 of 5
Item: AI recognition of patient race in medical imaging: a modelling study (Elsevier, 2022-06)
Authors: Gichoya, Judy Wawira; Banerjee, Imon; Bhimireddy, Ananth Reddy; Burns, John L.; Celi, Leo Anthony; Chen, Li-Ching; Correa, Ramon; Dullerud, Natalie; Ghassemi, Marzyeh; Huang, Shih-Cheng; Kuo, Po-Chih; Lungren, Matthew P.; Palmer, Lyle J.; Price, Brandon J.; Purkayastha, Saptarshi; Pyrros, Ayis T.; Oakden-Rayner, Lauren; Okechukwu, Chima; Seyyed-Kalantari, Laleh; Trivedi, Hari; Wang, Ryan; Zaiman, Zachary; Zhang, Haoran
Affiliation: BioHealth Informatics, School of Informatics and Computing
Background: Previous studies in medical imaging have shown disparate abilities of artificial intelligence (AI) to detect a person's race, yet there is no known correlation for race on medical imaging that would be obvious to human experts when interpreting the images. We aimed to conduct a comprehensive evaluation of the ability of AI to recognise a patient's racial identity from medical images.
Methods: Using private (Emory CXR, Emory Chest CT, Emory Cervical Spine, and Emory Mammogram) and public (MIMIC-CXR, CheXpert, National Lung Cancer Screening Trial, RSNA Pulmonary Embolism CT, and Digital Hand Atlas) datasets, we evaluated, first, performance quantification of deep learning models in detecting race from medical images, including the ability of these models to generalise to external environments and across multiple imaging modalities. Second, we assessed possible confounding of anatomic and phenotypic population features by assessing the ability of these hypothesised confounders to detect race in isolation using regression models, and by re-evaluating the deep learning models by testing them on datasets stratified by these hypothesised confounding variables. Last, by exploring the effect of image corruptions on model performance, we investigated the underlying mechanism by which AI models can recognise race.
Findings: In our study, we show that standard AI deep learning models can be trained to predict race from medical images with high performance across multiple imaging modalities, which was sustained under external validation conditions (x-ray imaging [area under the receiver operating characteristics curve (AUC) range 0·91-0·99], CT chest imaging [0·87-0·96], and mammography [0·81]). We also showed that this detection is not due to proxies or imaging-related surrogate covariates for race (eg, performance of possible confounders: body-mass index [AUC 0·55], disease distribution [0·61], and breast density [0·61]). Finally, we provide evidence to show that the ability of AI deep learning models persisted over all anatomical regions and frequency spectrums of the images, suggesting the efforts to control this behaviour when it is undesirable will be challenging and demand further study.
Interpretation: The results from our study emphasise that the ability of AI deep learning models to predict self-reported race is itself not the issue of importance. However, our finding that AI can accurately predict self-reported race, even from corrupted, cropped, and noised medical images, often when clinical experts cannot, creates an enormous risk for all model deployments in medical imaging.
Funding: National Institute of Biomedical Imaging and Bioengineering, MIDRC grant of National Institutes of Health, US National Science Foundation, National Library of Medicine of the National Institutes of Health, and Taiwan Ministry of Science and Technology.
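The confounder analysis in the Methods above can be pictured concretely: compare the model's overall discrimination with its discrimination inside strata of a hypothesised confounder. A minimal sketch using scikit-learn's roc_auc_score on synthetic data follows; the BMI bands and the score model are illustrative, not the study's data.

```python
# Hypothetical sketch of the confounder check described in the Methods:
# compare overall race-detection AUC with AUC inside strata of a
# hypothesised confounder (BMI here). All data below are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
y_true = rng.integers(0, 2, n)                      # binary race label
y_score = np.clip(0.6 * y_true + rng.normal(0.3, 0.2, n), 0, 1)
bmi = rng.normal(27, 5, n)                          # candidate confounder

print("overall AUC:", round(roc_auc_score(y_true, y_score), 3))

# If the model were only reading body habitus, AUC should collapse
# toward 0.5 within narrow BMI bands; in the paper it did not.
for lo, hi in [(0, 25), (25, 30), (30, 100)]:
    m = (bmi >= lo) & (bmi < hi)
    if len(np.unique(y_true[m])) == 2:              # AUC needs both classes
        print(f"BMI [{lo}, {hi}): AUC =",
              round(roc_auc_score(y_true[m], y_score[m]), 3))
```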
Item: Artificial Intelligence for Global Health: Learning From a Decade of Digital Transformation in Health Care (arXiv, 2020)
Authors: Mathur, Varoon; Purkayastha, Saptarshi; Gichoya, Judy Wawira
Affiliation: BioHealth Informatics, School of Informatics and Computing
The health needs of those living in resource-limited settings are a vastly overlooked and understudied area at the intersection of machine learning (ML) and health care. While the use of ML in health care has been popularized over the last few years by the advancement of deep learning, low- and middle-income countries (LMICs) have already been undergoing a digital transformation of their own in health care over the last decade, leapfrogging milestones through the adoption of mobile health (mHealth). With the introduction of new technologies, it is common to start afresh with a top-down approach and implement these technologies in isolation, leading to lack of use and a waste of resources. In this paper, we outline the necessary considerations both from the perspective of current gaps in research and from the lived experiences of health care professionals in resource-limited settings. We also briefly outline several key components of successful implementation and deployment of technologies within health systems in LMICs, including technical and cultural considerations in the development process relevant to building machine learning solutions. We then draw on these experiences to address where key opportunities for impact exist in resource-limited settings, and where AI/ML can provide the most benefit.

Item: Current Clinical Applications of Artificial Intelligence in Radiology and Their Best Supporting Evidence (Elsevier, 2020-11)
Authors: Tariq, Amara; Purkayastha, Saptarshi; Padmanaban, Geetha Priya; Krupinski, Elizabeth; Trivedi, Hari; Banerjee, Imon; Gichoya, Judy W.
Affiliation: BioHealth Informatics, School of Informatics and Computing
Purpose: Despite tremendous gains from deep learning and the promise of artificial intelligence (AI) in medicine to improve diagnosis and save costs, there exists a large translational gap to implement and use AI products in real-world clinical situations. Adoption of standards such as Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis, Consolidated Standards of Reporting Trials, and the Checklist for Artificial Intelligence in Medical Imaging is increasing to improve the peer-review process and reporting of AI tools. However, no such standards exist for product-level review.
Methods: A review of clinical trials showed a paucity of evidence for radiology AI products; thus, the authors developed a 10-question assessment tool for reviewing AI products with an emphasis on their validation and result dissemination. The assessment tool was applied to commercial and open-source algorithms used for diagnosis to extract evidence on the clinical utility of the tools.
Results: There is limited technical information on methodologies for FDA-approved algorithms compared with open-source products, likely because of intellectual property concerns.
Furthermore, FDA-approved products use much smaller data sets compared with open-source AI tools, because the terms of use of public data sets are limited to academic and noncommercial entities, which precludes their use in commercial products.
Conclusions: Overall, this study reveals a broad spectrum of maturity and clinical use of AI products, but a large gap exists in exploring the actual performance of AI tools in clinical practice.

Item: Phronesis of AI in radiology: Superhuman meets natural stupidity (arXiv, 2018)
Authors: Gichoya, Judy W.; Nuthakki, Siddhartha; Maity, Pallavi G.; Purkayastha, Saptarshi
Affiliation: BioHealth Informatics, School of Informatics and Computing
Advances in AI in the last decade have clearly made economists, politicians, journalists, and the citizenry in general believe that the machines are coming to take human jobs. We review 'superhuman' AI performance claims in radiology and then provide a self-reflection on our own work in the area in the form of a critical review, a tribute of sorts to McDermott's 1976 paper asking the field for some self-discipline. Clearly there is an opportunity to replace humans, but there are better opportunities, as we have discovered, to fit the cognitive abilities of humans and non-humans. We performed one of the first studies in radiology to see how human and AI performance can complement and improve each other's performance for detecting pneumonia in chest X-rays. We ask whether there is a practical wisdom, or phronesis, that we need to demonstrate in AI today as well as in our field. Using this, we articulate what AI as a field has already learned, and probably can in the future learn, from Psychology, Cognitive Science, Sociology, and Science and Technology Studies.

Item: Reading Race: AI Recognises Patient's Racial Identity In Medical Images (arXiv, 2021)
Authors: Banerjee, Imon; Bhimireddy, Ananth Reddy; Burns, John L.; Celi, Leo Anthony; Chen, Li-Ching; Correa, Ramon; Dullerud, Natalie; Ghassemi, Marzyeh; Huang, Shih-Cheng; Kuo, Po-Chih; Lungren, Matthew P.; Palmer, Lyle; Price, Brandon J.; Purkayastha, Saptarshi; Pyrros, Ayis; Oakden-Rayner, Lauren; Okechukwu, Chima; Seyyed-Kalantari, Laleh; Trivedi, Hari; Wang, Ryan; Zaiman, Zachary; Zhang, Haoran; Gichoya, Judy W.
Affiliation: BioHealth Informatics, School of Informatics and Computing
Background: In medical imaging, prior studies have demonstrated disparate AI performance by race, yet there is no known correlation for race on medical imaging that would be obvious to the human expert interpreting the images.
Methods: Using private and public datasets, we evaluate: A) performance quantification of deep learning models to detect race from medical images, including the ability of these models to generalize to external environments and across multiple imaging modalities; B) assessment of possible confounding anatomic and phenotype population features, such as disease distribution and body habitus, as predictors of race; and C) investigation into the underlying mechanism by which AI models can recognize race.
Findings: Standard deep learning models can be trained to predict race from medical images with high performance across multiple imaging modalities. Our findings hold under external validation conditions, as well as when models are optimized to perform clinically motivated tasks. We demonstrate this detection is not due to trivial proxies or imaging-related surrogate covariates for race, such as underlying disease distribution.
Finally, we show that performance persists over all anatomical regions and frequency spectra of the images, suggesting that mitigation efforts will be challenging and demand further study.
Interpretation: We emphasize that a model's ability to predict self-reported race is itself not the issue of importance. However, our finding that AI can trivially predict self-reported race -- even from corrupted, cropped, and noised medical images -- in a setting where clinical experts cannot, creates an enormous risk for all model deployments in medical imaging: if an AI model secretly used its knowledge of self-reported race to misclassify all Black patients, radiologists would not be able to tell using the same data the model has access to.
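The frequency-spectrum probe mentioned in both race-detection abstracts can be illustrated concretely: keep only the low- or high-frequency content of an image and score the filtered result with the trained model. A minimal sketch using NumPy's FFT follows; frequency_filter is a hypothetical helper and the input image is a random stand-in, not study data.

```python
# Hypothetical sketch of a frequency-spectrum probe: zero out part of
# the 2D Fourier spectrum, reconstruct the image, and (in the study's
# setup) re-score it with the trained race-detection model.
import numpy as np

def frequency_filter(img: np.ndarray, cutoff: float, keep: str = "low") -> np.ndarray:
    """Keep frequencies below (keep='low') or above (keep='high') a cutoff
    expressed as a fraction of the Nyquist radius."""
    f = np.fft.fftshift(np.fft.fft2(img))           # centre the spectrum
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    mask = r <= cutoff if keep == "low" else r > cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

xray = np.random.rand(224, 224)                     # stand-in chest X-ray
low = frequency_filter(xray, 0.1, "low")            # coarse anatomy only
high = frequency_filter(xray, 0.1, "high")          # edges and texture only
print(low.shape, high.shape)
```

That race-detection AUC stays high on both the low-pass and high-pass versions is what makes simple mitigations, such as blurring or degrading images, unlikely to remove the signal.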