Browsing by Subject "artificial intelligence"
Now showing 1 - 10 of 27
Item: AI for infectious disease modelling and therapeutics (World Scientific, 2020-11)
Alterovitz, Gil; Alterovitz, Wei-Lun; Cassell, Gail H.; Zhang, Lixin; Dunker, A. Keith; Biochemistry and Molecular Biology, School of Medicine
AI for infectious disease modelling and therapeutics is an emerging area that leverages new computational approaches and data. Genomics, proteomics, biomedical literature, social media, and other resources are proving to be critical tools for understanding and solving complicated issues ranging from understanding the process of infection, diagnosis, and discovery of precise molecular details to developing possible interventions and safety profiling of potential treatments.

Item: AI recognition of patient race in medical imaging: a modelling study (Elsevier, 2022-06)
Gichoya, Judy Wawira; Banerjee, Imon; Bhimireddy, Ananth Reddy; Burns, John L.; Celi, Leo Anthony; Chen, Li-Ching; Correa, Ramon; Dullerud, Natalie; Ghassemi, Marzyeh; Huang, Shih-Cheng; Kuo, Po-Chih; Lungren, Matthew P.; Palmer, Lyle J.; Price, Brandon J.; Purkayastha, Saptarshi; Pyrros, Ayis T.; Oakden-Rayner, Lauren; Okechukwu, Chima; Seyyed-Kalantari, Laleh; Trivedi, Hari; Wang, Ryan; Zaiman, Zachary; Zhang, Haoran; BioHealth Informatics, School of Informatics and Computing
Background: Previous studies in medical imaging have shown disparate abilities of artificial intelligence (AI) to detect a person's race, yet there is no known correlation for race on medical imaging that would be obvious to human experts when interpreting the images. We aimed to conduct a comprehensive evaluation of the ability of AI to recognise a patient's racial identity from medical images.
Methods: Using private (Emory CXR, Emory Chest CT, Emory Cervical Spine, and Emory Mammogram) and public (MIMIC-CXR, CheXpert, National Lung Cancer Screening Trial, RSNA Pulmonary Embolism CT, and Digital Hand Atlas) datasets, we evaluated, first, performance quantification of deep learning models in detecting race from medical images, including the ability of these models to generalise to external environments and across multiple imaging modalities. Second, we assessed possible confounding by anatomic and phenotypic population features, both by assessing the ability of these hypothesised confounders to detect race in isolation using regression models and by re-evaluating the deep learning models on datasets stratified by these hypothesised confounding variables. Last, by exploring the effect of image corruptions on model performance, we investigated the underlying mechanism by which AI models can recognise race.
Findings: In our study, we show that standard AI deep learning models can be trained to predict race from medical images with high performance across multiple imaging modalities, which was sustained under external validation conditions (x-ray imaging [area under the receiver operating characteristics curve (AUC) range 0.91-0.99], CT chest imaging [0.87-0.96], and mammography [0.81]). We also showed that this detection is not due to proxies or imaging-related surrogate covariates for race (eg, performance of possible confounders: body-mass index [AUC 0.55], disease distribution [0.61], and breast density [0.61]). Finally, we provide evidence that the ability of AI deep learning models to detect race persisted over all anatomical regions and frequency spectrums of the images, suggesting that efforts to control this behaviour when it is undesirable will be challenging and demand further study.
Interpretation: The results from our study emphasise that the ability of AI deep learning models to predict self-reported race is not itself the issue of importance. However, our finding that AI can accurately predict self-reported race, even from corrupted, cropped, and noised medical images, often when clinical experts cannot, creates an enormous risk for all model deployments in medical imaging.
Funding: National Institute of Biomedical Imaging and Bioengineering, MIDRC grant of the National Institutes of Health, US National Science Foundation, National Library of Medicine of the National Institutes of Health, and Taiwan Ministry of Science and Technology.
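The entry above reports race-prediction performance as area under the ROC curve (AUC). As a minimal sketch of how such an evaluation is typically computed, the snippet below uses scikit-learn on synthetic labels and scores; the models, datasets, and preprocessing of the actual study are not reproduced, and all values here are made up for illustration.

```python
# Minimal sketch: computing AUC for a binary "race detection" classifier,
# the metric reported in the study above. All data here is synthetic; the
# paper's actual models and datasets are not reproduced.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical held-out test labels (1 = one self-reported race category,
# 0 = another), framed as a one-vs-rest binary task.
y_true = rng.integers(0, 2, size=500)

# Hypothetical model scores. A real pipeline would pass chest x-rays through
# a trained CNN; here we fake scores mildly correlated with the labels.
y_score = np.clip(0.5 * y_true + rng.normal(0.35, 0.25, size=500), 0.0, 1.0)

auc = roc_auc_score(y_true, y_score)
print(f"AUC: {auc:.2f}")  # the paper reports 0.91-0.99 on external x-ray validation
```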
Item: AIMS Philanthropy Project: Studying AI, Machine Learning & Data Science Technology for Good (Indiana University Lilly Family School of Philanthropy and Indiana University School of Informatics and Computing, IUPUI, Indianapolis, IN, 2021-02-07)
Herzog, Patricia Snell; Naik, Harshal R.; Khan, Haseeb A.
This project investigates philanthropic activities related to Artificial Intelligence, Machine Learning, and Data Science technology (AIMS). Advances in AIMS technology are impacting the field of philanthropy in substantial ways. This report focuses on the methods employed in analyzing and visualizing five data sources: the Open Philanthropy grants database, the Rockefeller Foundation grants database, the Chronicle of Philanthropy article database, the GuideStar Nonprofit Database, and Google AI for Social Good grant awardees. The goal was to develop an accessible website platform that engaged human-centered user experience (UX) design techniques to present information about AIMS philanthropy (https://www.aims-phil.org/). Each dataset was analyzed for a set of general questions that could be answered visually. The visuals aim to answer two primary questions: (1) How much funding was invested in AIMS? and (2) What focus areas, applications, discovery, or other purposes was AIMS funding directed toward? Cumulatively, this project identified 325 unique organizations with a total of $2.6 billion in funding for AIMS philanthropy.

Item: Artificial Intelligence for Global Health: Learning From a Decade of Digital Transformation in Health Care (arXiv, 2020)
Mathur, Varoon; Purkayastha, Saptarshi; Gichoya, Judy Wawira; BioHealth Informatics, School of Informatics and Computing
The health needs of those living in resource-limited settings are a vastly overlooked and understudied area at the intersection of machine learning (ML) and health care. While the use of ML in health care has been popularized over the last few years by advances in deep learning, low- and middle-income countries (LMICs) have already been undergoing a digital transformation of their own in health care over the last decade, leapfrogging milestones through the adoption of mobile health (mHealth). With the introduction of new technologies, it is common to start afresh with a top-down approach and implement these technologies in isolation, leading to a lack of use and a waste of resources. In this paper, we outline the necessary considerations, both from the perspective of current gaps in research and from the lived experiences of health care professionals in resource-limited settings. We also briefly outline several key components of successful implementation and deployment of technologies within health systems in LMICs, including technical and cultural considerations in the development process relevant to building machine learning solutions. We then draw on these experiences to address where key opportunities for impact exist in resource-limited settings, and where AI/ML can provide the most benefit.
Item: Artificial Intelligence Improves Detection at Colonoscopy: Why aren't we all already using it? (ScienceDirect, 2022)
Rex, Douglas K.; Berzin, Tyler M.; Mori, Yuichi; Medicine, School of Medicine

Item: Automated lesion detection of breast cancer in [18F] FDG PET/CT using a novel AI-based workflow (Frontiers, 2022-11-14)
Leal, Jeffrey P.; Rowe, Steven P.; Stearns, Vered; Connolly, Roisin M.; Vaklavas, Christos; Liu, Minetta C.; Storniolo, Anna Maria; Wahl, Richard L.; Pomper, Martin G.; Solnes, Lilja B.; Medicine, School of Medicine
Applications based on artificial intelligence (AI) and deep learning (DL) are rapidly being developed to assist in the detection and characterization of lesions on medical images. In this study, we developed and examined an image-processing workflow that incorporates both traditional image processing and AI technology, and that uses a standards-based approach to disease identification and quantitation, to segment and classify tissue within a whole-body [18F]FDG PET/CT study.
Methods: One hundred thirty baseline PET/CT studies from two multi-institutional preoperative clinical trials in early-stage breast cancer were semi-automatically segmented using techniques based on PERCIST v1.0 thresholds, and the individual segmentations were classified as to tissue type by an experienced nuclear medicine physician. These classifications were then used to train a convolutional neural network (CNN) to automatically accomplish the same tasks.
Results: Our CNN-based workflow demonstrated a sensitivity for detecting disease (either primary lesion or lymphadenopathy) of 0.96 (95% CI [0.90, 1.00], 99% CI [0.87, 1.00]), a specificity of 1.00 (95% CI [1.00, 1.00], 99% CI [1.00, 1.00]), a Dice score of 0.94 (95% CI [0.89, 0.99], 99% CI [0.86, 1.00]), and a Jaccard score of 0.89 (95% CI [0.80, 0.98], 99% CI [0.74, 1.00]).
Conclusion: This pilot work has demonstrated the ability of an AI-based workflow using DL-CNNs to specifically identify breast cancer tissue, as determined by [18F]FDG avidity, in a PET/CT study. The high sensitivity and specificity of the network support the idea that AI can be trained to recognize specific tissue signatures, both normal and diseased, in molecular imaging studies using radiopharmaceuticals. Future work will explore the applicability of these techniques to other disease types and alternative radiotracers, as well as the accuracy of fully automated and quantitative detection and response assessment.
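The PET/CT entry above reports Dice and Jaccard scores for the CNN-generated lesion segmentations. As a minimal sketch of how these overlap metrics are computed from binary masks, the snippet below uses small synthetic 2D arrays standing in for segmentation volumes; it does not reproduce the study's data or network.

```python
# Minimal sketch: Dice and Jaccard overlap between a predicted and a reference
# binary segmentation mask, the metrics reported for the CNN workflow above.
# Masks are synthetic toy examples, not the study's PET/CT segmentations.
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom else 1.0

def jaccard_score_mask(pred: np.ndarray, ref: np.ndarray) -> float:
    """Jaccard = |A ∩ B| / |A ∪ B| for boolean masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return intersection / union if union else 1.0

# Toy 2D "lesion" masks standing in for 3D PET/CT volumes.
ref = np.zeros((64, 64), dtype=bool)
ref[20:40, 20:40] = True          # reference lesion
pred = np.zeros_like(ref)
pred[22:42, 21:41] = True         # slightly shifted prediction

print(f"Dice:    {dice_score(pred, ref):.2f}")
print(f"Jaccard: {jaccard_score_mask(pred, ref):.2f}")
```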
Item: Beyond Clinical Accuracy: Considerations for the Use of Generative AI Models in Gastrointestinal Care (AGA, 2023-08)
Feldman, Keith; Nehme, Fredy; Medicine, School of Medicine

Item: Can we do resect and discard with artificial intelligence-assisted colon polyp "optical biopsy?" (Elsevier, 2019)
Rex, Douglas K.; Medicine, School of Medicine
Resect and discard refers to a paradigm for the management of colorectal adenomas 1-5 mm in size. In this paradigm, the histology of colorectal polyps is predicted endoscopically based on surface features. Lesions that are ≤5 mm in size and predicted to be adenomas are resected endoscopically and discarded rather than submitted to pathology. Adenomas in this size range have an extremely low risk of cancer, and the cost savings of the resect and discard paradigm would be substantial. Artificial intelligence programs can improve the overall prediction of histology based on endoscopic imaging and reduce operator dependence in endoscopic predictions. Although meta-analyses have concluded that the accuracy of endoscopic prediction is sufficiently high to institute the resect and discard paradigm in clinical practice, actual implementation has faced several obstacles. These include a lack of financial incentives for endoscopists, a perceived increase in medical-legal risk compared with the current management paradigm of submitting all polyps to pathology, and local rules for tissue handling.

Item: Clinical Features Distinguishing Diabetic Retinopathy Severity Using Artificial Intelligence (2022-07-29)
Happe, Michael; Gill, Hunter; Salem, Doaa Hassan; Janga, Sarath Chandra; Hajrasouliha, Amir
Background and hypothesis: One in 29 American diabetics suffers from diabetic retinopathy (DR), the weakening of blood vessels in the retina. DR goes undetected in nearly 50% of diabetics, allowing it to steal the vision of many Americans. We hypothesize that increasing the rate and ease of diagnosing DR by introducing artificial intelligence-based methods in primary medical clinics will improve the long-term preservation of ocular health in diabetic patients.
Project methods: This retrospective cohort study was conducted under approval from the Institutional Review Board of Indiana University School of Medicine. Images were deidentified, and no consent was taken owing to the retrospective nature of the study. We categorized 676 patient files by HbA1c and by severity of non-proliferative diabetic retinopathy (NPDR) and proliferative diabetic retinopathy (PDR). Retinal images were annotated to identify common features of DR: microaneurysms, hemorrhages, cotton wool spots, exudates, and neovascularization. The VGG Image Annotator application used for annotations allowed us to save structure coordinates into a separate database for future training of the artificial intelligence system.
Results: 228 (33.7%) of patients were diagnosed with diabetes, and 143 (62.7%) of those were diagnosed with DR. Two-sample t tests found significant differences between the HbA1c values of all diabetics compared with diabetics without retinopathy (p<0.007) and between all severities of DR and diabetics without retinopathy (p<0.002); an illustrative version of this comparison appears in the sketch after this listing. 283 eyes were diagnosed with a form of DR in this study: 37 mild NPDR, 42 moderate NPDR, 56 severe NPDR, and 148 PDR eyes.
Potential impact: With the dataset of coordinates and HbA1c values from this experiment, we aim to train an artificial intelligence system to diagnose DR through retinal imaging. The goal is a system that can be conveniently used in primary medical clinics to increase the detection rate of DR and preserve the ocular health of millions of future Americans.

Item: Comparative Performance of Artificial Intelligence Optical Diagnosis Systems for Leaving in Situ Colorectal Polyps (Elsevier, 2023-03)
Hassan, Cesare; Sharma, Prateek; Mori, Yuichi; Bretthauer, Michael; Rex, Douglas K.; COMBO Study Group; Repici, Alessandro; Medicine, School of Medicine
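The "Clinical Features Distinguishing Diabetic Retinopathy Severity" entry above reports two-sample t tests comparing HbA1c between diabetics with and without retinopathy. The sketch below shows that kind of comparison with SciPy on synthetic HbA1c values; the group means, sizes, and variances are invented for illustration and are not the study's patient data.

```python
# Minimal sketch: a two-sample t test on HbA1c values, the comparison reported
# in the diabetic retinopathy entry above. All values are synthetic, not patient data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical HbA1c (%) for diabetics without retinopathy vs. with any DR.
hba1c_no_dr = rng.normal(loc=7.2, scale=1.1, size=85)
hba1c_dr = rng.normal(loc=8.1, scale=1.4, size=143)

# Welch's t test (does not assume equal variances between groups).
t_stat, p_value = stats.ttest_ind(hba1c_dr, hba1c_no_dr, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```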