Browsing by Author "Tariq, Amara"
Now showing 1 - 3 of 3
Item: Current Clinical Applications of Artificial Intelligence in Radiology and Their Best Supporting Evidence (Elsevier, 2020-11)
Authors: Tariq, Amara; Purkayastha, Saptarshi; Padmanaban, Geetha Priya; Krupinski, Elizabeth; Trivedi, Hari; Banerjee, Imon; Gichoya, Judy W.
Department: BioHealth Informatics, School of Informatics and Computing
Purpose: Despite tremendous gains from deep learning and the promise of artificial intelligence (AI) in medicine to improve diagnosis and save costs, a large translational gap remains in implementing and using AI products in real-world clinical settings. Adoption of standards such as Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis, Consolidated Standards of Reporting Trials, and the Checklist for Artificial Intelligence in Medical Imaging is increasing to improve the peer-review process and the reporting of AI tools. However, no such standards exist for product-level review.
Methods: A review of clinical trials showed a paucity of evidence for radiology AI products; thus, the authors developed a 10-question assessment tool for reviewing AI products, with an emphasis on their validation and result dissemination. The assessment tool was applied to commercial and open-source algorithms used for diagnosis to extract evidence on the clinical utility of the tools.
Results: There is limited technical information on methodologies for FDA-approved algorithms compared with open-source products, likely because of intellectual property concerns. Furthermore, FDA-approved products use much smaller data sets than open-source AI tools, because the terms of use of public data sets are limited to academic and noncommercial entities, which precludes their use in commercial products.
Conclusions: Overall, this study reveals a broad spectrum of maturity and clinical use of AI products, but a large gap exists in exploring the actual performance of AI tools in clinical practice.

Item: Patient-specific COVID-19 resource utilization prediction using fusion AI model (Springer Nature, 2021-06-03)
Authors: Tariq, Amara; Celi, Leo Anthony; Newsome, Janice M.; Purkayastha, Saptarshi; Bhatia, Neal Kumar; Trivedi, Hari; Wawira Gichoya, Judy; Banerjee, Imon
Department: BioHealth Informatics, School of Informatics and Computing
Abstract: The strain on healthcare resources brought on by the recent COVID-19 pandemic has highlighted the need for efficient resource planning and allocation through the prediction of future consumption. Machine learning can predict resource utilization, such as the need for hospitalization, from past medical data stored in electronic medical records (EMR). We conducted this study on 3194 patients (46% male, mean age 56.7 (±16.8), 56% African American, 7% Hispanic) flagged as COVID-19-positive cases at 12 centers within the Emory Healthcare network from February 2020 to September 2020, to assess whether a COVID-19-positive patient's need for hospitalization can be predicted at the time of the RT-PCR test using the EMR data collected prior to the test. Five main modalities of EMR, i.e., demographics, medication, past medical procedures, comorbidities, and laboratory results, were used as features for predictive modeling, both individually and fused together using late, middle, and early fusion. Models were evaluated in terms of precision, recall, and F1-score (within 95% confidence intervals). The early fusion model is the most effective predictor, with an 84% overall F1-score [CI 82.1–86.1]. The predictive performance of the model drops by 6% when using only recent clinical data and omitting long-term medical history. Feature importance analysis indicates that history of cardiovascular disease, emergency room visits in the year prior to testing, and demographic factors are predictive of the disease trajectory. We conclude that fusion modeling using medical history and current treatment data can forecast the need for hospitalization for patients infected with COVID-19 at the time of the RT-PCR test.
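The abstract above describes combining five EMR modalities both individually and through late, middle, and early fusion, and evaluating the resulting models with precision, recall, and F1-score. The sketch below is a rough illustration only: it contrasts early fusion (concatenating all modality features into a single classifier) with late fusion (averaging per-modality predicted probabilities) on synthetic data. The feature dimensions, the synthetic labels, and the choice of logistic regression as the base learner are assumptions made for this sketch and are not the authors' implementation.

```python
# Minimal sketch of early vs. late fusion over EMR modalities (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support

rng = np.random.default_rng(0)
n = 1000  # hypothetical cohort size (the study used 3194 patients)

# Five EMR modalities as separate feature matrices; dimensions are made up.
modalities = {
    "demographics": rng.normal(size=(n, 5)),
    "medication": rng.normal(size=(n, 40)),
    "procedures": rng.normal(size=(n, 30)),
    "comorbidities": rng.normal(size=(n, 20)),
    "labs": rng.normal(size=(n, 25)),
}
y = rng.integers(0, 2, size=n)  # 1 = hospitalized, 0 = not (synthetic labels)

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.2, random_state=0)

# Early fusion: concatenate all modality features, train one classifier.
X = np.hstack(list(modalities.values()))
early = LogisticRegression(max_iter=1000).fit(X[idx_train], y[idx_train])
early_pred = early.predict(X[idx_test])

# Late fusion: train one classifier per modality, average predicted probabilities.
probs = []
for name, Xm in modalities.items():
    clf = LogisticRegression(max_iter=1000).fit(Xm[idx_train], y[idx_train])
    probs.append(clf.predict_proba(Xm[idx_test])[:, 1])
late_pred = (np.mean(probs, axis=0) >= 0.5).astype(int)

for label, pred in [("early fusion", early_pred), ("late fusion", late_pred)]:
    p, r, f1, _ = precision_recall_fscore_support(y[idx_test], pred, average="binary")
    print(f"{label}: precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```

Middle (intermediate) fusion, not shown here, would instead combine learned per-modality representations before a final prediction layer.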
Item: Was there COVID-19 back in 2012? Challenge for AI in Diagnosis with Similar Indications (2020-06-23)
Authors: Banerjee, Imon; Sinha, Priyanshu; Purkayastha, Saptarshi; Mashhaditafreshi, Nazanin; Tariq, Amara; Jeong, Jiwoong; Trivedi, Hari; Gichoya, Judy W.
Department: BioHealth Informatics, School of Informatics and Computing
Purpose: Since the recent COVID-19 outbreak, there has been an avalanche of research papers applying deep learning-based image processing to chest radiographs for detection of the disease. We tested the performance of the two top models for CXR-based COVID-19 diagnosis on external datasets to assess model generalizability.
Methods: In this paper, we present our argument regarding the efficiency and applicability of existing deep learning models for COVID-19 diagnosis. We provide results from two popular models, COVID-Net and CoroNet, evaluated on three publicly available datasets and an additional institutional dataset collected from Emory Hospital between January and May 2020, containing patients tested for COVID-19 infection using RT-PCR.
Results: COVID-Net has a large false positive rate (FPR) on both the CheXpert (55.3%) and MIMIC-CXR (23.4%) datasets. On the Emory dataset, COVID-Net has 61.4% sensitivity, a 0.54 F1-score, and 0.49 precision. The FPR of the CoroNet model is significantly lower across all datasets compared with COVID-Net: Emory (9.1%), CheXpert (1.3%), ChestX-ray14 (0.02%), MIMIC-CXR (0.06%).
Conclusion: The models reported good to excellent performance on their internal datasets; however, in our testing their performance worsened dramatically on external data. This is likely due to several causes, including model overfitting from a lack of appropriate control patients and ground truth labels. The fourth, institutional dataset was labeled using RT-PCR, which can be positive without radiographic findings and vice versa. Therefore, a fusion model of both clinical and radiographic data may have better performance and generalization.
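The third abstract compares COVID-Net and CoroNet on external datasets using false positive rate, sensitivity, precision, and F1-score. The sketch below shows how those metrics follow from a binary confusion matrix; the label and prediction arrays are synthetic placeholders and do not reproduce any of the reported results.

```python
# Minimal sketch of the external-validation metrics used in the third abstract
# (FPR, sensitivity, precision, F1). Inputs are synthetic, for illustration only.
import numpy as np
from sklearn.metrics import confusion_matrix

def external_metrics(y_true, y_pred):
    """Return FPR, sensitivity, precision, and F1 from binary labels/predictions."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    fpr = fp / (fp + tn) if (fp + tn) else 0.0          # false positive rate
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # recall on positives
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if (precision + sensitivity) else 0.0)
    return {"FPR": fpr, "sensitivity": sensitivity, "precision": precision, "F1": f1}

# Example with made-up labels standing in for an "external" test set.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=200)
y_pred = rng.integers(0, 2, size=200)
print(external_metrics(y_true, y_pred))
```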