Browsing by Author "Sinha, Priyanshu"
Now showing 1 - 8 of 8
Item: Blood Glucose Level Prediction as Time-Series Modeling using Sequence-to-Sequence Neural Networks (CEUR Workshop Proceedings, 2020-08)
Bhimireddy, Ananth; Sinha, Priyanshu; Oluwalade, Bolu; Gichoya, Judy Wawira; Purkayastha, Saptarshi; BioHealth Informatics, School of Informatics and Computing
The management of blood glucose levels is critical in the care of Type 1 diabetes subjects. In extremes, high or low blood glucose levels are fatal. To avoid such adverse events, wearable technologies that continuously monitor blood glucose and administer insulin have been developed and adopted. This technology allows subjects to easily track their blood glucose levels and intervene early, without the need for hospital visits. The data collected from these sensors is an excellent candidate for machine learning algorithms that learn patterns and predict future blood glucose levels. In this study, we developed artificial neural network algorithms based on the OhioT1DM training dataset, which contains data on 12 subjects. The dataset contains features such as subject identifiers, continuous glucose monitoring data obtained at 5-minute intervals, insulin infusion rate, etc. We developed individual models, including LSTM, BiLSTM, Convolutional LSTM, TCN, and sequence-to-sequence models. We also developed transfer learning models based on the most important features of the data, as identified by a gradient boosting algorithm. These models were evaluated on the OhioT1DM test dataset, which contains data on 6 unique subjects. The model with the lowest RMSE values at the 30- and 60-minute horizons was selected as the best-performing model. Our results show that the sequence-to-sequence BiLSTM performed better than the other models.
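The horizon-based selection criterion (lowest RMSE at the 30- and 60-minute horizons) can be sketched as follows; this is a minimal illustration, not the authors' code, and the per-model RMSE values below are hypothetical placeholders:

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error between two equal-length sequences."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Hypothetical RMSE values (mg/dL) per model at each prediction horizon.
results = {
    "LSTM":           {"30min": 21.5, "60min": 36.2},
    "seq2seq-BiLSTM": {"30min": 18.9, "60min": 31.7},
}

# Select the model with the lowest combined RMSE across both horizons.
best = min(results, key=lambda m: results[m]["30min"] + results[m]["60min"])
```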
This work demonstrates the potential of artificial neural network algorithms in the management of Type 1 diabetes.

Item: A DICOM Framework for Machine Learning and Processing Pipelines Against Real-time Radiology Images (SpringerLink, 2021-08-17)
Kathiravelu, Pradeeban; Sharma, Puneet; Sharma, Ashish; Banerjee, Imon; Trivedi, Hari; Purkayastha, Saptarshi; Sinha, Priyanshu; Cadrin‑Chenevert, Alexandre; Safdar, Nabile; Wawira Gichoya, Judy; BioHealth Informatics, School of Informatics and Computing
Real-time execution of machine learning (ML) pipelines on radiology images is difficult due to limited computing resources in clinical environments, whereas running them in research clusters requires efficient data transfer capabilities. We developed Niffler, an open-source Digital Imaging and Communications in Medicine (DICOM) framework that enables ML and processing pipelines in research clusters by efficiently retrieving images from the hospitals' PACS and extracting the metadata from the images. We deployed Niffler at our institution (Emory Healthcare, the largest healthcare network in the state of Georgia) and retrieved data from 715 scanners spanning 12 sites, up to 350 GB/day continuously in real-time as a DICOM data stream over the past 2 years. We also used Niffler to retrieve images in bulk on demand, based on user-provided filters, to facilitate several research projects. This paper presents the architecture and three such use cases of Niffler. First, we executed an IVC filter detection and segmentation pipeline on abdominal radiographs in real-time, which classified 989 test images with an accuracy of 96.0%. Second, we applied the Niffler Metadata Extractor to understand the operational efficiency of individual MRI systems based on calculated metrics. We benchmarked the accuracy of the calculated exam time windows by comparing Niffler against the Clinical Data Warehouse (CDW).
Niffler accurately identified the scanners' examination timeframes and idling times, whereas the CDW falsely depicted several exam overlaps due to human errors. Third, with metadata extracted from the images by Niffler, we identified scanners with misconfigured time and reconfigured five scanners. Our evaluations highlight how Niffler enables real-time ML and processing pipelines in a research cluster.

Item: Energy Efficiency of Quantized Neural Networks in Medical Imaging (2022-04)
Sinha, Priyanshu; Tummala, Sai Sreya; Purkayastha, Saptarshi; Gichoya, Judy W.; BioHealth Informatics, School of Informatics and Computing
The main goal of this paper is to compare the energy efficiency of quantized neural networks performing medical image analysis on different processors and neural network architectures. Deep neural networks have demonstrated outstanding performance in medical image analysis but require high computation and power usage. In our work, we review the power usage and temperature of processors when running ResNet and U-Net architectures to perform image classification and segmentation, respectively. We compare the Edge TPU, Jetson Nano, Apple M1, Nvidia Quadro P6000, and Nvidia A6000, inferring with full-precision FP32 and quantized INT8 models. The results will be useful for designers and implementers of medical imaging AI on hand-held or edge computing devices.

Item: Full Training versus Fine Tuning for Radiology Images Concept Detection Task for the ImageCLEF 2019 Challenge (2019)
Sinha, Priyanshu; Purkayastha, Saptarshi; Gichoya, Judy; BioHealth Informatics, School of Informatics and Computing
Concept detection from medical images remains a challenging task that limits implementation of clinical ML/AI pipelines, because of the scarcity of highly trained experts to annotate images. There is a need for automated processes that can extract concrete textual information from image data. ImageCLEF 2019 provided us a set of images labeled with UMLS concepts.
We participated for the first time in the concept detection task, using transfer learning. Our approach involved an experiment of layerwise fine-tuning (full training) versus fine-tuning based on previously reported recommendations for training classification, detection, and segmentation tasks in medical imaging. We ranked number 9 in this year's challenge, with an F1 result of 0.05 after three entries. We had a poor result from performing layerwise tuning (F1 score of 0.014), which is consistent with previous authors who have described the benefit of full training for transfer learning. However, when the results were reviewed by a radiologist, the terms did not make clinical sense, and we hypothesize that we can achieve better performance by using medically pretrained image models (for example, PathNet) and a hierarchical training approach, which is the basis of our future work on this dataset.

Item: Multireader evaluation of radiologist performance for COVID-19 detection on emergency department chest radiographs (Elsevier, 2022-02)
Gichoya, Judy W.; Sinha, Priyanshu; Davis, Melissa; Dunkle, Jeffrey W.; Hamlin, Scott A.; Herr, Keith D.; Hoff, Carrie N.; Letter, Haley P.; McAdams, Christopher R.; Puthoff, Gregory D.; Smith, Kevin L.; Steenburg, Scott D.; Banerjee, Imon; Trivedi, Hari; Radiology and Imaging Sciences, School of Medicine
BACKGROUND: Chest radiographs (CXR) are frequently used as a screening tool for patients with suspected COVID-19 infection pending reverse transcriptase polymerase chain reaction (RT-PCR) results, despite recommendations against this. We evaluated radiologist performance for COVID-19 diagnosis on CXR at the time of patient presentation in the Emergency Department (ED). MATERIALS AND METHODS: We extracted RT-PCR results, clinical history, and CXRs of all patients from a single institution between March and June 2020. 984 RT-PCR positive and 1043 RT-PCR negative radiographs were reviewed by 10 emergency radiologists from 4 academic centers.
100 cases were read by all radiologists and 1927 cases by 2 radiologists. Each radiologist chose the single best label per case: Normal, COVID-19, Other - Infectious, Other - Noninfectious, Non-diagnostic, or Endotracheal Tube. Cases labeled Endotracheal Tube (246) or Non-diagnostic (54) were excluded. The remaining cases were analyzed for label distribution, clinical history, and inter-reader agreement. RESULTS: 1727 radiographs (732 RT-PCR positive, 995 RT-PCR negative) were included from 1594 patients (51.2% male, 48.8% female, age 59 ± 19 years). For the 89 cases read by all readers, there was poor agreement for RT-PCR positive (Fleiss score 0.36) and negative (Fleiss score 0.46) exams. Agreement between two readers on 1638 cases was 54.2% (373/688) for RT-PCR positive cases and 71.4% (679/950) for negative cases. Agreement was highest for RT-PCR negative cases labeled Normal (50.4%, n = 479). Reader performance did not improve with clinical history or time between CXR and RT-PCR result. CONCLUSION: At the time of presentation to the emergency department, emergency radiologist performance is non-specific for diagnosing COVID-19.

Item: Optimizing Medical Image Classification Models for Edge Devices (Springer, 2021-09)
Abid, Areeba; Sinha, Priyanshu; Harpale, Aishwarya; Gichoya, Judy; Purkayastha, Saptarshi; BioHealth Informatics, School of Informatics and Computing
Machine learning algorithms for medical diagnostics often require resource-intensive environments to run, such as expensive cloud servers or high-end GPUs, making these models impractical for use in the field. We investigate the use of model quantization and GPU acceleration for chest X-ray classification on edge devices. We employ 3 types of quantization (dynamic range, float-16, and full int8), which we tested on models trained on the ChestX-ray14 dataset. We achieved a 2–4x reduction in model size, offset by small decreases in the mean AUC-ROC score of 0.0%–0.9%.
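The 2–4x size reduction is what per-parameter storage predicts: FP32 weights take 4 bytes each, INT8 weights take 1 (float-16 gives the 2x end of the range). A toy back-of-the-envelope calculation, using a hypothetical parameter count rather than any model from the paper:

```python
# Toy estimate of model size before and after quantization.
# Assumes weight storage dominates: FP32 = 4 bytes/param, INT8 = 1 byte/param.
def model_size_mb(num_params, bytes_per_param):
    return num_params * bytes_per_param / 1e6

params = 25_600_000  # hypothetical parameter count, roughly ResNet-50 scale

fp32_mb = model_size_mb(params, 4)  # full precision
int8_mb = model_size_mb(params, 1)  # full int8 quantization
ratio = fp32_mb / int8_mb           # 4x, the upper end of the reported range
```

In practice the realized reduction is smaller than 4x because some tensors (e.g. certain activations or unsupported ops) may stay in higher precision, which is consistent with the 2–4x range reported above.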
On ARM architectures, integer quantization improved inference latency by up to 57%. However, we also observed significant increases in latency on x86 processors. GPU acceleration also improved inference latency, but this was outweighed by kernel launch overhead. We show that optimizing diagnostic models has the potential to expand their utility to day-to-day devices used by patients and healthcare workers; however, these improvements are context- and architecture-dependent and should be tested on the relevant devices before deployment in low-resource environments.

Item: Using ImageBERT to improve performance of multi-class Chest Xray classification (2020-07-02)
Purkayastha, Saptarshi; Bhimireddy, Ananth; Sinha, Priyanshu; Gichoya, Judy W.
Pulmonary edema is a medical condition that is often related to life-threatening heart-related complications. Several recent studies have demonstrated that machine learning models using deep learning (DL) methods are able to identify anomalies on chest X-rays (CXR) as well as trained radiologists. Yet, few or no studies have integrated these models into clinical radiology workflows. The objective of this project is to identify state-of-the-art DL algorithms and integrate their classification results into the radiology workflow, more specifically into a DICOM viewer, so that radiologists can use them as clinical decision support. Our proof-of-concept (POC) is to detect the presence/absence of edema in chest radiographs obtained from the CheXpert dataset. We implemented state-of-the-art deep learning methods for image classification (ResNet50, VGG16, and Inception v4) using the FastAI library and PyTorch on 77,408 CXR, which classified the presence/absence of edema with accuracies of 65%, 70%, and 65%, respectively, on a test dataset of about 202 images.
The CXR were converted to DICOM format using the img2dcm utility of the DICOM ToolKit (DCMTK) and uploaded to the Orthanc PACS, which was connected to the OHIF Viewer. This is the first study to integrate machine learning outcomes into the clinical workflow in order to improve the decision-making process by implementing object detection and instance segmentation algorithms.

Item: Was there COVID-19 back in 2012? Challenge for AI in Diagnosis with Similar Indications (2020-06-23)
Banerjee, Imon; Sinha, Priyanshu; Purkayastha, Saptarshi; Mashhaditafreshi, Nazanin; Tariq, Amara; Jeong, Jiwoong; Trivedi, Hari; Gichoya, Judy W.; BioHealth Informatics, School of Informatics and Computing
Purpose: Since the recent COVID-19 outbreak, there has been an avalanche of research papers applying deep learning based image processing to chest radiographs for detection of the disease. We test the performance of the two top models for CXR COVID-19 diagnosis on external datasets to assess model generalizability. Methods: In this paper, we present our argument regarding the efficiency and applicability of existing deep learning models for COVID-19 diagnosis. We provide results from two popular models, COVID-Net and CoroNet, evaluated on three publicly available datasets and an additional institutional dataset collected from Emory Hospital between January and May 2020, containing patients tested for COVID-19 infection using RT-PCR. Results: There is a large false positive rate (FPR) for COVID-Net on both the CheXpert (55.3%) and MIMIC-CXR (23.4%) datasets. On the Emory dataset, COVID-Net has 61.4% sensitivity, a 0.54 F1-score, and a 0.49 precision value. The FPR of the CoroNet model is significantly lower across all the datasets as compared to COVID-Net: Emory (9.1%), CheXpert (1.3%), ChestX-ray14 (0.02%), MIMIC-CXR (0.06%).
Conclusion: The models reported good to excellent performance on their internal datasets; however, our testing showed that their performance dramatically worsened on external data. This likely stems from several causes, including model overfitting due to a lack of appropriate control patients and ground-truth labels. The fourth (institutional) dataset was labeled using RT-PCR, which can be positive without radiographic findings and vice versa. Therefore, a fusion model of both clinical and radiographic data may have better performance and generalization.
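The false positive rates quoted above follow the standard definition, FPR = FP / (FP + TN). A minimal sketch with hypothetical counts (not the study's data):

```python
# Generic false-positive-rate calculation from binary confusion counts.
def false_positive_rate(fp, tn):
    """FPR = FP / (FP + TN): fraction of disease-negative cases flagged positive."""
    return fp / (fp + tn)

# e.g. a model that flags 553 of 1000 COVID-negative radiographs as positive
# would have an FPR of 0.553, i.e. 55.3%.
fpr = false_positive_rate(fp=553, tn=447)
```

Because external test sets such as CheXpert and MIMIC-CXR here contain no true COVID-19 positives from the study period, the FPR is the key metric for exposing the over-prediction the authors describe.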