Browsing by Subject "Deep learning"
Now showing 1 - 10 of 45
Item A deep learning framework for automated classification of histopathological kidney whole-slide images (Elsevier, 2022-04-18)
Abdeltawab, Hisham A.; Khalifa, Fahmi A.; Ghazal, Mohammed A.; Cheng, Liang; El-Baz, Ayman S.; Gondim, Dibson D.; Pathology and Laboratory Medicine, School of Medicine
Background: Renal cell carcinoma is the most common type of malignant kidney tumor and is responsible for 14,830 deaths per year in the United States. Among the four most common subtypes of renal cell carcinoma, clear cell renal cell carcinoma has the worst prognosis, whereas clear cell papillary renal cell carcinoma appears to have no malignant potential. Distinguishing these two subtypes can be difficult due to morphologic overlap on examination of histopathological preparations stained with hematoxylin and eosin. Ancillary techniques, such as immunohistochemistry, can be helpful, but they are not universally available. We propose and evaluate a new deep learning framework for tumor classification that distinguishes clear cell renal cell carcinoma from clear cell papillary renal cell carcinoma.
Methods: Our deep learning framework is composed of three convolutional neural networks. We divided whole-slide kidney images into patches of three different sizes, with each network processing a specific patch size. Our framework provides patchwise and pixelwise classification. The histopathological kidney data comprise 64 image slides belonging to 4 categories: fat, parenchyma, clear cell renal cell carcinoma, and clear cell papillary renal cell carcinoma. The final output of our framework is an image map in which each pixel is assigned to one class. To maintain consistency, we processed the map with Gauss-Markov random field smoothing.
Results: Our framework succeeded in classifying the four classes and showed superior performance compared to well-established state-of-the-art methods (pixel accuracy: 0.89 ResNet18, 0.92 proposed).
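The pixel-accuracy figures reported above are simply the fraction of pixels whose predicted class matches the reference map. A minimal, stdlib-only sketch (illustrative only, not the authors' evaluation code; the tiny label maps below are hypothetical, with classes 0-3 standing in for fat, parenchyma, clear cell RCC, and clear cell papillary RCC):

```python
def pixel_accuracy(pred, truth):
    """Fraction of pixels whose predicted class matches the reference map."""
    total = correct = 0
    for pred_row, truth_row in zip(pred, truth):
        for p, t in zip(pred_row, truth_row):
            total += 1
            correct += (p == t)
    return correct / total

# Hypothetical 2x4 label maps with four classes (0=fat, 1=parenchyma,
# 2=clear cell RCC, 3=clear cell papillary RCC).
pred  = [[0, 1, 2, 3], [0, 1, 2, 2]]
truth = [[0, 1, 2, 3], [0, 1, 3, 2]]
print(pixel_accuracy(pred, truth))  # 7 of 8 pixels agree -> 0.875
```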
Conclusions: Deep learning techniques have significant potential for cancer diagnosis.

Item A Patch-Wise Deep Learning Approach for Myocardial Blood Flow Quantification with Robustness to Noise and Nonrigid Motion (IEEE, 2021)
Youssef, Khalid; Heydari, Bobby; Rivero, Luis Zamudio; Beaulieu, Taylor; Cheema, Karandeep; Dharmakumar, Rohan; Sharif, Behzad; Medicine, School of Medicine
Quantitative analysis of dynamic contrast-enhanced cardiovascular MRI (cMRI) datasets enables the assessment of myocardial blood flow (MBF) for objective evaluation of ischemic heart disease in patients with suspected coronary artery disease. State-of-the-art MBF quantification techniques use constrained deconvolution and are highly sensitive to noise and motion-induced errors, which can lead to unreliable outcomes in the setting of high-resolution MBF mapping. To overcome these limitations, recent iterative approaches incorporate spatial-smoothness constraints to tackle pixel-wise MBF mapping. However, such iterative methods require a computational time of up to 30 minutes per acquired myocardial slice, which is a major practical limitation. Furthermore, they cannot enforce robustness to residual nonrigid motion, which can occur in clinical stress/rest studies of patients with arrhythmia. We present a non-iterative patch-wise deep learning approach for pixel-wise MBF quantification wherein local spatio-temporal features are learned from a large dataset of myocardial patches acquired in clinical stress/rest cMRI studies. Our approach is scanner-independent, computationally efficient, robust to noise, and has the unique feature of robustness to motion-induced errors.
Numerical and experimental results obtained using real patient data demonstrate the effectiveness of our approach. Clinical relevance: The proposed patch-wise deep learning approach significantly improves the reliability of high-resolution myocardial blood flow quantification in cMRI by improving its robustness to noise and nonrigid myocardial motion, and it is up to 300-fold faster than state-of-the-art iterative approaches.

Item A review of deep learning and radiomics approaches for pancreatic cancer diagnosis from medical imaging (Wolters Kluwer, 2023)
Yao, Lanhong; Zhang, Zheyuan; Keles, Elif; Yazici, Cemal; Tirkes, Temel; Bagco, Ulas; Radiology and Imaging Sciences, School of Medicine
Purpose of review: Early and accurate diagnosis of pancreatic cancer is crucial for improving patient outcomes, and artificial intelligence (AI) algorithms have the potential to play a vital role in computer-aided diagnosis of pancreatic cancer. In this review, we aim to provide the latest and most relevant advances in AI, specifically deep learning (DL) and radiomics approaches, for pancreatic cancer diagnosis using cross-sectional imaging examinations such as computed tomography (CT) and magnetic resonance imaging (MRI).
Recent findings: This review highlights recent developments in DL techniques applied to medical imaging, including convolutional neural networks (CNNs), transformer-based models, and novel deep learning architectures that focus on multitype pancreatic lesions, multiorgan and multitumor segmentation, and the incorporation of auxiliary information. We also discuss advancements in radiomics, such as improved imaging feature extraction, optimized machine learning classifiers, and integration with clinical data. Furthermore, we explore implementing AI-based clinical decision support systems for pancreatic cancer diagnosis using medical imaging in practical settings.
Summary: Deep learning and radiomics with medical imaging have demonstrated strong potential to improve the diagnostic accuracy of pancreatic cancer, facilitate personalized treatment planning, and identify prognostic and predictive biomarkers. However, challenges remain in translating research findings into clinical practice. More studies are required that focus on refining these methods, addressing significant limitations, and developing integrative approaches to data analysis in order to further advance the field of pancreatic cancer diagnosis.

Item Advanced natural language processing and temporal mining for clinical discovery (2015-08-17)
Mehrabi, Saeed; Jones, Josette F.; Palakal, Mathew J.; Chien, Stanley Yung-Ping; Liu, Xiaowen; Schmidt, C. Max
There has been a vast and growing amount of healthcare data, especially with the rapid adoption of electronic health records (EHRs) as a result of the HITECH Act of 2009. It is estimated that around 80% of clinical information resides in the unstructured narrative of an EHR. Recently, natural language processing (NLP) techniques have offered opportunities to extract information from unstructured clinical texts needed for various clinical applications. A popular method for enabling secondary uses of EHRs is information or concept extraction, a subtask of NLP that seeks to locate and classify elements within text based on context. Extraction of clinical concepts without considering context has many complications, including inaccurate diagnosis of patients and contamination of study cohorts. Identifying negation status, and whether a clinical concept belongs to the patient or to family members, are two of the challenges faced in context detection. A negation algorithm called Dependency Parser Negation (DEEPEN) was developed in this research study by taking into account the dependency relationship between negation words and concepts within a sentence, using the Stanford Dependency Parser.
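DEEPEN itself relies on dependency relations from the Stanford parser; as a deliberately simplified illustration of the underlying problem, here is a window-based negation check of the kind that dependency-based approaches improve upon (a concept near a cue word is not always in its scope). The cue list and example sentence are hypothetical, not from the study:

```python
# Hypothetical cue list; real systems use curated lexicons and parse trees.
NEGATION_CUES = {"no", "not", "denies", "without", "negative"}

def is_negated(tokens, concept_index, window=4):
    """Rough scope check: a concept counts as negated if a cue word appears
    within a few tokens before it. Dependency-based methods like DEEPEN
    replace this surface window with grammatical relations."""
    start = max(0, concept_index - window)
    return any(tok.lower() in NEGATION_CUES for tok in tokens[start:concept_index])

sent = "Patient denies chest pain but reports shortness of breath".split()
print(is_negated(sent, sent.index("pain")))    # True: "denies" precedes "pain"
print(is_negated(sent, sent.index("breath")))  # False: no cue in the window
```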
The study results demonstrate that DEEPEN can reduce the number of incorrect negation assignments for patients with positive findings, and therefore improve the identification of patients with the target clinical findings in EHRs. Additionally, an NLP system consisting of section segmentation and relation discovery was developed to identify patients' family history. To assess the generalizability of the negation and family-history algorithms, data from a different clinical institution were used in both algorithm evaluations.

Item Adversarial Attacks on Deep Temporal Point Process (IEEE, 2022)
Khorshidi, Samira; Wang, Bao; Mohler, George; Computer and Information Science, School of Science
Temporal point processes have many applications, from crime forecasting to modeling earthquake aftershock sequences. Due to the flexibility and expressiveness of deep learning, neural network-based approaches have recently shown promise for modeling point process intensities. However, there is a lack of research on the robustness of such models with regard to adversarial attacks and natural shocks to systems. Specifically, while neural point processes may outperform simpler parametric models on in-sample tests, how these models perform when encountering adversarial examples or sharp non-stationary trends remains unknown. This work proposes several white-box and black-box adversarial attacks against temporal point processes modeled by deep neural networks. Extensive experiments confirm that the predictive performance and parametric modeling of neural point processes are vulnerable to adversarial attacks.
Additionally, we evaluate the vulnerability and performance of these models in the presence of non-stationary abrupt changes, using a crime dataset from the COVID-19 pandemic as an example.

Item AI in Medical Imaging Informatics: Current Challenges and Future Directions (IEEE, 2020-07)
Panayides, Andreas S.; Amini, Amir; Filipovic, Nenad D.; Sharma, Ashish; Tsaftaris, Sotirios A.; Young, Alistair; Foran, David; Do, Nhan; Golemati, Spyretta; Kurc, Tahsin; Huang, Kun; Nikita, Konstantina S.; Veasey, Ben P.; Zervakis, Michalis; Saltz, Joel H.; Pattichis, Constantinos S.; Biostatistics & Health Data Science, School of Medicine
This paper reviews state-of-the-art research solutions across the spectrum of medical imaging informatics, discusses clinical translation, and provides future directions for advancing clinical practice. More specifically, it summarizes advances in medical imaging acquisition technologies for different modalities, highlighting the necessity for efficient medical data management strategies in the context of AI in big healthcare data analytics. It then provides a synopsis of contemporary and emerging algorithmic methods for disease classification and organ/tissue segmentation, focusing on AI and deep learning architectures that have already become the de facto approach. The clinical benefits of in-silico modelling advances linked with evolving 3D reconstruction and visualization applications are further documented. In conclusion, integrative analytics approaches driven by the associated research branches highlighted in this study promise to revolutionize imaging informatics as known today across the healthcare continuum, for both radiology and digital pathology applications.
The latter is projected to enable informed, more accurate diagnosis, timely prognosis, and effective treatment planning, underpinning precision medicine.

Item Annotating and Detecting Topics in Social Media Forum and Modelling the Annotation to Derive Directions-A Case Study (Research Square, 2021)
B., Athira; Jones, Josette; Idicula, Sumam Mary; Kulanthaivel, Anand; Zhang, Enming; BioHealth Informatics, School of Informatics and Computing
The widespread influence of social media impacts every aspect of life, including the healthcare sector. Although medics and health professionals are the final decision makers, the advice and recommendations obtained from fellow patients are significant. In this context, the present paper explores the topics of discussion posted by breast cancer patients and survivors on online forums. The study examines an online forum, Breastcancer.org, maps the discussion entries to several topics, and proposes a machine learning model based on a classification algorithm to characterize the topics. To explore the topics of breast cancer patients and survivors, approximately 1000 posts were selected and manually labeled with annotations, while millions of unlabeled posts remain available. A semi-supervised learning technique was used to build labels for the unlabeled data, and the resulting large dataset was classified using a deep learning algorithm. A BiLSTM with the BERT word-embedding technique provided a better F1-score of 79.5%. This method is able to classify the following topics: medication reviews, clinician knowledge, various treatment options, seeking and providing support, diagnostic procedures, financial issues, and implications for everyday life. What matters most to the patients is coping with everyday living, as well as seeking and providing emotional and informational support.
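The semi-supervised step described above, bootstrapping labels for millions of unlabeled posts from a small annotated set, can be illustrated with a toy self-training loop. This sketch uses 1-D points and nearest-centroid assignment purely for illustration; the paper works with text features and a BiLSTM/BERT classifier, and the topic labels below are hypothetical:

```python
def self_train(labeled, unlabeled, rounds=3):
    """Toy self-training: assign each unlabeled 1-D point to the nearest
    class centroid, then recompute centroids and repeat. A stand-in for
    semi-supervised labeling of forum posts from a small annotated seed."""
    data = dict(labeled)  # point -> label
    for _ in range(rounds):
        # Recompute one centroid per label from all currently labeled points.
        centroids = {}
        for lbl in set(data.values()):
            pts = [p for p, l in data.items() if l == lbl]
            centroids[lbl] = sum(pts) / len(pts)
        # Re-assign every unlabeled point to its nearest centroid.
        for p in unlabeled:
            data[p] = min(centroids, key=lambda l: abs(p - centroids[l]))
    return data

labeled = [(0.0, "support"), (1.0, "support"), (9.0, "medication"), (10.0, "medication")]
result = self_train(labeled, [2.0, 8.5])
print(result[2.0], result[8.5])  # support medication
```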
The approach and findings show the potential of studying social media to provide insight into patients' experiences with critical health problems such as cancer.

Item Artificial intelligence in gastrointestinal endoscopy: a comprehensive review (Hellenic Society of Gastroenterology, 2024)
Ali, Hassam; Muzammil, Muhammad Ali; Dahiya, Dushyant Singh; Ali, Farishta; Yasin, Shafay; Hanif, Waqar; Gangwani, Manesh Kumar; Aziz, Muhammad; Khalaf, Muhammad; Basuli, Debargha; Al-Haddad, Mohammad; Medicine, School of Medicine
Integrating artificial intelligence (AI) into gastrointestinal (GI) endoscopy heralds a significant leap forward in managing GI disorders. AI-enabled applications, such as computer-aided detection and computer-aided diagnosis, have significantly advanced GI endoscopy, improving early detection, diagnosis, and personalized treatment planning. AI algorithms have shown promise in the analysis of endoscopic data, which is critical in conditions with traditionally low diagnostic sensitivity, such as indeterminate biliary strictures and pancreatic cancer. Convolutional neural networks can markedly improve the diagnostic process when integrated with cholangioscopy or endoscopic ultrasound, especially in the detection of malignant biliary strictures and cholangiocarcinoma. AI's capacity to analyze complex image data and offer real-time feedback can streamline endoscopic procedures, reduce the need for invasive biopsies, and decrease associated adverse events. However, the clinical implementation of AI faces challenges, including data-quality issues and the risk of overfitting, underscoring the need for further research and validation. As the technology matures, AI is poised to become an indispensable tool in the gastroenterologist's arsenal, necessitating the integration of robust, validated AI applications into routine clinical practice. Despite remarkable advances, challenges such as operator-dependent accuracy and the need for intricate examinations persist.
This review delves into the transformative role of AI in enhancing endoscopic diagnostic accuracy, particularly highlighting its utility in the early detection and personalized treatment of GI diseases.

Item Assessment of Deep Learning Methods for Differentiating Autoimmune Disorders in Ultrasound Images (Medical University Publishing House Craiova, 2021)
Vasile, Corina Maria; Udriştoiu, Anca Loredana; Ghenea, Alice Elena; Padureanu, Vlad; Udriştoiu, Ştefan; Gruionu, Lucian Gheorghe; Gruionu, Gabriel; Iacob, Andreea Valentina; Popescu, Mihaela; Medicine, School of Medicine
At present, deep learning has become an important tool in medical image analysis, with good performance in diagnosis, pattern detection, and segmentation. Ultrasound imaging offers an easy and rapid way to detect and diagnose thyroid disorders. With the help of a computer-aided diagnosis (CAD) system based on deep learning, real-time, non-invasive diagnosis from thyroid ultrasound images becomes possible. This paper proposes a study based on deep learning with transfer learning for differentiating thyroid ultrasound images, using image pixels and diagnosis labels as inputs. We trained, assessed, and compared two pre-trained models (VGG-19 and Inception v3) on a dataset consisting of two types of thyroid ultrasound images: autoimmune and normal. The training dataset consisted of 615 thyroid ultrasound images, of which 415 were diagnosed as autoimmune and 200 as normal. The models were assessed using a dataset of 120 images, of which 80 were diagnosed as autoimmune and 40 as normal.
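For a binary autoimmune-vs-normal evaluation like this one, overall sensitivity and specificity follow directly from the confusion counts. A stdlib-only sketch with made-up predictions (not the study's data or code):

```python
def binary_metrics(y_true, y_pred, positive="autoimmune"):
    """Sensitivity (recall on positives) and specificity (recall on negatives)."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical labels: 8 autoimmune and 4 normal cases, with one miss each way.
y_true = ["autoimmune"] * 8 + ["normal"] * 4
y_pred = ["autoimmune"] * 7 + ["normal"] + ["normal"] * 3 + ["autoimmune"]
sens, spec = binary_metrics(y_true, y_pred)
print(sens, spec)  # 0.875 0.75
```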
Both deep learning models obtained very good results: the pre-trained VGG-19 model achieved an overall test accuracy of 98.60%, with an overall specificity of 98.94% and overall sensitivity of 97.97%, while the Inception v3 model achieved an overall test accuracy of 96.4%, with an overall specificity of 95.58% and overall sensitivity of 95.58%.

Item BrcaSeg: A Deep Learning Approach for Tissue Quantification and Genomic Correlations of Histopathological Images (Elsevier, 2021)
Lu, Zixiao; Zhan, Xiaohui; Wu, Yi; Cheng, Jun; Shao, Wei; Ni, Dong; Han, Zhi; Zhang, Jie; Feng, Qianjin; Huang, Kun; Medicine, School of Medicine
Epithelial and stromal tissues are components of the tumor microenvironment and play a major role in tumor initiation and progression. Distinguishing stroma from epithelial tissue is critically important for spatial characterization of the tumor microenvironment. Here, we propose BrcaSeg, an image analysis pipeline based on a convolutional neural network (CNN) model to classify epithelial and stromal regions in whole-slide hematoxylin and eosin (H&E) stained histopathological images. The CNN model is trained using well-annotated breast cancer tissue microarrays and validated with images from The Cancer Genome Atlas (TCGA) Program. BrcaSeg achieves a classification accuracy of 91.02%, which outperforms other state-of-the-art methods. Using this model, we generate pixel-level epithelial/stromal tissue maps for 1000 TCGA breast cancer slide images that are paired with gene expression data. We subsequently estimate the epithelial and stromal ratios and perform correlation analysis to model the relationship between gene expression and tissue ratios.
Gene Ontology (GO) enrichment analyses of genes that are highly correlated with tissue ratios suggest that the same tissue is associated with similar biological processes in different breast cancer subtypes, whereas each subtype also has its own idiosyncratic biological processes governing the development of these tissues. Taken together, our approach can lead to new insights into the relationships between image-based phenotypes and their underlying genomic events and biological processes for all types of solid tumors. BrcaSeg can be accessed at https://github.com/Serian1992/ImgBio.
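The correlation analysis between per-slide tissue ratios and gene expression can be illustrated with a plain Pearson correlation. A minimal sketch with hypothetical values (not data from the paper, and not the authors' pipeline):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ratios = [0.2, 0.4, 0.6, 0.8]  # hypothetical epithelial fraction per slide
expr   = [1.1, 2.0, 2.9, 4.2]  # hypothetical expression values for one gene
print(round(pearson_r(ratios, expr), 3))  # ~0.995: strongly correlated
```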