IU Indianapolis ScholarWorks

Browsing by Subject "Interpretability"

Now showing 1 - 3 of 3
PINet: Privileged Information Improves the Interpretability and Generalization of Structural MRI in Alzheimer's Disease
    (Association for Computing Machinery, 2023) Tang, Zijia; Zhang, Tonglin; Song, Qianqian; Su, Jing; Yang, Baijian; Biostatistics and Health Data Science, Richard M. Fairbanks School of Public Health
The irreversible, progressive atrophy caused by Alzheimer's Disease results in a continuous decline in thinking and behavioral skills. To date, CNN classifiers have been widely applied to assist the early diagnosis of AD and its associated abnormal structures. However, most existing black-box CNN classifiers rely heavily on limited MRI scans and use little domain knowledge from previous clinical findings. In this study, we proposed a framework, named PINet, that treats prior domain knowledge as Privileged Information (PI) and opens the black box of the prediction process. The input domain knowledge guides the neural network to learn representative features and introduces interpretability for further analysis. PINet uses a Transformer-like fusion module, Privileged Information Fusion (PIF), to iteratively calculate the correlation between image features and PI features and project them into a latent space for classification. A Pyramid Feature Visualization (PFV) module serves as verification, highlighting the significant features on the input images. PINet is suitable for neuro-imaging tasks, and we demonstrated its application to Alzheimer's Disease using structural MRI scans from the ADNI dataset. In the experiments, we employed abnormal brain structures such as the hippocampus as the PI, trained the model on data from 1.5T scanners, and tested on data from 3T scanners. The F1-scores showed that PINet was more robust in transferring to a new dataset, with approximately a 2% drop (from 0.9471 to 0.9231), while the baseline CNN methods had a 29% drop (from 0.8679 to 0.6154). The performance of PINet depends on the selection of domain knowledge as the PI: our best model was trained under the guidance of 12 selected ROIs, mostly structures of the temporal and occipital lobes.
In summary, PINet uses domain knowledge as PI to train the CNN model, and the selected PI introduces both interpretability and generalization to black-box CNN classifiers.
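The fusion step the abstract describes — iteratively correlating image features with PI features and mixing them back — can be sketched as plain cross-attention. This is a minimal, hypothetical reading, not the published PIF architecture: the function name `pif_fuse`, the residual update, and the iteration count are all assumptions.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pif_fuse(img_feats, pi_feats, n_iters=2):
    """Cross-attention from image features (queries) to privileged-
    information features (keys/values), repeated n_iters times."""
    d = len(img_feats[0])
    fused = [list(q) for q in img_feats]
    for _ in range(n_iters):
        updated = []
        for q in fused:
            # Correlation of this image feature with every PI (ROI) feature.
            weights = softmax([dot(q, k) / math.sqrt(d) for k in pi_feats])
            # Attention-weighted mixture of PI features, added as a residual.
            mix = [sum(w * k[j] for w, k in zip(weights, pi_feats))
                   for j in range(d)]
            updated.append([qj + mj for qj, mj in zip(q, mix)])
        fused = updated
    return fused

img = [[0.1, 0.3, -0.2, 0.5], [0.4, -0.1, 0.2, 0.0]]   # 2 image features
rois = [[0.2, 0.1, 0.0, 0.3], [-0.3, 0.4, 0.1, -0.2]]  # 2 ROI (PI) features
fused = pif_fuse(img, rois)
print(len(fused), len(fused[0]))  # 2 4
```

The output keeps the image-feature shape, so a downstream classifier head can consume it unchanged; only the guidance from the ROI embeddings differs from a plain CNN.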
    Stress testing deep learning models for prostate cancer detection on biopsies and surgical specimens
    (Wiley, 2025) Flannery, Brennan T.; Sandler, Howard M.; Lal, Priti; Feldman, Michael D.; Santa-Rosario, Juan C.; Pathak, Tilak; Mirtti, Tuomas; Farre, Xavier; Correa, Rohann; Chafe, Susan; Shah, Amit; Efstathiou, Jason A.; Hoffman, Karen; Hallman, Mark A.; Straza, Michael; Jordan, Richard; Pugh, Stephanie L.; Feng, Felix; Madabhushi, Anant; Pathology and Laboratory Medicine, School of Medicine
The presence, location, and extent of prostate cancer are assessed by pathologists using H&E-stained tissue slides. Machine learning approaches can accomplish these tasks for both biopsies and radical prostatectomies. Deep learning approaches using convolutional neural networks (CNNs) have been shown to identify cancer in pathology slides, with some securing regulatory approval for clinical use. However, differences in sample processing can subtly alter the morphology between sample types, making it unclear whether deep learning algorithms will work consistently on both types of slide images. Our goal was to investigate whether morphological differences between sample types affected the performance of biopsy-trained cancer-detection CNN models when applied to radical prostatectomies, and vice versa, using multiple cohorts (N = 1,000). Radical prostatectomies (N = 100) and biopsies (N = 50) were acquired from The University of Pennsylvania to train (80%) and validate (20%) a DenseNet CNN for biopsies (MB), radical prostatectomies (MR), and a combined dataset (MB+R). On a tile level, MB and MR achieved F1 scores greater than 0.88 when applied to their own sample type but less than 0.65 when applied across sample types. On a whole-slide level, each model achieved significantly better performance on its own sample type than the alternative model (p < 0.05) for all metrics. This was confirmed by external validation on digitized biopsy slide images from a clinical trial [NRG Radiation Therapy Oncology Group (RTOG), NRG/RTOG 0521, N = 750] via both qualitative and quantitative analyses (p < 0.05). A comprehensive review of model outputs revealed morphologically driven decision making that adversely affected model performance: MB appeared to be challenged by open gland structures, whereas MR appeared to be challenged by closed gland structures, indicating potential morphological variation between the training sets.
These findings suggest that differences in morphology and heterogeneity necessitate more tailored, sample-specific (i.e., biopsy and surgical) machine learning models.
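The tile-level stress test described above amounts to scoring each model on each sample type and comparing in-domain against cross-domain F1. The sketch below uses invented toy labels and predictions (only the model names MB and MR come from the abstract; every number is an assumption for illustration):

```python
def f1_score(y_true, y_pred):
    # Tile-level F1: harmonic mean of precision and recall for the cancer class.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# Invented tile labels (1 = cancer tile) for each sample type.
labels = {
    "biopsy":        [1, 1, 0, 0, 1, 0, 1, 0],
    "prostatectomy": [1, 0, 1, 0, 0, 1, 0, 1],
}
# Invented predictions: each model is accurate in-domain, weaker cross-domain.
preds = {
    ("MB", "biopsy"):        [1, 1, 0, 0, 1, 0, 1, 0],
    ("MB", "prostatectomy"): [1, 0, 0, 0, 1, 1, 0, 0],
    ("MR", "biopsy"):        [0, 1, 0, 1, 1, 0, 0, 0],
    ("MR", "prostatectomy"): [1, 0, 1, 0, 0, 1, 0, 1],
}
for (model, sample), p in sorted(preds.items()):
    print(model, sample, round(f1_score(labels[sample], p), 2))
```

Laying the scores out as a model-by-sample-type grid makes the diagonal (in-domain) versus off-diagonal (cross-domain) gap immediately visible, which is the pattern the study reports.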
    Triage of High-Risk Cancer Patients Through Imaging, Genetic, and Integrative Approaches
    (2024-03) Couetil, Justin Louis; Huang, Kun; Zhang, Jie; Zhang, Chi; Alomari, Ahmed
Metastasis, the spread of cancer cells from their original site to other parts of the body, is responsible for 90% of cancer mortality. This work applies machine learning and bioinformatic approaches to histopathological images and transcriptomic data of primary tumors to identify early-stage melanoma and prostate cancer patients at high risk for metastasis. In melanoma, we analyze digitized histopathological images of tumor biopsies to predict metastasis risk and survival. This is a common task in computational pathology, but many methods rely on "black box" approaches, such as deep learning, which are not directly interpretable. This is a barrier to adoption by pathologists, who need to understand how a tumor's specific morphology is associated with prognosis. To overcome this, we develop human-interpretable features that measure the shape and arrangement of cells and nuclei as well as tissue texture. Our models provide prognostic power, recapitulate existing knowledge, and offer new insights into metastatic events in early-stage primary tumors. For prostate cancer, we use our deep transfer learning framework, DEGAS, which combines single-cell, spatial, and bulk tissue transcriptomic data to identify regions of tissue in spatial transcriptomics that are highly associated with prostate cancer spread. DEGAS repeatedly identifies glands that appear histologically normal but share gene expression patterns with high-grade cancers. These results highlight the "field effect," which suggests that environmental and genetic factors can cause widespread genetic and epigenetic changes in tissue, a phenomenon known in pathology but identified here for the first time in high-resolution transcriptomics. Taken together, the work in melanoma and prostate cancer bridges the gap between traditional pathology and modern disease-prognosis models.
By constructing tools to identify high-risk patients and tissue, we aim to advance metastasis research and improve clinical care for at-risk patients.
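Human-interpretable shape features of the kind described, measuring cell and nucleus morphology, can be as simple as polygon geometry on a segmented nucleus outline. This sketch is illustrative only (the helper names and the square example are assumptions, not the dissertation's feature set): it computes a circularity score, where 1.0 is a perfect circle and irregular nuclei score lower.

```python
import math

def polygon_area_perimeter(points):
    # Shoelace area and boundary length of a nucleus outline given as
    # a list of (x, y) vertices in order.
    n = len(points)
    area = 0.0
    perim = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
        perim += math.hypot(x2 - x1, y2 - y1)
    return abs(area) / 2.0, perim

def circularity(area, perimeter):
    # 4*pi*area / perimeter^2: 1.0 for a circle, smaller for irregular shapes.
    return 4 * math.pi * area / perimeter ** 2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
a, p = polygon_area_perimeter(square)
print(round(circularity(a, p), 3))  # pi/4 ≈ 0.785 for a square
```

Because each feature has a direct geometric meaning, a pathologist can check why a case scored as high-risk, which is exactly the interpretability advantage the work claims over black-box models.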
Copyright © 2025 The Trustees of Indiana University