Browsing by Subject "Biomedical imaging"
Item: Demystifying the black box: A survey on explainable artificial intelligence (XAI) in bioinformatics (Elsevier, 2025-01-10)
Budhkar, Aishwarya; Song, Qianqian; Su, Jing; Zhang, Xuhong; Biostatistics and Health Data Science, Richard M. Fairbanks School of Public Health
The widespread adoption of Artificial Intelligence (AI) and machine learning (ML) tools across various domains has showcased their remarkable capabilities and performance. Black-box AI models raise concerns about decision transparency and user confidence. Therefore, explainable AI (XAI) and explainability techniques have rapidly emerged in recent years. This paper aims to review existing works on explainability techniques in bioinformatics, with a particular focus on omics and imaging. We seek to analyze the growing demand for XAI in bioinformatics, identify current XAI approaches, and highlight their limitations. Our survey emphasizes the specific needs of both bioinformatics applications and users when developing XAI methods, with a particular focus on omics and imaging data. Our analysis reveals a significant demand for XAI in bioinformatics, driven by the need for transparency and user confidence in decision-making processes. At the end of the survey, we provide practical guidelines for system developers.

Item: Disentangle, Align and Fuse for Multimodal and Semi-Supervised Image Segmentation (IEEE, 2021)
Chartsias, Agisilaos; Papanastasiou, Giorgos; Wang, Chengjia; Semple, Scott; Newby, David E.; Dharmakumar, Rohan; Tsaftaris, Sotirios A.; Medicine, School of Medicine
Magnetic resonance (MR) protocols rely on several sequences to assess pathology and organ status properly. Despite advances in image analysis, we tend to treat each sequence, here termed modality, in isolation. Taking advantage of the common information shared between modalities (an organ's anatomy) is beneficial for multi-modality processing and learning. However, we must overcome inherent anatomical misregistrations and disparities in signal intensity across the modalities to obtain this benefit. We present a method that offers improved segmentation accuracy of the modality of interest (over a single-input model) by learning to leverage information present in other modalities, even if few (semi-supervised) or no (unsupervised) annotations are available for this specific modality. Core to our method is learning a disentangled decomposition into anatomical and imaging factors. Shared anatomical factors from the different inputs are jointly processed and fused to extract more accurate segmentation masks. Image misregistrations are corrected with a Spatial Transformer Network, which non-linearly aligns the anatomical factors. The imaging factor captures signal intensity characteristics across different modality data and is used for image reconstruction, enabling semi-supervised learning. Temporal and slice pairing between inputs are learned dynamically. We demonstrate applications in Late Gadolinium Enhanced (LGE) and Blood Oxygenation Level Dependent (BOLD) cardiac segmentation, as well as in T2 abdominal segmentation.
Code is available at https://github.com/vios-s/multimodal_segmentation. (A minimal illustrative sketch of the disentangle/align/fuse idea appears after this listing.)

Item: Segmentation of biological images containing multitarget labeling using the jelly filling framework (SPIE, 2018-10)
Gadgil, Neeraj J.; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.; Medicine, School of Medicine
Biomedical imaging, when combined with digital image analysis, is capable of quantitative morphological and physiological characterization of biological structures. Recent fluorescence microscopy techniques can collect hundreds of focal plane images from deeper tissue volumes, thus enabling characterization of three-dimensional (3-D) biological structures at subcellular resolution. Automatic analysis methods are required to obtain quantitative, objective, and reproducible measurements of biological quantities. However, these images typically contain many artifacts such as poor edge details, nonuniform brightness, and distortions that vary along different axes, all of which complicate automatic image analysis. Another challenge arises from "multitarget labeling," in which a single probe labels multiple biological entities in the acquired images. We present a "jelly filling" method for segmentation of 3-D biological images containing multitarget labeling. Intuitively, our iterative segmentation method is based on filling disjoint tubule regions of an image with a jelly-like fluid, which helps detect components that are "floating" within the labeled jelly. Experimental results show that our proposed method is effective in segmenting important biological quantities. (A loose illustrative sketch of the filling idea appears after this listing.)
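The "Disentangle, Align and Fuse" entry above describes encoding each modality into an anatomical and an imaging factor, warping the anatomy factors into alignment with a Spatial Transformer Network, and fusing them to predict a segmentation mask. The following is a minimal, hypothetical PyTorch sketch of that pipeline only; the module names, network sizes, and the toy forward pass are assumptions for illustration, and the authors' actual implementation is the repository linked in the entry.

```python
# Hypothetical sketch of a disentangle/align/fuse pipeline (NOT the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class AnatomyEncoder(nn.Module):
    """Maps an image to a multi-channel, intensity-free 'anatomy' factor."""
    def __init__(self, channels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, x):
        return torch.softmax(self.net(x), dim=1)  # soft channel assignment


class STNAlign(nn.Module):
    """Predicts a dense offset field that warps one anatomy factor onto another."""
    def __init__(self, channels=8):
        super().__init__()
        self.offset = nn.Conv2d(2 * channels, 2, 3, padding=1)

    def forward(self, moving, fixed):
        flow = self.offset(torch.cat([moving, fixed], dim=1))  # B x 2 x H x W
        b, _, h, w = moving.shape
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                                torch.linspace(-1, 1, w), indexing="ij")
        base = torch.stack([xs, ys], dim=-1).unsqueeze(0)       # 1 x H x W x 2
        grid = base + flow.permute(0, 2, 3, 1)                  # B x H x W x 2
        return F.grid_sample(moving, grid, align_corners=True)


class FuseSegment(nn.Module):
    """Fuses aligned anatomy factors and predicts a segmentation mask."""
    def __init__(self, channels=8, classes=2):
        super().__init__()
        self.head = nn.Conv2d(2 * channels, classes, 1)

    def forward(self, anat_target, anat_other_aligned):
        return self.head(torch.cat([anat_target, anat_other_aligned], dim=1))


# Toy forward pass with two unregistered "modalities" of the same anatomy.
enc, stn, seg = AnatomyEncoder(), STNAlign(), FuseSegment()
img_lge, img_bold = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
a_lge, a_bold = enc(img_lge), enc(img_bold)
a_bold_aligned = stn(a_bold, a_lge)        # warp BOLD anatomy onto LGE anatomy
mask_logits = seg(a_lge, a_bold_aligned)   # fused factors -> segmentation logits
print(mask_logits.shape)                   # torch.Size([1, 2, 64, 64])
```

The sketch omits the imaging-factor encoder and the reconstruction decoder that the abstract says enable semi-supervised learning; it is meant only to show how aligned anatomy factors from a second modality can be fused into the segmentation path.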
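The jelly filling entry describes an iterative method that fills disjoint tubule regions with a jelly-like fluid and then detects components within the fill. The snippet below is only a loose 2-D analogy built from standard hole filling and connected-component labeling in SciPy; the function name, threshold, and synthetic ring image are assumptions for illustration and do not reproduce the published 3-D iterative algorithm.

```python
# Loose 2-D analogy of "fill, then inspect the filled regions" using scipy only.
import numpy as np
from scipy import ndimage as ndi


def fill_and_label(labeled_img, threshold):
    """Binarize bright label/boundary pixels, fill enclosed regions ('jelly'),
    and label the disjoint filled compartments inside those boundaries."""
    boundary = labeled_img > threshold            # bright tubule walls / labels
    filled = ndi.binary_fill_holes(boundary)      # pour "jelly" into closed regions
    interior = filled & ~boundary                 # jelly only, walls removed
    compartments, n = ndi.label(interior)         # disjoint filled compartments
    return filled, compartments, n


# Tiny synthetic example: a bright ring (tubule cross-section) over noise.
rng = np.random.default_rng(0)
img = rng.normal(0.1, 0.02, (64, 64))
yy, xx = np.mgrid[:64, :64]
ring = (np.hypot(yy - 32, xx - 32) > 15) & (np.hypot(yy - 32, xx - 32) < 20)
img[ring] += 1.0

filled, compartments, n = fill_and_label(img, threshold=0.5)
print("compartments found:", n)   # 1 for the single enclosed region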