Browsing by Subject "Image segmentation"
Now showing 1 - 10 of 12
Active geometric model: multi-compartment model-based segmentation & registration (2014-08-26)
Mukherjee, Prateep; Tsechpenakis, Gavriil; Raje, Rajeev; Tuceryan, Mihran

We present a novel variational and statistical approach for model-based segmentation. Our model generalizes the Chan-Vese model to the concurrent segmentation of multiple objects embedded in the same image domain. We also propose a novel shape descriptor, the Multi-Compartment Distance Function (MCDF). Our proposed segmentation framework is twofold: first, several training samples distributed across various classes are registered onto a common frame of reference; then, we use a variational method similar to Active Shape Models (ASMs) to generate an average shape model, which is in turn used to partition new images. The key advantages of such a framework are (i) landmark-free, automated shape training and (ii) a strictly shape-constrained model for fitting test data. Our model can naturally deal with shapes of arbitrary dimension and topology (closed/open curves). We term our model the Active Geometric Model, since it focuses on segmentation of geometric shapes. We demonstrate the power of the proposed framework in two important medical applications: one for morphology estimation of 3D motor neuron compartments, another for thickness estimation of Henle's fiber layer in the retina.
We also compare the qualitative and quantitative performance of our method with that of several other state-of-the-art segmentation methods.

AI in Medical Imaging Informatics: Current Challenges and Future Directions (IEEE, 2020-07)
Panayides, Andreas S.; Amini, Amir; Filipovic, Nenad D.; Sharma, Ashish; Tsaftaris, Sotirios A.; Young, Alistair; Foran, David; Do, Nhan; Golemati, Spyretta; Kurc, Tahsin; Huang, Kun; Nikita, Konstantina S.; Veasey, Ben P.; Zervakis, Michalis; Saltz, Joel H.; Pattichis, Constantinos S.; Biostatistics & Health Data Science, School of Medicine

This paper reviews state-of-the-art research solutions across the spectrum of medical imaging informatics, discusses clinical translation, and provides future directions for advancing clinical practice. More specifically, it summarizes advances in medical imaging acquisition technologies for different modalities, highlighting the necessity for efficient medical data management strategies in the context of AI in big healthcare data analytics. It then provides a synopsis of contemporary and emerging algorithmic methods for disease classification and organ/tissue segmentation, focusing on AI and deep learning architectures that have already become the de facto approach. The clinical benefits of in-silico modelling advances linked with evolving 3D reconstruction and visualization applications are further documented. In conclusion, integrative analytics approaches driven by the associated research branches highlighted in this study promise to revolutionize imaging informatics as known today across the healthcare continuum, for both radiology and digital pathology applications.
The latter is projected to enable informed, more accurate diagnosis, timely prognosis, and effective treatment planning, underpinning precision medicine.

Convolutional neural network denoising in fluorescence lifetime imaging microscopy (FLIM) (SPIE, 2021)
Mannam, Varun; Zhang, Yide; Yuan, Xiaotong; Hato, Takashi; Dagher, Pierre C.; Nichols, Evan L.; Smith, Cody J.; Dunn, Kenneth W.; Howard, Scott; Medicine, School of Medicine

Fluorescence lifetime imaging microscopy (FLIM) systems are limited by their slow processing speed, low signal-to-noise ratio (SNR), and expensive and challenging hardware setups. In this work, we demonstrate applying a denoising convolutional network to improve FLIM SNR. The network is integrated with an instant FLIM system that offers fast data acquisition based on analog signal processing, high SNR using high-efficiency pulse modulation, and cost-effective implementation utilizing off-the-shelf radio-frequency components. Our instant FLIM system simultaneously provides intensity, lifetime, and phasor plots in vivo and ex vivo. By applying the trained deep learning denoising model to the FLIM data, accurate FLIM phasor measurements are obtained. The enhanced phasor is then passed through K-means clustering, an unbiased and unsupervised machine learning technique, to separate different fluorophores accurately. Our experimental in vivo mouse kidney results indicate that introducing the deep learning image denoising model before segmentation effectively removes the noise in the phasor compared to existing methods and provides clearer segments. Hence, the proposed deep learning-based workflow provides fast and accurate automatic segmentation of fluorescence images using instant FLIM. The denoising operation is effective for the segmentation when the FLIM measurements are noisy.
The clustering can effectively enhance the detection of biological structures of interest in biomedical imaging applications.

Disentangle, Align and Fuse for Multimodal and Semi-Supervised Image Segmentation (IEEE, 2021)
Chartsias, Agisilaos; Papanastasiou, Giorgos; Wang, Chengjia; Semple, Scott; Newby, David E.; Dharmakumar, Rohan; Tsaftaris, Sotirios A.; Medicine, School of Medicine

Magnetic resonance (MR) protocols rely on several sequences to assess pathology and organ status properly. Despite advances in image analysis, we tend to treat each sequence, here termed modality, in isolation. Taking advantage of the common information shared between modalities (an organ's anatomy) is beneficial for multi-modality processing and learning. However, we must overcome inherent anatomical misregistrations and disparities in signal intensity across the modalities to obtain this benefit. We present a method that offers improved segmentation accuracy of the modality of interest (over a single-input model) by learning to leverage information present in other modalities, even if few (semi-supervised) or no (unsupervised) annotations are available for this specific modality. Core to our method is learning a disentangled decomposition into anatomical and imaging factors. Shared anatomical factors from the different inputs are jointly processed and fused to extract more accurate segmentation masks. Image misregistrations are corrected with a Spatial Transformer Network, which non-linearly aligns the anatomical factors. The imaging factor captures signal intensity characteristics across different modality data and is used for image reconstruction, enabling semi-supervised learning. Temporal and slice pairing between inputs is learned dynamically. We demonstrate applications in Late Gadolinium Enhanced (LGE) and Blood Oxygenation Level Dependent (BOLD) cardiac segmentation, as well as in T2 abdominal segmentation.
Code is available at https://github.com/vios-s/multimodal_segmentation.

Image Segmentation of Operative Neuroanatomy Into Tissue Categories Using a Machine Learning Construct and Its Role in Neurosurgical Training (Wolters Kluwer, 2022-10)
Witten, Andrew J.; Patel, Neal; Cohen-Gadol, Aaron; Neurological Surgery, School of Medicine

Background: The complexity of the relationships among the structures within the brain makes efficient mastery of neuroanatomy difficult for medical students and neurosurgical residents. Therefore, there is a need to provide real-time segmentation of neuroanatomic images taken from various perspectives to assist with training. Objective: To develop the initial foundation of a neuroanatomic image segmentation algorithm using artificial intelligence for education. Methods: A pyramidal scene-parsing network with a convolutional residual neural network backbone was assessed for its ability to accurately segment neuroanatomy images. A data set of 879 images derived from The Neurosurgical Atlas was used to train, validate, and test the network. Quantitative assessment of the segmentation was performed using pixel accuracy, intersection-over-union, the Dice similarity coefficient, precision, recall, and the boundary F1 score. Results: The network was trained, and performance was assessed class-wise. Compared with the ground truth annotations, the ensembled results for our artificial intelligence framework for the pyramidal scene-parsing network during testing generated a total pixel accuracy of 91.8%. Conclusion: Using the presented methods, we show that a convolutional neural network can accurately segment gross neuroanatomy images, which represents an initial foundation in artificial intelligence gross neuroanatomy that will aid future neurosurgical training.
These results also suggest that our network is sufficiently robust, to an unprecedented level, for performing anatomic category recognition in a clinical setting.

Improved Robustness for Deep Learning-based Segmentation of Multi-Center Myocardial Perfusion MRI Datasets Using Data Adaptive Uncertainty-guided Space-time Analysis (ArXiv, 2024-08-09)
Yalcinkaya, Dilek M.; Youssef, Khalid; Heydari, Bobak; Wei, Janet; Merz, Noel Bairey; Judd, Robert; Dharmakumar, Rohan; Simonetti, Orlando P.; Weinsaft, Jonathan W.; Raman, Subha V.; Sharif, Behzad; Medicine, School of Medicine

Background: Fully automatic analysis of myocardial perfusion MRI datasets enables rapid and objective reporting of stress/rest studies in patients with suspected ischemic heart disease. Developing deep learning techniques that can analyze multi-center datasets despite limited training data and variations in software (pulse sequence) and hardware (scanner vendor) is an ongoing challenge. Methods: Datasets from 3 medical centers acquired at 3T (n = 150 subjects; 21,150 first-pass images) were included: an internal dataset (inD; n = 95) and two external datasets (exDs; n = 55) used for evaluating the robustness of the trained deep neural network (DNN) models against differences in pulse sequence (exD-1) and scanner vendor (exD-2). A subset of inD (n = 85) was used for training/validation of a pool of DNNs for segmentation, all using the same spatiotemporal U-Net architecture and hyperparameters but with different parameter initializations. We employed a space-time sliding-patch analysis approach that automatically yields a pixel-wise "uncertainty map" as a byproduct of the segmentation process. In our approach, dubbed Data Adaptive Uncertainty-Guided Space-time (DAUGS) analysis, a given test case is segmented by all members of the DNN pool and the resulting uncertainty maps are leveraged to automatically select the "best" one among the pool of solutions.
For comparison, we also trained a DNN using the established approach with the same settings (hyperparameters, data augmentation, etc.). Results: The proposed DAUGS analysis approach performed similarly to the established approach on the internal dataset (Dice score for the testing subset of inD: 0.896 ± 0.050 vs. 0.890 ± 0.049; p = n.s.), whereas it significantly outperformed it on the external datasets (Dice for exD-1: 0.885 ± 0.040 vs. 0.849 ± 0.065, p < 0.005; Dice for exD-2: 0.811 ± 0.070 vs. 0.728 ± 0.149, p < 0.005). Moreover, the number of image series with "failed" segmentation (defined as having myocardial contours that include blood pool or are noncontiguous in ≥1 segment) was significantly lower for the proposed vs. the established approach (4.3% vs. 17.1%, p < 0.0005). Conclusions: The proposed DAUGS analysis approach has the potential to improve the robustness of deep learning methods for segmentation of multi-center stress perfusion datasets with variations in the choice of pulse sequence, site location, or scanner vendor.

Optical tissue clearing enables rapid, precise and comprehensive assessment of three-dimensional morphology in experimental nerve regeneration research (Wolters Kluwer, 2022)
Daeschler, Simeon C.; Zhang, Jennifer; Gordon, Tessa; Borschel, Gregory H.; Surgery, School of Medicine

Morphological analyses are key outcome assessments for nerve regeneration studies but are historically limited to tissue sections. Novel optical tissue clearing techniques enabling three-dimensional imaging of entire organs at a subcellular resolution have revolutionized morphological studies of the brain. To extend their applicability to experimental nerve repair studies, we adapted these techniques to nerves and their motor and sensory targets in rats. The solvent-based protocols rendered harvested peripheral nerves and their target organs transparent within 24 hours while preserving tissue architecture and fluorescence.
The optical clearing was compatible with conventional laboratory techniques, including retrograde labeling studies and computational image segmentation, providing fast and precise cell quantitation. Further, optically cleared organs enabled three-dimensional morphometry at an unprecedented scale, including dermatome-wide innervation studies, tracing of intramuscular nerve branches, and mapping of neurovascular networks. Given their wide-ranging applicability, rapid processing times, and low costs, tissue clearing techniques are likely to be a key technology for next-generation nerve repair studies. All procedures were approved by the Hospital for Sick Children's Laboratory Animal Services Committee (49871/9) on November 9, 2019.

RCNN-SliceNet: A Slice and Cluster Approach for Nuclei Centroid Detection in Three-Dimensional Fluorescence Microscopy Images (IEEE, 2021)
Wu, Liming; Han, Shuo; Chen, Alain; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.; Electrical and Computer Engineering, School of Engineering and Technology

Robust and accurate nuclei centroid detection is important for the understanding of biological structures in fluorescence microscopy images. Existing automated nuclei localization methods face three main challenges: (1) most object detection methods work only on 2D images and are difficult to extend to 3D volumes; (2) segmentation-based models can be used on 3D volumes, but they are computationally expensive for large microscopy volumes and have difficulty distinguishing different instances of objects; (3) hand-annotated ground truth is limited for 3D microscopy volumes. To address these issues, we present a scalable approach for nuclei centroid detection in 3D microscopy volumes. We describe RCNN-SliceNet, which detects 2D nuclei centroids for each slice of the volume from different directions, and 3D agglomerative hierarchical clustering (AHC) is then used to estimate the 3D centroids of nuclei in a volume.
The model was trained with synthetic microscopy data generated using Spatially Constrained Cycle-Consistent Adversarial Networks (SpCycle-GAN) and tested on different types of real 3D microscopy data. Extensive experimental results demonstrate that our proposed method can accurately count and detect the nuclei centroids in a 3D microscopy volume.

Segmentation of biological images containing multitarget labeling using the jelly filling framework (SPIE, 2018-10)
Gadgil, Neeraj J.; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.; Medicine, School of Medicine

Biomedical imaging, when combined with digital image analysis, is capable of quantitative morphological and physiological characterization of biological structures. Recent fluorescence microscopy techniques can collect hundreds of focal plane images from deeper tissue volumes, thus enabling characterization of three-dimensional (3-D) biological structures at subcellular resolution. Automatic analysis methods are required to obtain quantitative, objective, and reproducible measurements of biological quantities. However, these images typically contain many artifacts such as poor edge details, nonuniform brightness, and distortions that vary along different axes, all of which complicate automatic image analysis. Another challenge is due to "multitarget labeling," in which a single probe labels multiple biological entities in the acquired images. We present a "jelly filling" method for segmentation of 3-D biological images containing multitarget labeling. Intuitively, our iterative segmentation method is based on filling disjoint tubule regions of an image with a jelly-like fluid. This helps in the detection of components that are "floating" within a labeled jelly. Experimental results show that our proposed method is effective in segmenting important biological quantities.
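The jelly-filling intuition above, growing a fluid into each disjoint region of a volume, is closely related to connected-component flood filling. As a rough illustration only (our own simplification, not the authors' published algorithm; the function name and nested-list volume format are assumptions), a 6-connected flood fill that labels foreground components in a binary 3-D volume could be sketched as:

```python
from collections import deque

def label_components_3d(volume):
    """Label 6-connected foreground components in a binary 3D volume.

    `volume` is a nested list indexed as [z][y][x] with 0/1 values.
    Returns (labels, count): a parallel nested list of integer labels
    (0 = background) and the number of components found.
    """
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    labels = [[[0] * nx for _ in range(ny)] for _ in range(nz)]
    count = 0
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if volume[z][y][x] and not labels[z][y][x]:
                    # New unlabeled foreground voxel: start a new "fill".
                    count += 1
                    labels[z][y][x] = count
                    queue = deque([(z, y, x)])
                    while queue:
                        cz, cy, cx = queue.popleft()
                        # Visit the six face-adjacent neighbors.
                        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                            wz, wy, wx = cz + dz, cy + dy, cx + dx
                            if (0 <= wz < nz and 0 <= wy < ny and 0 <= wx < nx
                                    and volume[wz][wy][wx]
                                    and not labels[wz][wy][wx]):
                                labels[wz][wy][wx] = count
                                queue.append((wz, wy, wx))
    return labels, count
```

The published framework goes well beyond this sketch (it iterates the filling and handles multitarget labeling), but the breadth-first fill captures the core idea of growing each disjoint region from a seed until it meets a boundary.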