Browsing by Author "Fu, Chichen"
Now showing 1 - 8 of 8
Item: Center-Extraction-Based Three Dimensional Nuclei Instance Segmentation of Fluorescence Microscopy Images (IEEE, 2019-05)
Ho, David Joon; Han, Shuo; Fu, Chichen; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.
Electrical and Computer Engineering, School of Engineering and Technology
Fluorescence microscopy is an essential tool for the analysis of 3D subcellular structures in tissue. An important step in the characterization of tissue is nuclei segmentation. This paper describes a two-stage method for segmenting nuclei using convolutional neural networks (CNNs). In particular, since manually creating labeled volumes for training is impractical given the size and complexity of 3D data sets, the paper describes a method for generating synthetic microscopy volumes based on a spatially constrained cycle-consistent adversarial network. The proposed method is tested on multiple real microscopy data sets and outperforms other commonly used segmentation techniques.

Item: DeepSynth: Three-dimensional nuclear segmentation of biological images using neural networks trained with synthetic data (Nature Research, 2019-12-04)
Dunn, Kenneth W.; Fu, Chichen; Ho, David Joon; Lee, Soonam; Han, Shuo; Salama, Paul; Delp, Edward J.
Medicine, School of Medicine
The scale of biological microscopy has increased dramatically over the past ten years, with the development of new modalities supporting collection of high-resolution fluorescence image volumes spanning hundreds of microns, if not millimeters. The size and complexity of these volumes is such that quantitative analysis requires automated image processing to identify and characterize individual cells. For many workflows, this process starts with segmentation of nuclei, which, due to their ubiquity, ease of labeling, and relatively simple structure, are appealing targets for automated detection of individual cells.
However, in the context of large, three-dimensional image volumes, nuclei present many challenges to automated segmentation, such that conventional approaches are seldom effective and/or robust. Techniques based on deep learning have shown great promise, but enthusiasm for applying them is tempered by the need to generate training data, an arduous task, particularly in three dimensions. Here we present results of a new technique for nuclear segmentation using neural networks trained on synthetic data. Comparisons with results obtained using commonly used image processing packages demonstrate that DeepSynth provides the superior results associated with deep-learning techniques without the need for manual annotation.

Item: DINAVID: A Distributed and Networked Image Analysis System for Volumetric Image Data (Cold Spring Harbor Laboratory, 2022-05-11)
Han, Shuo; Chen, Alain; Lee, Soonam; Fu, Chichen; Yang, Changye; Wu, Liming; Winfree, Seth; El-Achkar, Tarek M.; Dunn, Kenneth W.; Salama, Paul; Delp, Edward J.
Electrical and Computer Engineering, School of Engineering and Technology
Background: The advancement of high-content optical microscopy has enabled the acquisition of very large 3D image datasets. Image analysis tools and three-dimensional visualization are critical for analyzing and interpreting 3D image volumes. The analysis of these volumes requires more computational resources than a biologist may have access to on a typical desktop or laptop computer. This is especially true if machine learning tools are being used for image analysis. With the increased amount of data analysis and computational complexity, there is a need for a more accessible, easy-to-use, and efficient network-based/cloud-based 3D image processing system. Results: The Distributed and Networked Analysis of Volumetric Image Data (DINAVID) system was developed to enable remote analysis of 3D microscopy images for biologists.
DINAVID is a server/cloud-based system with a simple web interface that allows biologists to upload 3D volumes for analysis and visualization. DINAVID is built with open-source tools and has two main sub-systems: a computational system for 3D microscopy image processing and analysis, and a 3D visualization system. Conclusions: In this paper, we present an overview of the DINAVID system and compare it to other tools currently available for microscopy image analysis.

Item: Four Dimensional Image Registration For Intravital Microscopy (IEEE, 2016-06)
Fu, Chichen; Gadgil, Neeraj; Tahboub, Khalid K.; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.
Department of Biochemistry and Molecular Biology, School of Medicine
Increasingly, the behavior of living systems is being evaluated using intravital microscopy, since it provides subcellular resolution of biological processes in an intact living organism. Intravital microscopy images are frequently confounded by motion resulting from animal respiration and heartbeat. In this paper we describe an image registration method capable of correcting motion artifacts in three-dimensional fluorescence microscopy images collected over time. Our method uses 3D B-spline non-rigid registration with a coarse-to-fine strategy to register stacks of images collected at different time intervals, and 4D rigid registration to register 3D volumes over time.
The results show that our proposed method can correct global motion artifacts of sample tissues in four-dimensional space, thereby revealing the motility of individual cells in the tissue.

Item: Nuclei counting in microscopy images with three dimensional generative adversarial networks (SPIE, 2019-03)
Han, Shuo; Lee, Soonam; Fu, Chichen; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.
Electrical and Computer Engineering, School of Engineering and Technology
Microscopy image analysis can provide substantial information for clinical study and understanding of biological structures. Two-photon microscopy is a type of fluorescence microscopy that can image deep into tissue with near-infrared excitation light. We are interested in methods that can detect and characterize nuclei in 3D fluorescence microscopy image volumes. In general, several challenges exist for counting nuclei in 3D image volumes, including "crowding" and touching of nuclei, overlapping of nuclei, and variation in nuclei shape and size. In this paper, a 3D nuclei counter using two different generative adversarial networks (GANs) is proposed and evaluated. Synthetic data that resembles real microscopy images is generated with a GAN and used to train another 3D GAN that counts the number of nuclei. Our approach is evaluated with respect to the number of ground-truth nuclei and compared with counting methods commonly used in biological research. Fluorescence microscopy 3D image volumes of rat kidneys are used to test our 3D nuclei counter. The accuracy of the proposed nuclei counter is compared with ImageJ's 3D object counter (JACoP) and 3D watershed.
Both the counting accuracy and the object-based evaluation show that the proposed technique successfully counts nuclei in 3D.

Item: Nuclei Segmentation of Fluorescence Microscopy Images Using Three Dimensional Convolutional Neural Networks (IEEE, 2017-07)
Ho, David Joon; Fu, Chichen; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.
Electrical and Computer Engineering, School of Engineering and Technology
Fluorescence microscopy enables one to visualize subcellular structures of living tissue or cells in three dimensions. This is especially true for two-photon microscopy, which uses near-infrared light to image deeper into tissue. To characterize and analyze biological structures, nuclei segmentation is a prerequisite step. Due to the complexity and size of the image data sets, manual segmentation is prohibitive. This paper describes a fully 3D nuclei segmentation method using three-dimensional convolutional neural networks. To train the network, synthetic volumes with corresponding labeled volumes are automatically generated. Our results from multiple data sets demonstrate that our method can successfully segment nuclei in 3D.

Item: Three Dimensional Fluorescence Microscopy Image Synthesis and Segmentation (IEEE, 2018-06)
Fu, Chichen; Lee, Soonam; Ho, David Joon; Han, Shuo; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.
Electrical and Computer Engineering, School of Engineering and Technology
Advances in fluorescence microscopy enable acquisition of 3D image volumes with better image quality and deeper penetration into tissue. Segmentation is a required step for characterizing and analyzing biological structures in the images, and recent 3D segmentation using deep learning has achieved promising results. One issue is that deep learning techniques require a large set of ground-truth data, which is impractical to annotate manually for large 3D microscopy volumes. This paper describes a 3D deep learning nuclei segmentation method that uses synthetic 3D volumes for training.
A set of synthetic volumes and the corresponding ground truth are generated using spatially constrained cycle-consistent adversarial networks. Segmentation results demonstrate that our proposed method is capable of segmenting nuclei successfully across various data sets.

Item: Tubule Segmentation of Fluorescence Microscopy Images Based on Convolutional Neural Networks With Inhomogeneity Correction (Society for Imaging Science and Technology, 2018)
Lee, Soonam; Fu, Chichen; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.
Electrical and Computer Engineering, School of Engineering and Technology
Fluorescence microscopy has become a widely used tool for studying various biological structures of in vivo tissue or cells. However, quantitative analysis of these biological structures remains a challenge due to their complexity, which is exacerbated by distortions caused by lens aberrations and light scattering. Moreover, manual quantification of such image volumes is an intractable and error-prone process, making automated image analysis methods crucial. This paper describes a segmentation method for tubular structures in fluorescence microscopy images using convolutional neural networks with data augmentation and inhomogeneity correction. The segmentation results of the proposed method are visually and numerically compared with other microscopy segmentation methods. Experimental results indicate that the proposed method performs better at correctly segmenting and identifying multiple tubular structures than other methods.
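Several of the abstracts above compare GAN- and CNN-based methods against classical baselines such as ImageJ's 3D object counter and 3D watershed. As a rough illustration of what such a classical counting baseline involves (not the authors' method), the sketch below counts objects in a 3D volume by thresholding followed by 26-connected component labeling; the threshold and minimum-size values are arbitrary assumptions for the example.

```python
import numpy as np
from scipy import ndimage

def count_nuclei_3d(volume, threshold=0.5, min_voxels=8):
    """Count objects in a 3D volume via thresholding and
    26-connected component labeling, a simple classical baseline."""
    binary = volume > threshold
    structure = np.ones((3, 3, 3), dtype=bool)  # 26-connectivity in 3D
    labels, n = ndimage.label(binary, structure=structure)
    # Discard tiny components that are likely noise.
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    return sum(1 for s in sizes if s >= min_voxels)

# Synthetic example: two well-separated "nuclei" in a 32^3 volume.
vol = np.zeros((32, 32, 32))
vol[4:9, 4:9, 4:9] = 1.0
vol[20:26, 20:26, 20:26] = 1.0
print(count_nuclei_3d(vol))  # → 2
```

Such a counter fails exactly where the abstracts note classical methods struggle: touching or overlapping nuclei merge into a single connected component, which motivates the learning-based approaches described above.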