Browsing by Author "Delp, Edward J."
Now showing items 1-10 of 19
Item: 3D Centroidnet: Nuclei Centroid Detection with Vector Flow Voting (IEEE, 2022-10)
Authors: Wu, Liming; Chen, Alain; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.
Department: Electrical and Computer Engineering, School of Engineering and Technology
Abstract: Automated microscope systems are increasingly used to collect large-scale 3D image volumes of biological tissues. Since cell boundaries are seldom delineated in these images, detection of nuclei is a critical step for identifying and analyzing individual cells. Due to the large intra-class variability in nuclei morphology and the difficulty of generating ground truth annotations, accurate nuclei detection remains a challenging task. We propose a 3D nuclei centroid detection method that estimates a "vector flow" volume in which each voxel represents a 3D vector pointing to the nearest nuclei centroid in the corresponding microscopy volume. We then use a voting mechanism to estimate the 3D nuclei centroids from the "vector flow" volume. Our system is trained on synthetic microscopy volumes and tested on real microscopy volumes. The evaluation results indicate that our method outperforms other methods both visually and quantitatively.

Item: An Ensemble Learning and Slice Fusion Strategy for Three-Dimensional Nuclei Instance Segmentation (IEEE, 2022-06)
Authors: Wu, Liming; Chen, Alain; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.
Department: Electrical and Computer Engineering, School of Engineering and Technology
Abstract: Automated microscopy image analysis is a fundamental step for digital pathology and computer-aided diagnosis. Most existing deep learning methods typically require post-processing to achieve instance segmentation and are computationally expensive when applied directly to 3D microscopy volumes. Supervised learning methods generally need large amounts of ground truth annotations for training, whereas manually annotating ground truth masks is laborious, especially for a 3D volume.
To address these issues, we propose an ensemble learning and slice fusion strategy for 3D nuclei instance segmentation that we call Ensemble Mask R-CNN (EMR-CNN), which uses different object detectors to generate nuclei segmentation masks for each 2D slice of a volume, together with a 2D ensemble fusion and a 2D-to-3D slice fusion that merge these 2D segmentation masks into a 3D segmentation mask. Our method does not need any ground truth annotations for training and can run inference on volumes of any size. Our proposed method was tested on a variety of microscopy volumes collected from multiple regions of organ tissues. The execution time and robustness analyses show that our method is practical and effective.

Item: Boundary Segmentation For Fluorescence Microscopy Using Steerable Filters (SPIE, 2017)
Authors: Ho, David Joon; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.
Department: Electrical and Computer Engineering, School of Engineering and Technology
Abstract: Fluorescence microscopy is used to image multiple subcellular structures in living cells that are not readily observed using conventional optical microscopy. Moreover, two-photon microscopy is widely used to image structures deeper in tissue. Recent advances in fluorescence microscopy have enabled the generation of large data sets of images at different depths, times, and spectral channels. Thus, automatic object segmentation is necessary, since manual segmentation would be inefficient and biased. However, automatic segmentation remains a challenging problem, as regions of interest may lack well-defined boundaries and may have non-uniform pixel intensities. This paper describes a method for segmenting tubular structures in fluorescence microscopy images of rat kidney and liver samples using adaptive histogram equalization, foreground/background segmentation, steerable filters to capture directional tendencies, and connected-component analysis.
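A rough 2D sketch of that kind of pipeline, using only NumPy and SciPy, might look as follows. Every stage here is a simplified stand-in, not the paper's implementation: a global contrast stretch replaces adaptive histogram equalization, a fixed threshold replaces the paper's foreground/background segmentation, and first-order derivative-of-Gaussian filters steered over a few orientations replace its steerable filters; the parameter values are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def segment_tubular(image, n_orientations=8, sigma=2.0, thresh=0.1, min_size=20):
    """Toy steerable-filter segmentation pipeline (2D sketch, not the paper's method)."""
    img = np.asarray(image, dtype=float)
    img = (img - img.min()) / (np.ptp(img) + 1e-8)        # 1. contrast stretch

    # 2. first-order steerable filtering: responses to cos(t)*Gx + sin(t)*Gy,
    #    keeping the maximum magnitude over all orientations
    gx = ndimage.gaussian_filter(img, sigma, order=(0, 1))  # derivative along x
    gy = ndimage.gaussian_filter(img, sigma, order=(1, 0))  # derivative along y
    angles = np.linspace(0, np.pi, n_orientations, endpoint=False)
    response = np.max([np.abs(np.cos(t) * gx + np.sin(t) * gy) for t in angles], axis=0)

    mask = response > thresh                               # 3. foreground/background split

    labels, n = ndimage.label(mask)                        # 4. connected-component analysis
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    for i, s in enumerate(sizes, start=1):                 # drop tiny spurious regions
        if s < min_size:
            labels[labels == i] = 0
    return labels
```

On a synthetic image containing a bright vertical stripe, this responds along the stripe's two boundaries and labels each boundary band as a separate component, which is the qualitative behavior one expects from boundary-oriented directional filtering.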
The results from several data sets demonstrate that our method can segment tubular boundaries successfully. Moreover, our method performs better than other popular image segmentation methods when compared against ground truth data obtained via manual segmentation.

Item: Center-Extraction-Based Three Dimensional Nuclei Instance Segmentation of Fluorescence Microscopy Images (IEEE, 2019-05)
Authors: Ho, David Joon; Han, Shuo; Fu, Chichen; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.
Department: Electrical and Computer Engineering, School of Engineering and Technology
Abstract: Fluorescence microscopy is an essential tool for the analysis of 3D subcellular structures in tissue. An important step in the characterization of tissue involves nuclei segmentation. In this paper, a two-stage method for segmentation of nuclei using convolutional neural networks (CNNs) is described. In particular, since creating labeled volumes manually for training purposes is not practical due to the size and complexity of the 3D data sets, the paper describes a method for generating synthetic microscopy volumes based on a spatially constrained cycle-consistent adversarial network. The proposed method is tested on multiple real microscopy data sets and outperforms other commonly used segmentation techniques.

Item: DeepSynth: Three-dimensional nuclear segmentation of biological images using neural networks trained with synthetic data (Nature Research, 2019-12-04)
Authors: Dunn, Kenneth W.; Fu, Chichen; Ho, David Joon; Lee, Soonam; Han, Shuo; Salama, Paul; Delp, Edward J.
Department: Medicine, School of Medicine
Abstract: The scale of biological microscopy has increased dramatically over the past ten years, with the development of new modalities supporting collection of high-resolution fluorescence image volumes spanning hundreds of microns, if not millimeters. The size and complexity of these volumes are such that quantitative analysis requires automated methods of image processing to identify and characterize individual cells.
For many workflows, this process starts with segmentation of nuclei, whose ubiquity, ease of labeling, and relatively simple structure make them appealing targets for automated detection of individual cells. However, in the context of large, three-dimensional image volumes, nuclei present many challenges to automated segmentation, such that conventional approaches are seldom effective and/or robust. Techniques based upon deep learning have shown great promise, but enthusiasm for applying them is tempered by the need to generate training data, an arduous task, particularly in three dimensions. Here we present results of a new technique of nuclear segmentation using neural networks trained on synthetic data. Comparisons with results obtained using commonly used image processing packages demonstrate that DeepSynth provides the superior results associated with deep-learning techniques without the need for manual annotation.

Item: Dietary Intervention for Glucose Tolerance In Teens (DIG IT): Protocol of a randomized controlled trial using health coaching to prevent youth-onset type 2 diabetes (Elsevier, 2017-02)
Authors: Wagner, Kelly A.; Braun, Ethan; Armah, Seth M.; Horan, Diarmuid; Smith, Lisa G.; Pike, Julie; Tu, Wanzhu; Hamilton, Marc T.; Delp, Edward J.; Campbell, Wayne W.; Boushey, Carol J.; Hannon, Tamara S.; Gletsu-Miller, Nana
Department: Pediatrics, School of Medicine
Abstract: BACKGROUND: Youth-onset type 2 diabetes (T2D) is a newly emerging disease, and behavioral strategies for its prevention are limited. Interventions that target the lifestyle behaviors of adolescents, to improve poor dietary quality and reduce excessive sedentariness, promise to reduce the risk of developing T2D. Health coaching is effective for promoting healthy behaviors in patients who have chronic disease, but few experimental studies have been conducted in adolescents.
This randomized controlled trial, in adolescents with prediabetes, will determine the effectiveness of a health coaching intervention in facilitating adoption of healthy diet and activity behaviors that delay or prevent development of T2D. METHODS/DESIGN: The Dietary Intervention for Glucose Tolerance In Teens (DIG IT) trial will involve an evaluation of a health coaching intervention in adolescents with prediabetes. Eligible participants will be randomized to receive 6 months of health coaching or a single dietary consultation that is standard-of-care. The primary outcome will be the 2-hour oral glucose tolerance test concentration. Secondary outcomes will include measures of glycemia and insulin action, as well as dietary, physical activity, and sedentary behaviors measured using an electronic food record and by inclinometer. Data will be collected before and after the intervention (at 6 months) and at 12 months (to assess sustainability). DISCUSSION: This trial will determine whether a health coaching intervention, a personalized and low-cost approach to modifying dietary and activity behaviors, is effective and sustainable for prevention of youth-onset T2D relative to standard-of-care. Health coaching has the potential to be widely implemented in clinical or community settings.

Item: DINAVID: A Distributed and Networked Image Analysis System for Volumetric Image Data (Cold Spring Harbor Laboratory, 2022-05-11)
Authors: Han, Shuo; Chen, Alain; Lee, Soonam; Fu, Chichen; Yang, Changye; Wu, Liming; Winfree, Seth; El-Achkar, Tarek M.; Dunn, Kenneth W.; Salama, Paul; Delp, Edward J.
Department: Electrical and Computer Engineering, School of Engineering and Technology
Abstract: Background: The advancement of high-content optical microscopy has enabled the acquisition of very large 3D image datasets. Image analysis tools and three-dimensional visualization are critical for analyzing and interpreting 3D image volumes.
The analysis of these volumes requires more computational resources than a biologist typically has access to on a desktop or laptop computer. This is especially true if machine learning tools are being used for image analysis. With the increased amount of data analysis and computational complexity, there is a need for a more accessible, easy-to-use, and efficient network-based/cloud-based 3D image processing system. Results: The Distributed and Networked Analysis of Volumetric Image Data (DINAVID) system was developed to enable remote analysis of 3D microscopy images for biologists. DINAVID is a server/cloud-based system with a simple web interface that allows biologists to upload 3D volumes for analysis and visualization. DINAVID is designed using open source tools and has two main sub-systems: a computational system for 3D microscopy image processing and analysis, and a 3D visualization system. Conclusions: In this paper, we present an overview of the DINAVID system and compare it to other tools currently available for microscopy image analysis.

Item: An Extreme Learning Machine-based Pedestrian Detection Method (Office of the Vice Chancellor for Research, 2013-04-05)
Authors: Yang, Kai; Du, Eliza Y.; Delp, Edward J.; Jiang, Pingge; Jiang, Feng; Chen, Yaobin; Sherony, Rini; Takahashi, Hiroyuki
Abstract: Pedestrian detection is a challenging task due to the high variance of pedestrians and fast-changing backgrounds, especially for a single in-car camera system. Traditional HOG+SVM methods face two challenges: (1) false positives and (2) processing speed. In this paper, a new pedestrian detection method using multimodal HOG for pedestrian feature extraction and a kernel-based Extreme Learning Machine (ELM) for classification is presented.
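To make the classification stage concrete, here is a minimal kernel ELM classifier in NumPy. This is a sketch of the general kernel ELM technique, not the paper's formulation: with an RBF kernel matrix K over the training set, training solves the regularized least-squares system alpha = (I/C + K)^-1 Y, and prediction scores a sample by its kernel similarities to the training set. Feature vectors (e.g. the paper's multimodal HOG descriptors) are assumed to be precomputed; the class name, parameters, and toy data are illustrative assumptions.

```python
import numpy as np

class KernelELM:
    """Minimal kernel Extreme Learning Machine binary classifier (sketch)."""

    def __init__(self, C=1.0, gamma=0.1):
        self.C, self.gamma = C, gamma   # regularization strength, RBF width

    def _kernel(self, A, B):
        # RBF kernel from pairwise squared Euclidean distances
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)

    def fit(self, X, y):
        self.X = np.asarray(X, dtype=float)
        Y = np.where(np.asarray(y) > 0, 1.0, -1.0)         # labels mapped to {-1, +1}
        K = self._kernel(self.X, self.X)
        n = len(self.X)
        # closed-form training: alpha = (I/C + K)^-1 Y
        self.alpha = np.linalg.solve(np.eye(n) / self.C + K, Y)
        return self

    def predict(self, X):
        scores = self._kernel(np.asarray(X, dtype=float), self.X) @ self.alpha
        return (scores > 0).astype(int)                     # 1 = positive class
```

The appeal of this family of methods for a real-time setting is the closed-form training step: a single linear solve replaces the iterative optimization of an SVM.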
The experimental results using our naturalistic driving dataset show that the proposed method outperforms the traditional HOG+SVM method in both recognition accuracy and processing speed.

Item: Four Dimensional Image Registration For Intravital Microscopy (IEEE, 2016-06)
Authors: Fu, Chichen; Gadgil, Neeraj; Tahboub, Khalid K.; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.
Department: Department of Biochemistry and Molecular Biology, School of Medicine
Abstract: Increasingly, the behavior of living systems is being evaluated using intravital microscopy, since it provides subcellular resolution of biological processes in an intact living organism. Intravital microscopy images are frequently confounded by motion resulting from animal respiration and heartbeat. In this paper we describe an image registration method capable of correcting motion artifacts in three-dimensional fluorescence microscopy images collected over time. Our method uses 3D B-spline non-rigid registration with a coarse-to-fine strategy to register stacks of images collected at different time intervals, and 4D rigid registration to register 3D volumes over time. The results show that our proposed method can correct global motion artifacts of sample tissues in four-dimensional space, thereby revealing the motility of individual cells in the tissue.

Item: NISNet3D: three-dimensional nuclear synthesis and instance segmentation for fluorescence microscopy images (Springer Nature, 2023-06-12)
Authors: Wu, Liming; Chen, Alain; Salama, Paul; Winfree, Seth; Dunn, Kenneth W.; Delp, Edward J.
Department: Electrical and Computer Engineering, School of Engineering and Technology
Abstract: The primary step in tissue cytometry is the automated distinction of individual cells (segmentation). Since cell borders are seldom labeled, cells are generally segmented by their nuclei. While tools have been developed for segmenting nuclei in two dimensions, segmentation of nuclei in three-dimensional volumes remains a challenging task.
The lack of effective methods for three-dimensional segmentation represents a bottleneck in the realization of the potential of tissue cytometry, particularly as methods of tissue clearing present the opportunity to characterize entire organs. Methods based on deep learning have shown enormous promise, but their implementation is hampered by the need for large amounts of manually annotated training data. In this paper, we describe the 3D Nuclei Instance Segmentation Network (NISNet3D), which directly segments 3D volumes through the use of a modified 3D U-Net, a 3D marker-controlled watershed transform, and a nuclei instance segmentation system for separating touching nuclei. NISNet3D is unique in that it provides accurate segmentation of even challenging image volumes using a network trained on large amounts of synthetic nuclei derived from relatively few annotated volumes, or on synthetic data obtained without annotated volumes. We present a quantitative comparison of results obtained from NISNet3D with results obtained from a variety of existing nuclei segmentation techniques. We also examine the performance of the methods when no ground truth is available and only synthetic volumes are used for training.
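Marker-controlled watershed, the touching-nuclei separation step named in the NISNet3D abstract, is a standard technique that can be illustrated in a few lines. The 2D sketch below, using only NumPy and SciPy, is a simplified stand-in for the paper's 3D procedure: markers come from local maxima of a distance transform rather than from network outputs, and the function name and parameters are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def split_touching_nuclei(mask, min_distance=5):
    """Separate touching objects in a binary mask via marker-controlled watershed.

    1. Euclidean distance transform of the binary nuclei mask.
    2. Markers: local maxima of the distance map (roughly one per nucleus center).
    3. Watershed (image foresting transform) on the inverted distance map,
       flooding outward from the markers.
    """
    mask = np.asarray(mask, dtype=bool)
    dist = ndimage.distance_transform_edt(mask)

    # markers at local maxima of the distance map
    peaks = (dist == ndimage.maximum_filter(dist, size=min_distance)) & (dist > 0)
    markers, _ = ndimage.label(peaks)

    # watershed on the inverted distance map (uint8 cost image)
    cost = (255 * (1.0 - dist / (dist.max() + 1e-8))).astype(np.uint8)
    labels = ndimage.watershed_ift(cost, markers.astype(np.int16))
    labels[~mask] = 0                     # keep labels only inside the mask
    return labels
```

Run on a mask of two overlapping disks, this assigns the two disk centers to different labels, splitting the blob at its narrow neck where a plain connected-component labeling would report a single object.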