Browsing by Subject "Image analysis"
Now showing 1 - 10 of 10
Item: Active geometric model: multi-compartment model-based segmentation & registration (2014-08-26)
Mukherjee, Prateep; Tsechpenakis, Gavriil; Raje, Rajeev; Tuceryan, Mihran

We present a novel variational and statistical approach to model-based segmentation. Our model generalizes the Chan-Vese model, which was proposed for concurrent segmentation of multiple objects embedded in the same image domain. We also propose a novel shape descriptor, the Multi-Compartment Distance Function (MCDF). Our segmentation framework is two-fold: first, several training samples distributed across various classes are registered onto a common frame of reference; then, a variational method similar to Active Shape Models (ASMs) generates an average shape model, which is used to partition new images. The key advantages of this framework are (i) landmark-free, automated shape training and (ii) a strictly shape-constrained model for fitting test data. Our model naturally handles shapes of arbitrary dimension and topology (closed/open curves). We term it the Active Geometric Model, since it focuses on segmentation of geometric shapes. We demonstrate the power of the proposed framework in two important medical applications: morphology estimation of 3D motor neuron compartments and thickness estimation of Henle's fiber layer in the retina.
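The Chan-Vese model that this entry generalizes can be illustrated with a minimal sketch: its piecewise-constant data term iteratively reassigns each pixel to whichever region mean (inside vs. outside the contour) it is closer to. The NumPy sketch below omits the curvature/length regularizer of the full model, and the function name is ours; it shows only the two-phase baseline, not the authors' multi-compartment method.

```python
import numpy as np

def chan_vese_two_phase(img, n_iter=20):
    """Data-fidelity core of the two-phase Chan-Vese model: alternate
    between computing the mean intensity of each region and reassigning
    every pixel to the region whose mean it is closer to. The curvature
    (contour-length) penalty of the full model is omitted."""
    mask = img > img.mean()  # crude initial partition
    for _ in range(n_iter):
        c1 = img[mask].mean() if mask.any() else 0.0   # foreground mean
        c2 = img[~mask].mean() if (~mask).any() else 0.0  # background mean
        new_mask = (img - c1) ** 2 < (img - c2) ** 2
        if np.array_equal(new_mask, mask):  # converged
            break
        mask = new_mask
    return mask
```

On a clean synthetic image (a bright square on a dark background), the fixed point of this iteration recovers the square exactly; the regularizer matters only once noise and fragmented boundaries enter.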
We also compare the qualitative and quantitative performance of our method with that of several other state-of-the-art segmentation methods.

Item: AI in Medical Imaging Informatics: Current Challenges and Future Directions (IEEE, 2020-07)
Panayides, Andreas S.; Amini, Amir; Filipovic, Nenad D.; Sharma, Ashish; Tsaftaris, Sotirios A.; Young, Alistair; Foran, David; Do, Nhan; Golemati, Spyretta; Kurc, Tahsin; Huang, Kun; Nikita, Konstantina S.; Veasey, Ben P.; Zervakis, Michalis; Saltz, Joel H.; Pattichis, Constantinos S. (Biostatistics & Health Data Science, School of Medicine)

This paper reviews state-of-the-art research solutions across the spectrum of medical imaging informatics, discusses clinical translation, and provides future directions for advancing clinical practice. More specifically, it summarizes advances in medical imaging acquisition technologies for different modalities, highlighting the necessity for efficient medical data management strategies in the context of AI in big healthcare data analytics. It then provides a synopsis of contemporary and emerging algorithmic methods for disease classification and organ/tissue segmentation, focusing on AI and deep learning architectures that have already become the de facto approach. The clinical benefits of in-silico modelling advances linked with evolving 3D reconstruction and visualization applications are further documented. In conclusion, integrative analytics approaches driven by the associated research branches highlighted in this study promise to revolutionize imaging informatics as known today across the healthcare continuum, for both radiology and digital pathology applications.
The latter is projected to enable informed, more accurate diagnosis, timely prognosis, and effective treatment planning, underpinning precision medicine.

Item: An improved method for murine laser-induced choroidal neovascularization lesion quantification from optical coherence tomography images (Elsevier, 2022-08-02)
Jensen, Nathan R.; Lambert-Cheatham, Nathan; Hartman, Gabriella D.; Muniyandi, Anbukkarasi; Park, Bomina; Sishtla, Kamakshi; Corson, Timothy W. (Ophthalmology, School of Medicine)

Laser-induced choroidal neovascularization (L-CNV) in murine models is a standard method for assessing therapies, genetics, and mechanisms relevant to the blinding eye disease neovascular or "wet" age-related macular degeneration. The ex vivo evaluation of these lesions involves confocal microscopy analysis. In vivo evaluation via optical coherence tomography (OCT) has previously been established and allows longitudinal assessment of lesion development. However, producing robust data may require evaluating many lesions, which can be a slow, arduous process. A prior, manual method for quantifying these lesions as ellipsoids from orthogonal OCT images was effective but time-consuming. We therefore developed an OCT lesion quantification method that is simplified, streamlined, and less time-consuming.

Item: Automated image classification via unsupervised feature learning by K-means (2015-07-09)
Karimy Dehkordy, Hossein; Dundar, Mehmet Murat; Song, Fengguang; Xia, Yuni

Research on image classification has grown rapidly in the field of machine learning. Many methods have already been implemented for image classification. Among these methods, the best results have been reported by neural network-based techniques. One of the most important steps in automated image classification is feature extraction, which comprises two parts: feature construction and feature selection.
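The K-means feature-learning idea named in this entry's title — learn a dictionary of patch centroids without labels, then describe an image by how its patches distribute over that dictionary — can be sketched as follows. This is an illustration of the general technique, not the thesis's pipeline; the patch size, number of centroids, and function names are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_patches(img, size=4, n=100):
    """Sample n random size x size patches from a 2-D image, flattened."""
    H, W = img.shape
    ys = rng.integers(0, H - size, n)
    xs = rng.integers(0, W - size, n)
    return np.stack([img[y:y + size, x:x + size].ravel()
                     for y, x in zip(ys, xs)])

def kmeans(X, k=8, n_iter=10):
    """Plain Lloyd k-means; the learned centroids act as an unsupervised
    patch dictionary (no labels needed)."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        d = ((X[:, None, :] - C[None]) ** 2).sum(-1)  # squared distances
        lbl = d.argmin(1)
        for j in range(k):
            if (lbl == j).any():
                C[j] = X[lbl == j].mean(0)
    return C

def encode(img, C, size=4, n=100):
    """Represent an image as a histogram of nearest-centroid assignments
    over its patches -- a bag-of-visual-words feature vector that any
    off-the-shelf classifier can consume."""
    P = extract_patches(img, size, n)
    d = ((P[:, None, :] - C[None]) ** 2).sum(-1)
    hist = np.bincount(d.argmin(1), minlength=len(C)).astype(float)
    return hist / hist.sum()
```

Unlike a deep network, this pipeline has only a handful of interpretable knobs (patch size, k), which is exactly the configuration advantage the abstract goes on to argue for.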
Many methods for feature extraction exist, but the best ones are related to deep-learning approaches such as network-in-network or deep convolutional network algorithms. Deep learning focuses on levels of abstraction, finding higher levels of abstraction from the previous level through multiple hidden layers. The two main problems with deep-learning approaches are speed and the number of parameters that must be configured: small changes or poor parameter selection can alter the results completely or even make them worse. Tuning these parameters is usually impractical for ordinary users without access to supercomputers, because one must run the algorithm repeatedly and adjust the parameters according to the results obtained, a process that can be very time-consuming. This thesis addresses the speed and configuration issues found with traditional deep-network approaches. Some traditional methods of unsupervised learning are used to build an automated image-classification approach that takes less time both to configure and to run.

Item: Digital Image Analysis Tools Developed by the Indiana O'Brien Center (Frontiers Media, 2021-12-16)
Dunn, Kenneth W. (Medicine, School of Medicine)

The scale and complexity of images collected in biological microscopy have grown enormously over the past 30 years. The development and commercialization of multiphoton microscopy has promoted a renaissance of intravital microscopy, providing a window into cell biology in vivo. New methods of optical sectioning and tissue clearing now enable biologists to characterize entire organs at subcellular resolution. New methods of multiplexed imaging support simultaneous localization of forty or more probes at a time. Exploiting these exciting new techniques has increasingly required biomedical researchers to master procedures of image analysis that were once the specialized province of imaging experts.
A primary goal of the Indiana O'Brien Center has been to develop robust and accessible image analysis tools for biomedical researchers. Here we describe biomedical image analysis software developed by the Indiana O'Brien Center over the past 25 years.

Item: Editorial: Proceedings of the 2021 Indiana O'Brien Center Microscopy Workshop (Frontiers Media, 2022-05-02)
Dunn, Kenneth W.; Hall, Andrew M.; Molitoris, Bruce A. (Medicine, School of Medicine)

Item: Improved Robustness for Deep Learning-based Segmentation of Multi-Center Myocardial Perfusion MRI Datasets Using Data Adaptive Uncertainty-guided Space-time Analysis (ArXiv, 2024-08-09)
Yalcinkaya, Dilek M.; Youssef, Khalid; Heydari, Bobak; Wei, Janet; Merz, Noel Bairey; Judd, Robert; Dharmakumar, Rohan; Simonetti, Orlando P.; Weinsaft, Jonathan W.; Raman, Subha V.; Sharif, Behzad (Medicine, School of Medicine)

Background: Fully automatic analysis of myocardial perfusion MRI datasets enables rapid and objective reporting of stress/rest studies in patients with suspected ischemic heart disease. Developing deep learning techniques that can analyze multi-center datasets despite limited training data and variations in software (pulse sequence) and hardware (scanner vendor) is an ongoing challenge. Methods: Datasets from 3 medical centers acquired at 3T (n = 150 subjects; 21,150 first-pass images) were included: an internal dataset (inD; n = 95) and two external datasets (exDs; n = 55) used for evaluating the robustness of the trained deep neural network (DNN) models against differences in pulse sequence (exD-1) and scanner vendor (exD-2). A subset of inD (n = 85) was used for training/validation of a pool of DNNs for segmentation, all using the same spatiotemporal U-Net architecture and hyperparameters but with different parameter initializations. We employed a space-time sliding-patch analysis approach that automatically yields a pixel-wise "uncertainty map" as a byproduct of the segmentation process.
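The idea of using a pixel-wise uncertainty map to choose among ensemble segmentations can be sketched in a few lines. The binary-entropy criterion below is a plausible stand-in for illustration only, not necessarily the exact selection rule of the DAUGS method; the function names are ours.

```python
import numpy as np

def pixel_uncertainty(prob):
    """Binary-entropy uncertainty map for a sigmoid probability map:
    maximal at p = 0.5, zero as p approaches 0 or 1."""
    p = np.clip(prob, 1e-7, 1 - 1e-7)  # guard against log(0)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def select_best(prob_maps):
    """Given probability maps from each member of a DNN pool for the same
    test case, return the index of the member whose segmentation is, on
    average, the least uncertain."""
    scores = [pixel_uncertainty(p).mean() for p in prob_maps]
    return int(np.argmin(scores))
```

The appeal of such a criterion is that it needs no ground truth at test time: the uncertainty map falls out of the segmentation itself, so the "best" pool member can be picked per case.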
In our approach, dubbed Data Adaptive Uncertainty-Guided Space-time (DAUGS) analysis, a given test case is segmented by all members of the DNN pool, and the resulting uncertainty maps are leveraged to automatically select the "best" solution from the pool. For comparison, we also trained a DNN using the established approach with the same settings (hyperparameters, data augmentation, etc.). Results: The proposed DAUGS analysis approach performed similarly to the established approach on the internal dataset (Dice score for the testing subset of inD: 0.896 ± 0.050 vs. 0.890 ± 0.049; p = n.s.), whereas it significantly outperformed it on the external datasets (Dice for exD-1: 0.885 ± 0.040 vs. 0.849 ± 0.065, p < 0.005; Dice for exD-2: 0.811 ± 0.070 vs. 0.728 ± 0.149, p < 0.005). Moreover, the number of image series with "failed" segmentation (defined as having myocardial contours that include bloodpool or are noncontiguous in ≥1 segment) was significantly lower for the proposed vs. the established approach (4.3% vs. 17.1%, p < 0.0005). Conclusions: The proposed DAUGS analysis approach has the potential to improve the robustness of deep learning methods for segmentation of multi-center stress perfusion datasets with variations in pulse sequence, site location, or scanner vendor.

Item: Machine Vision Assisted In Situ Ichthyoplankton Imaging System (2013-07-12)
Iyer, Neeraj; Tsechpenakis, Gavriil; Raje, Rajeev; Tuceryan, Mihran; Fang, Shiaofen

Recently there has been much effort in developing systems for sampling and automatically classifying plankton from the oceans. Existing methods assume the specimens have already been precisely segmented, or aim at analyzing images containing a single specimen (extracting its features and/or recognizing specimens as single in-focus targets in small images). The resolution of the existing systems is limiting.
Our goal is to develop automated, very-high-resolution image sensing of critically important yet under-sampled components of the planktonic community by addressing both the physical sensing system (e.g., camera, lighting, depth of field) and the crucial image extraction and recognition routines. The objective of this thesis is to develop a framework that aims at (i) detecting and segmenting all organisms of interest automatically, directly from the raw data, while filtering out noise and out-of-focus instances; (ii) extracting the best features from the images; and (iii) identifying and classifying the plankton species. Our approach focuses on utilizing the full computational power of a multicore system by implementing a parallel programming approach that can process large volumes of high-resolution plankton images obtained from our newly designed imaging system, the In Situ Ichthyoplankton Imaging System (ISIIS). We compare some of the widely used segmentation methods, with emphasis on accuracy and speed, to find the one that works best on our data. We design a robust, scalable, fully automated system for high-throughput processing of the ISIIS imagery.

Item: Spatial Transcriptomic Analysis Reveals Associations between Genes and Cellular Topology in Breast and Prostate Cancers (MDPI, 2022-10-04)
Alsaleh, Lujain; Li, Chen; Couetil, Justin L.; Ye, Ze; Huang, Kun; Zhang, Jie; Chen, Chao; Johnson, Travis S. (Biostatistics and Health Data Science, Richard M. Fairbanks School of Public Health)

Background: Cancer is the leading cause of death worldwide, with breast and prostate cancer the most common among women and men, respectively. Gene expression and image features are independently prognostic of patient survival, but until the advent of spatial transcriptomics (ST), it was not possible to determine how the gene expression of cells relates to their spatial relationships (i.e., topology).
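The core operation of tying gene expression to spatial topology — correlating each gene's per-spot expression with an image topological feature and keeping the genes that track it — can be sketched as below. The rank-correlation variant (no tie correction), the threshold, and the helper names are our illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation computed as Pearson on ranks
    (simple argsort ranks; no correction for ties)."""
    rx = x.argsort().argsort().astype(float)
    ry = y.argsort().argsort().astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

def topology_associated_genes(expr, itf, thresh=0.8):
    """expr: spots x genes expression matrix; itf: one image topological
    feature value per spot. Return the indices of genes whose expression
    is strongly rank-correlated (|rho| >= thresh) with the ITF."""
    return [g for g in range(expr.shape[1])
            if abs(spearman(expr[:, g], itf)) >= thresh]
```

In the paper's setting this screen would be repeated over all 700 ITFs, after which the flagged gene sets feed into functional enrichment analysis.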
Methods: We identify topology-associated genes (TAGs) that correlate with 700 image topological features (ITFs) in breast and prostate cancer ST samples. Genes and ITFs are independently clustered and correlated with each other. Themes among genes correlated with ITFs are investigated by functional enrichment analysis. Results: Overall, TAGs corresponding to the extracellular matrix (ECM) and Collagen Type I Trimer gene ontology terms are common to both prostate and breast cancer. In breast cancer specifically, we identify the ZAG-PIP Complex as a TAG. In prostate cancer, we identify distinct TAGs that are enriched for GI dysmotility and the IgA immunoglobulin complex. We identified TAGs in every ST slide regardless of cancer type. Conclusions: These TAGs are enriched for ontology terms, illustrating their biological relevance to our image topological features and their potential utility in diagnostic and prognostic models.

Item: Video anatomy: spatial-temporal video profile (2014-07-31)
Cai, Hongyuan; Zheng, Jiang Yu; Tuceryan, Mihran; Popescu, Voicu Sebastian; Tricoche, Xavier; Prabhakar, Sunil; Gorman, William J.

A massive number of videos are uploaded to video websites, and smooth video browsing, editing, retrieval, and summarization are in demand. Most videos employ several types of camera operations for expanding the field of view, emphasizing events, and expressing cinematic effect. To digest the heterogeneous videos in video websites and databases, video clips are profiled into a 2D image scroll containing both spatial and temporal information for video preview. The video profile is visually continuous, compact, scalable, and indexed to each frame. This work analyzes camera kinematics, including zoom, translation, and rotation, and categorizes camera actions as combinations of these. An automatic video summarization framework is proposed and developed.
After conventional video clip segmentation, and video segmentation for smooth camera operations, the global flow field under all camera actions is investigated for profiling various types of video. A new algorithm is designed to extract the major flow direction and convergence factor using condensed images. This work then proposes a uniform scheme to segment video clips and sections, sample the video volume across the major flow, and compute the flow convergence factor, in order to obtain an intrinsic scene space less influenced by camera ego-motion. A motion-blur technique is also used to render dynamic targets in the profile. The resulting video profile can be displayed in a video track to guide access to video frames, help video editing, and facilitate applications such as surveillance, visual archiving of environments, video retrieval, and online video preview.
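The kind of camera-kinematics analysis this last entry describes — zoom appearing as radially diverging/converging flow, translation as a dominant uniform flow — can be sketched as a crude heuristic on a dense flow field. This illustrates the concept only; the thresholds and function name are our assumptions, not the thesis's actual kinematic model.

```python
import numpy as np

def classify_camera_motion(flow, zoom_eps=0.05, pan_eps=0.1):
    """Label the dominant camera action from a dense flow field of shape
    (H, W, 2), flow[y, x] = (dy, dx). Zoom shows up as radial flow
    (positive/negative mean divergence about the image center), while a
    pan/translation shows up as a dominant uniform flow vector."""
    H, W, _ = flow.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # displacement of each pixel from the image center
    r = np.stack([ys - (H - 1) / 2, xs - (W - 1) / 2], axis=-1)
    radial = (flow * r).sum(-1).mean()   # > 0: flow points outward
    mean_t = flow.reshape(-1, 2).mean(0)  # average translation component
    if radial > zoom_eps:
        return "zoom-in"
    if radial < -zoom_eps:
        return "zoom-out"
    if np.linalg.norm(mean_t) > pan_eps:
        return "pan"
    return "static"
```

A real system would also need to separate rotation and handle mixed actions, which is precisely where the thesis's convergence-factor analysis goes further than this sketch.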