Browsing by Author "Plebani, Emanuele"
Now showing 1 - 4 of 4
Item: High-throughput segmentation of unmyelinated axons by deep learning (Springer Nature, 2022-01-24)
Plebani, Emanuele; Biscola, Natalia P.; Havton, Leif A.; Rajwa, Bartek; Shemonti, Abida Sanjana; Jaffey, Deborah; Powley, Terry; Keast, Janet R.; Lu, Kun‑Han; Dundar, M. Murat; Computer and Information Science, School of Science

Axonal characterizations of connectomes in healthy and disease phenotypes are surprisingly incomplete and biased because unmyelinated axons, the most prevalent type of fibers in the nervous system, have largely been ignored, as their quantitative assessment quickly becomes unmanageable as the number of axons increases. Herein, we introduce the first prototype of a high-throughput processing pipeline for automated segmentation of unmyelinated fibers. Our team used transmission electron microscopy images of vagus and pelvic nerves in rats. All unmyelinated axons in these images were individually annotated and used as labeled data to train and validate a deep instance segmentation network. We investigate the effect of different training strategies on the overall segmentation accuracy of the network. We extensively validate the segmentation algorithm both as a stand-alone segmentation tool and in an expert-in-the-loop hybrid segmentation setting, with preliminary, albeit remarkably encouraging, results. Our algorithm achieves an instance-level F1 score between 0.7 and 0.9 on various test images in the stand-alone mode and reduces expert annotation labor by 80% in the hybrid setting. We hope that this new high-throughput segmentation pipeline will enable quick and accurate characterization of unmyelinated fibers at scale and become instrumental in significantly advancing our understanding of connectomes in both the peripheral and the central nervous systems.
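For readers unfamiliar with instance-level evaluation, the sketch below shows one common way such an F1 score can be computed: predicted and ground-truth axon masks are matched one-to-one at an intersection-over-union (IoU) threshold, and matched pairs count as true positives. The greedy matching strategy, the 0.5 threshold, and the function names are illustrative assumptions, not the paper's exact evaluation protocol.

```python
# Illustrative sketch of instance-level F1 via IoU matching of masks.
# The matching strategy and threshold are assumptions for this example.
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def instance_f1(pred_masks, gt_masks, iou_thr=0.5):
    """Greedy one-to-one matching of predictions to ground truth at an IoU threshold."""
    matched_gt = set()
    tp = 0
    for p in pred_masks:
        best_iou, best_j = 0.0, None
        for j, g in enumerate(gt_masks):
            if j in matched_gt:
                continue
            iou = mask_iou(p, g)
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_j is not None and best_iou >= iou_thr:
            matched_gt.add(best_j)
            tp += 1
    fp = len(pred_masks) - tp
    fn = len(gt_masks) - tp
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
```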
Item: A machine learning toolkit for CRISM image analysis (Elsevier, 2022-04)
Plebani, Emanuele; Ehlmann, Bethany L.; Leask, Ellen K.; Fox, Valerie K.; Dundar, M. Murat; Computer and Information Science, School of Science

Hyperspectral images collected by remote sensing have played a significant role in the discovery of aqueous alteration minerals, which in turn have important implications for our understanding of the changing habitability on Mars. Traditional spectral analyses based on summary parameters have been helpful in converting hyperspectral cubes into readily visualizable three-channel maps highlighting the high-level mineral composition of the Martian terrain. These maps have been used as a starting point in the search for specific mineral phases in images. Although the amount of labor needed to verify the presence of a mineral phase in an image is quite limited for phases that emerge with high abundance, manual processing becomes laborious when the task involves determining the spatial extent of detected phases or identifying small outcrops of secondary phases that appear in only a few pixels within an image. Thanks to extensive use of remote sensing data and rover expeditions, significant domain knowledge has accumulated over the years about the mineral composition of several regions of interest on Mars, which allows us to collect the reliable labeled data required to train machine learning algorithms. In this study we demonstrate the utility of machine learning in two essential tasks for hyperspectral data analysis: nonlinear noise removal and mineral classification. We develop a simple yet effective hierarchical Bayesian model for estimating distributions of spectral patterns and extensively validate this model for mineral classification on several test images. Our results demonstrate that machine learning can be highly effective in exposing tiny outcrops of specific phases in orbital data that are not uncovered by traditional spectral analysis. We package the implemented scripts, documentation illustrating use cases, and pixel-scale training data collected from dozens of well-characterized images into a new toolkit. We hope that this new toolkit will provide advanced and effective processing tools and improve the community's ability to map compositional units in remote sensing data quickly, accurately, and at scale.
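As a rough illustration of per-pixel probabilistic classification, the snippet below fits Gaussian class-conditional densities to labeled spectra and assigns each pixel to the class with the highest posterior. This flat Gaussian classifier is only a stand-in for the toolkit's hierarchical Bayesian model; the function names and the diagonal ridge regularization are assumptions made for the sketch, not the published method.

```python
# Simplified stand-in for hierarchical Bayesian pixel classification:
# Gaussian class-conditional densities plus Bayes' rule.
import numpy as np
from scipy.stats import multivariate_normal

def fit_class_models(X_train, y_train):
    """Estimate per-class mean, covariance, and prior from labeled spectra."""
    models = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        models[c] = {
            "mean": Xc.mean(axis=0),
            # a small ridge on the diagonal keeps the covariance invertible
            "cov": np.cov(Xc, rowvar=False) + 1e-6 * np.eye(Xc.shape[1]),
            "prior": len(Xc) / len(X_train),
        }
    return models

def classify_pixels(X, models):
    """Assign each spectrum to the class with the highest posterior probability."""
    log_post = np.stack([
        multivariate_normal.logpdf(X, m["mean"], m["cov"]) + np.log(m["prior"])
        for m in models.values()
    ], axis=1)
    classes = np.array(list(models.keys()))
    return classes[np.argmax(log_post, axis=1)]
```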
Item: TextFlow: Screenless Access to Non-Visual Smart Messaging (ACM, 2021-04)
Karimi, Pegah; Plebani, Emanuele; Bolchini, Davide; Human-Centered Computing, School of Informatics and Computing

Texting relies on screen-centric prompts designed for sighted users, still posing significant barriers to people who are blind and visually impaired (BVI). Can we re-imagine texting untethered from a visual display? In an interview study, 20 BVI adults shared situations surrounding their texting practices, recurrent topics of conversation, and challenges. Informed by these insights, we introduce TextFlow: a mixed-initiative, context-aware system that generates entirely auditory message options relevant to the user's location, activity, and time of day. Users can browse and select suggested aural messages using finger taps supported by an off-the-shelf finger-worn device, without having to hold or attend to a mobile screen. In an evaluative study, 10 BVI participants successfully interacted with TextFlow to browse and send messages in screen-free mode. The experiential responses of the users shed light on the importance of bypassing the phone and accessing rapidly controllable messages at their fingertips while preserving privacy and accuracy with respect to speech- or screen-based input. We discuss how non-visual access to proactive, contextual messaging can support blind users in a variety of daily scenarios.

Item: Unraveling Complexity: Panoptic Segmentation in Cellular and Space Imagery (2024-05)
Plebani, Emanuele; Dundar, Murat; Tuceryan, Mihran; Tsechpenakis, Gavriil; Al Hasan, Mohammad

Advancements in machine learning, especially deep learning, have facilitated the creation of models capable of performing tasks previously thought impossible. This progress has opened new possibilities across diverse fields such as medical imaging and remote sensing. However, the performance of these models relies heavily on the availability of extensive labeled datasets. Collecting large amounts of labeled data poses a significant financial burden, particularly in specialized fields like medical imaging and remote sensing, where annotation requires expert knowledge. To address this challenge, various methods have been developed to reduce the need for labeled data or to leverage the information contained in unlabeled data; these include self-supervised learning, few-shot learning, and semi-supervised learning. This dissertation centers on the application of semi-supervised learning to segmentation tasks. We focus on panoptic segmentation, a task that combines semantic segmentation (assigning a class to each pixel) and instance segmentation (grouping pixels into different object instances). We choose two segmentation tasks in different domains: nerve segmentation in microscopic imaging and hyperspectral segmentation in satellite images of Mars. Our study reveals that, while direct application of methods developed for natural images may yield low performance, targeted modifications or the development of robust models can provide satisfactory results, thereby unlocking new applications such as machine-assisted annotation of new data. This dissertation begins with a challenging panoptic segmentation problem in microscopic imaging, systematically exploring model architectures to improve generalization. Subsequently, it investigates how semi-supervised learning may mitigate the need for annotated data. It then moves to hyperspectral imaging, introducing a hierarchical Bayesian model (HBM) to robustly classify single pixels. Key contributions include developing a state-of-the-art U-Net model for nerve segmentation, improving the model's ability to segment different cellular structures, evaluating semi-supervised learning methods in the same setting, and proposing an HBM for hyperspectral segmentation. The dissertation also provides a dataset of labeled CRISM pixels and mineral detections, and a software toolbox implementing the full HBM pipeline, to facilitate the development of new models.
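To make the semi-supervised idea concrete, the sketch below shows one common pseudo-labeling scheme for semantic segmentation: confident per-pixel predictions on unlabeled images are treated as training targets, and uncertain pixels are masked out of the loss. The model call, confidence threshold, and loss formulation are placeholders for illustration and do not represent the dissertation's actual training recipe.

```python
# Minimal pseudo-labeling sketch for semi-supervised segmentation.
# Threshold and loss weighting are assumptions, not the dissertation's setup.
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, unlabeled_batch, conf_thr=0.9):
    """Cross-entropy against confident per-pixel pseudo-labels; low-confidence pixels are ignored."""
    with torch.no_grad():
        probs = torch.softmax(model(unlabeled_batch), dim=1)   # (B, C, H, W)
        conf, pseudo = probs.max(dim=1)                        # (B, H, W)
        keep = conf >= conf_thr                                # confident-pixel mask
    logits = model(unlabeled_batch)
    loss = F.cross_entropy(logits, pseudo, reduction="none")   # per-pixel loss (B, H, W)
    return (loss * keep).sum() / keep.sum().clamp(min=1)
```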