IU Indianapolis ScholarWorks

Browsing by Subject "Convolutional neural networks"

Now showing 1 - 6 of 6
  • Assessment of Deep Learning Methods for Differentiating Autoimmune Disorders in Ultrasound Images
    (Medical University Publishing House Craiova, 2021) Vasile, Corina Maria; Udriştoiu, Anca Loredana; Ghenea, Alice Elena; Padureanu, Vlad; Udriştoiu, Ştefan; Gruionu, Lucian Gheorghe; Gruionu, Gabriel; Iacob, Andreea Valentina; Popescu, Mihaela; Medicine, School of Medicine
    Deep learning has become an important tool in medical image analysis, with good performance in diagnosis, pattern detection, and segmentation. Ultrasound imaging offers an easy and rapid method to detect and diagnose thyroid disorders. With the help of a computer-aided diagnosis (CAD) system based on deep learning, thyroid ultrasound (US) images can be diagnosed in real time and non-invasively. This paper presents a study based on deep learning with transfer learning for differentiating thyroid ultrasound images, using image pixels and diagnosis labels as inputs. We trained, assessed, and compared two pre-trained models (VGG-19 and Inception v3) on a dataset of two types of thyroid ultrasound images: autoimmune and normal. The training dataset consisted of 615 thyroid ultrasound images, of which 415 were diagnosed as autoimmune and 200 as normal. The models were assessed on a dataset of 120 images, of which 80 were diagnosed as autoimmune and 40 as normal. Both deep learning models obtained very good results: the pre-trained VGG-19 model achieved an overall test accuracy of 98.60%, with an overall specificity of 98.94% and an overall sensitivity of 97.97%, while the Inception v3 model achieved an overall test accuracy of 96.4%, with an overall specificity of 95.58% and an overall sensitivity of 95.58%.
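    The abstract does not include code; below is a minimal sketch of the transfer-learning setup it describes, assuming a Keras workflow with an ImageNet-pretrained VGG-19 backbone. The classification head and hyperparameters are illustrative, not the paper's.

```python
# Hedged sketch: binary thyroid-ultrasound classification via transfer
# learning on VGG-19, mirroring the setup described in the abstract.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG19(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
base.trainable = False  # freeze the pre-trained convolutional backbone

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),   # head size is an assumption
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # autoimmune vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

    Swapping `tf.keras.applications.VGG19` for `InceptionV3` (with a 299 × 299 input) gives the second model the paper compares.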
  • Compressed MobileNet V3: An efficient CNN for resource constrained platforms
    (2021-05) Prasad, S. P. Kavyashree; El-Sharkawy, Mohamed; King, Brian; Rizkalla, Maher
    Computer vision is a mathematical tool formulated to extend human vision to machines. It can perform various tasks such as object classification, object tracking, motion estimation, and image segmentation, which find use in many applications, namely robotics, self-driving cars, augmented reality, and mobile applications. In contrast to the traditional technique of incorporating handcrafted features to understand images, convolutional neural networks (CNNs) are now used to perform the same function, and computer vision applications use them widely due to their stellar performance in interpreting images. Over the years there have been numerous advancements in machine learning, particularly in CNNs. However, the push for higher accuracy has increased model size and complexity, making deployment in restricted environments a challenge. Many researchers have proposed techniques to reduce the size of a CNN while retaining its accuracy, including network quantization, pruning, low-rank and sparse decomposition, and knowledge distillation; other methods develop efficient models from scratch. This thesis achieves a similar goal by applying design space exploration techniques to the latest variant of MobileNets, MobileNet V3. Using DPD blocks, an increased number of expansion filters in some layers, and the mish activation function, MobileNet V3 is reduced in size to 84.96% of the original and made 0.2% more accurate. Furthermore, it is deployed on the NXP i.MX RT1060 for image classification on the CIFAR-10 dataset.
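    The mish activation the thesis adopts has a closed form, mish(x) = x · tanh(softplus(x)). Below is a minimal PyTorch sketch of the function and of substituting it into a stock torchvision MobileNet V3; the DPD blocks and expansion-filter changes from the thesis are not reproduced here.

```python
# Hedged sketch: define mish and swap it into MobileNet V3's activations.
import torch
import torch.nn as nn
import torchvision.models as models

def mish(x: torch.Tensor) -> torch.Tensor:
    # mish(x) = x * tanh(softplus(x)); nn.Mish is the module form
    return x * torch.tanh(nn.functional.softplus(x))

# Small variant with a CIFAR-10 head (10 classes); weights untrained here.
net = models.mobilenet_v3_small(weights=None, num_classes=10)

# Replace every Hardswish activation with Mish, in place.
for module in net.modules():
    for name, child in module.named_children():
        if isinstance(child, nn.Hardswish):
            setattr(module, name, nn.Mish())
```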
  • Convolutional neural network denoising in fluorescence lifetime imaging microscopy (FLIM)
    (SPIE, 2021) Mannam, Varun; Zhang, Yide; Yuan, Xiaotong; Hato, Takashi; Dagher, Pierre C.; Nichols, Evan L.; Smith, Cody J.; Dunn, Kenneth W.; Howard, Scott; Medicine, School of Medicine
    Fluorescence lifetime imaging microscopy (FLIM) systems are limited by their slow processing speed, low signal-to-noise ratio (SNR), and expensive and challenging hardware setups. In this work, we demonstrate applying a denoising convolutional network to improve FLIM SNR. The network is integrated with an instant FLIM system that offers fast data acquisition based on analog signal processing, high SNR using high-efficiency pulse modulation, and cost-effective implementation utilizing off-the-shelf radio-frequency components. Our instant FLIM system simultaneously provides intensity, lifetime, and phasor plots in vivo and ex vivo. By applying the trained deep learning denoising model to the FLIM data, accurate FLIM phasor measurements are obtained. The enhanced phasor is then passed through the K-means clustering segmentation method, an unbiased and unsupervised machine learning technique, to separate different fluorophores accurately. Our experimental in vivo mouse kidney results indicate that introducing the deep learning image denoising model before segmentation effectively removes the noise in the phasor compared to existing methods and provides clearer segments. Hence, the proposed deep learning-based workflow provides fast and accurate automatic segmentation of fluorescence images using instant FLIM. The denoising operation benefits the segmentation when the FLIM measurements are noisy, and the clustering can effectively enhance the detection of biological structures of interest in biomedical imaging applications.
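    The segmentation step is standard K-means; a minimal scikit-learn sketch follows, using synthetic (g, s) phasor coordinates in place of real denoised FLIM data.

```python
# Hedged sketch: K-means clustering of FLIM phasor coordinates to
# separate fluorophores, as in the workflow's segmentation step.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic fluorophore populations in phasor space (g, s)
g = np.concatenate([rng.normal(0.30, 0.02, 500), rng.normal(0.70, 0.02, 500)])
s = np.concatenate([rng.normal(0.45, 0.02, 500), rng.normal(0.40, 0.02, 500)])
phasor = np.column_stack([g, s])

# Each pixel's phasor point is assigned to a fluorophore cluster;
# mapping labels back to pixel positions yields the segmentation.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(phasor)
```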
  • Domain Adaptation Tracker With Global and Local Searching
    (IEEE, 2018) Zhao, Fei; Zhang, Ting; Wu, Yi; Wang, Jinqiao; Tang, Ming; Medicine, School of Medicine
    Most convolutional neural network (CNN)-based trackers locate the target only within a local area, which makes it hard for them to recapture the target after drifting into the background. In addition, most state-of-the-art trackers spend a large amount of time training CNN-based classification networks online to adapt to the current domain. In this paper, to address these two problems, we propose a robust domain adaptation tracker based on CNNs. The proposed tracker contains three CNNs: a local location network (LL-Net), a global location network (GL-Net), and a domain adaptation classification network (DA-Net). For the former problem, if the output of the LL-Net indicates that the tracker has drifted into the background, we search for the target in a global area of the current frame using the GL-Net. For the latter problem, we propose a CNN-based DA-Net with a domain adaptation (DA) layer. By pre-training the DA-Net offline, it can adapt to the current domain by updating only the parameters of the DA layer in one training iteration when online training is triggered, which makes the tracker run five times faster than MDNet with comparable tracking performance. Experimental results show that our tracker performs favorably against state-of-the-art trackers on three popular benchmarks.
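    The key efficiency claim is that online adaptation touches only the DA layer. A PyTorch sketch of that freezing pattern follows; the network layout is hypothetical, since the paper's architecture is not reproduced here.

```python
# Hedged sketch: update only a domain adaptation (DA) layer online,
# keeping the offline pre-trained feature extractor and classifier fixed.
import torch
import torch.nn as nn

class DANet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(          # pre-trained offline
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.da_layer = nn.Linear(32, 32)       # the only part updated online
        self.classifier = nn.Linear(32, 2)      # target vs. background

    def forward(self, x):
        return self.classifier(self.da_layer(self.features(x)))

net = DANet()
for p in net.parameters():
    p.requires_grad = False
for p in net.da_layer.parameters():
    p.requires_grad = True                      # adapt only the DA layer

opt = torch.optim.SGD(net.da_layer.parameters(), lr=1e-3)
x = torch.randn(8, 3, 64, 64)                   # toy crops from the frame
y = torch.randint(0, 2, (8,))                   # 1 = target, 0 = background
nn.functional.cross_entropy(net(x), y).backward()
opt.step()                                      # the single online iteration
```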
  • RCN2: Residual Capsule Network V2
    (IEEE Xplore, 2021-06) Anilkumar, Arjun Narukkanchira; El-Sharkawy, Mohamed; Electrical and Computer Engineering, School of Engineering and Technology
    Unlike the Convolutional Neural Network (CNN), which relies on shift invariance in image processing, Capsule Networks can model hierarchical relations in depth [1]. This lets Capsule Networks stand out even though the models are enormous and their accuracy is merely comparable to CNNs one-tenth their size. The capsules in various capsule-based networks have been cumbersome due to their intricate algorithms, and recent developments in the field have helped mitigate this problem. This paper focuses on bringing one Capsule Network, the Residual Capsule Network (RCN), down to a size comparable to modern CNNs, thus restating the importance of Capsule Networks. Residual Capsule Network V2 (RCN2) is proposed as a more efficient and refined version of RCN, with 1.95 M parameters and an accuracy of 85.12% on the CIFAR-10 dataset.
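    Capsule outputs are vectors whose length encodes detection probability; the squash nonlinearity that capsule networks (including RCN-style ones) build on, introduced by Sabour et al. (2017), is easy to sketch in PyTorch:

```python
# Hedged sketch: the squash nonlinearity used by capsule layers,
# squash(s) = (|s|^2 / (1 + |s|^2)) * (s / |s|).
import torch

def squash(s: torch.Tensor, dim: int = -1, eps: float = 1e-8) -> torch.Tensor:
    sq_norm = (s * s).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

caps = torch.randn(4, 10, 16)   # batch of 4, 10 capsules, 16-D each
out = squash(caps)              # capsule lengths now lie in (0, 1)
```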
  • Stress testing deep learning models for prostate cancer detection on biopsies and surgical specimens
    (Wiley, 2025) Flannery, Brennan T.; Sandler, Howard M.; Lal, Priti; Feldman, Michael D.; Santa-Rosario, Juan C.; Pathak, Tilak; Mirtti, Tuomas; Farre, Xavier; Correa, Rohann; Chafe, Susan; Shah, Amit; Efstathiou, Jason A.; Hoffman, Karen; Hallman, Mark A.; Straza, Michael; Jordan, Richard; Pugh, Stephanie L.; Feng, Felix; Madabhushi, Anant; Pathology and Laboratory Medicine, School of Medicine
    The presence, location, and extent of prostate cancer are assessed by pathologists using H&E-stained tissue slides. Machine learning approaches can accomplish these tasks for both biopsies and radical prostatectomies. Deep learning approaches using convolutional neural networks (CNNs) have been shown to identify cancer in pathology slides, with some securing regulatory approval for clinical use. However, differences in sample processing can subtly alter the morphology between sample types, making it unclear whether deep learning algorithms will work consistently on both types of slide images. Our goal was to investigate whether morphological differences between sample types affected the performance of biopsy-trained cancer detection CNN models when applied to radical prostatectomies, and vice versa, using multiple cohorts (N = 1,000). Radical prostatectomies (N = 100) and biopsies (N = 50) were acquired from The University of Pennsylvania to train (80%) and validate (20%) a DenseNet CNN for biopsies (MB), radical prostatectomies (MR), and a combined dataset (MB+R). On a tile level, MB and MR achieved F1 scores greater than 0.88 when applied to their own sample type but less than 0.65 when applied across sample types. On a whole-slide level, each model achieved significantly better performance on its own sample type than the alternative model (p < 0.05) for all metrics. This was confirmed by external validation using digitized biopsy slide images from a clinical trial [NRG Radiation Therapy Oncology Group (RTOG); NRG/RTOG 0521, N = 750] via both qualitative and quantitative analyses (p < 0.05). A comprehensive review of model outputs revealed morphologically driven decision making that adversely affected model performance: MB appeared to be challenged by open gland structures, whereas MR appeared to be challenged by closed gland structures, indicating potential morphological variation between the training sets. These findings suggest that differences in morphology and heterogeneity necessitate more tailored, sample-specific (i.e., biopsy and surgical) machine learning models.
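    The tile-level comparison rests on the F1 score, F1 = 2PR/(P + R); a minimal scikit-learn sketch with made-up tile labels, purely to show the computation:

```python
# Hedged sketch: tile-level F1 as used to compare the biopsy-trained (MB)
# and prostatectomy-trained (MR) models; labels below are illustrative.
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [1, 1, 0, 0, 1, 0, 1, 1]   # 1 = cancer tile, 0 = benign tile
y_pred = [1, 1, 0, 1, 1, 0, 0, 1]   # a model's per-tile predictions

p = precision_score(y_true, y_pred)
r = recall_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred)
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")  # all 0.80 here
assert abs(f1 - 2 * p * r / (p + r)) < 1e-9
```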