ScholarWorksIndianapolis

Browsing by Subject "Pruning"

Now showing 1 - 2 of 2
    Efficient Intelligence Towards Real-Time Precision Medicine With Systematic Pruning and Quantization
    (2024-08) Karunakaran, Maneesh; Zhang, Qingxue; King, Brian; Rizkalla, Maher E.
    The widespread adoption of Convolutional Neural Networks (CNNs) in real-world applications, particularly on resource-constrained devices, is hindered by their computational complexity and memory requirements. This research investigates the application of pruning and quantization techniques to optimize CNNs for arrhythmia classification using the MIT-BIH Arrhythmia Database. By combining magnitude-based pruning, regularization-based pruning, filter map-based pruning, and quantization at different bit-widths (4-bit, 8-bit, 2-bit, and 1-bit), the study aims to develop a more compact and efficient CNN model while maintaining high accuracy. The experimental results demonstrate that these techniques effectively reduce model size and improve inference speed while maintaining accuracy, making the models suitable for devices with limited resources. The findings highlight the potential of these optimization techniques for real-time applications in mobile health monitoring and edge computing, paving the way for broader adoption of deep learning in resource-limited environments.
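The two building blocks this abstract combines can be illustrated with a minimal NumPy sketch. The function names and the uniform symmetric quantizer below are illustrative assumptions, not the thesis's exact scheme:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Magnitude-based pruning: zero out the smallest-magnitude
    fraction of weights (illustrative sketch)."""
    flat = np.abs(weights).flatten()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return weights * (np.abs(weights) > threshold)

def quantize(weights, bits):
    """Uniform symmetric quantization to a given bit-width
    (an assumed scheme; the thesis does not specify the quantizer)."""
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(weights)) / levels
    q = np.clip(np.round(weights / scale), -levels, levels)
    return q * scale  # dequantized values on the coarse grid

w = np.array([[1.0, -2.0], [3.0, -4.0]])
pruned = magnitude_prune(w, 0.5)   # half the weights zeroed
quantized = quantize(w, 4)         # 4-bit representation
```

Pruning shrinks the effective parameter count while quantization shrinks the bits per surviving parameter, which is why the two compose well for edge deployment.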
    Pruning Convolution Neural Network (SqueezeNet) for Efficient Hardware Deployment
    (2018-12) Gaikwad, Akash S.; El-Sharkawy, Mohamed; Rizkalla, Maher; King, Brian
    In recent years, deep learning models have become popular in real-time embedded applications, but hardware deployment is complicated by limited resources such as memory, computational power, and energy. Recent research in deep learning focuses on reducing the model size of the Convolution Neural Network (CNN) through compression techniques such as architectural compression, pruning, quantization, and encoding (e.g., Huffman encoding). Network pruning is one of the most promising techniques for solving these problems. This thesis proposes three methods to prune a convolution neural network (SqueezeNet) without introducing sparsity in the pruned model, decreasing the model size without a significant drop in accuracy:
    1. Pruning based on a Taylor expansion of the change in the cost function, Delta C.
    2. Pruning based on the L2 normalization of activation maps.
    3. Pruning based on a combination of methods 1 and 2.
    The proposed methods use these ranking criteria to rank the convolution kernels and prune the lower-ranked filters, after which the SqueezeNet model is fine-tuned by backpropagation. Transfer learning is used to train SqueezeNet on the CIFAR-10 dataset. Results show that the proposed approach reduces the SqueezeNet model size by 72% without a significant drop in accuracy (the optimal pruning-efficiency result). They also show that pruning based on a combination of the Taylor expansion of the cost function and the L2 normalization of activation maps achieves better pruning efficiency than either criterion alone, and that most of the pruned kernels come from mid- and high-level layers. The pruned model was deployed on BlueBox 2.0 using RTMaps software, and its performance was evaluated.
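The filter-ranking criteria described above can be sketched in NumPy. This is a simplified illustration under assumed tensor layouts (batch, filters, H, W for activations; out, in, kH, kW for conv weights), not the thesis's implementation:

```python
import numpy as np

def rank_filters_l2(activations):
    """Method 2: rank filters by the L2 norm of their activation maps.
    activations shape: (batch, filters, H, W). Returns filter indices
    ordered from weakest (prune first) to strongest."""
    norms = np.sqrt((activations ** 2).sum(axis=(0, 2, 3)))
    return np.argsort(norms)

def rank_filters_taylor(activations, gradients):
    """Method 1: Taylor-expansion criterion. |activation * gradient|,
    averaged over batch and spatial dims, approximates the change in
    the cost |Delta C| caused by removing each filter."""
    scores = np.abs(activations * gradients).mean(axis=(0, 2, 3))
    return np.argsort(scores)

def prune_filters(conv_weights, order, n_prune):
    """Drop the n lowest-ranked filters from a conv layer's weight
    tensor (out, in, kH, kW). The tensor shrinks outright, so no
    sparsity is introduced -- matching the abstract's goal."""
    keep = np.sort(order[n_prune:])
    return conv_weights[keep]
```

Because whole filters are removed rather than zeroed, the pruned layer stays dense, which is what makes it straightforward to deploy on hardware without sparse-kernel support.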
About IU Indianapolis ScholarWorks
  • Copyright © 2025 The Trustees of Indiana University