ScholarWorks Indianapolis
Browsing by Author "Bhamidi, Sree Bala Shruthi"

Now showing 1 - 3 of 3
    3-Level Residual Capsule Network for Complex Datasets
    (IEEE, 2020-02) Bhamidi, Sree Bala Shruthi; El-Sharkawy, Mohamed; Electrical and Computer Engineering, School of Engineering and Technology
Convolutional Neural Networks (CNNs) have driven substantial improvements in the field of Machine Learning, but they come with their own set of drawbacks. Capsule Networks address the limitations of CNNs and show marked improvement by computing the pose and transformation of the image. Deeper networks are more powerful than shallow networks but are, at the same time, more difficult to train. Residual Networks ease training and have shown that they can deliver good accuracy at considerable depth. The Residual Capsule Network [15] combines the Residual Network and the Capsule Network. Though it performs well on simple datasets such as MNIST, the architecture can be improved to do better on complex datasets like CIFAR-10. This motivates the 3-Level Residual Capsule Network, which not only decreases the number of parameters compared to the seven-ensemble model but also outperforms the Residual Capsule Network on complex datasets.
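As a rough illustration of the residual building block involved, here is a minimal PyTorch sketch; the layer widths, the three-block stacking, and all names are assumptions for illustration, not the paper's exact architecture:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ResidualBlock(nn.Module):
        # Two 3x3 convolutions with an identity skip connection.
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

        def forward(self, x):
            out = F.relu(self.conv1(x))
            out = self.conv2(out)
            return F.relu(out + x)  # the skip connection eases training

    # Three levels of residual blocks, echoing the "3-Level" name; the
    # channel count (64) is a guess for illustration.
    stem = nn.Sequential(
        nn.Conv2d(3, 64, 3, padding=1),  # CIFAR-10 inputs have 3 channels
        ResidualBlock(64),
        ResidualBlock(64),
        ResidualBlock(64),
    )

    x = torch.randn(1, 3, 32, 32)   # one CIFAR-10-sized image
    features = stem(x)              # feature map handed on to capsule layers
    print(features.shape)           # torch.Size([1, 64, 32, 32])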
    Residual Capsule Network
    (2019-08) Bhamidi, Sree Bala Shruthi; El-Sharkawy, Mohamed; King, Brian; Rizkalla, Maher
Convolutional Neural Networks (CNNs) have driven substantial improvements in the field of Machine Learning, but they come with their own set of drawbacks. Capsule Networks address the limitations of CNNs and show marked improvement by computing the pose and transformation of the image. Deeper networks are more powerful than shallow networks but are, at the same time, more difficult to train. Residual Networks ease training and have shown that they can deliver good accuracy at considerable depth. Putting the two together, we present the Residual Capsule Network and the 3-Level Residual Capsule Network, frameworks that combine the strengths of Residual Networks and Capsule Networks. The conventional convolutional layer in the Capsule Network is replaced by skip connections, as in Residual Networks, to decrease the complexity of the Baseline Capsule Network and the seven-ensemble Capsule Network. We trained our models on the MNIST and CIFAR-10 datasets and observed a significant decrease in the number of parameters compared to the Baseline models.
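The parameter comparison reported here is straightforward to reproduce on any model; a minimal sketch, assuming PyTorch (the layer below is a stand-in, not the thesis model):

    import torch.nn as nn

    def count_parameters(model: nn.Module) -> int:
        # Total trainable parameters: the quantity the comparison uses.
        return sum(p.numel() for p in model.parameters() if p.requires_grad)

    # Stand-in example: a CapsNet-style 9x9 convolutional stem.
    stem = nn.Conv2d(1, 256, kernel_size=9)
    print(count_parameters(stem))  # 20992 = 256*1*9*9 weights + 256 biases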
    Residual Capsule Network
    (IEEE, 2019-10) Bhamidi, Sree Bala Shruthi; El-Sharkawy, Mohamed; Electrical and Computer Engineering, School of Engineering and Technology
The Convolutional Neural Network (CNN) has been one of the most influential innovations in the field of Computer Vision. CNNs have driven substantial improvements in the field of Machine Learning, but they come with their own set of drawbacks: CNNs need a large dataset, hyperparameter tuning is nontrivial, and, importantly, they lose the internal information about pose and transformation to pooling. Capsule Networks address the limitations of CNNs and show marked improvement by computing the pose and transformation of the image. On the other hand, deeper networks are more powerful than shallow networks but are, at the same time, more difficult to train; simply adding layers to make a network deep leads to the vanishing gradient problem. Residual Networks introduce skip connections to ease training and have shown that they can deliver good accuracy at considerable depth. Putting the two together, we present the Residual Capsule Network, a framework that uses the best features of both Residual and Capsule Networks. In the proposed model, the conventional convolutional layer in the Capsule Network is replaced by skip connections, as in Residual Networks, to decrease the complexity of the Baseline Capsule Network and the seven-ensemble Capsule Network. We trained our model on the MNIST and CIFAR-10 datasets and noted a significant decrease in the number of parameters compared to the Baseline models.
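The pose computation rests on the capsule nonlinearity from the original Capsule Network formulation, which these models build on; a minimal sketch in PyTorch (an illustration, not code from the paper):

    import torch

    def squash(s: torch.Tensor, dim: int = -1, eps: float = 1e-8) -> torch.Tensor:
        # v = (|s|^2 / (1 + |s|^2)) * (s / |s|): short vectors shrink toward
        # zero, long vectors approach unit length, so a capsule's length acts
        # as a probability while its direction encodes the pose.
        sq_norm = (s * s).sum(dim=dim, keepdim=True)
        return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

    caps = torch.randn(2, 10, 16)    # a batch of 10 capsules with 16-D poses
    out = squash(caps)
    print(out.norm(dim=-1).max())    # every capsule length is now below 1.0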