Browsing by Subject "convolution neural networks"
Now showing 1 - 5 of 5
Item High Performance SqueezeNext for CIFAR-10 (IEEE, 2019-07)
Duggal, Jayan Kant; El-Sharkawy, Mohamed; Electrical and Computer Engineering, School of Engineering and Technology
CNNs are the foundation of the deep learning and computer vision domains, enabling applications such as autonomous driving, face recognition, and automatic radiology image reading. However, CNNs are memory- and computation-intensive. Design space exploration (DSE) of neural networks and compression techniques have made convolutional neural networks more memory- and computation-efficient, improving CNN architectures and making them more suitable for implementation on real-time embedded systems. This paper proposes an efficient and compact CNN to improve on the performance of existing CNN architectures. The intuition behind the proposed architecture is to supplant convolution layers with a more sophisticated block module and to develop a compact architecture with competitive accuracy; it further explores the bottleneck module and the SqueezeNext basic block structure (see the block sketch below). The state-of-the-art SqueezeNext baseline architecture is used as a foundation to recreate and propose a high performance SqueezeNext architecture, which is then trained on the CIFAR-10 dataset from scratch, with all training and testing results visualized as live loss and accuracy graphs. The focus of this paper is an adaptable, flexible model for efficient CNN performance with the minimum trade-off between model accuracy, size, and speed. Finally, it concludes that CNN performance can be improved by developing an architecture for a specific dataset. The purpose of this paper is to introduce and propose High Performance SqueezeNext for CIFAR-10.

Item Residual Capsule Network (IEEE, 2019-10)
Bhamidi, Sree Bala Shruthi; El-Sharkawy, Mohamed; Electrical and Computer Engineering, School of Engineering and Technology
The Convolutional Neural Network (CNN) has been one of the most influential innovations in the field of computer vision and has brought substantial improvement to machine learning. But CNNs come with their own set of drawbacks: they need large datasets, hyperparameter tuning is nontrivial, and, importantly, they lose internal information about pose and transformation through pooling. Capsule Networks address these limitations of CNNs and have shown great improvement by computing the pose and transformation of the image. Separately, deeper networks are more powerful than shallow networks but also more difficult to train; simply adding layers to deepen a network leads to the vanishing gradient problem. Residual Networks introduce skip connections to ease training and have shown that they can achieve good accuracy at considerable depth. Putting the best of Capsule Networks and Residual Networks together, we present the Residual Capsule Network, a framework that uses the best features of both. In the proposed model, the conventional convolutional layer in the Capsule Network is replaced by skip connections, as in Residual Networks, to decrease the complexity of the baseline Capsule Network and the seven-ensemble Capsule Network. We trained our model on the MNIST and CIFAR-10 datasets and noted a significant decrease in the number of parameters compared to the baseline models.
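For the High Performance SqueezeNext entry above: a minimal PyTorch sketch of a SqueezeNext-style basic block of the kind the abstract builds on. The two-stage 1x1 squeeze, separable 3x1/1x3 convolutions, 1x1 expansion, and identity skip follow the public SqueezeNext design; the exact channel ratios in the paper's variant may differ.

    import torch.nn as nn

    def conv_bn_relu(c_in, c_out, kernel, padding=0):
        # Convolution followed by batch norm and ReLU, the unit used throughout.
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel, padding=padding, bias=False),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )

    class SqueezeNextBlock(nn.Module):
        # Two-stage 1x1 bottleneck -> separable 3x1/1x3 convs -> 1x1 expand,
        # with an identity skip connection around the whole body.
        def __init__(self, channels):
            super().__init__()
            self.body = nn.Sequential(
                conv_bn_relu(channels, channels // 2, 1),                    # first squeeze
                conv_bn_relu(channels // 2, channels // 4, 1),               # second squeeze
                conv_bn_relu(channels // 4, channels // 2, (3, 1), (1, 0)),  # 3x1 conv
                conv_bn_relu(channels // 2, channels // 2, (1, 3), (0, 1)),  # 1x3 conv
                conv_bn_relu(channels // 2, channels, 1),                    # expand back
            )

        def forward(self, x):
            return self.body(x) + x  # skip connection preserves gradient flow

Replacing a plain 3x3 convolution with this block trades one large kernel for several cheap ones, which is what lets the architecture shrink while keeping accuracy competitive.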
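For the Residual Capsule Network entry above: the skip connection it borrows from Residual Networks is the core mechanism. A generic residual block in PyTorch with illustrative layer sizes (not the paper's exact model, which feeds such blocks into capsule layers):

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        # y = relu(F(x) + x): the identity path lets gradients bypass F,
        # which is what makes deeper networks trainable.
        def __init__(self, channels):
            super().__init__()
            self.f = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(channels),
            )

        def forward(self, x):
            return torch.relu(self.f(x) + x)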
Item RMNv2: Reduced Mobilenet V2 for CIFAR10 (IEEE, 2020-01)
Ayi, Maneesh; El-Sharkawy, Mohamed; Electrical and Computer Engineering, School of Engineering and Technology
In this paper, we develop a new architecture called Reduced Mobilenet V2 (RMNv2) for the CIFAR10 dataset. The baseline architecture of our network is Mobilenet V2, of which RMNv2 is an architecturally modified version. The proposed model has 1.06 million parameters in total, 52.2% fewer than the baseline model. The overall accuracy of RMNv2 on CIFAR10 is 92.4%, 1.9% lower than the baseline model. The architectural modifications involve heterogeneous kernel-based convolutions, the Mish activation (see the sketch below), and other changes. We also include a data augmentation technique called AutoAugment that contributes to increasing the accuracy of the model. These modifications make the model suitable for deployment on resource-constrained platforms such as embedded and mobile devices, for real-time applications like autonomous vehicles and object recognition.

Item Shallow SqueezeNext: An Efficient & Shallow DNN (IEEE, 2019-09)
Duggal, Jayan Kant; El-Sharkawy, Mohamed; Electrical and Computer Engineering, School of Engineering and Technology
CNNs have achieved great success in many applications, but the major design hurdles for deploying them on driver assistance systems (ADAS) are limited computation, memory, and power budgets. Recently, there has been greater exploration of small DNN architectures such as SqueezeNet and SqueezeNext. In this paper, the proposed Shallow SqueezeNext architecture for driver assistance systems achieves a better model size with good accuracy and speed compared to the baseline SqueezeNet and SqueezeNext architectures. The proposed architecture is compact, efficient, and flexible in terms of model size and accuracy, with minimal trade-offs and little penalty. Shallow SqueezeNext uses the SqueezeNext architecture as its motivation and foundation. It was developed with the intention of deployment on a real-time autonomous system platform and of keeping the model size under 5 MB. Due to its extremely small model size of 0.370 MB, a competitive model accuracy of 82.44%, and decent training and testing speeds of 7 seconds, it can be successfully deployed on ADAS, driver assistance systems, or a real-time autonomous system platform such as BlueBox 2.0 by NXP. The proposed Shallow SqueezeNext architecture is trained and tested from scratch on the CIFAR-10 dataset to develop a dataset-specific trained model.
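For the RMNv2 entry above: among the named modifications, the Mish activation is easy to show concretely. A minimal sketch (where exactly it replaces the baseline's activations is not specified in this listing):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Mish(nn.Module):
        # Mish(x) = x * tanh(softplus(x)): a smooth, non-monotonic activation
        # often used as a drop-in replacement for ReLU.
        def forward(self, x):
            return x * torch.tanh(F.softplus(x))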
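For the Shallow SqueezeNext entry above: reported model sizes can be sanity-checked by converting a parameter count into megabytes. A small helper assuming 32-bit float weights (the actual serialization may differ); the 97,000-parameter figure below is only the count that would produce roughly 0.370 MB under this assumption, not a number taken from the paper:

    def model_size_mb(num_params: int, bytes_per_param: int = 4) -> float:
        # Rough on-disk size if every parameter is stored as a 32-bit float.
        return num_params * bytes_per_param / (1024 ** 2)

    print(model_size_mb(97_000))  # ~0.37, in the ballpark of the reported 0.370 MB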
Item Squeeze-and-Excitation SqueezeNext: An Efficient DNN for Hardware Deployment (IEEE, 2020-01)
Chappa, Ravi Teja N. V. S.; El-Sharkawy, Mohamed; Electrical and Computer Engineering, School of Engineering and Technology
Convolutional neural networks are used in the field of autonomous driving vehicles and driver assistance systems (ADAS) and have achieved great success; before them, traditional machine learning algorithms served driver assistance systems. Currently, there is great exploration of architectures like MobileNet, SqueezeNext, and SqueezeNet, which has improved CNN architectures and made them more suitable for implementation on real-time embedded systems. This paper proposes an efficient and compact CNN to improve on the performance of existing CNN architectures. The intuition behind the proposed architecture is to supplant convolution layers with a more sophisticated block module and to develop a compact architecture with competitive accuracy; it further explores the bottleneck module and the SqueezeNext basic block structure. The state-of-the-art SqueezeNext baseline architecture is used as a foundation to recreate and propose a high performance SqueezeNext architecture, which is then trained on the CIFAR-10 dataset from scratch, with all training and testing results visualized as live loss and accuracy graphs. The focus of this paper is an adaptable, flexible model for efficient CNN performance with the minimum trade-off between model accuracy, size, and speed. With a model size of 0.595 MB, an accuracy of 92.60%, and a satisfactory training and validation speed of 9 seconds, this model can be deployed on a real-time autonomous system platform such as BlueBox 2.0 by NXP.
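For the Squeeze-and-Excitation SqueezeNext entry above: the squeeze-and-excitation module in the title recalibrates channel responses. A minimal sketch of a standard SE block in PyTorch (the reduction ratio of 16 is the common default, not necessarily the paper's choice):

    import torch.nn as nn

    class SEBlock(nn.Module):
        # Squeeze: global-average-pool each channel to a single number.
        # Excite: two FC layers produce per-channel scales in (0, 1),
        # which reweight the input feature map channel by channel.
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):
            n, c, _, _ = x.shape
            w = self.fc(x.mean(dim=(2, 3)))  # (N, C) channel descriptor
            return x * w.view(n, c, 1, 1)    # channel-wise reweighting

In a SqueezeNext-style network such a block would typically sit at the end of each basic block, scaling its output channels before the skip addition, though the paper's exact placement is not given in this listing.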