Authors: Duggal, Jayan Kant; El-Sharkawy, Mohamed
Date Available: 2021-02-05
Date Accessioned: 2021-02-05
Date Issued: 2019-07
Citation: Duggal, J. K., & El-Sharkawy, M. (2019). High Performance SqueezeNext for CIFAR-10. 2019 IEEE National Aerospace and Electronics Conference (NAECON), 285–290. https://doi.org/10.1109/NAECON46414.2019.9058217
Handle: https://hdl.handle.net/1805/25168

Abstract: CNNs are the foundation of the deep learning and computer vision domains, enabling applications such as autonomous driving, face recognition, and automatic radiology image reading. However, CNNs are memory- and computation-intensive algorithms. Design space exploration (DSE) of neural networks and compression techniques have made convolutional neural networks more memory- and computation-efficient, improving CNN architectures and making them better suited to real-time embedded systems. This paper proposes an efficient and compact CNN to ameliorate the performance of existing CNN architectures. The intuition behind the proposed architecture is to supplant convolution layers with a more sophisticated block module and to develop a compact architecture with competitive accuracy. The paper further explores the bottleneck module and the SqueezeNext basic block structure. The state-of-the-art SqueezeNext baseline architecture is used as a foundation to recreate and propose a high-performance SqueezeNext architecture. The proposed architecture is trained on the CIFAR-10 dataset from scratch, and all training and testing results are visualized with live loss and accuracy graphs. The focus of this paper is an adaptable and flexible model for efficient CNN performance that achieves a minimal tradeoff between model accuracy, size, and speed. Finally, the conclusion is drawn that CNN performance can be improved by developing an architecture for a specific dataset.
The purpose of this paper is to introduce and propose High Performance SqueezeNext for CIFAR-10.
Language: en
Rights: Publisher Policy
Keywords: convolution neural networks; deep neural networks; design space exploration
Title: High Performance SqueezeNext for CIFAR-10
Type: Conference proceedings
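To illustrate why the abstract's bottleneck-block approach yields a compact model, the following back-of-envelope sketch compares the weight count of a plain 3x3 convolution against a SqueezeNext-style block. The channel ratios used here (C → C/2 → C/4, separable 3x1 and 1x3 convolutions, then a 1x1 expansion back to C) follow the basic block described in the original SqueezeNext paper, not the specific high-performance variant this paper proposes; biases and batch-norm parameters are ignored for simplicity.

```python
def conv_params(c_in, c_out, kh, kw):
    """Number of weights in a conv layer with a kh x kw kernel (no bias)."""
    return c_in * c_out * kh * kw

def standard_3x3(c):
    # One ordinary 3x3 convolution, C channels in, C channels out.
    return conv_params(c, c, 3, 3)

def squeezenext_block(c):
    # Two-stage 1x1 squeeze, separable 3x1 / 1x3 convolutions, 1x1 expand.
    return (conv_params(c, c // 2, 1, 1)         # 1x1 squeeze: C   -> C/2
            + conv_params(c // 2, c // 4, 1, 1)  # 1x1 squeeze: C/2 -> C/4
            + conv_params(c // 4, c // 2, 3, 1)  # 3x1 conv:    C/4 -> C/2
            + conv_params(c // 2, c // 2, 1, 3)  # 1x3 conv:    C/2 -> C/2
            + conv_params(c // 2, c, 1, 1))      # 1x1 expand:  C/2 -> C

if __name__ == "__main__":
    c = 64  # illustrative channel width
    print(standard_3x3(c))       # 36864
    print(squeezenext_block(c))  # 9216 -- roughly a 4x reduction
```

For a 64-channel stage, the block replaces 36,864 weights with 9,216, showing how supplanting plain convolution layers with such block modules shrinks the model while preserving a 3x3 receptive field via the 3x1/1x3 pair.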