Browsing by Subject "autonomous driver assistance systems"
Image Classification on NXP i.MX RT1060 using Ultra-thin MobileNet DNN (IEEE, 2020-01)
Desai, Saurabh Ravindra; Sinha, Debjyoti; El-Sharkawy, Mohamed; Electrical and Computer Engineering, School of Engineering and Technology

Deep Neural Networks play a very significant role in computer vision applications such as image classification, object recognition, and detection. They have achieved great success in this field, but the main obstacles to deploying a DNN model on an Autonomous Driver Assistance System (ADAS) platform are limited memory, constrained resources, and limited power. MobileNet is a very efficient and light DNN model developed mainly for embedded and computer vision applications, but researchers still face many constraints and challenges in deploying the model on resource-constrained microprocessor units. Design space exploration of such CNN models can make them more memory efficient and less computationally intensive. We have used the design space exploration technique to modify the baseline MobileNet V1 model and develop an improved version of it. This paper proposes seven modifications to the existing baseline architecture to develop a new, more efficient model. We use separable convolution layers and the width multiplier hyperparameter, alter the channel depth, and eliminate layers with the same output shape to reduce the size of the model. We achieve good overall accuracy by using the Swish activation function, the Random Erasing technique, and a good choice of optimizer. We call the new model Ultra-thin MobileNet; it has a much smaller size, fewer parameters, less average computation time per epoch, and negligible overfitting, with slightly higher accuracy compared to the baseline MobileNet V1. Generally, when an attempt is made to make an existing model more compact, the accuracy decreases. Here, however, there is no trade-off between accuracy and model size.
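Two of the techniques named above can be made concrete with a short sketch. The following is illustrative only and not the paper's code: it shows the parameter savings of a depthwise separable convolution over a standard convolution, the effect of the width multiplier hyperparameter, and the Swish activation. The layer shapes are hypothetical examples, not the actual Ultra-thin MobileNet layers.

```python
import math

def standard_conv_params(k, c_in, c_out):
    # A k x k standard convolution learns k*k*c_in weights per output channel.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # Depthwise step: one k x k filter per input channel,
    # followed by a 1x1 pointwise convolution that mixes channels.
    return k * k * c_in + c_in * c_out

def apply_width_multiplier(c, alpha):
    # The width multiplier alpha (0 < alpha <= 1) thins every layer uniformly.
    return max(1, int(c * alpha))

def swish(x):
    # Swish activation: x * sigmoid(x).
    return x / (1.0 + math.exp(-x))

# Hypothetical 3x3 layer with 128 input and 256 output channels:
k, c_in, c_out = 3, 128, 256
print(standard_conv_params(k, c_in, c_out))       # 294912
print(separable_conv_params(k, c_in, c_out))      # 33920

# Thinning the same layer with alpha = 0.5:
c_in_t = apply_width_multiplier(c_in, 0.5)        # 64
c_out_t = apply_width_multiplier(c_out, 0.5)      # 128
print(separable_conv_params(k, c_in_t, c_out_t))  # 8768
```

For this example layer, the separable form needs roughly 9x fewer parameters than the standard convolution, and the width multiplier shrinks it by roughly another 4x, which is the kind of compounding reduction that makes sub-5 MB models feasible.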
The proposed model is developed with the intent of making it deployable on a real-time autonomous development platform with limited memory and power, while keeping the size of the model within 5 MB. It could be successfully deployed on the NXP i.MX RT1060 ADAS platform due to its small model size of 3.9 MB. It classifies images of different classes in real time with an accuracy of more than 90% when run on the above-mentioned ADAS platform. We have trained and tested the proposed architecture from scratch on the CIFAR-10 dataset.

Shallow SqueezeNext: An Efficient & Shallow DNN (IEEE, 2019-09)
Duggal, Jayan Kant; El-Sharkawy, Mohamed; Electrical and Computer Engineering, School of Engineering and Technology

CNNs have gained great success in many applications, but the major design hurdles for deploying a CNN on driver assistance systems or ADAS are a limited computation, memory, and power budget. Recently, there has been greater exploration of small DNN architectures, such as the SqueezeNet and SqueezeNext architectures. In this paper, the proposed Shallow SqueezeNext architecture for driver assistance systems achieves a better model size with good model accuracy and speed in comparison to the baseline SqueezeNet and SqueezeNext architectures. The proposed architecture is compact, efficient, and flexible in terms of model size and accuracy, with minimal trade-offs and penalty. Shallow SqueezeNext uses the SqueezeNext architecture as its motivation and foundation. The architecture is developed with the intention of deployment on a real-time autonomous system platform while keeping the model size below 5 MB. Due to its extremely small model size of 0.370 MB, a competitive model accuracy of 82.44%, and a decent training and testing speed of 7 seconds, it can be successfully deployed on ADAS, driver assistance systems, or a real-time autonomous system platform such as the BlueBox2.0 by NXP. The proposed Shallow SqueezeNext architecture is trained and tested from scratch on the CIFAR-10 dataset to develop a dataset-specific trained model.
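Both abstracts target a model size under 5 MB for embedded deployment. As a rough illustration (an assumption on my part, not taken from either paper), on-disk model size can be estimated from the parameter count, assuming 32-bit (4-byte) float weights; actual deployed sizes also depend on the serialization format and any quantization applied.

```python
def model_size_mb(num_params, bytes_per_param=4):
    # Estimated size in MB for float32 weights (4 bytes per parameter).
    return num_params * bytes_per_param / (1024 ** 2)

def fits_budget(num_params, budget_mb=5.0):
    # Check a parameter count against a memory budget like the 5 MB
    # target mentioned for the NXP ADAS platforms.
    return model_size_mb(num_params) <= budget_mb

# Hypothetical parameter counts, for illustration only:
print(round(model_size_mb(1_000_000), 2))  # 3.81
print(fits_budget(1_000_000))              # True
print(fits_budget(2_000_000))              # False
```

Under this estimate, a float32 model must stay near one million parameters to fit a 5 MB budget, which is why both papers lean on architectural thinning rather than training tricks alone.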