Image Classification on NXP i.MX RT1060 using Ultra-thin MobileNet DNN

dc.contributor.author: Desai, Saurabh Ravindra
dc.contributor.author: Sinha, Debjyoti
dc.contributor.author: El-Sharkawy, Mohamed
dc.contributor.department: Electrical and Computer Engineering, School of Engineering and Technology
dc.date.accessioned: 2021-02-05T21:30:46Z
dc.date.available: 2021-02-05T21:30:46Z
dc.date.issued: 2020-01
dc.description.abstract: Deep Neural Networks (DNNs) play a significant role in computer vision applications such as image classification, object recognition, and detection. They have achieved great success in this field, but the main obstacles to deploying a DNN model on an Autonomous Driver Assistance System (ADAS) platform are limited memory, constrained resources, and limited power. MobileNet is a very efficient and light DNN model developed mainly for embedded and computer vision applications, but researchers still face many constraints and challenges in deploying the model on resource-constrained microprocessor units. Design Space Exploration of such CNN models can make them more memory-efficient and less computationally intensive. We have used the Design Space Exploration technique to modify the baseline MobileNet V1 model and develop an improved version of it. This paper proposes seven modifications to the existing baseline architecture to develop a new and more efficient model. We use separable convolution layers and the width multiplier hyperparameter, alter the channel depth, and eliminate layers with the same output shape to reduce the size of the model. We achieve good overall accuracy by using the Swish activation function, the Random Erasing technique, and a well-chosen optimizer. We call the new model Ultra-thin MobileNet; it has a much smaller size, fewer parameters, lower average computation time per epoch, and negligible overfitting, with slightly higher accuracy than the baseline MobileNet V1. Generally, when an attempt is made to make an existing model more compact, the accuracy decreases; here, however, there is no trade-off between accuracy and model size. The proposed model is developed with the intent of making it deployable on a real-time autonomous development platform with limited memory and power, while keeping the size of the model within 5 MB.
It could be successfully deployed on the NXP i.MX RT1060 ADAS platform due to its small model size of 3.9 MB. It classifies images of different classes in real time with an accuracy of more than 90% when run on the above-mentioned ADAS platform. We have trained and tested the proposed architecture from scratch on the CIFAR-10 dataset.
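The abstract's size reductions come mainly from two standard MobileNet ideas: replacing standard convolutions with depthwise separable convolutions, and thinning channel counts with the width multiplier α. As a rough illustrative sketch (not the authors' code; the layer sizes below are arbitrary examples), the parameter savings and the Swish activation can be computed directly:

```python
import math

def swish(x):
    # Swish activation: x * sigmoid(x)
    return x * (1.0 / (1.0 + math.exp(-x)))

def standard_conv_params(k, c_in, c_out):
    # Weights of a k x k standard convolution (bias ignored)
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # Depthwise (k*k*c_in) plus pointwise 1x1 (c_in*c_out) weights
    return k * k * c_in + c_in * c_out

def apply_width_multiplier(channels, alpha):
    # Width multiplier alpha uniformly thins each layer's channel count
    return max(1, int(channels * alpha))

if __name__ == "__main__":
    k, c_in, c_out = 3, 64, 128  # example layer, not from the paper
    std = standard_conv_params(k, c_in, c_out)
    sep = separable_conv_params(k, c_in, c_out)
    print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
    print(f"thinned input channels (alpha=0.75): "
          f"{apply_width_multiplier(c_in, 0.75)}")
    print(f"swish(0) = {swish(0.0)}")
```

For a 3x3 convolution the separable form cuts parameters by roughly a factor of 8–9, which is the kind of reduction that makes a sub-4 MB model feasible on the i.MX RT1060's limited flash.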
dc.eprint.version: Author's manuscript
dc.identifier.citation: Desai, S. R., Sinha, D., & El-Sharkawy, M. (2020). Image Classification on NXP i.MX RT1060 using Ultra-thin MobileNet DNN. 2020 10th Annual Computing and Communication Workshop and Conference (CCWC), 0474–0480. https://doi.org/10.1109/CCWC47524.2020.9031165
dc.identifier.uri: https://hdl.handle.net/1805/25169
dc.language.iso: en
dc.publisher: IEEE
dc.relation.isversionof: 10.1109/CCWC47524.2020.9031165
dc.relation.journal: 2020 10th Annual Computing and Communication Workshop and Conference
dc.rights: Publisher Policy
dc.source: Author
dc.subject: deep neural networks
dc.subject: autonomous driver assistance systems
dc.subject: design space exploration
dc.title: Image Classification on NXP i.MX RT1060 using Ultra-thin MobileNet DNN
dc.type: Conference proceedings
Files:
Original bundle: Desai2020Image.pdf (830.61 KB, Adobe Portable Document Format)
License bundle: license.txt (1.99 KB, item-specific license agreed upon at submission)