Image Classification on NXP i.MX RT1060 using Ultra-thin MobileNet DNN

Date
2020-01
Language
English
Found At
IEEE
Abstract

Deep Neural Networks (DNNs) play a significant role in computer vision applications such as image classification, object recognition, and detection. They have achieved great success in this field, but the main obstacles to deploying a DNN model on an Advanced Driver Assistance System (ADAS) platform are limited memory, constrained resources, and limited power. MobileNet is an efficient, lightweight DNN model developed mainly for embedded and computer vision applications, but researchers still face many constraints and challenges in deploying it on resource-constrained microprocessors. Design space exploration of such CNN models can make them more memory-efficient and less computationally intensive. We used the design space exploration technique to modify the baseline MobileNet V1 model and develop an improved version of it. This paper proposes seven modifications to the existing baseline architecture to develop a new, more efficient model. We use separable convolution layers and the width multiplier hyperparameter, alter the channel depth, and eliminate layers with the same output shape to reduce the size of the model. We achieve good overall accuracy by using the Swish activation function, the Random Erasing technique, and a well-chosen optimizer. We call the new model Ultra-thin MobileNet; it has a much smaller size, fewer parameters, lower average computation time per epoch, and negligible overfitting, with slightly higher accuracy than the baseline MobileNet V1. Generally, when an existing model is made more compact, its accuracy decreases; here, however, there is no trade-off between accuracy and model size. The proposed model is developed with the intent of deploying it on a real-time autonomous development platform with limited memory and power, while keeping the model size under 5 MB.
It was successfully deployed on the NXP i.MX RT1060 ADAS platform thanks to its small model size of 3.9 MB. Running on this platform, it classifies images of different classes in real time with an accuracy of more than 90%. We trained and tested the proposed architecture from scratch on the CIFAR-10 dataset.
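The size-reduction and accuracy techniques named in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's exact architecture: the kernel size, channel counts, erasing probability, and area range below are illustrative assumptions, and the parameter-count formulas are the standard ones for depthwise-separable convolution and the MobileNet width multiplier.

```python
import numpy as np

def conv_params(k, c_in, c_out):
    """Parameter count of a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise k x k convolution followed by a 1 x 1 pointwise convolution,
    as in MobileNet: k*k*c_in depthwise weights + c_in*c_out pointwise weights."""
    return k * k * c_in + c_in * c_out

def apply_width_multiplier(c, alpha):
    """Thin a layer's channel count by the width multiplier alpha (0 < alpha <= 1)."""
    return max(1, int(round(c * alpha)))

def swish(x):
    """Swish activation: x * sigmoid(x)."""
    return x / (1.0 + np.exp(-x))

def random_erase(img, rng, p=0.5, area=(0.02, 0.2)):
    """Random Erasing augmentation: with probability p, overwrite a random
    square patch (illustratively sized from the given area fraction range)
    with uniform noise. `rng` is a numpy.random.Generator."""
    if rng.random() > p:
        return img
    h, w = img.shape[:2]
    side = int(np.sqrt(rng.uniform(*area) * h * w))
    side = max(1, min(side, h, w))
    y = rng.integers(0, h - side + 1)
    x = rng.integers(0, w - side + 1)
    out = img.copy()
    out[y:y + side, x:x + side] = rng.random((side, side) + img.shape[2:])
    return out

# Illustrative layer: a 3x3 convolution from 128 to 256 channels.
# Standard: 294,912 weights; separable: 33,920 (~8.7x fewer);
# separable with width multiplier alpha = 0.5: 8,768.
print(conv_params(3, 128, 256))
print(separable_conv_params(3, 128, 256))
print(separable_conv_params(3, apply_width_multiplier(128, 0.5),
                            apply_width_multiplier(256, 0.5)))
```

The arithmetic shows why these two modifications shrink the model so aggressively: the separable factorization alone cuts the weights of this layer by nearly an order of magnitude, and halving the channel widths cuts the remainder roughly four-fold again, which is how a sub-5 MB model becomes feasible.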

Cite As
Desai, S. R., Sinha, D., & El-Sharkawy, M. (2020). Image Classification on NXP i.MX RT1060 using Ultra-thin MobileNet DNN. 2020 10th Annual Computing and Communication Workshop and Conference (CCWC), 0474–0480. https://doi.org/10.1109/CCWC47524.2020.9031165
Journal
2020 10th Annual Computing and Communication Workshop and Conference
Type
Conference proceedings
Version
Author's manuscript