Browsing by Author "Sinha, Debjyoti"
Now showing 1 - 3 of 3
Item: Design Space Exploration of MobileNet for Suitable Hardware Deployment (2020-05)
Sinha, Debjyoti; El-Sharkawy, Mohamed; King, Brian; Rizkalla, Maher

Designing self-regulating machines that can see and comprehend the real-world objects around them is the main purpose of the AI domain. Recently, there have been marked advancements in the field of deep learning toward creating state-of-the-art DNNs for various computer vision applications. It is challenging to deploy these DNNs on resource-constrained microcontroller units, as they are often quite memory intensive. Design Space Exploration (DSE) is a technique that makes a CNN/DNN memory efficient and more flexible for deployment on resource-constrained hardware. MobileNet is a small DNN architecture designed for embedded and mobile vision, but researchers have still faced many challenges in deploying this model on resource-limited real-time processors. This thesis proposes three new DNN architectures developed using the Design Space Exploration technique, with the state-of-the-art MobileNet baseline architecture as their foundation; they are enhanced versions of the baseline MobileNet. DSE techniques such as data augmentation, architecture tuning, and architecture modification are applied to improve the baseline architecture. First, the Thin MobileNet architecture is proposed, which uses more intricate block modules than the baseline MobileNet; it is a compact, efficient, and flexible architecture with good model accuracy. To obtain more compact models, the KilobyteNet and Ultra-thin MobileNet DNN architectures are proposed. Techniques such as channel depth alteration and hyperparameter tuning are introduced, along with some of the techniques used for designing the Thin MobileNet. All the models are trained and validated from scratch on the CIFAR-10 dataset.
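MobileNet's compactness, which these architectures build on, comes from replacing standard convolutions with depthwise separable ones. A minimal sketch of the parameter-count arithmetic (the layer sizes below are illustrative, not taken from the thesis):

```python
def conv_params(k, in_ch, out_ch):
    """Weights in a standard k x k convolution (bias terms omitted)."""
    return k * k * in_ch * out_ch

def depthwise_separable_params(k, in_ch, out_ch):
    """Weights in a depthwise k x k convolution followed by a
    1x1 pointwise convolution, as used in MobileNet."""
    return k * k * in_ch + in_ch * out_ch

# Example layer: 3x3 kernel, 128 input channels, 256 output channels.
standard = conv_params(3, 128, 256)                   # 294912
separable = depthwise_separable_params(3, 128, 256)   # 33920
print(f"standard: {standard}, separable: {separable}, "
      f"ratio: {standard / separable:.1f}x")
```

For a 3x3 kernel this cuts the layer's weight count by roughly 8-9x, which is why the baseline model is already small enough to be a candidate for microcontroller deployment.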
The experimental results (training and testing) can be visualized using the live accuracy and log-loss graphs provided by the Liveloss package. Of the three, the Ultra-thin MobileNet model offers the best balance between model accuracy and model size, and it is therefore deployed on the NXP i.MX RT1060 embedded hardware unit for an image-classification application.

Item: Image Classification on NXP i.MX RT1060 using Ultra-thin MobileNet DNN (IEEE, 2020-01)
Desai, Saurabh Ravindra; Sinha, Debjyoti; El-Sharkawy, Mohamed; Electrical and Computer Engineering, School of Engineering and Technology

Deep Neural Networks play a very significant role in computer vision applications such as image classification, object recognition, and detection. They have achieved great success in this field, but the main obstacles to deploying a DNN model on an Advanced Driver-Assistance System (ADAS) platform are limited memory, constrained resources, and limited power. MobileNet is a very efficient and light DNN model developed mainly for embedded and computer vision applications, but researchers still faced many constraints and challenges in deploying the model on resource-constrained microprocessor units. Design Space Exploration of such CNN models can make them more memory efficient and less computationally intensive. We have used the Design Space Exploration technique to modify the baseline MobileNet V1 model and develop an improved version of it. This paper proposes seven modifications to the existing baseline architecture to develop a new, more efficient model. We use Separable Convolution layers and the width multiplier hyperparameter, alter the channel depth, and eliminate layers with the same output shape to reduce the size of the model. We achieve good overall accuracy by using the Swish activation function, the Random Erasing technique, and choosing a good optimizer.
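The Swish activation mentioned above is simple to state: x multiplied by its sigmoid. A minimal stand-alone sketch:

```python
import math

def swish(x: float) -> float:
    """Swish activation: x * sigmoid(x) = x / (1 + exp(-x)).
    Smooth and non-monotonic; unlike ReLU, it lets small
    negative values pass through attenuated instead of zeroing them."""
    return x / (1.0 + math.exp(-x))

print(swish(0.0))   # 0.0
print(swish(1.0))   # ~0.731
print(swish(-1.0))  # ~-0.269
```

In a network, the same formula is applied element-wise to each layer's pre-activations in place of ReLU.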
We call the new model Ultra-thin MobileNet; it has a much smaller size, fewer parameters, lower average computation time per epoch, and negligible overfitting, with slightly higher accuracy than the baseline MobileNet V1. Generally, when an attempt is made to make an existing model more compact, the accuracy decreases; here, however, there is no trade-off between accuracy and model size. The proposed model is developed with the intent of making it deployable on a real-time autonomous development platform with limited memory and power, keeping the size of the model within 5 MB. It could be successfully deployed on the NXP i.MX RT1060 ADAS platform due to its small model size of 3.9 MB. Running on that platform, it classifies images of different classes in real time with an accuracy of more than 90%. We have trained and tested the proposed architecture from scratch on the CIFAR-10 dataset.

Item: Thin MobileNet: An Enhanced MobileNet Architecture (IEEE, 2019-10)
Sinha, Debjyoti; El-Sharkawy, Mohamed; Electrical and Computer Engineering, School of Engineering and Technology

In the field of computer, mobile, and embedded vision, Convolutional Neural Networks (CNNs) are deep learning models that play a significant role in object detection and recognition. MobileNet is one such efficient, lightweight model for this purpose, but there are many constraints and challenges in deploying such architectures on resource-constrained microcontroller units due to limited memory, energy, and power. Moreover, the overall accuracy of a model generally decreases when its size and total number of parameters are reduced by methods such as pruning or deep compression. This paper proposes three hybrid MobileNet architectures which have improved accuracy along with reduced size, fewer layers, lower average computation time, and much less overfitting compared to the baseline MobileNet V1.
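Pruning, one of the compression methods contrasted above with architectural redesign, can be illustrated as zeroing the smallest-magnitude weights. This is a hypothetical minimal sketch of magnitude pruning in general, not the authors' method:

```python
def magnitude_prune(weights, fraction):
    """Zero out the given fraction of weights with the smallest
    absolute values -- the simplest form of pruning. Removing
    weights this way is what typically costs accuracy, which is
    why the paper redesigns the architecture instead."""
    n_prune = int(len(weights) * fraction)
    # Indices of the n_prune smallest-magnitude weights.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    drop = set(order[:n_prune])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.002]
print(magnitude_prune(w, 0.5))  # [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```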
The reason behind developing these models is to have a variant of the existing MobileNet model that is easily deployable on memory-constrained MCUs. We name the model having the smallest size (9.9 MB) Thin MobileNet. We achieve an increase in accuracy by replacing the standard non-linear activation function ReLU with Drop Activation and by introducing the Random Erasing regularization technique in place of dropout. The model size is reduced by using Separable Convolutions instead of the Depthwise Separable Convolutions used in the baseline MobileNet. Later on, we make our model shallower by eliminating a few unnecessary layers without a drop in accuracy. The experimental results are based on training the model on the CIFAR-10 dataset.
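Random Erasing, used above in place of dropout, augments training data by occluding a random rectangle of each image. A simplified numpy sketch (square patches only, no aspect-ratio sampling as in the original formulation; parameter values are illustrative):

```python
import numpy as np

def random_erase(image, scale=(0.02, 0.2), p=0.5, rng=None):
    """With probability p, overwrite a random square patch of the
    image (covering a random fraction of its area drawn from
    `scale`) with uniform random noise. Returns a new array."""
    rng = rng or np.random.default_rng()
    if rng.random() > p:
        return image
    h, w = image.shape[:2]
    area = rng.uniform(*scale) * h * w
    side = min(max(1, int(np.sqrt(area))), h, w)
    top = rng.integers(0, h - side + 1)
    left = rng.integers(0, w - side + 1)
    out = image.copy()
    # Fill the patch with noise; keeps channel dims if present.
    out[top:top + side, left:left + side] = rng.random(
        (side, side) + image.shape[2:])
    return out

img = np.zeros((32, 32, 3))
erased = random_erase(img, p=1.0)
print((erased != 0).any())  # a random patch was filled in
```

Applied per image during training, this forces the network not to rely on any single region of the input, which is the regularization effect the abstract credits for the accuracy gain.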