Browsing by Subject "deep neural network"
Now showing 1 - 3 of 3
Item: Attribute-Aware Loss Function for Accurate Semantic Segmentation Considering the Pedestrian Orientations (J-Stage, 2020)
Sulistiyo, Mahmud Dwi; Kawanishi, Yasutomo; Deguchi, Daisuke; Ide, Ichiro; Hirayama, Takatsugu; Zheng, Jiang-Yu; Murase, Hiroshi; Computer and Information Science, School of Science
Numerous applications such as autonomous driving, satellite imagery sensing, and biomedical imaging use computer vision as an important tool for perception tasks. Intelligent Transportation Systems (ITS) must precisely recognize and locate objects in a scene from sensor data, and semantic segmentation is one of the computer vision methods intended for such tasks. However, existing semantic segmentation tasks label each pixel with only a single object class. Recognizing object attributes, e.g., pedestrian orientation, is more informative and supports better scene understanding. We therefore propose a method that performs semantic segmentation and pedestrian attribute recognition simultaneously, introducing an attribute-aware loss function that can be applied to an arbitrary base model. Furthermore, a re-annotation of the existing Cityscapes dataset enriches the ground-truth labels with pedestrian orientation attributes. We implement the proposed method and compare the experimental results with existing methods. Attribute-aware semantic segmentation outperforms the baseline methods in both the traditional object segmentation task and the expanded attribute detection task.
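The abstract above describes an attribute-aware loss that can be attached to an arbitrary base segmentation model, but it does not spell out the formulation. The following is a minimal sketch of one plausible reading in PyTorch, assuming the base model has a per-pixel class head plus an extra orientation head and assuming a weighting factor lambda_attr; the head layout, the masking scheme, and the weight are illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn.functional as F

def attribute_aware_loss(class_logits, attr_logits, class_target, attr_target,
                         pedestrian_id, lambda_attr=1.0, ignore_index=255):
    """Hypothetical attribute-aware loss: per-pixel class cross-entropy plus an
    orientation cross-entropy evaluated only on pedestrian pixels.

    class_logits: (N, C, H, W) semantic-class scores from the base model
    attr_logits:  (N, A, H, W) orientation scores from an assumed extra head
    class_target: (N, H, W) ground-truth class labels
    attr_target:  (N, H, W) ground-truth orientation labels (valid on pedestrians)
    """
    # Standard semantic-segmentation term over all labelled pixels.
    seg_loss = F.cross_entropy(class_logits, class_target, ignore_index=ignore_index)

    # Orientation term restricted to pedestrian pixels; everything else is ignored.
    pedestrian_mask = class_target == pedestrian_id
    if pedestrian_mask.any():
        masked_attr_target = attr_target.clone()
        masked_attr_target[~pedestrian_mask] = ignore_index
        attr_loss = F.cross_entropy(attr_logits, masked_attr_target, ignore_index=ignore_index)
    else:
        attr_loss = class_logits.new_zeros(())  # no pedestrians in the batch

    return seg_loss + lambda_attr * attr_loss
```

In such a setup the orientation head would typically be a small extra convolutional branch on the base segmentation network, and lambda_attr would be tuned on a validation split.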
Item: Real-time Implementation of RMNv2 Classifier in NXP Bluebox 2.0 and NXP i.MX RT1060 (IEEE, 2020-08)
Ayi, Maneesh; El-Sharkawy, Mohamed; Electrical and Computer Engineering, School of Engineering and Technology
Vision- and image-based Advanced Driver Assistance Systems (ADAS) are widely used in vehicles because they rely on computer vision algorithms, such as object detection, traffic sign recognition, vehicle control, and collision warning, to support safe and intelligent driving. Deploying these algorithms directly on resource-constrained devices, such as mobile and embedded platforms, is difficult. Reduced MobileNet V2 (RMNv2) is a model specifically designed for easy deployment on embedded and mobile devices. In this paper, we implement a real-time RMNv2 image classifier on the NXP Bluebox 2.0 and the NXP i.MX RT1060. Because of its small model size of 4.3 MB, the model can be deployed successfully on these devices. The model is trained and tested on the CIFAR-10 dataset.

Item: Shallow SqueezeNext: Real Time Deployment on Bluebox2.0 with 272KB Model Size (Science, 2020-12)
Duggal, Jayan Kant; El-Sharkawy, Mohamed; Electrical and Computer Engineering, School of Engineering and Technology
The significant challenges for deploying CNNs/DNNs on ADAS are limited computation and memory resources and very limited efficiency. Design space exploration of CNNs/DNNs, training and testing from scratch, hyperparameter tuning, and implementation with different optimizers contributed to the efficiency and performance improvements of the Shallow SqueezeNext architecture. The architecture is computationally efficient, inexpensive, and requires minimal memory resources, achieving better model size and speed than counterparts such as AlexNet, VGGnet, SqueezeNet, and SqueezeNext when trained and tested from scratch on datasets such as CIFAR-10 and CIFAR-100. It achieves a smallest model size of 272 KB with a model accuracy of 82% and a model speed of 9 seconds per epoch on the CIFAR-10 dataset, and a best accuracy of 91.41%, best model size of 0.272 MB, and best model speed of 4 seconds per epoch. Memory resources are critical for real-time systems and platforms because memory is usually quite limited. To verify that Shallow SqueezeNext can be deployed on a real-time platform, the Bluebox 2.0 by NXP was used. The Bluebox 2.0 deployment of the Shallow SqueezeNext architecture achieved a model accuracy of 90.50%, a model size of 8.72 MB, and a model speed of 22 seconds per epoch. Another version of Shallow SqueezeNext performed better, attaining a model size of 0.5 MB with a model accuracy of 87.30% and a model speed of 11 seconds per epoch when trained and tested from scratch on the CIFAR-10 dataset.
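The model-size figures quoted in the two deployment items above (4.3 MB, 272 KB / 0.272 MB, 0.5 MB) follow directly from parameter count multiplied by bytes per weight. Neither abstract gives the exact layer configuration, so the snippet below is only an illustrative sketch: a generic SqueezeNext-style bottleneck block (two 1x1 squeeze convolutions, separable 3x1 and 1x3 convolutions, a 1x1 expansion, and an identity shortcut) assembled into a tiny CIFAR-10 classifier, together with a helper that estimates the float32 checkpoint size. The widths, depth, and block details are assumptions, not the published Shallow SqueezeNext or RMNv2 architectures.

```python
import torch.nn as nn

class SqueezeNextBlock(nn.Module):
    """Generic SqueezeNext-style bottleneck: two 1x1 squeezes, separable 3x1/1x3
    convolutions, and a 1x1 expansion, wrapped with an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        reduced = channels // 2
        self.body = nn.Sequential(
            nn.Conv2d(channels, reduced, 1, bias=False), nn.BatchNorm2d(reduced), nn.ReLU(inplace=True),
            nn.Conv2d(reduced, reduced // 2, 1, bias=False), nn.BatchNorm2d(reduced // 2), nn.ReLU(inplace=True),
            nn.Conv2d(reduced // 2, reduced, (3, 1), padding=(1, 0), bias=False), nn.BatchNorm2d(reduced), nn.ReLU(inplace=True),
            nn.Conv2d(reduced, reduced, (1, 3), padding=(0, 1), bias=False), nn.BatchNorm2d(reduced), nn.ReLU(inplace=True),
            nn.Conv2d(reduced, channels, 1, bias=False), nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + x)

def model_size_kb(model, bytes_per_param=4):
    """Approximate size of a float32 checkpoint in kilobytes."""
    return sum(p.numel() for p in model.parameters()) * bytes_per_param / 1024

# Tiny CIFAR-10 classifier built from the block above (illustrative only).
net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1, bias=False), nn.BatchNorm2d(32), nn.ReLU(inplace=True),
    SqueezeNextBlock(32), SqueezeNextBlock(32),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)
print(f"approx. checkpoint size: {model_size_kb(net):.0f} KB")
```

Reducing channel widths and depth in this way is the main lever such architectures use to reach sub-megabyte checkpoints while still being trained from scratch on CIFAR-10.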