Authors: Sulistiyo, Mahmud Dwi; Kawanishi, Yasutomo; Deguchi, Daisuke; Ide, Ichiro; Hirayama, Takatsugu; Zheng, Jiang-Yu; Murase, Hiroshi
Date accessioned: 2022-01-06
Date available: 2022-01-06
Date issued: 2020
Citation: Sulistiyo, M. D., Kawanishi, Y., Deguchi, D., Ide, I., Hirayama, T., Zheng, J.-Y., & Murase, H. (2020). Attribute-Aware Loss Function for Accurate Semantic Segmentation Considering the Pedestrian Orientations. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E103.A(1), 231–242. https://doi.org/10.1587/transfun.2019TSP0001
URI: https://hdl.handle.net/1805/27295
Abstract: Numerous applications, such as autonomous driving, satellite image sensing, and biomedical imaging, use computer vision as an important tool for perception tasks. Intelligent Transportation Systems (ITS) in particular require precise recognition and localization of scene content in sensor data. Semantic segmentation is one of the computer vision methods intended for such tasks. However, existing semantic segmentation tasks label each pixel with only a single object class. Recognizing object attributes, e.g., pedestrian orientation, would be more informative and would support better scene understanding. We therefore propose a method that performs semantic segmentation and pedestrian attribute recognition simultaneously. We introduce an attribute-aware loss function that can be applied to an arbitrary base model. Furthermore, a re-annotation of the existing Cityscapes dataset enriches the ground-truth labels with pedestrian orientation attributes. We implement the proposed method and compare its experimental results with those of other methods. The attribute-aware semantic segmentation outperforms baseline methods in both the traditional object segmentation task and the expanded attribute detection task.
Language: en
Rights: Publisher Policy
Keywords: attribute-aware; deep neural network; pedestrian orientation
Title: Attribute-Aware Loss Function for Accurate Semantic Segmentation Considering the Pedestrian Orientations
Type: Article
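Note: The abstract describes an attribute-aware loss that augments a standard per-pixel segmentation loss with a pedestrian-orientation term and can be attached to an arbitrary base model. The sketch below illustrates that general idea in PyTorch under stated assumptions; the two-head output layout, the masking to pedestrian pixels, the attr_weight balance factor, and all identifiers are illustrative and are not the paper's exact formulation.

# Minimal sketch (assumed, not the authors' implementation) of an
# attribute-aware segmentation loss: standard cross-entropy over object
# classes plus an orientation-attribute term restricted to pedestrian pixels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributeAwareLoss(nn.Module):
    def __init__(self, pedestrian_class: int, attr_weight: float = 1.0):
        super().__init__()
        self.pedestrian_class = pedestrian_class  # label id of the pedestrian class (assumed)
        self.attr_weight = attr_weight            # balance factor between the two terms (assumed)

    def forward(self, seg_logits, attr_logits, seg_target, attr_target):
        # seg_logits:  (N, C_obj,  H, W) per-pixel object-class scores
        # attr_logits: (N, C_attr, H, W) per-pixel orientation scores
        # seg_target:  (N, H, W) object-class labels
        # attr_target: (N, H, W) orientation labels (only meaningful on pedestrian pixels)
        seg_loss = F.cross_entropy(seg_logits, seg_target)

        # Evaluate the attribute term only where the ground truth is a pedestrian.
        mask = seg_target == self.pedestrian_class
        if mask.any():
            attr_logits_flat = attr_logits.permute(0, 2, 3, 1)[mask]  # (num_ped_pixels, C_attr)
            attr_target_flat = attr_target[mask]                      # (num_ped_pixels,)
            attr_loss = F.cross_entropy(attr_logits_flat, attr_target_flat)
        else:
            attr_loss = seg_logits.new_zeros(())

        return seg_loss + self.attr_weight * attr_loss

Because the extra term is computed from the network outputs and ground truth alone, a loss of this shape can in principle be dropped onto any base segmentation model that exposes (or is extended with) an orientation output head, which matches the abstract's claim that the loss applies to an arbitrary base model.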