CNN-based network has Network Anisotropy -work harder to learn rotated feature than non-rotated feature
dc.contributor.author | Dale, Ashley S. | |
dc.contributor.author | Qiu, Mei | |
dc.contributor.author | Christopher, Lauren | |
dc.contributor.author | Krogg, Wen | |
dc.contributor.author | William, Albert | |
dc.contributor.department | Electrical and Computer Engineering, School of Engineering and Technology | |
dc.date.accessioned | 2023-10-16T13:44:30Z | |
dc.date.available | 2023-10-16T13:44:30Z | |
dc.date.issued | 2022-10 | |
dc.description.abstract | Successful object identification and classification in a generic Convolutional Neural Network (CNN) depends on object orientation. We expect CNN-based architectures to work harder to learn a rotated version of a feature than to learn the same feature in its default orientation. We name this phenomenon “Network Anisotropy”. A data set of 6000 RGB and grayscale images was created in which the rotated orientation of a feature was predetermined and evenly distributed across four classes: 0°, 30°, 60°, and 90°. Four ResNet classifier architectures (18, 34, 50, 101) were trained, and the confidence scores were used to represent prediction accuracy. The results show that in all networks, training performance lags by several epochs for the 30° and 60° rotation predictions compared to the 0° and 90° rotations, indicating a quantifiable network anisotropy. Because 0° and 90° both lie along a single rectilinear axis that coincides with the convolutional kernel of the CNN, we expect the classifier to do better on these two classes than on the 30° and 60° classes. This work confirms that CNN architectures may have weaker performance based on feature orientation alone, independent of the feature distribution within the data set or the correlation of features within an image. | |
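Note: the following is a minimal illustrative sketch of the kind of experiment the abstract describes (a synthetic rotated-feature dataset and a ResNet rotation classifier), not the authors' actual code; the dataset class, feature shape, and training settings below are assumptions for illustration only.

# Hypothetical sketch: synthetic rotated-feature dataset + ResNet-18 rotation-class classifier.
# The paper's real data-generation and training pipeline is not specified in this record.
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
from torchvision import models, transforms
from PIL import Image, ImageDraw

ANGLES = [0, 30, 60, 90]  # the four rotation classes named in the abstract

class RotatedBarDataset(Dataset):
    """Grayscale images of a single bar feature rotated to one of four angles."""
    def __init__(self, n_per_class=100, size=64):
        self.samples = []
        for label, angle in enumerate(ANGLES):
            for _ in range(n_per_class):
                img = Image.new("L", (size, size), 0)
                draw = ImageDraw.Draw(img)
                # draw a vertical bar, then rotate it to the class angle
                draw.rectangle([size // 2 - 2, 8, size // 2 + 2, size - 8], fill=255)
                self.samples.append((img.rotate(angle), label))
        self.to_tensor = transforms.Compose([
            transforms.Grayscale(num_output_channels=3),  # ResNet expects 3 channels
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        img, label = self.samples[idx]
        return self.to_tensor(img), label

def train(epochs=5):
    loader = DataLoader(RotatedBarDataset(), batch_size=32, shuffle=True)
    model = models.resnet18(num_classes=len(ANGLES))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
        # Logging per-class softmax confidence here, epoch by epoch, would show whether
        # the 0°/90° classes are learned earlier than the 30°/60° classes (the anisotropy).
    return model

if __name__ == "__main__":
    train()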
dc.eprint.version | Author's manuscript | |
dc.identifier.citation | Dale, A. S., Qiu, M., Christopher, L., Krogg, W., & William, A. (2022). CNN-based network has Network Anisotropy -work harder to learn rotated feature than non-rotated feature. 2022 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), 1–5. https://doi.org/10.1109/AIPR57179.2022.10092224 | |
dc.identifier.uri | https://hdl.handle.net/1805/36328 | |
dc.language.iso | en_US | |
dc.publisher | IEEE | |
dc.relation.isversionof | 10.1109/AIPR57179.2022.10092224 | |
dc.relation.journal | 2022 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) | |
dc.rights | Publisher Policy | |
dc.source | Author | |
dc.subject | deep-learning | |
dc.subject | convolutional neural network | |
dc.subject | rotation invariance | |
dc.subject | image processing | |
dc.title | CNN-based network has Network Anisotropy -work harder to learn rotated feature than non-rotated feature | |
dc.type | Conference proceedings |