Learning Discriminative Features for Adversarial Robustness

dc.contributor.author: Hosler, Ryan
dc.contributor.author: Phillips, Tyler
dc.contributor.author: Yu, Xiaoyuan
dc.contributor.author: Sundar, Agnideven
dc.contributor.author: Zou, Xukai
dc.contributor.author: Li, Feng
dc.contributor.department: Computer and Information Science, School of Science
dc.date.accessioned: 2023-02-17T21:49:26Z
dc.date.available: 2023-02-17T21:49:26Z
dc.date.issued: 2022-04
dc.description.abstract: Deep Learning models have shown image classification capabilities that can exceed human performance. However, they remain susceptible to image perturbations that a human cannot perceive. A slightly modified input, known as an Adversarial Example, can produce drastically different model behavior. The use of Adversarial Machine Learning to generate Adversarial Examples remains a security threat in the field of Deep Learning; defending against such attacks is therefore an active area of Deep Learning security research. In this paper, we evaluate the Adversarial Robustness of discriminative loss functions. Such loss functions promote either inter-class separability or intra-class compactness, so generating an Adversarial Example should be more difficult: the decision boundary between different classes is more pronounced. To test this, we conducted White-Box and Black-Box attacks on Deep Learning models trained with different discriminative loss functions. Moreover, each discriminative loss function is optimized both with and without Adversarial Robustness in mind. From our experiments, we found White-Box attacks to be effective against all models, even those trained for Adversarial Robustness, with varying degrees of effectiveness. However, state-of-the-art Deep Learning models such as ArcFace show significant Adversarial Robustness against Black-Box attacks when paired with adversarial defense methods. Moreover, by exploring Black-Box attacks, we demonstrate the transferability of Adversarial Examples across surrogate models optimized with different discriminative loss functions.
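The abstract describes White-Box attacks that perturb an input imperceptibly yet flip the model's prediction. As an illustration only (this is not the paper's code), a minimal Fast Gradient Sign Method (FGSM) attack on a toy linear softmax classifier can be sketched in NumPy; the function names and toy weights below are hypothetical:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over logits z."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, y, W, b, eps):
    """One-step FGSM against a linear classifier with logits z = W @ x + b.

    For cross-entropy loss L, the input gradient is
    dL/dx = W.T @ (softmax(z) - one_hot(y)), so the attack takes a
    single signed gradient step: x_adv = x + eps * sign(dL/dx).
    """
    p = softmax(W @ x + b)
    one_hot = np.zeros_like(p)
    one_hot[y] = 1.0
    grad_x = W.T @ (p - one_hot)      # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)  # perturbed (adversarial) input

# Toy 2-class, 2-feature model: the clean input is classified as
# class 0; a small FGSM perturbation flips it to class 1.
W = np.array([[ 1.0, 0.0],
              [-1.0, 0.0]])
b = np.zeros(2)
x = np.array([0.2, 0.0])              # clean input, true label 0
x_adv = fgsm(x, y=0, W=W, b=b, eps=0.5)
print(np.argmax(W @ x + b), np.argmax(W @ x_adv + b))  # clean vs. adversarial class
```

Against a deep network the same idea applies, with the input gradient obtained by backpropagation; larger inter-class margins (as encouraged by discriminative losses such as ArcFace) mean a larger `eps` is needed to cross the decision boundary.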
dc.eprint.version: Author's manuscript
dc.identifier.citation: Hosler, R., Phillips, T., Yu, X., Sundar, A., Zou, X., & Li, F. (2021). Learning Discriminative Features for Adversarial Robustness. 2021 17th International Conference on Mobility, Sensing and Networking (MSN), 303–310. https://doi.org/10.1109/MSN53354.2021.00055
dc.identifier.issn: 978-1-66540-668-0
dc.identifier.uri: https://hdl.handle.net/1805/31309
dc.language.iso: en_US
dc.publisher: IEEE Xplore
dc.relation.isversionof: 10.1109/MSN53354.2021.00055
dc.relation.journal: 2021 17th International Conference on Mobility, Sensing and Networking (MSN)
dc.rights: Publisher Policy
dc.source: Author
dc.subject: Adversarial machine learning
dc.subject: Discriminative loss function
dc.subject: Perturbation methods
dc.subject: Deep learning
dc.title: Learning Discriminative Features for Adversarial Robustness
dc.type: Conference proceedings
Files

Original bundle (showing 1 of 1):
- Name: Hosler2021Learning-AAM.pdf
- Size: 410.59 KB
- Format: Adobe Portable Document Format
- Description: Article

License bundle (showing 1 of 1):
- Name: license.txt
- Size: 1.99 KB
- Description: Item-specific license agreed upon to submission