Authors: Hosler, Ryan; Phillips, Tyler; Yu, Xiaoyuan; Sundar, Agnideven; Zou, Xukai; Li, Feng
Dates: 2023-02-17; 2022-04
Citation: Hosler, R., Phillips, T., Yu, X., Sundar, A., Zou, X., & Li, F. (2021). Learning Discriminative Features for Adversarial Robustness. 2021 17th International Conference on Mobility, Sensing and Networking (MSN), 303–310. https://doi.org/10.1109/MSN53354.2021.00055
ISBN: 978-1-66540-668-0
URI: https://hdl.handle.net/1805/31309
Abstract: Deep Learning models have shown image classification capabilities that can exceed human performance. However, they remain susceptible to image perturbations that a human cannot perceive. A slightly modified input, known as an Adversarial Example, results in drastically different model behavior. The use of Adversarial Machine Learning to generate Adversarial Examples remains a security threat in the field of Deep Learning, and defending against such attacks is an active area of Deep Learning security research. In this paper, we evaluate the Adversarial Robustness of discriminative loss functions. Such loss functions promote inter-class separability or intra-class compactness, so generating an Adversarial Example should be more difficult because the decision boundaries between classes are more pronounced. To test this, we conducted White-Box and Black-Box attacks on Deep Learning models trained with different discriminative loss functions, each optimized both with and without Adversarial Robustness in mind. From our experimentation, we found White-Box attacks to be effective against all models, even those trained for Adversarial Robustness, with varying degrees of effectiveness. However, state-of-the-art Deep Learning models, such as ArcFace, show significant Adversarial Robustness against Black-Box attacks when paired with adversarial defense methods. Moreover, by exploring Black-Box attacks, we demonstrate the transferability of Adversarial Examples using surrogate models optimized with different discriminative loss functions.
Language: en-US
Rights: Publisher Policy
Keywords: Adversarial machine learning; Discriminative Loss Function; Perturbation methods; Deep learning
Title: Learning Discriminative Features for Adversarial Robustness
Type: Conference proceedings
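The abstract's notion of an Adversarial Example (a slightly modified input that flips the model's prediction) can be illustrated with a minimal gradient-sign sketch. This is not the paper's attack implementation; it applies the well-known Fast Gradient Sign Method idea to a toy logistic-regression model, and all names here (`w`, `b`, `fgsm_perturb`, `epsilon`) are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_input(x, y, w, b):
    # Gradient of binary cross-entropy w.r.t. the input x:
    # dL/dx = (sigmoid(w.x + b) - y) * w
    p = sigmoid(np.dot(w, x) + b)
    return (p - y) * w

def fgsm_perturb(x, y, w, b, epsilon=0.1):
    # Step in the sign direction of the loss gradient to increase the loss,
    # keeping the per-pixel change bounded by epsilon.
    g = loss_grad_wrt_input(x, y, w, b)
    return x + epsilon * np.sign(g)

# Toy example: a point correctly classified as class 1 (logit = 1.5 > 0).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

# A small bounded perturbation pushes the logit toward the wrong class.
x_adv = fgsm_perturb(x, y, w, b, epsilon=0.9)
```

The same bounded-perturbation principle underlies the image attacks discussed in the abstract; deep models simply replace the closed-form gradient with backpropagation.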
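The discriminative loss functions the abstract evaluates, such as ArcFace, enlarge the angular margin between classes on a normalized embedding sphere. The sketch below shows the general additive-angular-margin idea only; the scale `s=64.0` and margin `m=0.5` are commonly cited defaults, not values taken from this paper, and the function name is hypothetical:

```python
import numpy as np

def arcface_logits(embeddings, weights, labels, s=64.0, m=0.5):
    # ArcFace-style additive angular margin: L2-normalize features and
    # class-weight columns, add margin m to the true-class angle, scale by s.
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = np.clip(e @ w, -1.0, 1.0)      # cosine similarity to each class center
    theta = np.arccos(cos)
    rows = np.arange(len(labels))
    theta[rows, labels] += m             # widen the angle for the true class only
    return s * np.cos(theta)             # feed these logits into softmax cross-entropy

# Two embeddings perfectly aligned with their class centers:
logits = arcface_logits(np.array([[1.0, 0.0], [0.0, 1.0]]),
                        np.eye(2), np.array([0, 1]))
```

Because the true-class logit is penalized by the margin even when the embedding sits exactly on its class center, training must pull samples tighter around their centers, which is the enlarged "decision boundary" the abstract credits for Black-Box robustness.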