Browsing by Subject "Perturbation methods"
Now showing 1 - 2 of 2
Item: Adversarial Attacks on Deep Temporal Point Process (IEEE, 2022)
Authors: Khorshidi, Samira; Wang, Bao; Mohler, George
Affiliation: Computer Science, Luddy School of Informatics, Computing, and Engineering
Abstract: Temporal point processes have many applications, from crime forecasting to modeling earthquake aftershock sequences. Owing to the flexibility and expressiveness of deep learning, neural network-based approaches have recently shown promise for modeling point process intensities. However, there is little research on the robustness of such models with respect to adversarial attacks and natural shocks to the underlying systems. In particular, while neural point processes may outperform simpler parametric models on in-sample tests, it remains unknown how these models perform when they encounter adversarial examples or sharp non-stationary trends. This work proposes several white-box and black-box adversarial attacks against temporal point processes modeled by deep neural networks. Extensive experiments confirm that both the predictive performance and the parametric modeling of neural point processes are vulnerable to adversarial attacks. Additionally, we evaluate the vulnerability and performance of these models in the presence of abrupt non-stationary changes, using crime data during the Covid-19 pandemic as an example.

Item: Learning Discriminative Features for Adversarial Robustness (IEEE Xplore, 2022-04)
Authors: Hosler, Ryan; Phillips, Tyler; Yu, Xiaoyuan; Sundar, Agnideven; Zou, Xukai; Li, Feng
Affiliation: Computer and Information Science, School of Science
Abstract: Deep Learning models have shown image classification capabilities that exceed human performance. However, they remain susceptible to image perturbations that a human cannot perceive. A slightly modified input, known as an Adversarial Example, can result in drastically different model behavior. The use of Adversarial Machine Learning to generate Adversarial Examples remains a security threat in the field of Deep Learning, and defending against such attacks is an actively studied area of Deep Learning security. In this paper, we examine the Adversarial Robustness of discriminative loss functions. Such loss functions promote either inter-class separation or intra-class compactness, so generating an Adversarial Example should be more difficult because the margin between different classes is larger. To test this, we conducted White-Box and Black-Box attacks on Deep Learning models trained with different discriminative loss functions, each optimized both with and without Adversarial Robustness in mind. We found White-Box attacks to be effective, to varying degrees, against all models, including those trained for Adversarial Robustness. However, state-of-the-art models such as ArcFace show significant Adversarial Robustness against Black-Box attacks when paired with adversarial defense methods. Moreover, by exploring Black-Box attacks, we demonstrate the transferability of Adversarial Examples across surrogate models optimized with different discriminative loss functions.
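The white-box attacks described in the first item perturb an event sequence so that a fitted neural point process assigns it a much lower likelihood. The sketch below is an illustrative approximation of that idea, not code from the paper: a toy intensity model and a bounded signed-gradient perturbation of event times, where the names (ToyIntensity, perturb_event_times) and all hyperparameters are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyIntensity(nn.Module):
    """Toy conditional intensity lambda(gap) = softplus(w * gap + b) over inter-event gaps."""
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.tensor(-0.5))
        self.b = nn.Parameter(torch.tensor(1.0))

    def log_likelihood(self, times: torch.Tensor) -> torch.Tensor:
        gaps = times[1:] - times[:-1]                 # inter-event gaps of a sorted event sequence
        lam = F.softplus(self.w * gaps + self.b)      # intensity evaluated at each event
        compensator = (lam * gaps).sum()              # crude piecewise-constant integral of lambda
        return lam.log().sum() - compensator

def perturb_event_times(model, times, eps=0.05, steps=10, step_size=0.01):
    """Bounded signed-gradient perturbation of event times that lowers the model's log-likelihood."""
    delta = torch.zeros_like(times, requires_grad=True)
    for _ in range(steps):
        nll = -model.log_likelihood(times + delta)    # attack objective: maximize negative log-likelihood
        nll.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()    # ascent step along the gradient sign
            delta.clamp_(-eps, eps)                   # keep the perturbation within a small budget
            delta.grad.zero_()
    return (times + delta).detach()

# Example: perturb a short event sequence against the toy model.
times = torch.tensor([0.1, 0.4, 0.9, 1.5, 2.2])
adv_times = perturb_event_times(ToyIntensity(), times)
```

In practice the intensity would come from a trained recurrent or transformer-based point process and the perturbation budget would be chosen per dataset; the toy model here only serves to make the gradient-based attack concrete.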
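For the second item, the two moving parts are a discriminative loss function and a white-box attack against the resulting classifier. The sketch below pairs a standard ArcFace-style additive angular margin loss with a one-step FGSM attack as a plausible stand-in for the attacks evaluated; it is a minimal illustration under assumed names and hyperparameters, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcMarginLoss(nn.Module):
    """ArcFace-style additive angular margin loss (illustrative sketch)."""
    def __init__(self, feat_dim: int, num_classes: int, s: float = 30.0, m: float = 0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.s, self.m = s, m

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between L2-normalized features and class-weight vectors.
        cos = F.linear(F.normalize(features), F.normalize(self.weight))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        target = F.one_hot(labels, cos.size(1)).bool()
        # Add the angular margin m only to the target-class logit, then rescale.
        logits = self.s * torch.where(target, torch.cos(theta + self.m), cos)
        return F.cross_entropy(logits, labels)

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float = 0.03) -> torch.Tensor:
    """One-step white-box attack: step along the sign of the input gradient of the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0)  # stay in the valid pixel range
    return x_adv.detach()

# Usage (assumed names): train a feature extractor with ArcMarginLoss on its embeddings,
# wrap it so model(x) returns class logits, then call fgsm_attack(model, images, labels).
```

The intuition tested in the paper is visible here: the angular margin pushes class embeddings apart, so a perturbation of the same budget eps has to move an input further, in angular terms, to cross the decision boundary.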