Adversarial Attacks on Deep Temporal Point Process

dc.contributor.authorKhorshidi, Samira
dc.contributor.authorWang, Bao
dc.contributor.authorMohler, George
dc.contributor.departmentComputer and Information Science, School of Science
dc.date.accessioned2024-04-29T11:08:41Z
dc.date.available2024-04-29T11:08:41Z
dc.date.issued2022
dc.description.abstractTemporal point processes have many applications, from crime forecasting to modeling earthquake aftershock sequences. Due to the flexibility and expressiveness of deep learning, neural network-based approaches have recently shown promise for modeling point process intensities. However, there is little research on the robustness of such models with respect to adversarial attacks and natural shocks to the underlying systems. Specifically, while neural point processes may outperform simpler parametric models on in-sample tests, how these models perform when encountering adversarial examples or sharp non-stationary trends remains unknown. This work proposes several white-box and black-box adversarial attacks against temporal point processes modeled by deep neural networks. Extensive experiments confirm that both the predictive performance and the parametric modeling of neural point processes are vulnerable to adversarial attacks. Additionally, we evaluate the vulnerability and performance of these models in the presence of non-stationary abrupt changes, using a crime dataset during the COVID-19 pandemic as an example.
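To make the abstract's setting concrete, the sketch below shows one generic way a white-box, gradient-based perturbation (an FGSM-style step) could be applied to the inter-event times of a toy RNN-based intensity model. The model, the one-step attack, and all names here (NeuralTPP, fgsm_attack, epsilon) are illustrative assumptions, not the specific architectures or attacks proposed in the paper.

```python
# Minimal sketch, assuming a toy GRU intensity model and a one-step
# gradient-sign perturbation of inter-event times; NOT the paper's method.
import torch
import torch.nn as nn

class NeuralTPP(nn.Module):
    """Toy RNN intensity model: lambda_i = softplus(w . h_i + b)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def neg_log_likelihood(self, inter_times):
        # inter_times: (batch, seq_len) positive gaps between events
        h, _ = self.rnn(inter_times.unsqueeze(-1))
        intensity = nn.functional.softplus(self.head(h)).squeeze(-1)
        # Piecewise-constant approximation of the compensator integral
        log_lik = torch.log(intensity + 1e-8) - intensity * inter_times
        return -log_lik.sum(dim=1).mean()

def fgsm_attack(model, inter_times, epsilon=0.05):
    """White-box, one-step perturbation that increases the model's NLL."""
    x = inter_times.clone().requires_grad_(True)
    loss = model.neg_log_likelihood(x)
    loss.backward()
    # Step in the gradient-sign direction, keeping the gaps positive
    return (x + epsilon * x.grad.sign()).clamp_min(1e-3).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = NeuralTPP()
    clean = torch.rand(4, 50) + 0.1  # synthetic inter-event times
    adv = fgsm_attack(model, clean)
    print(model.neg_log_likelihood(clean).item(),
          model.neg_log_likelihood(adv).item())
```

The key design choice illustrated here is that the attack perturbs the event timing itself (the model's input) rather than model weights, which is what distinguishes adversarial examples for point processes from ordinary training noise.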
dc.eprint.versionAuthor's manuscript
dc.identifier.citationS. Khorshidi, B. Wang and G. Mohler, "Adversarial Attacks on Deep Temporal Point Process," 2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA), Nassau, Bahamas, 2022, pp. 1-8, doi: 10.1109/ICMLA55696.2022.10102767
dc.identifier.urihttps://hdl.handle.net/1805/40308
dc.language.isoen_US
dc.publisherIEEE
dc.relation.isversionof10.1109/ICMLA55696.2022.10102767
dc.relation.journal2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA)
dc.rightsPublisher Policy
dc.sourceAuthor
dc.subjectDeep learning
dc.subjectCOVID-19
dc.subjectPandemics
dc.subjectPerturbation methods
dc.subjectClosed box
dc.subjectPredictive models
dc.subjectMarket research
dc.subjectPoint process
dc.subjectAdversarial attacks
dc.subjectNonparametric modeling
dc.titleAdversarial Attacks on Deep Temporal Point Process
dc.typeConference proceedings
Files
Original bundle
Name: Khorshidi2022Adversarial-NSFAAM.pdf
Size: 1.7 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.99 KB
Format: Item-specific license agreed upon to submission