Defense Strategies Toward Model Poisoning Attacks in Federated Learning: A Survey

dc.contributor.author: Wang, Zhilin
dc.contributor.author: Kang, Qiao
dc.contributor.author: Zhang, Xinyi
dc.contributor.author: Hu, Qin
dc.contributor.department: Computer and Information Science, School of Science
dc.date.accessioned: 2023-11-17T22:16:09Z
dc.date.available: 2023-11-17T22:16:09Z
dc.date.issued: 2022-04
dc.description.abstract: Advances in distributed machine learning can empower future communications and networking. The emergence of federated learning (FL) has provided an efficient framework for distributed machine learning, which, however, still faces many security challenges. Among them, model poisoning attacks have a significant impact on the security and performance of FL. Given that there have been many studies focusing on defending against model poisoning attacks, it is necessary to survey the existing work and provide insights to inspire future research. In this paper, we first classify defense mechanisms for model poisoning attacks into two categories: evaluation methods for local model updates and aggregation methods for the global model. Then, we analyze some of the existing defense strategies in detail. We also discuss some potential challenges and future research directions. To the best of our knowledge, we are the first to survey defense methods for model poisoning attacks in FL.
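As an illustration of the second category named in the abstract (robust aggregation methods for the global model), the following is a minimal sketch of coordinate-wise median aggregation, one well-known robust aggregation rule; the function name and data shapes here are illustrative, not from the surveyed paper:

```python
import numpy as np

def median_aggregate(updates):
    """Coordinate-wise median over client model updates.

    updates: list of 1-D NumPy arrays, one flattened update per client.
    Unlike plain averaging, a minority of poisoned updates cannot
    arbitrarily shift any single coordinate of the aggregate.
    """
    stacked = np.stack(updates)        # shape: (num_clients, num_params)
    return np.median(stacked, axis=0)  # element-wise median across clients

# Example: two honest clients near 1.0, one poisoned client far away.
honest = [np.array([1.0, 1.1]), np.array([0.9, 1.0])]
poisoned = np.array([100.0, -100.0])
print(median_aggregate(honest + [poisoned]))  # → [1. 1.]
```

The median here absorbs the single outlier completely, whereas a plain mean would be dragged far from the honest values.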
dc.eprint.version: Author's manuscript
dc.identifier.citation: Wang, Z., Kang, Q., Zhang, X., & Hu, Q. (2022). Defense Strategies Toward Model Poisoning Attacks in Federated Learning: A Survey. 2022 IEEE Wireless Communications and Networking Conference (WCNC), 548–553. https://doi.org/10.1109/WCNC51071.2022.9771619
dc.identifier.uri: https://hdl.handle.net/1805/37131
dc.language.iso: en_US
dc.publisher: IEEE
dc.relation.isversionof: 10.1109/WCNC51071.2022.9771619
dc.relation.journal: 2022 IEEE Wireless Communications and Networking Conference (WCNC)
dc.rights: Publisher Policy
dc.source: Author
dc.subject: Federated learning
dc.subject: security
dc.subject: model poisoning attacks
dc.subject: defense
dc.title: Defense Strategies Toward Model Poisoning Attacks in Federated Learning: A Survey
dc.type: Conference proceedings
Files
Original bundle
Name: Wang2022Defense-NSFAAM.pdf
Size: 160.56 KB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.99 KB
Description: Item-specific license agreed upon to submission