ScholarWorksIndianapolis

Browsing by Subject "backdoor triggers"

    GAN-inspired Defense Against Backdoor Attack on Federated Learning Systems
    (IEEE, 2023-09) Sundar, Agnideven Palanisamy; Li, Feng; Zou, Xukai; Gao, Tianchong; Hosler, Ryan; Computer Science, Luddy School of Informatics, Computing, and Engineering
    Federated Learning (FL) lets clients with limited data resources jointly build better machine-learning models without compromising their privacy. Aggregating contributions from many clients, however, means that errors present in some clients' data also propagate to every client through the combined model. Malicious entities exploit this weakness to disrupt the normal functioning of the FL system for their own gain. A backdoor attack is one such attack, in which malicious entities act as clients and implant a small trigger into the global model; once implanted, the model performs the attacker-desired task whenever the trigger is present but behaves benignly otherwise. In this paper, we build a GAN-inspired defense mechanism that can detect and defend against such backdoor triggers. The unavailability of labeled benign and backdoored models has so far prevented researchers from building detection classifiers; we tackle this problem by using the clients as generators to construct the required dataset. The discriminator resides on the server and acts as a binary classifier that detects backdoored models. We experimentally demonstrate the effectiveness of our approach on the image-based non-IID datasets CIFAR-10 and CelebA. Our prediction-probability-based defense mechanism removes all backdoor influence from the global model.
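    The abstract's core idea — a server-side discriminator that scores submitted model updates as benign or backdoored and filters the flagged ones out of aggregation — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the logistic-regression discriminator, the synthetic "trigger signature" data standing in for the clients-as-generators dataset, and all names and thresholds are assumptions.

    ```python
    # Hypothetical sketch of the server-side detection step: a binary
    # discriminator scores each flattened client update, and updates flagged
    # as backdoored are excluded from FedAvg-style aggregation.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_discriminator(updates, labels, lr=0.1, epochs=300):
        """Fit a logistic-regression discriminator on flattened updates.
        labels: 1 = backdoored, 0 = benign (dataset built by the clients
        acting as generators, per the paper's idea)."""
        w = np.zeros(updates.shape[1])
        b = 0.0
        for _ in range(epochs):
            p = sigmoid(updates @ w + b)
            w -= lr * (updates.T @ (p - labels)) / len(labels)
            b -= lr * np.mean(p - labels)
        return w, b

    def aggregate(updates, w, b, threshold=0.5):
        """Average only the updates the discriminator deems benign."""
        scores = sigmoid(updates @ w + b)
        benign = updates[scores < threshold]
        return benign.mean(axis=0), scores

    # Toy data: benign updates are small noise; backdoored updates carry a
    # shift along one coordinate, mimicking an implanted-trigger signature.
    benign = rng.normal(0.0, 0.1, size=(50, 8))
    backdoored = rng.normal(0.0, 0.1, size=(50, 8))
    backdoored[:, 0] += 2.0
    X = np.vstack([benign, backdoored])
    y = np.concatenate([np.zeros(50), np.ones(50)])

    w, b = train_discriminator(X, y)
    agg, scores = aggregate(X, w, b)
    print("mean score, benign updates:", scores[:50].mean())
    print("mean score, backdoored updates:", scores[50:].mean())
    ```

    The design point the sketch illustrates is the division of labor the abstract describes: clients supply the labeled benign/backdoored examples (here replaced by synthetic data), while the server hosts the binary classifier and drops suspicious updates before they can influence the global model.
    
    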
  • Copyright © 2025 The Trustees of Indiana University