How Does Bayesian Noisy Self-Supervision Defend Graph Convolutional Networks?

Date
2022-08
Language
American English
Found At
Springer
Abstract

In recent years, it has been shown that, compared to other contemporary machine learning models, graph convolutional networks (GCNs) achieve superior performance on the node classification task. However, two potential issues threaten the robustness of GCNs: label scarcity and adversarial attacks. Intensive studies aim to strengthen the robustness of GCNs from three perspectives: self-supervision-based methods, adversarial-based methods, and detection-based methods. Yet, all of the above-mentioned methods can barely handle both issues simultaneously. In this paper, we hypothesize that noisy supervision acts as a kind of self-supervised learning and propose a novel Bayesian graph noisy self-supervision model, namely GraphNS, to address both issues. Extensive experiments demonstrate that GraphNS significantly enhances node classification against both label scarcity and adversarial attacks. This enhancement proves to generalize over four classic GCNs and is superior to competing methods across six public graph datasets.

Cite As
Zhuang, J., & Hasan, M. A. (2022). How Does Bayesian Noisy Self-Supervision Defend Graph Convolutional Networks? Neural Processing Letters, 54(4), 2997–3018. https://doi.org/10.1007/s11063-022-10750-8
Journal
Neural Processing Letters
Type
Article
Version
Author's manuscript