Leveraging the Invariant Side of Generative Zero-Shot Learning

dc.contributor.author: Li, Jingjing
dc.contributor.author: Jing, Mengmeng
dc.contributor.author: Lu, Ke
dc.contributor.author: Ding, Zhengming
dc.contributor.author: Zhu, Lei
dc.contributor.author: Huang, Zi
dc.contributor.department: Electrical and Computer Engineering, School of Engineering and Technology
dc.date.accessioned: 2021-02-12T22:16:22Z
dc.date.available: 2021-02-12T22:16:22Z
dc.date.issued: 2019
dc.description.abstract: Conventional zero-shot learning (ZSL) methods generally learn an embedding, e.g., a visual-semantic mapping, to handle unseen visual samples in an indirect manner. In this paper, we take advantage of generative adversarial networks (GANs) and propose a novel method, named leveraging invariant side GAN (LisGAN), which can directly generate unseen features from random noise conditioned on semantic descriptions. Specifically, we train a conditional Wasserstein GAN in which the generator synthesizes fake unseen features from noise and the discriminator distinguishes fake features from real ones via a minimax game. Considering that one semantic description can correspond to various synthesized visual samples, and that the semantic description, figuratively, is the soul of the generated features, we introduce soul samples as the invariant side of generative zero-shot learning in this paper. A soul sample is the meta-representation of one class. It captures the most semantically meaningful aspects of each sample in the same category. We regularize that each generated sample (the varying side of generative ZSL) should be close to at least one soul sample (the invariant side) that shares its class label. At the zero-shot recognition stage, we propose to use two classifiers, deployed in a cascade, to achieve a coarse-to-fine result. Experiments on five popular benchmarks verify that our proposed approach outperforms state-of-the-art methods by significant margins.
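The soul-sample regularizer described in the abstract can be sketched concretely: every generated feature is pulled toward the nearest same-class soul sample (a per-class meta-representation). The following is a minimal NumPy sketch under stated assumptions; the function name `soul_regularizer`, the `souls` dictionary layout, and the squared-Euclidean distance are illustrative choices, not the paper's released implementation.

```python
import numpy as np

def soul_regularizer(generated, labels, souls):
    """Illustrative sketch (not the authors' code) of the soul-sample
    regularizer: each generated feature should be close to at least one
    soul sample of its own class, so we penalize the squared distance
    to the *nearest* same-class soul sample.

    generated: (n, d) array of synthesized visual features
    labels:    length-n sequence of class indices, one per feature
    souls:     dict mapping class index -> (k, d) array of soul samples
    Returns the mean nearest-soul squared distance over all features.
    """
    total = 0.0
    for x, y in zip(generated, labels):
        # Squared distances from feature x to every soul sample of class y.
        dists = np.sum((souls[y] - x) ** 2, axis=1)
        # Only the nearest soul sample matters ("at least one" constraint).
        total += dists.min()
    return total / len(generated)
```

In training, a term like this would be added to the conditional WGAN generator loss so that synthesized features stay anchored to their class's invariant representation.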
dc.eprint.version: Final published version
dc.identifier.citation: Li, J., Jing, M., Lu, K., Ding, Z., Zhu, L., & Huang, Z. (2019). Leveraging the invariant side of generative zero-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7402-7411). https://openaccess.thecvf.com/content_CVPR_2019/html/Li_Leveraging_the_Invariant_Side_of_Generative_Zero-Shot_Learning_CVPR_2019_paper.html
dc.identifier.uri: https://hdl.handle.net/1805/25226
dc.language.iso: en
dc.publisher: IEEE
dc.relation.isversionof: 10.1109/CVPR.2019.00758
dc.relation.journal: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition
dc.rights: Publisher Policy
dc.source: Other
dc.subject: computer vision
dc.subject: image recognition
dc.subject: image representation
dc.title: Leveraging the Invariant Side of Generative Zero-Shot Learning
dc.type: Conference proceedings
Files
Original bundle: Li2019Leveraging.pdf (571.03 KB, Adobe Portable Document Format)
License bundle: license.txt (1.99 KB, Item-specific license agreed upon to submission)