Leveraging the Invariant Side of Generative Zero-Shot Learning
dc.contributor.author | Li, Jingjing | |
dc.contributor.author | Jing, Mengmeng | |
dc.contributor.author | Lu, Ke | |
dc.contributor.author | Ding, Zhengming | |
dc.contributor.author | Zhu, Lei | |
dc.contributor.author | Huang, Zi | |
dc.contributor.department | Electrical and Computer Engineering, School of Engineering and Technology | en_US |
dc.date.accessioned | 2021-02-12T22:16:22Z | |
dc.date.available | 2021-02-12T22:16:22Z | |
dc.date.issued | 2019 | |
dc.description.abstract | Conventional zero-shot learning (ZSL) methods generally learn an embedding, e.g., a visual-semantic mapping, to handle unseen visual samples in an indirect manner. In this paper, we take advantage of generative adversarial networks (GANs) and propose a novel method, named leveraging invariant side GAN (LisGAN), which can directly generate unseen features from random noise conditioned on semantic descriptions. Specifically, we train a conditional Wasserstein GAN in which the generator synthesizes fake unseen features from noise and the discriminator distinguishes fake features from real ones via a minimax game. Considering that one semantic description can correspond to various synthesized visual samples, and that the semantic description is, figuratively, the soul of the generated features, we introduce soul samples as the invariant side of generative zero-shot learning in this paper. A soul sample is the meta-representation of one class. It visualizes the most semantically meaningful aspects of each sample in the same category. We regularize that each generated sample (the varying side of generative ZSL) should be close to at least one soul sample (the invariant side) that shares its class label. At the zero-shot recognition stage, we propose to use two classifiers, deployed in a cascade manner, to achieve a coarse-to-fine result. Experiments on five popular benchmarks verify that our proposed approach outperforms state-of-the-art methods with significant improvements. | en_US |
dc.eprint.version | Final published version | en_US |
dc.identifier.citation | Li, J., Jing, M., Lu, K., Ding, Z., Zhu, L., & Huang, Z. (2019). Leveraging the invariant side of generative zero-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7402-7411). https://openaccess.thecvf.com/content_CVPR_2019/html/Li_Leveraging_the_Invariant_Side_of_Generative_Zero-Shot_Learning_CVPR_2019_paper.html | en_US |
dc.identifier.uri | https://hdl.handle.net/1805/25226 | |
dc.language.iso | en | en_US |
dc.publisher | IEEE | en_US |
dc.relation.isversionof | 10.1109/CVPR.2019.00758 | en_US |
dc.relation.journal | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition | en_US |
dc.rights | Publisher Policy | en_US |
dc.source | Other | en_US |
dc.subject | computer vision | en_US |
dc.subject | image recognition | en_US |
dc.subject | image representation | en_US |
dc.title | Leveraging the Invariant Side of Generative Zero-Shot Learning | en_US |
dc.type | Conference proceedings | en_US |