Rethinking Generative Zero-Shot Learning: An Ensemble Learning Perspective for Recognising Visual Patches
Paper/Presentation Title | Rethinking Generative Zero-Shot Learning: An Ensemble Learning Perspective for Recognising Visual Patches |
---|---|
Presentation Type | Paper |
Authors | Chen, Zhi; Wang, Sen; Li, Jingjing; Huang, Zi |
Journal or Proceedings Title | Proceedings of the 28th ACM International Conference on Multimedia (MM '20) |
Journal Citation | pp. 3413-3421 |
Number of Pages | 9 |
Year | 2020 |
Publisher | Association for Computing Machinery (ACM) |
Place of Publication | United States |
ISBN | 9781450379885 |
Digital Object Identifier (DOI) | https://doi.org/10.1145/3394171.3413813 |
Web Address (URL) of Paper | https://dl.acm.org/doi/abs/10.1145/3394171.3413813 |
Web Address (URL) of Conference Proceedings | https://dl.acm.org/doi/proceedings/10.1145/3394171 |
Conference/Event | 28th ACM International Conference on Multimedia (MM '20) |
Event Details | Parent: ACM International Conference on Multimedia. Delivery: In person. Event Date: 12 to 16 Oct 2020. Event Location: Seattle, United States. Rank: A |
Abstract | Zero-shot learning (ZSL) is commonly used to address the very pervasive problem of predicting unseen classes in fine-grained image classification and other tasks. One family of solutions is to learn synthesised unseen visual samples produced by generative models from auxiliary semantic information, such as natural language descriptions. However, for most of these models, performance suffers from noise in the form of irrelevant image backgrounds. Further, most methods do not allocate a calculated weight to each semantic patch. Yet, in the real world, the discriminative power of features can be quantified and directly leveraged to improve accuracy and reduce computational complexity. To address these issues, we propose a novel framework called multi-patch generative adversarial nets (MPGAN) that synthesises local patch features and labels unseen classes with a novel weighted voting strategy. The process begins by generating discriminative visual features from noisy text descriptions for a set of predefined local patches using multiple specialist generative models. The features synthesised from each patch for unseen classes are then used to construct an ensemble of diverse supervised classifiers, each corresponding to one local patch. A voting strategy averages the probability distributions output from the classifiers and, given that some patches are more discriminative than others, a discrimination-based attention mechanism helps to weight each patch accordingly. Extensive experiments show that MPGAN has significantly greater accuracy than state-of-the-art methods. |
Keywords | generative zero-shot learning; fine-grained classification |
Contains Sensitive Content | Does not contain sensitive content |
ANZSRC Field of Research 2020 | 4602. Artificial intelligence |
Public Notes | © 2020 Association for Computing Machinery. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in MM '20: Proceedings of the 28th ACM International Conference on Multimedia, https://doi.org/10.1145/3394171.3413813. |
Byline Affiliations | University of Queensland; University of Electronic Science and Technology of China, China |
https://research.usq.edu.au/item/zyx1v/rethinking-generative-zero-shot-learning-an-ensemble-learning-perspective-for-recognising-visual-patches
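The voting strategy described in the abstract, averaging the probability distributions output by the per-patch classifiers with discrimination-based weights, can be sketched as follows. This is an illustrative toy, not the paper's implementation: the function name, the fixed weights, and the numbers are all hypothetical (in MPGAN the weights come from a learned attention mechanism).

```python
import numpy as np

def weighted_vote(patch_probs, patch_weights):
    """Combine per-patch class probability distributions into one
    prediction by weighted averaging (hypothetical sketch; MPGAN
    learns the weights via a discrimination-based attention)."""
    probs = np.asarray(patch_probs, dtype=float)   # (n_patches, n_classes)
    w = np.asarray(patch_weights, dtype=float)
    w = w / w.sum()                                # normalise weights
    return w @ probs                               # weighted average, sums to 1

# Toy example: three local patches, four unseen classes.
patch_probs = [
    [0.70, 0.10, 0.10, 0.10],   # discriminative patch
    [0.25, 0.25, 0.25, 0.25],   # uninformative patch (uniform)
    [0.60, 0.20, 0.10, 0.10],   # moderately discriminative patch
]
patch_weights = [0.5, 0.1, 0.4]  # more discriminative patches get more weight

combined = weighted_vote(patch_probs, patch_weights)
predicted_class = int(np.argmax(combined))
```

Down-weighting the uniform (uninformative) patch keeps it from diluting the confident patches, which is the intuition the abstract gives for weighting patches by their discriminative power rather than voting them equally.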