Multi-Task Semi-Supervised Adversarial Autoencoding for Speech Emotion Recognition
Article Title | Multi-Task Semi-Supervised Adversarial Autoencoding for Speech Emotion Recognition |
---|---|
ERA Journal ID | 200608 |
Article Category | Article |
Authors | Latif, Siddique (Author), Rana, Rajib (Author), Khalifa, Sara (Author), Jurdak, Raja (Author), Epps, Julien (Author) and Schuller, Björn W. (Author) |
Journal Title | IEEE Transactions on Affective Computing |
Journal Citation | 13 (2), pp. 992-1004 |
Number of Pages | 13 |
Year | 2022 |
Publisher | IEEE (Institute of Electrical and Electronics Engineers) |
Place of Publication | United States |
ISSN | 1949-3045 |
Digital Object Identifier (DOI) | https://doi.org/10.1109/TAFFC.2020.2983669 |
Web Address (URL) | https://ieeexplore.ieee.org/document/9052467 |
Abstract | Despite the emerging importance of Speech Emotion Recognition (SER), state-of-the-art accuracy remains quite low and needs improvement to make commercial applications of SER viable. A key underlying reason for the low accuracy is the scarcity of emotion datasets, which is a challenge for developing any robust machine learning model in general. In this paper, we propose a solution to this problem: a multi-task learning framework that uses auxiliary tasks for which data is abundantly available. We show that utilisation of this additional data can improve the primary task of SER, for which only limited labelled data is available. In particular, we use gender identification and speaker recognition as auxiliary tasks, which allow the use of very large datasets, e.g., speaker classification datasets. To maximise the benefit of multi-task learning, we further use an adversarial autoencoder (AAE) within our framework, which has a strong capability to learn powerful and discriminative features. Furthermore, the unsupervised AAE in combination with the supervised classification networks enables semi-supervised learning, which incorporates a discriminative component into the AAE's unsupervised training pipeline. The proposed model is rigorously evaluated for categorical and dimensional emotion recognition, and in cross-corpus scenarios. Experimental results demonstrate that the proposed model achieves state-of-the-art performance on two publicly available datasets. |
Keywords | speech emotion recognition, multi-task learning, representation learning |
Related Output | |
Is part of | Deep Representation Learning for Speech Emotion Recognition |
Contains Sensitive Content | Does not contain sensitive content |
ANZSRC Field of Research 2020 | 460212. Speech recognition |
Public Notes | Files associated with this item cannot be displayed due to copyright restrictions. This article is part of a UniSQ Thesis by publication. See Related Output. |
Byline Affiliations | University of Southern Queensland; University of New South Wales; Queensland University of Technology; University of New South Wales; Imperial College London, United Kingdom |
Institution of Origin | University of Southern Queensland |
https://research.usq.edu.au/item/q5v01/multi-task-semi-supervised-adversarial-autoencoding-for-speech-emotion-recognition