Multitask Learning From Augmented Auxiliary Data for Improving Speech Emotion Recognition
Article Title | Multitask Learning From Augmented Auxiliary Data for Improving Speech Emotion Recognition |
---|---|
ERA Journal ID | 200608 |
Article Category | Article |
Authors | Latif, Siddique; Rana, Rajib; Khalifa, Sara; Jurdak, Raja; Schuller, Björn W. |
Journal Title | IEEE Transactions on Affective Computing |
Journal Citation | 14 (4), pp. 3164-3176 |
Number of Pages | 13 |
Year | 2023 |
Publisher | IEEE (Institute of Electrical and Electronics Engineers) |
Place of Publication | United States |
ISSN | 1949-3045 |
Digital Object Identifier (DOI) | https://doi.org/10.1109/TAFFC.2022.3221749 |
Web Address (URL) | https://ieeexplore.ieee.org/document/9947296 |
Abstract | Despite the recent progress in speech emotion recognition (SER), state-of-the-art systems lack generalisation across different conditions. A key underlying reason for poor generalisation is the scarcity of emotion datasets, which is a significant roadblock to designing robust machine learning (ML) models. Recent works in SER focus on utilising multitask learning (MTL) methods to improve generalisation by learning shared representations. However, most of these studies propose MTL solutions with the requirement of meta labels for auxiliary tasks, which limits the training of SER systems. This paper proposes an MTL framework (MTL-AUG) that learns generalised representations from augmented data. We utilise augmentation-type classification and unsupervised reconstruction as auxiliary tasks, which allow training SER systems on augmented data without requiring any meta labels for auxiliary tasks. The semi-supervised nature of MTL-AUG allows for the exploitation of the abundant unlabelled data to further boost the performance of SER. We comprehensively evaluate the proposed framework in the following settings: (1) within corpus, (2) cross-corpus and cross-language, (3) noisy speech, and (4) adversarial attacks. Our evaluations using the widely used IEMOCAP, MSP-IMPROV, and EMODB datasets show improved results compared to existing state-of-the-art methods. |
Keywords | Australia; Convolutional neural networks; Data models; Emotion recognition; Multi task learning; Noise measurement; representation learning; speech emotion recognition; Task analysis; Training |
Related Output | Is part of: Deep Representation Learning for Speech Emotion Recognition |
ANZSRC Field of Research 2020 | 461106. Semi- and unsupervised learning; 461103. Deep learning |
Public Notes | File reproduced in accordance with the copyright policy of the publisher/author. This article is part of a UniSQ Thesis by publication. See Related Output. |
Byline Affiliations | University of Southern Queensland; Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia; Queensland University of Technology; Imperial College London, United Kingdom; University of Augsburg, Germany |
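
The abstract describes the MTL-AUG objective only at a high level: a main SER task trained jointly with two auxiliary tasks (augmentation-type classification and unsupervised reconstruction) whose labels come for free. The following is a minimal sketch of such a setup, assuming a PyTorch-style shared encoder; the class and function names, the architecture, and the loss weights `alpha`/`beta` are all hypothetical illustrations, not the authors' implementation.

```python
# Illustrative sketch only: hypothetical names, architecture, and loss weights,
# not the published MTL-AUG code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultitaskSER(nn.Module):
    """Shared encoder with an emotion head plus two auxiliary heads."""
    def __init__(self, n_mels=64, hidden=128, n_emotions=4, n_aug_types=3):
        super().__init__()
        # Shared representation learner over (batch, n_mels, time) features.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_mels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
        )
        self.emotion_head = nn.Linear(hidden, n_emotions)  # main (supervised) task
        self.aug_head = nn.Linear(hidden, n_aug_types)     # auxiliary: which augmentation was applied?
        self.decoder = nn.Linear(hidden, n_mels)           # auxiliary: reconstruct time-averaged input

    def forward(self, x):
        z = self.encoder(x)
        return self.emotion_head(z), self.aug_head(z), self.decoder(z)

def mtl_aug_loss(model, x, y_aug, y_emotion=None, alpha=0.5, beta=0.5):
    """Joint loss; the emotion term is skipped for unlabelled batches."""
    emo_logits, aug_logits, recon = model(x)
    # Augmentation-type labels come for free from the augmentation pipeline.
    loss = alpha * F.cross_entropy(aug_logits, y_aug)
    # Unsupervised reconstruction of the time-averaged spectrogram frame.
    loss = loss + beta * F.mse_loss(recon, x.mean(dim=-1))
    if y_emotion is not None:  # emotion labels exist only for the labelled subset
        loss = loss + F.cross_entropy(emo_logits, y_emotion)
    return loss
```

Under this sketch, unlabelled but augmented audio still contributes gradients through the two auxiliary terms, which is the property the abstract attributes to MTL-AUG's semi-supervised training.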
https://research.usq.edu.au/item/yzvz0/multitask-learning-from-augmented-auxiliary-data-for-improving-speech-emotion-recognition