Multimodality Information Fusion for Automated Machine Translation
Article Title | Multimodality Information Fusion for Automated Machine Translation |
---|---|
ERA Journal ID | 20983 |
Article Category | Article |
Authors | Li, Lin (Author), Tayir, Turghun (Author), Han, Yifeng (Author), Tao, Xiaohui (Author) and Velasquez, Juan D. (Author) |
Journal Title | Information Fusion |
Journal Citation | 91, pp. 352-363 |
Number of Pages | 12 |
Year | 2023 |
Publisher | Elsevier |
Place of Publication | Netherlands |
ISSN | 1566-2535 (print) |
1872-6305 (electronic) | |
Digital Object Identifier (DOI) | https://doi.org/10.1016/j.inffus.2022.10.018 |
Web Address (URL) | https://www.sciencedirect.com/science/article/pii/S1566253522001877 |
Abstract | Machine translation is a popular automation approach for translating texts between different languages. Although it has traditionally focused on natural language, images can provide an additional source of information for machine translation. However, two challenges remain: (i) the lack of an effective fusion method to handle the triangular mapping between image, text, and semantic knowledge; and (ii) the limited availability of large-scale parallel corpora for training a model that generates accurate machine translations. To address these challenges, this work proposes an effective multimodality information fusion method for automated machine translation based on semi-supervised learning. The method fuses multimodal information, texts and images, to deliver automated machine translation. Specifically, the objective fuses the modalities with alignment in a multimodal attention network, which advances the method by accurately mapping text and image features to their semantic information. Moreover, semi-supervised learning is utilised for its capability to perform supervised training with a small parallel corpus on the basis of unsupervised training. Experimental results on the Multi30k dataset show the promising performance of the proposed fusion method compared with state-of-the-art approaches. |
Keywords | Multimodal fusion; Machine translation; Multimodal alignment; Semi-supervised learning |
ANZSRC Field of Research 2020 | 460307. Multimodal analysis and synthesis |
460208. Natural language processing | |
461106. Semi- and unsupervised learning | |
Public Notes | Files associated with this item cannot be displayed due to copyright restrictions. |
Byline Affiliations | Wuhan University of Technology, China |
School of Sciences | |
University of Chile, Chile | |
Institution of Origin | University of Southern Queensland |
Funding source | Australian Research Council (ARC) Grant ID DP220101360 |
https://research.usq.edu.au/item/q7w78/multimodality-information-fusion-for-automated-machine-translation
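The abstract describes fusing text and image features through a multimodal attention network. The paper itself is not reproduced here, so the following is only a minimal, generic sketch of attention-based text-image fusion of the kind the abstract names; all function names, dimensions, and the random weight matrices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multimodal_attention_fusion(text_feats, image_feats, Wq, Wk, Wv):
    """Illustrative fusion: each text token attends over image regions,
    and the resulting visual context is concatenated to the token.
    (A sketch only; not the paper's architecture.)"""
    Q = text_feats @ Wq          # queries from the text modality
    K = image_feats @ Wk         # keys from image region features
    V = image_feats @ Wv         # values from image region features
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    attn = softmax(scores, axis=-1)      # text-to-image alignment weights
    visual_context = attn @ V            # per-token visual summary
    return np.concatenate([text_feats, visual_context], axis=-1)

rng = np.random.default_rng(0)
d = 8
text = rng.standard_normal((5, d))     # 5 source-sentence tokens (assumed)
image = rng.standard_normal((49, d))   # 7x7 grid of region features (assumed)
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
fused = multimodal_attention_fusion(text, image, Wq, Wk, Wv)
print(fused.shape)  # (5, 16)
```

In this sketch the attention weights realise the text-image alignment the abstract mentions, while concatenation is one simple choice of fusion operator; the published method's exact fusion and training objectives are described in the article itself.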