From wide to deep: dimension lifting network for parameter-efficient knowledge graph embedding
Article
Article Title | From wide to deep: dimension lifting network for parameter-efficient knowledge graph embedding |
---|---|
ERA Journal ID | 17876 |
Article Category | Article |
Authors | Cai, Borui, Xiang, Yong, Gao, Longxiang, Wu, Di, Zhang, He, Jin, Jiong and Luan, Tom |
Journal Title | IEEE Transactions on Knowledge and Data Engineering |
Number of Pages | 7 |
Year | 2024 |
Publisher | IEEE (Institute of Electrical and Electronics Engineers) |
Place of Publication | United States |
ISSN | 1041-4347; 1558-2191 |
Digital Object Identifier (DOI) | https://doi.org/10.1109/TKDE.2024.3437479 |
Web Address (URL) | https://ieeexplore.ieee.org/abstract/document/10636956 |
Abstract | Knowledge graph embedding (KGE), which maps entities and relations into vector representations, is essential for downstream applications. Conventional KGE methods require high-dimensional representations to learn the complex structure of knowledge graphs, but this leads to oversized model parameters. Recent advances reduce parameters by adopting low-dimensional entity representations, while developing techniques (e.g., knowledge distillation or reinvented representation forms) to compensate for the reduced dimension. However, such operations introduce complicated computations and model designs that may not benefit large knowledge graphs. To seek a simple strategy that improves the parameter efficiency of conventional KGE models, we take inspiration from the observation that, for compositional structures, deeper neural networks require exponentially fewer parameters than wider networks to achieve comparable expressiveness. We view all entity representations as a single-layer embedding network; conventional KGE methods that adopt high-dimensional entity representations thus widen the embedding network to gain expressiveness. To achieve parameter efficiency, we instead propose a deeper embedding network for entity representations, i.e., a narrow entity embedding layer plus a multi-layer dimension lifting network (LiftNet). Experiments on three public datasets show that, by integrating LiftNet, four conventional KGE methods with 16-dimensional representations achieve link prediction accuracy comparable to the original models with 512-dimensional representations, saving 68.4% to 96.9% of parameters. |
Keywords | Knowledge graph embedding; deep neural network; parameter-efficiency; representation learning |
Contains Sensitive Content | Does not contain sensitive content |
ANZSRC Field of Research 2020 | 460506. Graph, social and multimedia data; 4602. Artificial intelligence |
Public Notes | The accessible file is the accepted version of the paper. Please refer to the URL for the published version. |
Byline Affiliations | Deakin University; Qilu University of Technology, China; School of Mathematics, Physics and Computing; CNPIEC KEXIN, China; Swinburne University of Technology; Xi'an Jiaotong University, China |
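
The parameter-saving arithmetic in the abstract can be illustrated with a minimal sketch. The shapes and the two-layer tanh MLP below are illustrative assumptions standing in for the paper's actual LiftNet; `hidden_dim` is an arbitrary choice, not a value from the paper.

```python
import numpy as np

def wide_embedding_params(num_entities: int, target_dim: int) -> int:
    """Parameters of a conventional wide embedding table (one row per entity)."""
    return num_entities * target_dim

def lifted_embedding_params(num_entities: int, base_dim: int,
                            hidden_dim: int, target_dim: int) -> int:
    """Narrow table plus a two-layer lifting MLP (base_dim -> hidden_dim -> target_dim).
    The MLP sizes here are assumptions, not the paper's exact LiftNet."""
    table = num_entities * base_dim
    lift_net = (base_dim * hidden_dim + hidden_dim) + (hidden_dim * target_dim + target_dim)
    return table + lift_net

def lift(E, W1, b1, W2, b2):
    """Map narrow embeddings of shape (N, base_dim) up to (N, target_dim)."""
    h = np.tanh(E @ W1 + b1)   # one shared lifting network, applied to every entity
    return h @ W2 + b2

# With 10,000 entities, a 16-dim table plus lifting net replaces a 512-dim table:
wide = wide_embedding_params(10_000, 512)             # 5,120,000 parameters
deep = lifted_embedding_params(10_000, 16, 128, 512)  # 228,224 parameters
print(f"saving: {1 - deep / wide:.1%}")               # prints "saving: 95.5%"
```

Because the lifting network is shared across all entities, its cost does not grow with the number of entities, which is why the savings fall in the range the abstract reports and grow with graph size.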
https://research.usq.edu.au/item/z8y16/from-wide-to-deep-dimension-lifting-network-for-parameter-efficient-knowledge-graph-embedding
Download files
Accepted Version
From_Wide_to_Deep_Dimension_Lifting_Network_for_Parameter-Efficient_Knowledge_Graph_Embedding.pdf
File access level: Anyone