Sparse-Dyn: Sparse dynamic graph multirepresentation learning via event-based sparse temporal attention network
Article Title | Sparse-Dyn: Sparse dynamic graph multirepresentation learning via event-based sparse temporal attention network |
---|---|
ERA Journal ID | 17950 |
Article Category | Article |
Authors | Pang, Yan; Shan, Ai; Wang, Zhen; Wang, Mengyu; Lin, Jianwei; Zhang, Ji; Huang, Teng; Liu, Chao |
Journal Title | International Journal of Intelligent Systems |
Journal Citation | 37 (11), pp. 8770-8789 |
Number of Pages | 20 |
Year | 2022 |
Publisher | John Wiley & Sons |
Place of Publication | United States |
ISSN | 0884-8173; 1098-111X |
Digital Object Identifier (DOI) | https://doi.org/10.1002/int.22967 |
Web Address (URL) | https://onlinelibrary.wiley.com/doi/10.1002/int.22967 |
Abstract | Dynamic graph neural networks (DGNNs) are widely used for modeling and representation learning on graph-structured data. Current dynamic representation learning focuses on either discrete learning, which causes temporal information loss, or continuous learning, which involves heavy computation. In this study, we propose a novel DGNN, Sparse Dynamic (Sparse-Dyn). It adaptively encodes temporal information into a sequence of patches, each containing an equal amount of temporal-topological structure. It thus avoids the information loss caused by snapshots while achieving a fine time granularity close to what continuous networks provide. In addition, we design a lightweight module, the Sparse Temporal Transformer, which computes node representations through structural neighborhoods and temporal dynamics. Because the fully connected attention conjunction is simplified, the computational cost is far lower than that of the current state-of-the-art. Link prediction experiments are conducted on both continuous and discrete graph data sets. Compared with several state-of-the-art graph embedding baselines, the experimental results demonstrate that Sparse-Dyn achieves faster inference while maintaining competitive performance. |
Keywords | link prediction; adaptive data encoding; dynamic graph neural network; sparse temporal transformer |
Public Notes | File reproduced in accordance with the copyright policy of the publisher/author. |
Byline Affiliations | Guangzhou University, China; Zhejiang Lab, China; Osaka University, Japan; San Jose University, United States; School of Mathematics, Physics and Computing; University of Colorado Denver, United States |
https://research.usq.edu.au/item/z028w/sparse-dyn-sparse-dynamic-graph-multirepresentation-learning-via-event-based-sparse-temporal-attention-network