Record Details

A multi-view contrastive learning for heterogeneous network embedding  (indexed in SCI-EXPANDED)   Cited by: 2

Document type: Journal article

English title: A multi-view contrastive learning for heterogeneous network embedding

Authors: Li, Qi[1]; Chen, Wenping[1]; Fang, Zhaoxi[1]; Ying, Changtian[1]; Wang, Chen[2]

Affiliations: [1]Shaoxing Univ, Shaoxing 312000, Zhejiang, Peoples R China; [2]Chongqing Univ, Chongqing 400030, Peoples R China

Year: 2023

Volume: 13

Issue: 1

Journal: SCIENTIFIC REPORTS

Indexed in: SCI-EXPANDED (accession no. WOS:000984494600032); Scopus (accession no. 2-s2.0-85154063214); WOS

Funding: This work was supported by the Natural Science Foundation of Zhejiang Province under Grant No. LY22F020003, the National Natural Science Foundation of China under Grant No. 62002226, and the Zhejiang Provincial Natural Science Foundation of China under Grant No. LHQ20F020001.

Language: English

Abstract: Graph contrastive learning has been developed to learn discriminative node representations on homogeneous graphs. However, it is not clear how to augment heterogeneous graphs without substantially altering the underlying semantics, or how to design appropriate pretext tasks that fully capture the rich semantics preserved in heterogeneous information networks (HINs). Moreover, early investigations demonstrate that contrastive learning suffers from sampling bias, and conventional debiasing techniques (e.g., hard negative mining) are empirically shown to be inadequate for graph contrastive learning. How to mitigate sampling bias on heterogeneous graphs is another important yet neglected problem. To address these challenges, we propose a novel multi-view heterogeneous graph contrastive learning (MCL) framework. We use metapaths, each of which depicts a complementary facet of the HIN, as the augmentation to generate multiple subgraphs (i.e., multiple views), and propose a novel pretext task that maximizes the coherence between each pair of metapath-induced views. Furthermore, we employ a positive sampling strategy that explicitly selects hard positives by jointly considering the semantics and structure preserved in each metapath view, alleviating the sampling bias. Extensive experiments on five real-world benchmark datasets demonstrate that MCL consistently outperforms state-of-the-art baselines, and in some settings even outperforms its supervised counterparts.
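The cross-view pretext task described in the abstract can be illustrated with a minimal sketch. This is not the authors' MCL implementation; it is a generic InfoNCE-style objective, assuming each metapath-induced view yields one L2-normalized embedding matrix over the same node set, with row i of every matrix describing the same node (so matched rows form positive pairs). The function names and the temperature value are illustrative choices, not from the paper.

```python
import numpy as np

def info_nce_pairwise(z_a, z_b, tau=0.5):
    """InfoNCE loss between two view embeddings of the same nodes.

    z_a, z_b: (n_nodes, dim) L2-normalized embeddings; row i of each
    matrix is assumed to embed the same node, i.e. a positive pair.
    All other rows of z_b act as negatives for row i of z_a.
    """
    sim = z_a @ z_b.T / tau                        # (n, n) similarity logits
    sim = sim - sim.max(axis=1, keepdims=True)     # stabilize the softmax
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # pull matched rows together

def multiview_loss(views, tau=0.5):
    """Average the pairwise loss over all ordered pairs of metapath views."""
    total, count = 0.0, 0
    for i, z_a in enumerate(views):
        for j, z_b in enumerate(views):
            if i != j:
                total += info_nce_pairwise(z_a, z_b, tau)
                count += 1
    return total / count
```

Maximizing coherence between every pair of views is expressed here by summing the loss over all ordered view pairs; aligned views (matching rows embedding the same node similarly) yield a lower loss than misaligned ones. The paper's hard-positive selection would replace the fixed row-index matching with positives chosen from semantically and structurally similar nodes.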

