Zehong Wang, Donghua Yu, Shigen Shen, Shichao Zhang, Huawen Liu, Shuang Yao, Maozu Guo
Contrastive learning (CL) has emerged as a powerful approach for self-supervised learning. However, it suffers from sampling bias, which hinders its performance. Although the mainstream solutions, hard negative mining (HNM) and supervised CL (SCL), have been proposed to mitigate this critical issue, neither transfers effectively to graph CL (GCL). To address this, we propose graph positive sampling (GPS) together with three contrastive objectives. GPS is a novel learning paradigm designed to leverage the inherent properties of graphs to improve GCL models: it uses four complementary similarity measurements, namely node centrality, topological distance, neighborhood overlap, and semantic distance, to select positive counterparts for each node...
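As a rough illustration of the positive-sampling idea described above, the sketch below ranks candidate positives for an anchor node by mixing four similarity signals on a toy graph. The equal weighting, the specific score definitions, and the helper names (`bfs_dist`, `gps_positives`) are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of graph positive sampling: combine four
# node-similarity signals to rank positive counterparts for an anchor.
# Equal weighting and score definitions are assumptions for illustration.
import numpy as np
from collections import deque

# Toy graph: adjacency sets and 2-D node features.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2, 4}, 4: {3}}
feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.0, 1.0]])
n = len(adj)

def bfs_dist(src):
    """Hop distance from src to every reachable node (BFS)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def gps_positives(anchor, k=1):
    """Return the k candidates with the highest averaged similarity."""
    d = bfs_dist(anchor)
    scores = []
    for v in range(n):
        if v == anchor:
            continue
        # 1) node centrality: prefer candidates with a similar degree
        cent = 1.0 - abs(len(adj[anchor]) - len(adj[v])) / n
        # 2) topological distance: fewer hops means a higher score
        topo = 1.0 / (1.0 + d.get(v, n))
        # 3) neighborhood overlap: Jaccard similarity of neighbor sets
        union = len(adj[anchor] | adj[v])
        overlap = len(adj[anchor] & adj[v]) / union if union else 0.0
        # 4) semantic distance: cosine similarity of node features
        sem = feats[anchor] @ feats[v] / (
            np.linalg.norm(feats[anchor]) * np.linalg.norm(feats[v]))
        scores.append((v, (cent + topo + overlap + sem) / 4.0))
    scores.sort(key=lambda t: -t[1])
    return [v for v, _ in scores[:k]]

print(gps_positives(0, k=2))  # → [1, 2]
```

On this toy graph, nodes 1 and 2 are selected as positives for node 0: they are adjacent to it, share neighbors with it, and have nearby feature vectors.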
April 23, 2024: IEEE Transactions on Neural Networks and Learning Systems