Contrastive Learning Augmented Graph Auto-Encoder

Publication Name

Communications in Computer and Information Science

Abstract

Graph embedding aims to embed the information of graph data into a low-dimensional representation space. Prior methods generally suffer from an imbalance between preserving structural information and node features due to their pre-defined inductive biases, leading to unsatisfactory generalization performance. In order to preserve the maximal information, graph contrastive learning (GCL) has become a prominent technique for learning discriminative embeddings. However, in contrast with graph-level embeddings, existing GCL methods generally learn less discriminative node embeddings in a self-supervised way. In this paper, we ascribe the above problem to two challenges: 1) graph data augmentations, which are designed for generating contrastive representations, hurt the original semantic information of nodes; 2) nodes within the same cluster are selected as negative samples. To alleviate these challenges, we propose the Contrastive Variational Graph Auto-Encoder (CVGAE). Specifically, we first propose a distribution-dependent regularization to guide the parallel encoders to generate contrastive representations that follow similar distributions. Then, we utilize a truncated triplet loss, which selects only the top-k nodes as negative samples, to avoid over-separating nodes affiliated with the same cluster. Experiments on several real-world datasets show that our model CVGAE achieves advanced performance over all baselines on link prediction and node clustering tasks.
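
The truncated triplet loss mentioned in the abstract can be illustrated with a minimal PyTorch-style sketch. This is not the authors' released implementation; the function name, the per-node positive index `pos_idx`, the value of `k`, and the margin are illustrative assumptions. It only shows the general idea of restricting each anchor's negatives to its top-k closest non-positive nodes.

```python
import torch
import torch.nn.functional as F

def truncated_triplet_loss(z, pos_idx, k=5, margin=1.0):
    """Hypothetical sketch: for each anchor node embedding z[i], the
    positive is z[pos_idx[i]] and only the k closest other nodes
    (excluding self and positive) are kept as negatives."""
    n = z.size(0)
    dist = torch.cdist(z, z)                              # (N, N) pairwise distances
    pos_dist = dist[torch.arange(n), pos_idx]             # (N,) anchor-positive distances

    # Exclude self and the positive from the negative candidates.
    masked = dist.clone()
    masked[torch.arange(n), torch.arange(n)] = float('inf')
    masked[torch.arange(n), pos_idx] = float('inf')

    # Truncation: keep only the k closest (hardest) negatives per anchor.
    neg_dist, _ = masked.topk(k, dim=1, largest=False)    # (N, k)

    # Standard triplet hinge, averaged over anchors and their k negatives.
    return F.relu(pos_dist.unsqueeze(1) - neg_dist + margin).mean()

# Toy usage on random embeddings (assumed shapes, for illustration only).
z = torch.randn(100, 16)
pos_idx = torch.randint(0, 100, (100,))
loss = truncated_triplet_loss(z, pos_idx, k=5)
```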

Open Access Status

This publication is not available as open access

Volume

1965 CCIS

First Page

280

Last Page

291

Funding Number

61877051

Funding Sponsor

National Natural Science Foundation of China

Link to publisher version (DOI)

http://dx.doi.org/10.1007/978-981-99-8145-8_22