AdCSE: An Adversarial Method for Contrastive Learning of Sentence Embeddings

Published in DASFAA, 2022

Recommended citation: Renhao Li, Lei Duan, Guicai Xie, Shan Xiao and Weipeng Jiang. AdCSE: An Adversarial Method for Contrastive Learning of Sentence Embeddings. In: International Conference on Database Systems for Advanced Applications (DASFAA), 2022, pp. 165-180. https://link.springer.com/chapter/10.1007/978-3-031-00129-1_11

Owing to their impressive results on semantic textual similarity (STS) tasks, unsupervised sentence embedding methods based on contrastive learning have attracted much attention from researchers. Most of these approaches focus on constructing high-quality positives, while using only other in-batch sentences as negatives, which are insufficient for training accurate discriminative boundaries. In this paper, we demonstrate that high-quality negative representations introduced by adversarial training help to learn powerful sentence embeddings. We design a novel method named AdCSE for unsupervised sentence embedding. It consists of an untied dual-encoder backbone network for embedding positive sentence pairs and a group of negative adversaries for training hard negatives. These two parts of AdCSE compete against each other adversarially during contrastive learning, yielding highly expressive sentence representations once an equilibrium is reached. Experiments on 7 STS tasks demonstrate the effectiveness of AdCSE. The superiority of AdCSE in constructing high-quality sentence embeddings is also validated by ablation studies and a quality analysis of the learned representations.
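To make the idea concrete, below is a minimal sketch (not the authors' released code) of contrastive learning with learnable adversarial negatives: an InfoNCE-style loss is computed over the positive pair, the in-batch negatives, and a small bank of learnable negative vectors; the encoder is updated to minimize this loss while the negative bank is updated to maximize it. All class names, hyperparameters, and the alternating-update scheme shown here are illustrative assumptions rather than the exact AdCSE implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: a bank of learnable "negative adversary" vectors that
# are trained against the sentence encoder in a min-max fashion.
class AdversarialNegatives(torch.nn.Module):
    def __init__(self, num_negatives: int, dim: int):
        super().__init__()
        # Learnable negative representations (assumed random initialization).
        self.negatives = torch.nn.Parameter(0.02 * torch.randn(num_negatives, dim))

    def forward(self) -> torch.Tensor:
        return self.negatives


def contrastive_loss(z1, z2, adv_neg, temperature: float = 0.05):
    """InfoNCE loss over positives (z1, z2), in-batch negatives, and adversarial negatives.

    z1, z2:  (batch, dim) embeddings of two views of the same sentences
    adv_neg: (k, dim) learnable adversarial negative vectors
    """
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    adv = F.normalize(adv_neg, dim=-1)

    # Similarities to the positive and in-batch negatives: (batch, batch)
    sim_batch = z1 @ z2.t() / temperature
    # Similarities to the adversarial negatives: (batch, k)
    sim_adv = z1 @ adv.t() / temperature

    logits = torch.cat([sim_batch, sim_adv], dim=1)
    # The diagonal of sim_batch corresponds to the positive pair.
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)


# Alternating min-max updates (sketched; encoder and data loading omitted):
# encoder_opt = torch.optim.AdamW(encoder.parameters(), lr=3e-5)
# adv_opt     = torch.optim.AdamW(adversaries.parameters(), lr=3e-3)
#
# loss = contrastive_loss(encoder(x1), encoder(x2), adversaries())
# encoder_opt.zero_grad(); loss.backward(); encoder_opt.step()          # encoder minimizes
#
# loss = contrastive_loss(encoder(x1).detach(), encoder(x2).detach(), adversaries())
# adv_opt.zero_grad(); (-loss).backward(); adv_opt.step()               # adversaries maximize
```

Under these assumptions, maximizing the loss pushes the negative bank toward the hardest directions for the current encoder, which is the intuition behind training against adversarial negatives rather than relying on in-batch negatives alone.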

Download paper here
