Once is Enough: A Light-Weight Cross-Attention for Fast Sentence Pair Modeling

Yuanhang Yang, Shiyi Qi, Cuiyun Gao, Zenglin Xu, Yulan He, Qifan Wang, Chuanyi Liu

Research output: Working paper/Preprint


Abstract

Transformer-based models have achieved great success on sentence pair modeling tasks, such as answer selection and natural language inference (NLI). These models generally perform cross-attention over input pairs, leading to prohibitive computational cost. Recent studies propose dual-encoder and late-interaction architectures for faster computation. However, the trade-off between the expressiveness of cross-attention and the computational speedup still needs better coordination. To this end, this paper introduces MixEncoder, a novel paradigm for efficient sentence pair modeling. MixEncoder involves a light-weight cross-attention mechanism: it encodes the query only once while modeling the query-candidate interactions in parallel. Extensive experiments conducted on four tasks demonstrate that MixEncoder can speed up sentence pair modeling by over 113x while achieving performance comparable to the more expensive cross-attention models.
Original language: English
Publication status: Published - 11 Oct 2022
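To illustrate the general idea described in the abstract — encoding the query once and attending over many pre-computed candidate representations in parallel — here is a minimal NumPy sketch. This is not the paper's implementation; the function and variable names (`light_cross_attention`, `query_states`, `candidate_embs`) are illustrative assumptions, and the actual MixEncoder architecture differs in its details.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def light_cross_attention(query_states, candidate_embs):
    """Toy sketch of one-pass query encoding with parallel candidate interaction.

    query_states:   (Lq, d)     - query token states, computed ONCE by the encoder
    candidate_embs: (N, Lc, d)  - pre-computed embeddings for N candidates

    Each query token attends to every candidate's embeddings; broadcasting
    handles all N candidates in a single batched operation.
    """
    d = query_states.shape[-1]
    # (Lq, d) @ (N, d, Lc) -> broadcast to (N, Lq, Lc)
    scores = query_states @ candidate_embs.transpose(0, 2, 1) / np.sqrt(d)
    attn = softmax(scores, axis=-1)
    # (N, Lq, Lc) @ (N, Lc, d) -> (N, Lq, d): one interaction output per candidate
    return attn @ candidate_embs
```

The key property the sketch captures is that the expensive query encoding happens once, while the per-candidate interaction reduces to cheap batched matrix products over cached candidate representations.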

Keywords

  • cs.CL
  • cs.AI
