Seeing the wood for the trees: A contrastive regularization method for the low-resource Knowledge Base Question Answering

Publication Name

Findings of the Association for Computational Linguistics: NAACL 2022 - Findings

Abstract

Given a context knowledge base (KB) and a corresponding question, the Knowledge Base Question Answering task aims to retrieve the correct answer entities from that KB. Despite sophisticated retrieval algorithms, the impact of a low-resource (incomplete) KB, in which contributing components (i.e., key entities and/or relations) may be absent for question answering, has not been fully addressed. To tackle this problem effectively, we propose a contrastive regularization based method, motivated by the learn-by-analogy capability of human readers. Specifically, the proposed work includes two major modules: a knowledge extension module and an sMoCo module. The former exploits latent knowledge in the context KB and generates auxiliary information in the form of question-answer pairs. The latter uses those additional pairs and applies contrastive regularization to learn informative representations, pulling hard positive pairs together and pushing hard negative pairs apart. Empirically, we achieve state-of-the-art performance on the WebQuestionsSP dataset, and the effectiveness of the proposed modules is also evaluated.
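To illustrate the kind of contrastive regularization the abstract describes, the sketch below implements a generic InfoNCE-style loss that pulls a question embedding toward hard positive answer embeddings and pushes it away from hard negative ones. It is a minimal sketch under stated assumptions: the function name, tensor shapes, and temperature value are illustrative and are not taken from the paper's sMoCo module.

```python
import torch
import torch.nn.functional as F

def contrastive_regularization(anchor, positives, negatives, temperature=0.07):
    """InfoNCE-style contrastive loss over question/entity representations.

    anchor:    (d,)   embedding of the question
    positives: (P, d) embeddings of correct answer entities (hard positives)
    negatives: (N, d) embeddings of incorrect entities (hard negatives)
    """
    # Cosine-similarity logits via L2-normalized embeddings.
    anchor = F.normalize(anchor, dim=-1)
    positives = F.normalize(positives, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    pos_logits = positives @ anchor / temperature          # (P,)
    neg_logits = negatives @ anchor / temperature          # (N,)

    # Contrast each positive against all negatives; the positive sits at index 0.
    logits = torch.cat(
        [pos_logits.unsqueeze(1),
         neg_logits.unsqueeze(0).expand(len(pos_logits), -1)],
        dim=1,
    )                                                       # (P, 1 + N)
    targets = torch.zeros(len(pos_logits), dtype=torch.long)
    return F.cross_entropy(logits, targets)

# Toy usage with random embeddings (hypothetical dimensions).
q = torch.randn(128)
pos = torch.randn(2, 128)
neg = torch.randn(16, 128)
loss = contrastive_regularization(q, pos, neg)
```

In this formulation, minimizing the loss increases the similarity between the question and its (possibly auxiliary, knowledge-extension generated) positive answers relative to the sampled negatives, which is the general effect the abstract attributes to the contrastive regularizer.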

Open Access Status

This publication is not available as open access

First Page

1085

Last Page

1094

Funding Number

DP210101426

Funding Sponsor

Australian Research Council
