TY - JOUR
T1 - Generating Natural Language from Logic Expressions with Structural Representation
AU - Wu, Xin
AU - Cai, Yi
AU - Lian, Zetao
AU - Leung, Ho Fung
AU - Wang, Tao
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Incorporating logical reasoning into deep neural networks (DNNs) is an important challenge in machine learning. In this article, we study the problem of converting logical expressions into natural language. In particular, given a sequential logic expression, the goal is to generate its corresponding natural-language sentence. Since the information in a logic expression often has a hierarchical structure, a sequence-to-sequence baseline struggles to capture the full dependencies between words and hence often generates incorrect sentences. To alleviate this problem, we propose a model that converts Structural Logic Expressions into Natural Language (SLEtoNL). SLEtoNL converts sequential logic expressions into a structural representation and leverages structural encoders to capture the dependencies between nodes. Quantitative and qualitative analyses demonstrate that our proposed method outperforms the seq2seq model based on the sequential representation, and outperforms strong pretrained language models (e.g., T5, BART, GPT-3) by a large margin (28.6 in BLEU3) in out-of-distribution evaluation.
AB - Incorporating logical reasoning into deep neural networks (DNNs) is an important challenge in machine learning. In this article, we study the problem of converting logical expressions into natural language. In particular, given a sequential logic expression, the goal is to generate its corresponding natural-language sentence. Since the information in a logic expression often has a hierarchical structure, a sequence-to-sequence baseline struggles to capture the full dependencies between words and hence often generates incorrect sentences. To alleviate this problem, we propose a model that converts Structural Logic Expressions into Natural Language (SLEtoNL). SLEtoNL converts sequential logic expressions into a structural representation and leverages structural encoders to capture the dependencies between nodes. Quantitative and qualitative analyses demonstrate that our proposed method outperforms the seq2seq model based on the sequential representation, and outperforms strong pretrained language models (e.g., T5, BART, GPT-3) by a large margin (28.6 in BLEU3) in out-of-distribution evaluation.
KW - graph convolutional networks
KW - logic expressions
KW - Natural language generation
KW - Tree-LSTM
UR - http://www.scopus.com/inward/record.url?scp=85153382571&partnerID=8YFLogxK
U2 - 10.1109/TASLP.2023.3263784
DO - 10.1109/TASLP.2023.3263784
M3 - Article
AN - SCOPUS:85153382571
SN - 2329-9290
VL - 31
SP - 1499
EP - 1510
JO - IEEE/ACM Transactions on Audio, Speech, and Language Processing
JF - IEEE/ACM Transactions on Audio, Speech, and Language Processing
ER -