TY - GEN
T1 - Knowledge-Grounded Dialogue Generation with Term-level De-noising
AU - Zheng, Wen
AU - Milic-Frayling, Natasa
AU - Zhou, Ke
N1 - Publisher Copyright:
© 2021 Association for Computational Linguistics
PY - 2021
Y1 - 2021
AB - Dialogue generation has been improved by injecting knowledge into generative models. However, adding knowledge through the simple selection of sentences or paragraphs is likely to introduce noise and diminish the effectiveness of the generative models. In this paper, we present a novel Knowledge Term Weighting Model (KTWM) that incorporates term-level de-noising of the selected knowledge. KTWM includes a module for generating Simulated Response Vectors (SRVs) and uses the SRVs' attention distributions with the knowledge embeddings to determine knowledge term weights. Our experiments demonstrate that KTWM, combined with various knowledge selection algorithms, consistently achieves statistically significant improvements over methods without term weighting when applied to two publicly available datasets, Wizard of Wikipedia (Wiz) and Holl-E. The improvements are particularly pronounced for the Wiz test data with unseen topics, demonstrating the robustness of the KTWM noise-reduction approach.
UR - http://www.scopus.com/inward/record.url?scp=85123932665&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85123932665
T3 - Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
SP - 2972
EP - 2983
BT - Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
A2 - Zong, Chengqing
A2 - Xia, Fei
A2 - Li, Wenjie
A2 - Navigli, Roberto
PB - Association for Computational Linguistics (ACL)
T2 - Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
Y2 - 1 August 2021 through 6 August 2021
ER -