Despite the superior performance of large language models in generating natural-language texts, it remains hard to generate texts with logic that is correct for a given task, because neural models struggle to capture strict logic from free-form texts. In this paper, we propose a novel graph-based language model, Logical-GLM, which extracts strict logic from free-form texts and infuses it into language models. Specifically, we first capture information from natural-language instructions and construct logical probability graphs that describe the domain in general terms. Next, we generate logical skeletons to guide language model training, infusing domain knowledge into the language models. Finally, we alternately optimize the graph searching policy and the language models until convergence. Experimental results show that Logical-GLM is both effective and efficient compared with traditional language models, despite using smaller-scale training data and fewer parameters. Our approach can generate instructional texts with more correct logic owing to the internalized domain knowledge. Moreover, the search over logical graphs reflects the inner mechanism of the language models, which improves the interpretability of these black-box models.
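To make the alternating procedure described in the abstract concrete, the following is a minimal toy sketch of one possible reading of it: a logical probability graph from which logical skeletons are sampled, a language model fine-tuned on those skeletons, and a feedback signal used to update the graph's edge weights. All names (LogicalGraph, sample_skeleton, update_policy, lm_finetune_step, lm_score) are illustrative assumptions, not the paper's actual API or algorithmic details.

```python
import random

# Illustrative sketch only; names and update rules are assumptions,
# not the Logical-GLM implementation.

class LogicalGraph:
    """A logical probability graph: edges carry unnormalized traversal weights."""

    def __init__(self, edges):
        # edges: {node: [(next_node, weight), ...]}
        self.edges = edges

    def sample_skeleton(self, start, max_len=5):
        """Sample a logical skeleton (an ordered action sequence) by walking the graph."""
        node, skeleton = start, [start]
        for _ in range(max_len):
            choices = self.edges.get(node, [])
            if not choices:
                break
            nodes, weights = zip(*choices)
            node = random.choices(nodes, weights=weights, k=1)[0]
            skeleton.append(node)
        return skeleton

    def update_policy(self, skeleton, reward, lr=0.1):
        """Reinforce edges along skeletons that the language model rates highly."""
        for a, b in zip(skeleton, skeleton[1:]):
            self.edges[a] = [(n, w + lr * reward if n == b else w)
                             for n, w in self.edges[a]]


def train(graph, lm_finetune_step, lm_score, start, rounds=10):
    """Alternately optimize the language model and the graph searching policy."""
    for _ in range(rounds):
        skeleton = graph.sample_skeleton(start)
        lm_finetune_step(skeleton)       # infuse the sampled logic into the language model
        reward = lm_score(skeleton)      # how well the trained model follows this logic
        graph.update_policy(skeleton, reward)
```

In this reading, the graph supplies domain logic to the language model while the language model's feedback gradually sharpens the graph's searching policy, so the two components improve each other until convergence.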