To address the limitations in flexibility and efficiency of existing prompting paradigms when generating intermediate reasoning steps, this paper proposes a reasoning framework, LLM-AS, which combines the A* search algorithm with the reasoning process of large language models (LLMs). LLM-AS exploits the efficient exploration capability of the A* algorithm and avoids redundant exploration of high-cost nodes, significantly improving search efficiency and reducing the cost of invoking the LLM. Meanwhile, through the self-improvement mechanism of LLMs, LLM-AS ensures the quality of the generated solutions while minimizing model interactions. In addition, the flexibility of the A* search algorithm allows LLM-AS to be applied to diverse thought-organization structures, offering more possibilities for handling various tasks. We conducted experiments on two complex tasks, Game of 24 and the 8-puzzle, comparing the accuracy of existing prompting paradigms and LLM-AS on both gpt-3.5-turbo and gpt-4.0. The experimental results show that LLM-AS effectively improves the ability of LLMs to solve complex tasks. A minimal sketch of the search loop this kind of framework implies is given below.
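The abstract does not specify the paper's exact interfaces, so the following is only a hedged sketch of how an A*-guided LLM reasoner could be wired: a standard A* loop over "thought" states in which the successor generator and the heuristic are pluggable callbacks. The names `propose_successors` and `estimate_cost_to_go` are assumptions for illustration; in an LLM-AS-style system they would be LLM calls (step generation and self-evaluation), and in the demo below they are replaced by toy lambdas.

```python
import heapq
import itertools
from typing import Callable, Hashable, Iterable, List, Optional, Tuple


def a_star_reasoning(
    start: Hashable,
    is_goal: Callable[[Hashable], bool],
    propose_successors: Callable[[Hashable], Iterable[Tuple[Hashable, float]]],
    estimate_cost_to_go: Callable[[Hashable], float],
    max_expansions: int = 1000,
) -> Optional[List[Hashable]]:
    """Generic A* over 'thought' states (illustrative, not the paper's code).

    propose_successors(state) yields (next_state, step_cost) pairs; in an
    LLM-AS-style system this would be an LLM call generating candidate
    intermediate reasoning steps.  estimate_cost_to_go(state) plays the role
    of the heuristic h(state); an LLM self-evaluation score could be mapped
    to such a cost.
    """
    tie = itertools.count()  # tie-breaker so heap never compares states
    frontier = [(estimate_cost_to_go(start), next(tie), 0.0, start, [start])]
    best_g = {start: 0.0}  # cheapest known cost-so-far per state

    expansions = 0
    while frontier and expansions < max_expansions:
        _, _, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        expansions += 1
        for nxt, step_cost in propose_successors(state):
            new_g = g + step_cost
            # Skip states already reached at equal or lower cost: this is the
            # "avoid redundant exploration of high-cost nodes" behaviour.
            if new_g >= best_g.get(nxt, float("inf")):
                continue
            best_g[nxt] = new_g
            f = new_g + estimate_cost_to_go(nxt)
            heapq.heappush(frontier, (f, next(tie), new_g, nxt, path + [nxt]))
    return None  # no solution found within the expansion budget


if __name__ == "__main__":
    # Toy stand-in for the LLM callbacks: reach 24 from 1 using +1 or *2.
    target = 24
    path = a_star_reasoning(
        start=1,
        is_goal=lambda s: s == target,
        propose_successors=lambda s: [(s + 1, 1.0), (s * 2, 1.0)],
        estimate_cost_to_go=lambda s: abs(target - s) / target,
    )
    print(path)
```

Because expensive LLM calls sit behind `propose_successors` and `estimate_cost_to_go`, the cost-pruning check on `best_g` is what limits how often the model is invoked, which matches the efficiency argument in the abstract.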