Large Language Models (LLMs) have reached state-of-the-art performance in various Natural Language Processing (NLP) application tasks. However, an issue remains: these models may confidently output incorrect answers, flawed reasoning, or even entirely hallucinated answers. Truly integrating human feedback and corrections is difficult for LLMs, as the traditional approach of fine-tuning is challenging and compute-intensive at this scale, and the weights of the best models are often not publicly available. However, the ability to interact with these models in natural language opens up new possibilities for Hybrid AI. In this work, we present a very early exploration of Human-Explanations-Enhanced Prompting (HEEP), an approach that aims to help LLMs learn from human annotators' input by storing corrected reasonings and retrieving them on the fly to integrate them into the prompts given to the model. Our preliminary results support the idea that HEEP could represent an initial step towards cheap alternatives to fine-tuning and towards developing human-in-the-loop classification methods at scale, encouraging more efficient interactions between human annotators and LLMs.
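The store-and-retrieve loop described above can be sketched in a few lines. This is not the authors' implementation; it is a minimal illustration assuming a plain word-overlap retriever (a stand-in for whatever retrieval method HEEP actually uses) and hypothetical names (`ExplanationStore`, `build_prompt`):

```python
from dataclasses import dataclass, field

@dataclass
class ExplanationStore:
    """Stores human-corrected reasonings, keyed by the original input text."""
    entries: list = field(default_factory=list)  # (input_text, corrected_reasoning)

    def add(self, input_text: str, corrected_reasoning: str) -> None:
        """Record an annotator's corrected reasoning for a given input."""
        self.entries.append((input_text, corrected_reasoning))

    def retrieve(self, query: str, k: int = 2) -> list:
        """Rank stored entries by word overlap with the query.
        A real system would likely use embedding similarity instead."""
        q_words = set(query.lower().split())
        ranked = sorted(
            self.entries,
            key=lambda e: len(q_words & set(e[0].lower().split())),
            reverse=True,
        )
        return ranked[:k]

def build_prompt(store: ExplanationStore, new_input: str) -> str:
    """Prepend retrieved human-corrected reasonings to the prompt on the fly."""
    demos = store.retrieve(new_input)
    demo_text = "\n\n".join(
        f"Input: {text}\nCorrected reasoning: {reasoning}"
        for text, reasoning in demos
    )
    return f"{demo_text}\n\nInput: {new_input}\nReasoning:"
```

The key design point this sketch reflects is that no model weights change: corrections live in an external store and influence the model only through the prompt, which is why the approach avoids the cost of fine-tuning.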