Social media offers a rich source of real-time health data, including potential vaccine reactions. However, extracting meaningful insights is challenging due to the noisy nature of social media content. This paper explores using large language models (LLMs) and prompt engineering to detect personal mentions of vaccine reactions. Different prompting strategies were evaluated on two LLMs (GPT-3.5 and GPT-4) using Reddit data focused on shingles (zoster) vaccines. Zero-shot and few-shot learning approaches with both standard and chain-of-thought prompts were compared. The findings demonstrate that GPT-based models with carefully crafted chain-of-thought prompts could identify the relevant social media posts. Few-shot learning helped GPT-4 identify more of the marginal cases, although with lower precision. A comparison of LLM-based classification with lightweight supervised pretrained language models (PLMs) found that the PLMs outperformed the LLMs. However, a potential benefit emerged in using LLMs to help identify records for training PLMs, especially to eliminate false negatives, and LLMs could serve as classifiers when insufficient data exists to train a PLM.
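To make the prompting setup concrete, the sketch below illustrates zero-shot versus few-shot classification with a chain-of-thought style instruction, using the OpenAI chat API. The system prompt, few-shot exemplars, and the `classify` helper are illustrative assumptions for this summary, not the authors' actual prompts or pipeline.

```python
# A minimal sketch (not the paper's exact prompts) of zero-shot vs. few-shot
# chain-of-thought classification of Reddit posts with the OpenAI chat API.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM = (
    "You decide whether a Reddit post describes a personal reaction to the "
    "shingles (zoster) vaccine experienced by the author or someone they know."
)

# Hypothetical few-shot exemplars; the paper's actual examples are not reproduced here.
FEW_SHOT = [
    {"role": "user",
     "content": "Post: 'Got the shingles shot yesterday, my arm is sore and I ran a fever overnight.'"},
    {"role": "assistant",
     "content": "Reasoning: The author reports their own symptoms after the vaccine. Answer: YES"},
    {"role": "user",
     "content": "Post: 'Does anyone know how effective the shingles vaccine is for people over 50?'"},
    {"role": "assistant",
     "content": "Reasoning: The post asks a general question and reports no reaction. Answer: NO"},
]

def classify(post: str, model: str = "gpt-4", few_shot: bool = True) -> str:
    """Return 'YES' or 'NO' for whether the post mentions a personal vaccine reaction."""
    messages = [{"role": "system", "content": SYSTEM}]
    if few_shot:
        messages += FEW_SHOT  # few-shot setting; omit for zero-shot
    # Chain-of-thought style instruction: ask for brief reasoning before the final label.
    messages.append({
        "role": "user",
        "content": f"Post: '{post}'\nThink step by step, then end with 'Answer: YES' or 'Answer: NO'.",
    })
    reply = client.chat.completions.create(model=model, messages=messages, temperature=0)
    text = reply.choices[0].message.content
    return "YES" if text.strip().upper().endswith("YES") else "NO"

if __name__ == "__main__":
    print(classify("Second shingles shot knocked me out for two days, chills and headache."))
```

Setting `few_shot=False` gives the zero-shot variant; dropping the "think step by step" instruction approximates the standard (non-chain-of-thought) prompt condition described above.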