Assigning a standardized value to a temporal expression (TE), known as temporal expression normalization, is a crucial step for tasks such as timeline creation and temporal reasoning. Rule-based and classical deep-learning normalization systems lack versatility because they are limited to specific domains and languages, while solutions based on current Large Language Models (LLMs) remain relatively unexplored.
To overcome these limitations in adaptability, we propose using five recent generative LLMs: Mistral 7B, Gemma 7B, Gemma 2B, Phi-2, and Llama-3 8B. We explore several performance enhancement strategies, including different prompts, contexts, and training techniques such as NEFTune. Our proposed models adapt simultaneously to diverse domains (news and biomedical) and multiple languages (Spanish, English, Italian, French, Portuguese, Catalan, and Basque), making them versatile across a wide range of applications. As a result, our approach offers significant performance improvements over existing LLM-based and rule-based solutions for TE normalization and provides a promising answer to the challenges of temporal normalization.
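For context on the training technique mentioned above: NEFTune (Jain et al., 2023) regularizes instruction tuning by adding scaled uniform noise to token embeddings during training. The following Python sketch illustrates only the published noise rule; it is not the fine-tuning pipeline used in this work, and the alpha value shown is an illustrative assumption.

import torch

def neftune_noise(embeddings: torch.Tensor, alpha: float = 5.0) -> torch.Tensor:
    # NEFTune perturbs token embeddings with noise sampled from Uniform(-1, 1),
    # scaled by alpha / sqrt(L * d), where L is the sequence length and d is
    # the embedding dimension. The noise is applied at training time only.
    seq_len, dim = embeddings.shape[-2], embeddings.shape[-1]
    scale = alpha / (seq_len * dim) ** 0.5
    noise = torch.empty_like(embeddings).uniform_(-1.0, 1.0)
    return embeddings + scale * noise

In practice, recent versions of Hugging Face's transformers and trl trainers expose this technique through a neftune_noise_alpha argument, so the noise injection does not need to be implemented by hand.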