Analyzing and interpreting hepatitis serology test results is a complex task in laboratory medicine, requiring either experienced physicians or specialized expert systems. This study explores fine-tuning a large language model (LLM) for hepatitis serology interpretation on a single graphics processing unit (GPU). A custom dataset derived from the Hepaxpert expert system was used to train the LLM; fine-tuning was performed on an Nvidia RTX 6000 Ada GPU via torchtune. When evaluated against Hepaxpert output using the METEOR metric, the fine-tuned LLM showed significant performance improvements over the base model. The findings highlight the potential of LLMs to enhance medical expert systems, as well as the importance of domain-specific fine-tuning.
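To illustrate the kind of evaluation the study describes, the sketch below implements a simplified METEOR-style score (exact unigram matching with a fragmentation penalty, omitting the stemming and synonym modules of full METEOR) and applies it to a pair of illustrative interpretation texts. The example sentences are hypothetical and not drawn from the Hepaxpert dataset; in practice one would score model output against the expert system's reference text with an established METEOR implementation.

```python
def simple_meteor(reference: str, candidate: str) -> float:
    """Simplified METEOR: exact unigram matches, recall-weighted
    F-mean, and a chunk-based fragmentation penalty."""
    ref = reference.lower().split()
    cand = candidate.lower().split()

    # Greedily align each candidate unigram to an unused reference unigram.
    used = [False] * len(ref)
    alignment = []  # (candidate index, reference index) pairs, in candidate order
    for ci, tok in enumerate(cand):
        for ri, rtok in enumerate(ref):
            if not used[ri] and tok == rtok:
                used[ri] = True
                alignment.append((ci, ri))
                break

    m = len(alignment)
    if m == 0:
        return 0.0
    precision = m / len(cand)
    recall = m / len(ref)
    fmean = 10 * precision * recall / (recall + 9 * precision)

    # A chunk is a maximal run of matches contiguous in both strings;
    # fewer chunks means better word order.
    chunks = 1
    for (c1, r1), (c2, r2) in zip(alignment, alignment[1:]):
        if c2 != c1 + 1 or r2 != r1 + 1:
            chunks += 1
    penalty = 0.5 * (chunks / m) ** 3
    return fmean * (1 - penalty)


# Hypothetical reference (expert system) vs. model output:
ref = "anti-HBc positive pattern consistent with past hepatitis B infection"
hyp = "pattern consistent with past hepatitis B infection"
print(round(simple_meteor(ref, hyp), 3))
```

Because METEOR weights recall over precision and rewards contiguous matches, it suits free-text interpretations where the model's wording may differ from the reference while preserving the clinical content.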