Datasets and knowledge bases in the field of Traditional Chinese Medicine (TCM) remain relatively limited in scale. To construct and supplement these knowledge repositories efficiently while expanding the available datasets, this study proposes an optimized UniLM natural language processing model. First, we developed a UniLM model based on transfer learning. Second, during training we introduced small perturbations into the embedding layer to generate adversarial samples, enhancing the model's generalization capability through adversarial training. Finally, we validated and evaluated the model using five-fold cross-validation. The model achieved an average recall of 0.977, an average precision of 0.973, and an average F1 score of 0.975. The optimized UniLM model exhibits high robustness and strong generalization, improving question generation in the TCM domain as well as the comprehensive utilization of textual data.
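The embedding-layer perturbation described above matches the FGM (Fast Gradient Method) adversarial-training pattern commonly applied to transformer encoders. The following is a minimal sketch of that pattern in PyTorch, not the paper's published code: the FGM helper class, the parameter-name filter "word_embeddings", and the epsilon value are illustrative assumptions.

# Sketch of FGM-style adversarial training on the embedding layer.
# Assumes a PyTorch model whose word-embedding parameter name contains
# "word_embeddings" (true of most Hugging Face BERT/UniLM variants).
import torch

class FGM:
    """Add a perturbation r = epsilon * g / ||g|| to the embedding
    weights, accumulate gradients on the perturbed batch, then restore."""
    def __init__(self, model, epsilon=1.0, emb_name="word_embeddings"):
        self.model = model
        self.epsilon = epsilon
        self.emb_name = emb_name
        self.backup = {}

    def attack(self):
        for name, param in self.model.named_parameters():
            if param.requires_grad and self.emb_name in name:
                self.backup[name] = param.data.clone()
                norm = torch.norm(param.grad)
                if norm != 0 and not torch.isnan(norm):
                    param.data.add_(self.epsilon * param.grad / norm)

    def restore(self):
        for name, param in self.model.named_parameters():
            if name in self.backup:
                param.data = self.backup[name]
        self.backup = {}

A typical training step first backpropagates the clean-batch loss, then calls attack(), backpropagates the same batch again so adversarial gradients accumulate, and finally calls restore() before the optimizer step.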