Word embeddings, or distributed representations of words in a low-dimensional vector space, have been shown to capture both syntactic and semantic word relationships. Recently, multiple methods have been proposed for efficiently learning good word vector representations from very large text corpora. Such word representations have been used to improve performance on a variety of natural language processing tasks. This work compares multiple methods of learning word embeddings for the Latvian language and applies them to part-of-speech tagging, named entity recognition, and dependency parsing, achieving state-of-the-art results for Latvian without resorting to any hand-crafted, language-specific features or resources such as gazetteers.
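The distributional idea behind the embeddings compared above can be illustrated with a minimal skip-gram model with negative sampling, trained from scratch in NumPy on a toy corpus. This is only a hedged sketch of the general technique, not the paper's actual training setup: the corpus, dimensionality, window size, and learning rate here are arbitrary illustrative choices, and the paper trains on large Latvian corpora with established tools.

```python
import numpy as np

# Toy English corpus standing in for a large Latvian corpus (illustrative only).
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D, window = len(vocab), 8, 2  # vocabulary size, embedding dim, context window

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, D))   # input (word) embeddings
W_out = rng.normal(scale=0.1, size=(V, D))  # output (context) embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.05
for epoch in range(200):
    for i, w in enumerate(corpus):
        t = idx[w]
        for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
            if i == j:
                continue
            c = idx[corpus[j]]
            h = W_in[t].copy()  # snapshot before updating either matrix
            # Positive (observed) word-context pair: push score toward 1.
            grad = sigmoid(h @ W_out[c]) - 1.0
            W_in[t] -= lr * grad * W_out[c]
            W_out[c] -= lr * grad * h
            # One negative sample drawn uniformly: push its score toward 0.
            n = int(rng.integers(V))
            grad_n = sigmoid(h @ W_out[n])
            W_in[t] -= lr * grad_n * W_out[n]
            W_out[n] -= lr * grad_n * h

def cos(a, b):
    """Cosine similarity, the usual way to compare word vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words appearing in similar contexts ("cat"/"dog" both precede "sat on")
# should tend to receive more similar vectors, which is the distributional
# signal the downstream taggers and parsers consume as features.
sim = cos(W_in[idx["cat"]], W_in[idx["dog"]])
```

The learned input vectors `W_in` would then be used as continuous features in place of hand-crafted ones, which is the substitution the abstract describes for the POS tagging, NER, and parsing models.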