Most automated hate speech detection models rely on human annotations for training and evaluation. Logic and research indicate that people who belong to groups targeted by hate speech are better at identifying it, often because of their greater familiarity with the topic and the associated hate speech terminology. However, most hate speech annotation practices overlook this issue, and hence the labels produced tend to be less accurate. In this paper, we describe an approach in which the text to be annotated is supplemented with background semantics, exposing the meaning of hate speech terminology that is less likely to be known to general annotators. We test the impact of this approach by measuring the change in inter-annotator agreement, before and after introducing semantics, between two groups of annotators: those who belong to the target group of the hate speech and those who do not. Our experiments show that infusing text with semantic background increases inter-annotator agreement by up to 11.3% on average, aligning the annotations from annotators who do not belong to the target groups with those from annotators who do.
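The evaluation above hinges on measuring inter-annotator agreement. One widely used chance-corrected agreement measure is Cohen's kappa; the sketch below is illustrative only — the toy labels and the choice of kappa as the metric are assumptions for this example, not details taken from the paper. It shows how agreement between a target-group annotator and an out-group annotator could be compared before and after the text is enriched with background semantics.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two annotators."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled the same.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if the two annotators labeled independently,
    # each according to their own label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical binary labels (1 = hate speech, 0 = not) on eight texts.
target_group  = [1, 1, 0, 1, 0, 1, 1, 0]
outgroup_pre  = [0, 1, 0, 0, 0, 1, 0, 0]  # misses coded terminology
outgroup_post = [1, 1, 0, 1, 0, 1, 0, 0]  # after semantic enrichment

print(cohens_kappa(target_group, outgroup_pre))   # lower agreement
print(cohens_kappa(target_group, outgroup_post))  # higher agreement
```

In this toy setup the out-group annotator's post-enrichment labels track the target-group labels more closely, so kappa rises — mirroring, at miniature scale, the alignment effect the paper reports.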