Data-driven AI systems can make the right decisions for the wrong reasons, which can lead to irresponsible behavior. The rationale of such machine learning models can be evaluated and improved using a previously introduced hybrid method. That method, however, was tested on synthetic data under ideal circumstances, whereas labelled datasets in the legal domain are usually relatively small and often contain missing facts or inconsistencies. In this paper, we therefore investigate rationales under such imperfect conditions. We apply the hybrid method to machine learning models trained on court cases generated from a structured representation of Article 6 of the European Convention on Human Rights (ECHR), designed by legal experts. We first evaluate the rationale of our models and then improve it by creating tailored training datasets. We show that applying the rationale evaluation and improvement method can yield relevant improvements in both performance and soundness of rationale, even under imperfect conditions.