Nowadays, artificial intelligence (AI) is applied in many high-stakes decision-making tasks, and black-box AI models that lack explainability can cause serious problems in practice. In the judicial domain, explainable models are becoming increasingly important. Since tree-based machine learning models are inherently explainable, we propose an explainable legal judgment prediction model that uses concept trees with a collegiate bench mechanism. A concept tree is constructed to check the classification labels predicted by the original multi-classifier, and a revising process is designed to handle cases where the results of the original multi-classifier and the concept trees conflict. Meanwhile, the concept trees grow into a concept forest through the introduction of arbitration classifiers. The model thus simulates the judicial judgment process: the collegiate bench mechanism yields good classification performance, while the conceptual-level features provide the model explanation. Experiments validate our model, showing both better explainability and better accuracy.
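The checking-and-revising workflow described above can be illustrated with a minimal sketch: a primary (possibly black-box) classifier's label is verified by an interpretable concept tree, and an arbitration classifier casts the deciding vote when the two conflict. All feature names, labels, and decision rules below are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch of the collegiate bench mechanism: primary classifier,
# concept tree checker, and arbitration classifier. Features and labels are
# invented for illustration only.

def primary_classifier(case):
    # Stand-in for the original multi-classifier (could be any black box).
    return "theft" if case["value_taken"] > 0 else "fraud"

def concept_tree(case):
    # Interpretable rules over conceptual-level features; each path through
    # the tree doubles as a human-readable explanation of the label.
    if case["deception_used"]:
        return "fraud"
    return "theft" if case["value_taken"] > 0 else "acquittal"

def arbitration_classifier(case):
    # A third classifier that breaks ties when the first two disagree.
    return "fraud" if case["deception_used"] else "theft"

def collegiate_bench(case):
    """Revising process: accept agreement; on conflict, majority vote."""
    p, c = primary_classifier(case), concept_tree(case)
    if p == c:
        return p
    a = arbitration_classifier(case)
    return a if a in (p, c) else p  # fall back to primary on a 3-way split

case = {"value_taken": 500, "deception_used": True}
print(collegiate_bench(case))  # primary says theft, tree says fraud -> "fraud"
```

Because the final label is either confirmed or overruled by the concept tree's rule path, every prediction comes with a traceable, feature-level justification, which is the explainability property the abstract emphasizes.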