Reproducibility is a challenging aspect that considerably affects the quality of most scientific papers. To address this, many open frameworks make it possible to build, test, and benchmark recommender systems for single users. Group recommender systems involve additional tasks beyond those for single users, such as identifying the groups or modeling them. While this clearly amplifies the possible reproducibility issues, to date no framework to benchmark group recommender systems exists. In this work, we enable reproducibility in group recommender systems by extending the LibRec library, which stands out as one of the richest, offering more than 70 recommender algorithms, good performance, and several evaluation metrics. Specifically, we include several approaches for all the stages of group recommendation: group formation, group modeling, and evaluation. To validate our framework, we present a use case that compares several group formation, recommendation, and group modeling approaches.
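To give an intuition for the group modeling stage mentioned above, the sketch below illustrates two classic aggregation strategies from the group recommendation literature (average and least misery). The function names and data are illustrative only, not LibRec's actual API.

```python
# Hypothetical sketch of two classic group modeling strategies.
# Each group member contributes a dict of predicted item ratings;
# a strategy aggregates them into a single group profile.

def average_strategy(member_ratings):
    """Aggregate individual predicted ratings by their mean."""
    return {item: sum(r[item] for r in member_ratings) / len(member_ratings)
            for item in member_ratings[0]}

def least_misery_strategy(member_ratings):
    """Aggregate by the minimum rating, so no member is very unhappy."""
    return {item: min(r[item] for r in member_ratings)
            for item in member_ratings[0]}

# Illustrative predicted ratings for two items by a group of three users.
group = [
    {"i1": 5.0, "i2": 2.0},
    {"i1": 4.0, "i2": 4.0},
    {"i1": 3.0, "i2": 3.0},
]

print(average_strategy(group))       # mean rating per item
print(least_misery_strategy(group))  # minimum rating per item
```

Under the average strategy item i1 scores highest here, while least misery penalizes i2 because one member rated it low; such differences are exactly what a benchmarking framework needs to compare systematically.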