Investigating Crowdsourcing as an Evaluation Method for TEL Recommenders
Key: EJS13-1
Author: Mojisola Erdt, Florian Jomrich, Katja Schüler, Christoph Rensing
Date: September 2013
Kind: In proceedings
Publisher: CEUR Workshop Proceedings
Book title: Proceedings of ECTEL meets ECSCW 2013, the Workshop on Collaborative Technologies for Working and Learning
Keywords: resource recommendation, evaluation, crowdsourcing
Abstract: Offline evaluations using historical data offer a fast and repeatable way to evaluate TEL recommender systems. However, this is only possible if the historical datasets contain all the information needed by the recommendation algorithm. Another challenge is that, for a recommended resource to be evaluated as relevant, the user must already have indicated interest in that resource in the collected historical data; the absence of such an indication, however, does not mean the user would not be interested in the newly recommended resource. User experiments help to complement offline evaluations, but due to the effort and cost of performing them, very few are conducted. Crowdsourcing addresses this challenge by providing access to a sufficient number of willing users. This paper investigates the evaluation of a graph-based recommender system for TEL using crowdsourcing. Initial results show that crowdsourcing can indeed be used as an evaluation method for TEL recommender systems.
