March 13, 2013
Goal of Thesis
As a complement to offline evaluations, which test the performance of algorithms on pre-collected or simulated data sets, crowdsourcing platforms can be used to perform online evaluations. Online evaluations have the added advantage of addressing the incompleteness problem: recommended resources whose relevance could not be judged in an offline setting can now be labelled as relevant or not. Situational judgement tests will be used to determine whether resources recommended to a user working on a particular activity (described as the situation of the learner) are relevant to that activity.
Online evaluations using crowdsourcing platforms have already been performed successfully to evaluate recommender systems. In this thesis, the approach will be applied to recommender systems in e-learning, specifically to the graph-based recommendation of learning resources.
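To illustrate what graph-based recommendation of resources can look like, the following sketch runs a random walk with restart (personalized PageRank) on a toy user-resource graph. The graph, node names, and parameters are illustrative assumptions, not the method actually developed in the thesis.

```python
# Illustrative sketch only: personalized PageRank over a small
# user-resource graph; all data and names here are hypothetical.

def personalized_pagerank(graph, source, alpha=0.85, iterations=50):
    """Random walk with restart from `source` on an undirected graph
    given as {node: [neighbours]}; returns node -> visit probability."""
    rank = {n: 0.0 for n in graph}
    rank[source] = 1.0
    for _ in range(iterations):
        nxt = {n: 0.0 for n in graph}
        nxt[source] += 1.0 - alpha  # restart mass flows back to the source
        for node, neighbours in graph.items():
            share = alpha * rank[node] / len(neighbours)
            for m in neighbours:
                nxt[m] += share
        rank = nxt
    return rank

def recommend(graph, user, resources, known):
    """Rank resources the user has not interacted with yet."""
    scores = personalized_pagerank(graph, user)
    candidates = [r for r in resources if r not in known]
    return sorted(candidates, key=scores.get, reverse=True)

# Toy bipartite graph: users u1, u2 connected to resources r1..r3.
graph = {
    "u1": ["r1", "r2"],
    "u2": ["r2", "r3"],
    "r1": ["u1"],
    "r2": ["u1", "u2"],
    "r3": ["u2"],
}
resources = {"r1", "r2", "r3"}

# u1 already knows r1 and r2, so r3 (reached via the shared r2/u2 path)
# is the remaining candidate.
print(recommend(graph, "u1", resources, known={"r1", "r2"}))
```

In an online evaluation, the ranked list produced this way would be shown to crowd workers, whose relevance judgements then fill the gaps an offline data set leaves open.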
Keywords: Crowdsourcing, Recommender Systems, Evaluation
Research Area(s): Knowledge & Educational Technologies
Student: Florian Jomrich