Crowdsourcing as an Online Evaluation Method for Recommender Systems for E-learning

March 13, 2013


As a complement to offline evaluations, which test the performance of algorithms on pre-collected or simulated data sets, crowdsourcing platforms can be used to perform online evaluations. Online evaluations have the added advantage of addressing the incompleteness problem: recommended resources that could not be judged in an offline setting can now be determined to be relevant or not. Situational judgement tests will be used to decide whether resources recommended to a user working on a particular activity (described as the situation of the learner) are relevant to this activity.
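The crowd judgements gathered in such a situational judgement test can be turned into relevance labels for previously unjudged resources, for instance by majority vote. The following is a minimal sketch of this aggregation step; the function name, the (situation, resource) keying, and the example data are illustrative assumptions, not the actual setup of the thesis.

```python
from collections import Counter

def aggregate_judgements(judgements):
    """Majority vote over crowd workers' relevant/not-relevant labels.

    `judgements` maps a (learner_situation, resource) pair to the list of
    binary labels collected from workers (True = relevant). Names and data
    shapes here are illustrative, not the thesis's actual data model.
    """
    labels = {}
    for pair, votes in judgements.items():
        counts = Counter(votes)
        # Ties count as not relevant (a conservative, assumed policy).
        labels[pair] = counts[True] > counts[False]
    return labels

# Example: three hypothetical workers judged each recommended resource
# for one learner situation.
crowd = {
    ("write-essay", "resource-A"): [True, True, False],
    ("write-essay", "resource-B"): [False, False, True],
}
print(aggregate_judgements(crowd))
# -> {('write-essay', 'resource-A'): True, ('write-essay', 'resource-B'): False}
```

In practice one would also weight or filter workers by quality (e.g. via gold questions), which the platforms investigated below support to varying degrees.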

Crowdsourcing platforms have already been used successfully for online evaluations of recommender systems. This approach will now be applied to the domain of recommender systems in e-learning, specifically to the graph-based recommendation of resources.

Goal of Thesis

  • The goal of this thesis is to analyze and determine how crowdsourcing can best be applied to evaluate the ranking of learning resources with respect to the learner's current learning goal or task.
  • Crowdsourcing platforms such as Amazon Mechanical Turk, CrowdFlower, or Microworkers will be investigated and analyzed with regard to their suitability for our tasks.
  • Offline evaluation results of an algorithm will be compared to the results obtained from the crowdsourced online evaluations.
  • Improvements to the algorithm evaluated will be proposed.
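One concrete way to compare offline and crowdsourced results, sketched below under assumed names and toy data, is to compute the same ranking metric (here precision@k) twice: once with the incomplete offline relevance judgements, and once after crowd workers have judged the previously unjudged resources.

```python
def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k recommended resources judged relevant."""
    top = ranked[:k]
    return sum(1 for r in top if r in relevant) / k

# Toy data (assumed, for illustration only). Offline judgements cover
# only part of the catalogue; unjudged items count as irrelevant, which
# is exactly the incompleteness problem the online evaluation addresses.
ranked = ["A", "B", "C", "D"]        # recommender's ranking for one situation
offline_relevant = {"A"}             # "C" was never judged offline
crowd_relevant = {"A", "C"}          # crowd workers also judged "C" relevant

print(precision_at_k(ranked, offline_relevant, 4))  # 0.25
print(precision_at_k(ranked, crowd_relevant, 4))    # 0.5
```

The gap between the two numbers quantifies how much the offline evaluation underestimated the algorithm, and thus motivates the comparison planned in this thesis.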

Keywords: Crowdsourcing, Recommender Systems, Evaluation

Research Area(s): Knowledge & Educational Technologies

Tutor:

Student: Florian Jomrich
