Impact of Task Recommendation Systems in Crowdsourcing Platforms
Key: BHSR17
Author: Kathrin Borchert, Matthias Hirth, Steffen Schnitzer, Christoph Rensing
Date: August 2017
Kind: In proceedings
Book title: Proceedings of Workshop on Responsible Recommendation (FATREC’17)
Abstract: Commercial crowdsourcing platforms accumulate hundreds of thousands of tasks with a wide range of rewards, durations, and skill requirements. This makes it difficult for workers to find tasks that match their preferences and their skill set. As a consequence, recommendation systems for matching tasks and workers are becoming increasingly important. In this work we examine how these recommendation systems may influence fairness aspects for workers, such as the success rate and the earnings. To draw generalizable conclusions, we use a simple simulation model that allows us to consider different types of crowdsourcing platforms, workers, and tasks in the evaluation. We show that even simple recommendation systems lead to improvements for most platform users. However, our results also indicate, and shall raise the awareness, that a small fraction of users is negatively affected by those systems.
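The abstract's central claim — that a recommendation system helps most workers while hurting a few — can be illustrated with a minimal toy simulation. The model below is an assumption of this note, not the authors' simulation: workers and tasks each get a skill value in [0, 1], a worker succeeds on a task if their skill meets its requirement, and the "recommender" simply routes each task to a randomly chosen qualified worker instead of a uniformly random one.

```python
import random

random.seed(42)

# Toy model (illustrative only, not the paper's simulation model):
# each worker has a skill level, each task a skill requirement, both in [0, 1].
NUM_WORKERS, NUM_TASKS = 100, 1000
workers = [random.random() for _ in range(NUM_WORKERS)]
tasks = [random.random() for _ in range(NUM_TASKS)]

def run(assign):
    """Assign every task via the given policy; return per-worker success rates."""
    successes = [0] * NUM_WORKERS
    attempts = [0] * NUM_WORKERS
    for req in tasks:
        w = assign(req)
        attempts[w] += 1
        if workers[w] >= req:          # success iff skill meets the requirement
            successes[w] += 1
    # Workers who never receive a task get a success rate of 0.
    return [s / a if a else 0.0 for s, a in zip(successes, attempts)]

# Baseline platform: each task goes to a uniformly random worker.
baseline = run(lambda req: random.randrange(NUM_WORKERS))

# Simple recommender: route each task to a random *qualified* worker,
# falling back to a random worker if nobody qualifies.
def recommend(req):
    qualified = [i for i, s in enumerate(workers) if s >= req]
    return random.choice(qualified) if qualified else random.randrange(NUM_WORKERS)

recommended = run(recommend)

improved = sum(r > b for b, r in zip(baseline, recommended))
worse = sum(r < b for b, r in zip(baseline, recommended))
print(f"workers with a higher success rate under recommendation: {improved}")
print(f"workers with a lower success rate under recommendation:  {worse}")
```

Even this crude policy reproduces the paper's qualitative finding: most workers' success rates rise because they are only offered tasks they can complete, while the lowest-skilled workers rarely appear in any qualified set and so receive few or no tasks at all, i.e. the harm shows up as lost access rather than failed tasks.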

The documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.