Pairwise Ranking Aggregation by Non-interactive Crowdsourcing with Budget Constraints
Changjiang Cai, Haipei Sun, Boxiang Dong, Bo Zhang, Ting Wang and Wendy Hui Wang
Stevens Institute of Technology, Stevens Institute of Technology, Montclair State University, Stevens Institute of Technology, Lehigh University, Stevens Institute of Technology

Crowdsourced ranking algorithms ask the crowd to compare objects and infer a full ranking from the crowdsourced pairwise comparison results. In this paper, we consider the setting in which the task requester has a limited budget that can afford only a small number of pairwise comparisons. Complicating matters further, the crowd may return noisy comparison answers. We propose an approach that obtains a high-quality full ranking from a small number of pairwise preferences in two steps, namely task assignment and result inference. In the task assignment step, we generate pairwise comparison tasks that produce a full ranking with high probability. In the result inference step, based on the transitive property of pairwise comparisons and truth discovery, we design an efficient heuristic algorithm to find the best full ranking from the potentially conflicting pairwise preferences. The experimental results demonstrate the effectiveness and efficiency of our approach.
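To make the inference task concrete, the sketch below shows one simple way to aggregate noisy pairwise votes into a full ranking: each pair is resolved by majority vote and objects are ordered by a Copeland-style win count. This is only an illustrative baseline under those assumptions, not the truth-discovery heuristic proposed in the paper.

```python
# Illustrative baseline (not the paper's algorithm): resolve each pair
# by majority vote, then rank objects by their number of pairwise wins.
from collections import Counter

def aggregate_ranking(objects, votes):
    # Each vote is an ordered tuple (winner, loser).
    tally = Counter(votes)
    wins = {o: 0 for o in objects}
    for i, a in enumerate(objects):
        for b in objects[i + 1:]:
            # Majority vote decides the direction of each pair;
            # ties break toward the first object in the list.
            if tally[(a, b)] >= tally[(b, a)]:
                wins[a] += 1
            else:
                wins[b] += 1
    # Rank objects by descending win count (Copeland-style score).
    return sorted(objects, key=lambda o: -wins[o])

# Noisy votes: the majority says a > b despite one conflicting answer.
votes = [("a", "b"), ("a", "b"), ("b", "a"),
         ("b", "c"), ("a", "c")]
print(aggregate_ranking(["a", "b", "c"], votes))  # ['a', 'b', 'c']
```

Note that with a limited budget not every pair is compared; the hypothetical tie-breaking rule above then fills the gaps, whereas the paper's approach instead exploits transitivity and worker reliability.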