Posted on 2019-08-21. Authored by Yoram Bachrach, Ian A. Kash, Peter Key, Joel Oren.
We analyze human behavior in crowdsourcing contests using an all-pay auction model, in which all participants exert effort but only the highest bidder receives the reward. We had workers recruited from Amazon Mechanical Turk participate in an all-pay auction and contrasted the game-theoretic equilibrium with the choices of the human participants. We examine how people competing in the contest learn and adapt their bids, comparing their behavior to well-established online learning algorithms in a novel approach to quantifying the performance of humans as learners. For the crowdsourcing contest designer, our results show that a bimodal distribution of effort should be expected, with some very high and some very low effort, and that humans tend to overbid. Our results suggest that humans are weak learners in this setting, so it may be important to educate participants about the strategic implications of crowdsourcing contests.
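To make the setting concrete, the sketch below simulates a two-player complete-information all-pay auction and runs a standard online learner (multiplicative weights / Hedge, one of the well-established algorithms of the kind used as benchmarks) against an opponent playing the game's mixed equilibrium, which is to bid uniformly between zero and the prize value. The prize value `V`, the discretized bid grid, the learning rate `eta`, and the choice of Hedge are illustrative assumptions, not the paper's experimental parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

V = 100                      # prize value (illustrative assumption)
BIDS = np.arange(0, V + 1)   # discretized bid grid (assumption)

def all_pay_payoffs(bid_a, bid_b, value=V):
    """Two-player all-pay auction: both players forfeit their bids;
    the higher bidder wins the prize, and ties split it."""
    if bid_a > bid_b:
        win_a, win_b = value, 0.0
    elif bid_b > bid_a:
        win_a, win_b = 0.0, value
    else:
        win_a = win_b = value / 2.0
    return win_a - bid_a, win_b - bid_b

def hedge_bids(n_rounds=5000, eta=0.05):
    """Multiplicative-weights (Hedge) learner facing an opponent who
    plays the mixed equilibrium of this game: bids drawn uniformly
    from [0, V], which gives both players zero expected payoff."""
    log_w = np.zeros(len(BIDS))           # log-weights for stability
    for _ in range(n_rounds):
        opp_bid = rng.integers(0, V + 1)  # equilibrium opponent
        # Full-information update: evaluate every bid counterfactually.
        payoffs = np.array([all_pay_payoffs(b, opp_bid)[0] for b in BIDS])
        log_w += eta * payoffs / V        # rescale payoffs into [-1, 1]
    probs = np.exp(log_w - log_w.max())
    return probs / probs.sum()

dist = hedge_bids()
print("Hedge learner's mean bid:", (BIDS * dist).sum())
```

Against the uniform equilibrium mix, every bid earns close to zero in expectation, so a no-regret learner such as Hedge has little left to exploit; human bid trajectories that overbid persistently would sit visibly above such a benchmark.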
Publisher Statement
SPRINGER: The post-print version of the article may differ from the published version. The final publication is available at springerlink.com; DOI: 10.1007/s10458-019-09402-4
Citation
Bachrach, Y., Kash, I. A., Key, P., & Oren, J. (2019). Strategic behavior and learning in all-pay auctions: an empirical study using crowdsourced data. Autonomous Agents and Multi-Agent Systems, 33(1-2), 192-215. doi:10.1007/s10458-019-09402-4