Strategic Behavior and Learning In All-Pay Auctions: An Empirical Study Using Crowdsourced Data
Journal contribution, posted on 21.08.2019 by Yoram Bachrach, Ian A. Kash, Peter Key, Joel Oren
We analyze human behavior in crowdsourcing contests using an all-pay auction model, in which all participants exert effort but only the highest bidder receives the reward. We had workers sourced from Amazon Mechanical Turk participate in an all-pay auction and contrasted the game-theoretic equilibrium with the choices of the human participants. We examine how people competing in the contest learn and adapt their bids, comparing their behavior to well-established online learning algorithms in a novel approach to quantifying the performance of humans as learners. For the crowdsourcing contest designer, our results show that a bimodal distribution of effort should be expected, with some very high effort and some very low effort, and that humans tend to overbid. Our results suggest that humans are weak learners in this setting, so it may be important to educate participants about the strategic implications of crowdsourcing contests.
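The equilibrium benchmark against which overbidding is measured can be illustrated for the simplest textbook case. In a complete-information all-pay auction with a common prize value and n bidders, the symmetric mixed-strategy equilibrium has bid CDF F(b) = (b/V)^(1/(n-1)), and expected payoffs are zero (rents are fully dissipated). The sketch below, with function names of our own choosing and not taken from the paper, samples equilibrium bids by inverse-CDF and checks that the average payoff is near zero; human overbidding would show up as a negative average payoff.

```python
import random


def equilibrium_bid(value, n):
    """Sample a bid from the symmetric mixed-strategy equilibrium of a
    complete-information all-pay auction with n bidders and a common prize
    worth `value`. The equilibrium CDF is F(b) = (b/value)**(1/(n-1)), so
    inverse-CDF sampling gives b = value * u**(n-1) for u ~ Uniform(0, 1)."""
    u = random.random()
    return value * u ** (n - 1)


def average_payoff(value=100.0, n=2, rounds=200_000, seed=0):
    """Average payoff of bidder 0 when all n bidders play the equilibrium
    strategy. Every bidder pays their own bid; only the highest bid wins
    the prize. In equilibrium this average should be close to zero."""
    random.seed(seed)
    total = 0.0
    for _ in range(rounds):
        bids = [equilibrium_bid(value, n) for _ in range(n)]
        win = value if bids[0] == max(bids) else 0.0
        total += win - bids[0]
    return total / rounds
```

A bidder who deterministically bids above the equilibrium distribution's support, or who bids high every round, earns strictly negative expected payoff here, which is one way to see why systematic overbidding is costly for participants.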