How (Not) to Run a Forecasting Competition: Incentives and Efficiency
Rafael Frongillo (University of Colorado Boulder)
Calvin Lab Auditorium
Forecasting competitions, wherein forecasters submit predictions about future events or unseen data points, are an increasingly common way to gather information and identify experts. One of the most prominent competition platforms is Kaggle, which has run machine learning competitions with prizes up to 3 million USD. The most common approach to running such a competition is also the simplest: score each prediction given the outcome of each event (or data point), and pick the forecaster with the highest score as the winner. Perhaps surprisingly, this simple mechanism has poor incentives, especially when the number of events (data points) is small relative to the number of forecasters. Witkowski et al. (2018) identified this problem and proposed a clever solution, the Event Lotteries Forecasting (ELF) mechanism. Unfortunately, to choose the best forecaster as the winner, ELF still requires a large number of events. This talk will give an overview of the problem and introduce a new mechanism which achieves robust incentives with far fewer events. Our approach borrows ideas from online machine learning; we will see how the same mechanism solves an open question for online learning from strategic experts.
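To fix ideas, the "simple mechanism" described above can be stated in a few lines: score each forecaster's predictions with a scoring rule and declare the top scorer the winner. The sketch below is illustrative only, assuming binary events and the quadratic (Brier) scoring rule; the names `brier_score` and `simple_winner` are made up for this example and are not from the talk.

```python
import numpy as np

def brier_score(prediction: float, outcome: int) -> float:
    """Quadratic (Brier) score, negated so that higher is better."""
    return -(prediction - outcome) ** 2

def simple_winner(predictions: np.ndarray, outcomes: np.ndarray) -> int:
    """Pick the forecaster with the highest total score.

    predictions: shape (n_forecasters, n_events), probabilities in [0, 1]
    outcomes:    shape (n_events,), realized 0/1 outcomes
    """
    totals = np.array([
        sum(brier_score(p, y) for p, y in zip(row, outcomes))
        for row in predictions
    ])
    return int(np.argmax(totals))

# Example: 3 forecasters, 2 binary events that both come up 1.
preds = np.array([[0.9, 0.8],
                  [0.6, 0.7],
                  [0.2, 0.4]])
outcomes = np.array([1, 1])
print(simple_winner(preds, outcomes))  # forecaster 0 wins
```

Roughly, the incentive problem is that maximizing the probability of having the highest score is not the same as maximizing expected score, so with few events a forecaster may do better by reporting more extreme predictions than their true beliefs.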