"Calibeating": Beating Forecasters at Their Own Game
Wednesday 07.05, 11:30 - 12:30
Game Theory Seminar
Bloomfield 424
Joint work with Dean Foster.
Abstract:
In order to identify expertise, forecasters should not be tested by their
calibration score, which can always be made arbitrarily small, but rather
by their Brier score. The Brier score is the sum of the calibration score
and the refinement score; the latter measures how well the periods are
sorted into bins with the same forecast, and thus attests to "expertise." This
raises the question of whether one can gain calibration without losing
expertise, which we refer to as "calibeating." We provide an easy way to
calibeat any forecast by a deterministic online procedure. We moreover
show that calibeating can be achieved by a stochastic procedure that is
itself calibrated, and then extend the results to simultaneously
calibeating multiple procedures, and to deterministic procedures that are
continuously calibrated.
http://www.ma.huji.ac.il/hart/abs/calib-beat.html
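For readers unfamiliar with the decomposition cited in the abstract, here is a
minimal Python sketch (not part of the talk; the binary-outcome setting and all
names are illustrative assumptions) that computes the Brier score of a sequence
of probability forecasts and checks that it equals calibration plus refinement:

    from collections import defaultdict

    def brier_decomposition(forecasts, outcomes):
        """Return (brier, calibration, refinement) for probability
        forecasts in [0, 1] against binary outcomes in {0, 1}."""
        T = len(forecasts)
        bins = defaultdict(list)          # group periods by identical forecast
        for p, a in zip(forecasts, outcomes):
            bins[p].append(a)
        calibration = refinement = 0.0
        for x, outs in bins.items():
            n = len(outs)
            freq = sum(outs) / n          # empirical frequency within the bin
            calibration += n * (x - freq) ** 2   # forecast vs. realized frequency
            refinement += n * freq * (1 - freq)  # within-bin variance ("sorting quality")
        brier = sum((p - a) ** 2 for p, a in zip(forecasts, outcomes)) / T
        return brier, calibration / T, refinement / T

    # The identity Brier = calibration + refinement holds exactly:
    b, k, r = brier_decomposition([0.3, 0.3, 0.7, 0.7, 0.7], [0, 1, 1, 1, 0])
    print(b, k, r)                        # 0.25 0.0166... 0.2333...
    assert abs(b - (k + r)) < 1e-12

In this decomposition, a low refinement score means the forecast sorts the
periods into nearly homogeneous bins; this is the "expertise" that, per the
abstract, calibeating aims to preserve while improving calibration.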