“The Choice of Sample Split in Out-of-Sample Forecast Evaluation” (joint with Allan Timmermann) by Peter Hansen
European University Institute
Out-of-sample tests of forecast performance depend on how a given data set is split into estimation and evaluation periods, yet no guidance exists on how to choose the split point. Empirical forecast evaluation results can therefore be difficult to interpret, particularly when several values of the split point might have been considered. When the sample split is viewed as a choice variable, rather than being fixed ex ante, we show that very large size distortions can occur for conventional tests of predictive accuracy. Spurious rejections are most likely to occur with a short evaluation sample, while conversely the power of forecast evaluation tests is strongest with long out-of-sample periods. To deal with size distortions, we propose a test statistic that is robust to the effect of considering multiple sample split points. Empirical applications to predictability of stock returns and inflation demonstrate that out-of-sample forecast evaluation results can critically depend on how the sample split is determined.
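The mechanism behind the size distortion can be sketched with a small simulation. The code below is an illustrative stand-in, not the paper's proposed statistic: for a series that is unpredictable by construction, it computes a Diebold-Mariano-style out-of-sample t-statistic at several candidate split points and reports the most favorable one. The function name `oos_tstats_over_splits` and the simple expanding-window OLS forecast are my own assumptions for the sketch.

```python
import numpy as np

def oos_tstats_over_splits(y, x, split_fracs):
    """For each candidate split fraction, forecast the evaluation sample with
    an expanding-window OLS model (y_t on x_{t-1}) and a benchmark mean
    forecast, then return the t-statistic on the loss differential."""
    T = len(y)
    stats = []
    for frac in split_fracs:
        R = int(T * frac)                      # estimation-sample size
        err_model, err_bench = [], []
        for t in range(R, T):
            yw, xw = y[:t], x[:t]
            # OLS fit of y_{s} on x_{s-1} over the expanding window;
            # np.polyfit returns (slope, intercept) for degree 1
            b, a = np.polyfit(xw[:-1], yw[1:], 1)
            err_model.append(y[t] - (a + b * x[t - 1]))
            err_bench.append(y[t] - yw.mean())
        # loss differential: positive values favor the candidate model
        d = np.array(err_bench) ** 2 - np.array(err_model) ** 2
        stats.append(np.sqrt(len(d)) * d.mean() / d.std(ddof=1))
    return np.array(stats)

rng = np.random.default_rng(0)
T = 300
x = rng.standard_normal(T)
y = rng.standard_normal(T)                     # y is unpredictable by design
splits = np.linspace(0.2, 0.8, 13)
stats = oos_tstats_over_splits(y, x, splits)
print(f"max t over splits: {stats.max():.2f}, median: {np.median(stats):.2f}")
```

Because a researcher who reports only the best-looking split is implicitly using `stats.max()` rather than a single pre-committed statistic, conventional critical values understate the true rejection rate; the paper's robust statistic adjusts the critical values for exactly this search.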