Abstract: Forecasts of probability distributions are needed to support decision making in many applications. Predictive distributions should be evaluated according to the principle of maximising sharpness subject to calibration. Sharpness relates to the concentration of the predictive distributions, while calibration concerns their statistical consistency with the data. This paper focuses on calibration testing. It is important that a calibration test cannot be gamed by forecasts strategically designed to pass it. The widely used tests of probabilistic calibration for predictive distributions are based on the probability integral transform. Drawing on previous results for quantile prediction, we show that strategic distributional forecasting is a concern for these tests. To address this, we provide a simple extension of one of the tests. We illustrate these ideas using simulated data.
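The probability integral transform (PIT) test mentioned above can be sketched in a few lines: if a forecaster issues predictive distributions F_t and outcomes y_t are realised, the PIT values u_t = F_t(y_t) should be i.i.d. Uniform(0,1) under probabilistic calibration. A minimal illustration on simulated data, assuming Gaussian predictive distributions and using a standard Kolmogorov-Smirnov statistic against uniformity (an assumed setup for illustration, not the paper's exact test or extension):

```python
import math
import random

random.seed(0)
n = 1000

# Simulate a correctly calibrated forecaster: each outcome y_t is drawn
# from the issued predictive distribution N(mu_t, 1).
mus = [random.gauss(0, 1) for _ in range(n)]
ys = [random.gauss(mu, 1) for mu in mus]

def norm_cdf(x, mu=0.0, sigma=1.0):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# PIT values u_t = F_t(y_t), sorted for the empirical-CDF comparison.
pit = sorted(norm_cdf(y, mu) for y, mu in zip(ys, mus))

# Kolmogorov-Smirnov statistic of the PIT sample against Uniform(0,1).
d = max(
    max(abs((i + 1) / n - u), abs(u - i / n))
    for i, u in enumerate(pit)
)
crit = 1.358 / math.sqrt(n)  # approximate 5% critical value for large n
print("KS statistic:", round(d, 4), "reject uniformity:", d > crit)
```

Because the simulated forecaster is calibrated here, the PIT sample is genuinely uniform and the test should usually not reject; the paper's point is that a strategically designed forecaster can produce uniform-looking PIT values without issuing the true distributions.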