Similar Articles (20 results)
1.
We investigate the added value of combining density forecasts focused on a specific region of support. We develop forecast combination schemes that assign weights to individual predictive densities based on the censored likelihood scoring rule and the continuous ranked probability score (CRPS), and compare these to weighting schemes based on the log score and to the equally weighted scheme. We apply this approach in the context of measuring downside risk in equity markets using recently developed volatility models, including HEAVY, realized GARCH and GAS models, applied to daily returns on the S&P 500, DJIA, FTSE and Nikkei indexes from 2000 until 2013. The results show that combined density forecasts based on optimizing the censored likelihood scoring rule significantly outperform pooling based on equal weights or on optimizing the CRPS or the log scoring rule. In addition, 99% Value-at-Risk estimates improve when weights are based on the censored likelihood scoring rule.
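As a point of reference, here is a minimal sketch (not the authors' implementation) of the censored likelihood score for a left-tail region of interest: observations inside the region get full credit for the density, observations outside contribute only the total censored mass. The normal predictive density and the 5% threshold are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def censored_log_score(y, pdf, cdf, r):
    """Censored likelihood score for the left-tail region (-inf, r].

    Observations inside the region contribute log f(y); observations
    outside contribute log(1 - F(r)), the total mass above the threshold.
    Higher values are better.
    """
    return np.log(pdf(y)) if y <= r else np.log(1.0 - cdf(r))

# Illustrative assumptions: N(0, 1.5^2) predictive density, 5% left-tail region.
f = stats.norm(loc=0.0, scale=1.5)
r = f.ppf(0.05)
returns = np.array([-3.2, 0.4, -0.1, -2.8, 1.1])
print(np.mean([censored_log_score(y, f.pdf, f.cdf, r) for y in returns]))
```

In the spirit of the paper, combination weights would then be chosen to maximize the average of this score for the pooled density over the weight simplex.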

2.
Proper scoring rules are used to assess the out-of-sample accuracy of probabilistic forecasts, with different scoring rules rewarding distinct aspects of forecast performance. Herein, we re-investigate the practice of using proper scoring rules to produce probabilistic forecasts that are ‘optimal’ according to a given score and assess when their out-of-sample accuracy is superior to alternative forecasts, according to that score. Particular attention is paid to relative predictive performance under misspecification of the predictive model. Using numerical illustrations, we document several novel findings within this paradigm that highlight the important interplay between the true data generating process, the assumed predictive model and the scoring rule. Notably, we show that only when a predictive model is sufficiently compatible with the true process to allow a particular score criterion to reward what it is designed to reward, will this approach to forecasting reap benefits. Subject to this compatibility, however, the superiority of the optimal forecast will be greater, the greater is the degree of misspecification. We explore these issues under a range of different scenarios and using both artificially simulated and empirical data.

3.
Approximate Bayesian Computation (ABC) has become increasingly prominent as a method for conducting parameter inference in a range of challenging statistical problems, most notably those characterized by an intractable likelihood function. In this paper, we focus on the use of ABC not as a tool for parametric inference, but as a means of generating probabilistic forecasts, or for conducting what we refer to as ‘approximate Bayesian forecasting’. The four key issues explored are: (i) the link between the theoretical behavior of the ABC posterior and that of the ABC-based predictive; (ii) the use of proper scoring rules to measure the (potential) loss of forecast accuracy when using an approximate rather than an exact predictive; (iii) the performance of approximate Bayesian forecasting in state space models; and (iv) the use of forecasting criteria to inform the selection of ABC summaries in empirical settings. The primary finding of the paper is that ABC can provide a computationally efficient means of generating probabilistic forecasts that are nearly identical to those produced by the exact predictive, and in a fraction of the time required to produce predictions via an exact method.
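As a rough illustration of the idea (a toy normal location model rather than the paper's state space applications), the sketch below draws parameters by ABC rejection and pushes the accepted draws through the one-step-ahead conditional to obtain an approximate predictive; the prior, summary statistic and tolerance are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed data: y_t ~ N(theta, 1) with unknown theta.
y_obs = rng.normal(1.0, 1.0, size=100)
s_obs = y_obs.mean()                                   # summary statistic

# ABC rejection: draw theta from the prior, simulate data of the same size,
# and keep draws whose simulated summary is close to the observed one.
n_prior, T = 50_000, y_obs.size
theta_prior = rng.normal(0.0, 3.0, size=n_prior)
s_sim = rng.normal(theta_prior[:, None], 1.0, size=(n_prior, T)).mean(axis=1)
theta_abc = theta_prior[np.abs(s_sim - s_obs) < 0.05]

# Approximate Bayesian forecast: draws from the ABC-based predictive.
y_pred = rng.normal(theta_abc, 1.0)
print(theta_abc.size, y_pred.mean(), y_pred.std())
```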

4.
This paper develops a testing framework for comparing the predictive accuracy of competing multivariate density forecasts with different predictive copulas, focusing on specific parts of the copula support. The tests are framed in the context of the Kullback–Leibler Information Criterion, using (out-of-sample) conditional likelihood and censored likelihood in order to focus the evaluation on the region of interest. Monte Carlo simulations document that the resulting test statistics have satisfactory size and power properties for realistic sample sizes. In an empirical application to daily changes of yields on government bonds of the G7 countries, we obtain insights into why the Student-t and Clayton mixture copula outperforms the other copulas considered; mixing the Clayton copula with the t-copula is of particular importance for obtaining high forecast accuracy in periods of jointly falling yields.

5.
This article provides a practical evaluation of some leading density forecast scoring rules in the context of forecast surveys. We analyse the density forecasts of UK inflation obtained from the Bank of England’s Survey of External Forecasters, considering both the survey average forecasts published in the Bank’s quarterly Inflation Report, and the individual survey responses recently made available to researchers by the Bank. The density forecasts are collected in histogram format, and the ranked probability score (RPS) is shown to have clear advantages over other scoring rules. Missing observations are a feature of forecast surveys, and we introduce an adjustment to the RPS, based on the Yates decomposition, to improve its comparative measurement of forecaster performance in the face of differential non-response. The new measure, denoted RPS*, is recommended to analysts of forecast surveys.
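For concreteness, a small sketch of the ranked probability score for a histogram-format forecast (the Yates-decomposition adjustment behind RPS* is not reproduced here); the bin probabilities are invented.

```python
import numpy as np

def rps(probs, outcome_bin):
    """Ranked probability score for a histogram forecast over ordered bins.

    probs: forecast probabilities for each bin (summing to one).
    outcome_bin: index of the bin containing the realized value.
    Lower values are better.
    """
    cdf_forecast = np.cumsum(np.asarray(probs, dtype=float))
    cdf_outcome = (np.arange(len(probs)) >= outcome_bin).astype(float)
    return float(np.sum((cdf_forecast - cdf_outcome) ** 2))

# Illustrative inflation histogram over five ordered ranges; the outcome
# fell in the middle bin.
print(rps([0.05, 0.20, 0.45, 0.25, 0.05], outcome_bin=2))
```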

6.
To evaluate density forecasts, the applied scoring rule is often chosen arbitrarily. The selection of the scoring rule strongly influences the ranking of forecasts. This paper identifies overconfidence as the main driver of scoring differences. A novel approach to measuring overconfidence is proposed. Based on a non-proper scoring rule, the forecasts can be individually adjusted toward a calibrated forecast. Applying the adjustment procedure to the Survey of Professional Forecasters, we show that out-of-sample forecasts can be significantly improved. The ranking of the adjusted forecasts also becomes less sensitive to the selection of scoring rules.

7.
Verifying probabilistic forecasts for extreme events is a highly active research area because popular media and public opinion are naturally focused on extreme events, and biased conclusions are readily made. In this context, classical verification methods tailored for extreme events, such as thresholded and weighted scoring rules, have undesirable properties that cannot be mitigated, and the well-known continuous ranked probability score (CRPS) is no exception. In this paper, we define a formal framework for assessing the behavior of forecast evaluation procedures with respect to extreme events, which we use to demonstrate that assessment based on the expectation of a proper score is not suitable for extremes. Alternatively, we propose studying the properties of the CRPS as a random variable, using extreme value theory, to address extreme event verification. An index is introduced to compare calibrated forecasts, which summarizes the ability of probabilistic forecasts to predict extremes. The strengths and limitations of this method are discussed using both theoretical arguments and simulations.
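For reference, a sample-based estimate of the CRPS via the identity CRPS(F, y) = E|X − y| − ½ E|X − X′|; the forecast ensemble is simulated only for illustration, and studying the score as a random variable, as the paper proposes, would repeat this across many forecast cases.

```python
import numpy as np

def crps_ensemble(draws, y):
    """Sample-based CRPS estimate: mean|X - y| - 0.5 * mean|X - X'|.

    draws: draws from the predictive distribution; lower scores are better.
    """
    draws = np.asarray(draws, dtype=float)
    term1 = np.mean(np.abs(draws - y))
    term2 = 0.5 * np.mean(np.abs(draws[:, None] - draws[None, :]))
    return term1 - term2

rng = np.random.default_rng(1)
forecast_draws = rng.normal(0.0, 1.0, size=2000)
print(crps_ensemble(forecast_draws, y=3.5))   # an 'extreme' realization
```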

8.
Many procedures exist to determine the winner of an election, and many studies have examined the conditions under which each of them works best. Much less attention has been given to how frequently these procedures produce the same winner. The focus of the current study is to examine the likelihood that weighted scoring rules and weighted scoring elimination rules always elect the same winner in three-candidate elections.
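A minimal sketch of how different weight vectors can crown different winners in a three-candidate election; the preference profile is invented and the elimination variants studied in the paper are not implemented.

```python
from collections import Counter

# Hypothetical profile: each ballot ranks the three candidates from most to least preferred.
ballots = [("A", "B", "C")] * 6 + [("B", "C", "A")] * 5 + [("C", "B", "A")] * 4

def weighted_scoring_winner(ballots, weights):
    """Winner under a weighted scoring rule: a candidate receives weights[i]
    points from every ballot that ranks it in position i."""
    scores = Counter()
    for ballot in ballots:
        for position, candidate in enumerate(ballot):
            scores[candidate] += weights[position]
    return max(scores, key=scores.get)

print(weighted_scoring_winner(ballots, (1, 0, 0)))   # plurality -> A
print(weighted_scoring_winner(ballots, (2, 1, 0)))   # Borda count -> B
print(weighted_scoring_winner(ballots, (1, 1, 0)))   # antiplurality -> B
```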

9.
Quantiles as optimal point forecasts
Loss functions play a central role in the theory and practice of forecasting. If the loss function is quadratic, the mean of the predictive distribution is the unique optimal point predictor. If the loss is symmetric piecewise linear, any median is an optimal point forecast. Quantiles arise as optimal point forecasts under a general class of economically relevant loss functions, which nests the asymmetric piecewise linear loss, and which we refer to as generalized piecewise linear (GPL). The level of the quantile depends on a generic asymmetry parameter which reflects the possibly distinct costs of underprediction and overprediction. Conversely, a loss function for which quantiles are optimal point forecasts is necessarily GPL. We review characterizations of this type in the work of Thomson, Saerens and Komunjer, and relate them to proper scoring rules, incentive-compatible compensation schemes and quantile regression. In the empirical part of the paper, the relevance of decision theoretic guidance in the transition from a predictive distribution to a point forecast is illustrated using the Bank of England’s density forecasts of United Kingdom inflation rates, and probabilistic predictions of wind energy resources in the Pacific Northwest.
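A quick numerical check of the central claim for the asymmetric piecewise linear (pinball) loss nested in the GPL class: the point forecast minimizing expected loss is the quantile at the asymmetry level. The predictive sample and the asymmetry parameter are arbitrary choices for the illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def pinball_loss(forecast, y, alpha):
    """Asymmetric piecewise linear loss with asymmetry parameter alpha:
    underprediction costs alpha per unit, overprediction costs (1 - alpha)."""
    u = y - forecast
    return np.mean(np.where(u >= 0, alpha * u, (alpha - 1.0) * u))

rng = np.random.default_rng(2)
y = rng.gamma(shape=2.0, scale=1.0, size=50_000)    # skewed predictive sample
alpha = 0.75                                        # underprediction is the costlier error

res = minimize_scalar(lambda x: pinball_loss(x, y, alpha),
                      bounds=(0.0, 20.0), method="bounded")
print(res.x, np.quantile(y, alpha))   # optimal point forecast ~ 75% quantile
```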

10.
Least-squares forecast averaging
This paper proposes forecast combination based on the method of Mallows Model Averaging (MMA). The method selects forecast weights by minimizing a Mallows criterion. This criterion is an asymptotically unbiased estimate of both the in-sample mean-squared error (MSE) and the out-of-sample one-step-ahead mean-squared forecast error (MSFE). Furthermore, the MMA weights are asymptotically mean-square optimal in the absence of time-series dependence. We show how to compute MMA weights in forecasting settings, and investigate the performance of the method in simple but illustrative simulation environments. We find that the MMA forecasts have low MSFE and have much lower maximum regret than other feasible forecasting methods, including equal weighting, BIC selection, weighted BIC, AIC selection, weighted AIC, Bates–Granger combination, predictive least squares, and Granger–Ramanathan combination.
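A rough sketch of selecting combination weights by minimizing a Mallows-type criterion over a set of nested least-squares models; the simulated design, the plug-in error variance from the largest model, and the use of SLSQP are all choices made for the illustration, not details taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n, K = 200, 4
X = rng.normal(size=(n, K))
beta = np.array([1.0, 0.5, 0.2, 0.0])
y = X @ beta + rng.normal(size=n)

# Fitted values and dimensions of nested OLS models using the first k regressors.
fits, dims = [], []
for k in range(1, K + 1):
    coef = np.linalg.lstsq(X[:, :k], y, rcond=None)[0]
    fits.append(X[:, :k] @ coef)
    dims.append(float(k))
fits, dims = np.column_stack(fits), np.array(dims)

# Plug-in error variance from the largest model (a common choice).
sigma2 = np.sum((y - fits[:, -1]) ** 2) / (n - K)

def mallows(w):
    """Mallows criterion: in-sample SSE of the combined fit plus a penalty
    proportional to the weighted effective number of parameters."""
    resid = y - fits @ w
    return resid @ resid + 2.0 * sigma2 * (dims @ w)

cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
res = minimize(mallows, x0=np.full(K, 1.0 / K), bounds=[(0.0, 1.0)] * K,
               constraints=cons, method="SLSQP")
print(np.round(res.x, 3))             # MMA combination weights
```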

11.
This paper proposes and analyses the Kullback–Leibler information criterion (KLIC) as a unified statistical tool to evaluate, compare and combine density forecasts. Use of the KLIC is particularly attractive, as well as operationally convenient, given its equivalence with the widely used Berkowitz likelihood ratio test for the evaluation of individual density forecasts that exploits the probability integral transforms. Parallels with the comparison and combination of point forecasts are made. This and related Monte Carlo experiments help draw out properties of combined density forecasts. We illustrate the uses of the KLIC in an application to two widely used published density forecasts for UK inflation, namely the Bank of England and NIESR ‘fan’ charts.
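A simplified sketch of the two ingredients named in the abstract: the KLIC difference between two density forecasts estimated as an average log-score differential, and a Berkowitz-style likelihood-ratio check on the normal-transformed PITs (the i.i.d. version with two degrees of freedom, dropping the AR(1) component of the original test). Data and forecasts are simulated assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
y = rng.standard_t(df=5, size=500)                  # realized outcomes

f = stats.t(df=5)                                   # density forecast 1 (well specified)
g = stats.norm(loc=0.0, scale=1.0)                  # density forecast 2 (misspecified)

# KLIC difference estimated by the average log-score differential;
# positive values favour f over g.
klic_diff = np.mean(f.logpdf(y) - g.logpdf(y))

# Berkowitz-style LR check on g: under correct specification the
# normal-transformed PITs z should be (i.i.d.) standard normal.
pits = np.clip(g.cdf(y), 1e-12, 1.0 - 1e-12)        # guard against PITs of exactly 0 or 1
z = stats.norm.ppf(pits)
mu_hat, sd_hat = z.mean(), z.std()
lr = -2.0 * (stats.norm.logpdf(z).sum() - stats.norm.logpdf(z, mu_hat, sd_hat).sum())
p_value = stats.chi2.sf(lr, df=2)
print(klic_diff, lr, p_value)
```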

12.
We consider a method for producing multivariate density forecasts that satisfy moment restrictions implied by economic theory, such as Euler conditions. The method starts from a base forecast that might not satisfy the theoretical restrictions and forces it to satisfy the moment conditions using exponential tilting. Although exponential tilting has been considered before in a Bayesian context (Robertson et al. 2005), our main contributions are: (1) to adapt the method to a classical inferential context with out-of-sample evaluation objectives and parameter estimation uncertainty; and (2) to formally discuss the conditions under which the method delivers improvements in forecast accuracy. An empirical illustration which incorporates Euler conditions into forecasts produced by Bayesian vector autoregressions shows that the improvements in accuracy can be sizable and significant.
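A minimal sketch of exponential tilting applied to draws from a base density forecast: tilted weights proportional to exp(λ·g(x)) are obtained by minimizing the dual objective, after which the reweighted draws satisfy the moment condition while staying as close as possible (in the Kullback–Leibler sense) to the base forecast. The base forecast and the single zero-mean restriction are toy assumptions, not the paper's Euler conditions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
draws = rng.normal(loc=0.4, scale=1.0, size=5000)    # draws from the base density forecast

def g(x):
    """Moment function implied by 'theory': E[g(X)] = 0 (here, a zero mean)."""
    return x

# Dual problem: the minimizer lam of mean(exp(lam * g(x))) gives weights
# proportional to exp(lam * g(x)) that satisfy the moment condition exactly.
obj = lambda lam: np.mean(np.exp(lam[0] * g(draws)))
lam_hat = minimize(obj, x0=np.zeros(1), method="BFGS").x[0]

w = np.exp(lam_hat * g(draws))
w /= w.sum()
print(np.sum(w * draws))             # tilted mean is (numerically) zero
print(np.sum(w * draws ** 2))        # other moments move only as needed
```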

13.
Restricted maximum likelihood (REML) estimation has recently been shown to provide less biased estimates in autoregressive series. A simple weighted least squares approximate REML procedure has been developed that is particularly useful for vector autoregressive processes. Here, we compare the forecasts of such processes using both the standard ordinary least squares (OLS) estimates and the new approximate REML estimates. Forecasts based on the approximate REML estimates are found to provide a significant improvement over those obtained using the standard OLS estimates.

14.
In this work, we propose a novel framework for density forecast combination by constructing time-varying weights based on time-varying features. Our framework estimates weights in the forecast combination via Bayesian log predictive scores, in which the optimal forecast combination is determined by time series features from historical information. In particular, we use an automatic Bayesian variable selection method to identify the importance of different features. As a result, our approach has better interpretability than other black-box forecast combination schemes. We apply our framework to stock market data and M3 competition data. Based on our structure, a simple maximum-a-posteriori scheme outperforms benchmark methods, and Bayesian variable selection can further enhance the accuracy of both point forecasts and density forecasts.

15.
We consider the properties of weighted linear combinations of prediction models, or linear pools, evaluated using the log predictive scoring rule. Although exactly one model has limiting posterior probability equal to one, an optimal linear combination typically includes several models with positive weights. We derive several interesting results: for example, a model with positive weight in a pool may have zero weight if some other models are deleted from that pool. The results are illustrated using S&P 500 returns with six prediction models. In this example, models that are clearly inferior by the usual scoring criteria have positive weights in optimal linear pools.
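A small sketch of choosing linear-pool weights by maximizing the average log predictive score over past realizations, with a softmax reparameterization to keep the weights on the unit simplex; the three candidate densities and the data are simulated stand-ins for the six prediction models used in the paper.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(6)
y = rng.standard_t(df=4, size=1000)                  # stand-in for realized returns

# Predictive densities of three candidate models, evaluated at the realizations.
dens = np.column_stack([
    stats.norm(0.0, 1.0).pdf(y),
    stats.norm(0.0, 1.5).pdf(y),
    stats.t(df=4).pdf(y),
])

def neg_log_score(theta):
    """Negative average log score of the pool, with softmax weights."""
    w = np.exp(theta - theta.max())
    w /= w.sum()
    return -np.mean(np.log(dens @ w))

theta_hat = minimize(neg_log_score, x0=np.zeros(dens.shape[1]), method="BFGS").x
w_opt = np.exp(theta_hat - theta_hat.max())
w_opt /= w_opt.sum()
print(np.round(w_opt, 3))                            # optimal linear-pool weights
```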

16.
We propose new methods for evaluating predictive densities. The methods include Kolmogorov–Smirnov and Cramér–von Mises-type tests for the correct specification of predictive densities robust to dynamic mis-specification. The novelty is that the tests can detect mis-specification in the predictive densities even if it appears only over a fraction of the sample, due to the presence of instabilities. Our results indicate that our tests are well sized and have good power in detecting mis-specification in predictive densities, even when it is time-varying. An application to density forecasts of the Survey of Professional Forecasters demonstrates the usefulness of the proposed methodologies.
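The basic ingredient behind such tests, sketched below: under a correctly specified predictive density, the probability integral transforms of the realizations are i.i.d. uniform, which can be checked with a Kolmogorov–Smirnov statistic. This plain version is not robust to dynamic misspecification or instabilities, which is exactly what the paper's modified tests address; the data and predictive distribution are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
y = rng.standard_t(df=6, size=800)        # realized series

# Predictive distribution asserted by the forecaster (here: standard normal).
pits = stats.norm.cdf(y)

# Under correct specification the PITs are U(0,1); the KS test checks this.
ks_stat, p_value = stats.kstest(pits, "uniform")
print(ks_stat, p_value)
```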

17.
We consider tests of forecast encompassing for probability forecasts, for both quadratic and logarithmic scoring rules. We propose test statistics for the null of forecast encompassing, present the limiting distributions of the test statistics, and investigate the impact of estimating the forecasting models' parameters on these distributions. The small-sample performance is investigated in terms of small numbers of forecasts and model estimation sample sizes. We show the usefulness of the tests for the evaluation of recession probability forecasts from logit models with different leading indicators as explanatory variables, and for evaluating survey-based probability forecasts.

18.
The study considers the significance of weight selection for weighted scoring rules on three alternatives. Most previous research in this area has been based on the assumption of the impartial culture condition. Intuition suggests that if the preferences of individuals in a large group are at all distant from the impartial culture condition, then the weights selected for a weighted scoring rule become unimportant: one alternative should be a clear front-runner and win under almost any weighted scoring rule. A measure of social homogeneity is used to represent the distance between a given set of individuals' preferences and the impartial culture condition. Based on computer simulation analysis, the results indicate that specific weight selection does become less important as the level of social homogeneity increases. However, specific weight selection is still found to matter even in situations with the greatest homogeneity, that is, preferences farthest from the impartial culture condition.

19.
Efficient high-dimensional importance sampling
The paper describes a simple, generic and yet highly accurate efficient importance sampling (EIS) Monte Carlo (MC) procedure for the evaluation of high-dimensional numerical integrals. EIS is based upon a sequence of auxiliary weighted regressions, which are actually linear under appropriate conditions. It can be used to evaluate likelihood functions and byproducts thereof, such as ML estimators, for models which depend upon unobservable variables. A dynamic stochastic volatility model and a logit panel data model with unobserved heterogeneity (random effects) in both dimensions are used to illustrate the high numerical accuracy of EIS, even with a small number of MC draws. MC simulations are used to characterize the finite-sample numerical and statistical properties of EIS-based ML estimators.
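A one-step importance sampling sketch for a simple expectation (a likelihood contribution involving a latent variable has the same structure); EIS goes further by choosing the sampler through a sequence of auxiliary weighted regressions, which is not reproduced here. The target, proposal and integrand are assumptions for the illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Target: E[h(X)] for X ~ N(2, 0.5^2), estimated with draws from a broader sampler.
target = stats.norm(loc=2.0, scale=0.5)
proposal = stats.norm(loc=0.0, scale=2.0)
h = lambda x: np.exp(-x)                              # integrand of interest

x = rng.normal(loc=0.0, scale=2.0, size=100_000)      # draws from the proposal
w = np.exp(target.logpdf(x) - proposal.logpdf(x))     # importance weights
estimate = np.mean(w * h(x))
ess = w.sum() ** 2 / np.sum(w ** 2)                   # effective sample size: sampler quality
print(estimate, ess)                                  # exact value: exp(-2 + 0.5**2 / 2) ~ 0.153
```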

20.
All quantitative evaluations of fiscal sustainability that include the effects of population ageing must utilize demographic forecasts. It is well known that such forecasts are uncertain, and some studies have taken that into account by using stochastic population projections jointly with economic models. We develop this approach further by introducing regular demographic forecast revisions that are embedded in stochastic population projections. This allows us to separate, for each demographic outcome and under different policy rules, the expected and realized effects of population ageing on public finances. In our Finnish application, demographic uncertainty produces a considerable sustainability risk. We consider policies that reduce the likelihood of getting highly indebted and demonstrate that, although demographic forecasts are uncertain, they contain enough information to be useful in forward-looking policy rules.
