Similar documents
20 similar documents retrieved.
1.
This article provides a practical evaluation of some leading density forecast scoring rules in the context of forecast surveys. We analyse the density forecasts of UK inflation obtained from the Bank of England’s Survey of External Forecasters, considering both the survey average forecasts published in the Bank’s quarterly Inflation Report, and the individual survey responses recently made available to researchers by the Bank. The density forecasts are collected in histogram format, and the ranked probability score (RPS) is shown to have clear advantages over other scoring rules. Missing observations are a feature of forecast surveys, and we introduce an adjustment to the RPS, based on the Yates decomposition, to improve its comparative measurement of forecaster performance in the face of differential non-response. The new measure, denoted RPS*, is recommended to analysts of forecast surveys.
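The RPS* adjustment based on the Yates decomposition is specific to this paper, but the underlying ranked probability score for a histogram forecast is simple to compute. A minimal sketch in Python, using hypothetical bin probabilities and an assumed function name, might look as follows; it is an illustration of the standard RPS, not the paper's adjusted measure.

```python
import numpy as np

def ranked_probability_score(probs, outcome_bin):
    """Ranked probability score for a histogram density forecast.

    probs       : array of bin probabilities (should sum to 1)
    outcome_bin : index of the bin in which the realised value falls
    Lower scores indicate better forecasts.
    """
    probs = np.asarray(probs, dtype=float)
    cdf_forecast = np.cumsum(probs)                        # forecast CDF over the bins
    cdf_outcome = (np.arange(len(probs)) >= outcome_bin)   # step-function "CDF" of the outcome
    return np.sum((cdf_forecast - cdf_outcome) ** 2)

# Hypothetical five-bin inflation histogram; the outcome falls in the third bin (index 2).
forecast = [0.05, 0.20, 0.40, 0.25, 0.10]
print(ranked_probability_score(forecast, outcome_bin=2))
```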

2.
We compare multivariate and univariate approaches to assessing the accuracy of competing density forecasts of a portfolio return in the downside part of the support. We argue that the common practice of performing multivariate forecast comparisons can be problematic in the context of assessing portfolio risk, since better multivariate forecasts do not necessarily correspond to better aggregate portfolio return forecasts. This is illustrated by examples that involve (skew) elliptical distributions and an application to daily returns of a number of US stock prices. In addition, time-varying test statistics and Value-at-Risk forecasts provide empirical evidence of regime changes over the last decades.

3.
This paper studies the performance of factor-based forecasts using differenced and nondifferenced data. Approximate variances of forecasting errors from the two forecasts are derived and compared. It is reported that the forecast using nondifferenced data tends to be more accurate than that using differenced data. This paper conducts simulations to compare root mean squared forecasting errors of the two competing forecasts. Simulation results indicate that forecasting using nondifferenced data performs better. The advantage of using nondifferenced data is more pronounced when the forecasting horizon is long and the number of factors is large. This paper applies the two competing forecasting methods to 68 I(1) monthly US macroeconomic variables across a range of forecasting horizons and sampling periods. We also provide a detailed forecasting analysis of US inflation and industrial production. We find that forecasts using nondifferenced data tend to outperform those using differenced data.

4.
In the evaluation of economic forecasts, it is frequently the case that comparisons are made between a number of competing predictors. A natural question to ask in such contexts is whether one forecast encompasses its competitors, in the sense that they contain no useful information not present in the superior forecast. We develop tests for this notion of multiple forecast encompassing which are robust to properties expected in the forecast errors, and apply the tests to forecasts of UK growth and inflation.
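The paper's robust multiple-encompassing tests are not reproduced here, but the single-equation encompassing regression they generalise can be sketched as a point of reference. The data below are simulated, and the use of HAC standard errors with an arbitrary lag length is a generic choice rather than the paper's procedure.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: realisations y and three competing forecasts f1, f2, f3.
rng = np.random.default_rng(0)
y = rng.normal(size=120)
f1, f2, f3 = y + rng.normal(scale=0.5, size=(3, 120))

# Regress forecast 1's error on the differences between each rival and forecast 1;
# forecast 1 encompasses the rivals if the slope coefficients are jointly zero.
e1 = y - f1
X = np.column_stack([f2 - f1, f3 - f1])
res = sm.OLS(e1, sm.add_constant(X)).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print(res.summary())
print("Joint test that both slopes are zero:", res.f_test("x1 = 0, x2 = 0"))
```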

5.
In this paper we demonstrate that forecast encompassing tests are valuable tools in getting an insight into why competing forecasts may be combined to produce a composite forecast which is superior to the individual forecasts. We also argue that results from forecast encompassing tests are potentially useful in model specification. We illustrate this using forecasts of quarterly UK consumption expenditure data from three classes of models: ARIMA, DHSY and VAR models.

6.
This article provides a first analysis of the forecasts of inflation and GDP growth obtained from the Bank of England's Survey of External Forecasters, considering both the survey average forecasts published in the quarterly Inflation Report, and the individual survey responses, recently made available by the Bank. These comprise a conventional incomplete panel dataset, with an additional dimension arising from the collection of forecasts at several horizons; both point forecasts and density forecasts are collected. The inflation forecasts show good performance in tests of unbiasedness and efficiency, albeit over a relatively calm period for the UK economy, and there is considerable individual heterogeneity. For GDP growth, inaccurate real-time data and their subsequent revisions are seen to cause serious difficulties for forecast construction and evaluation, although the forecasts are again unbiased. There is evidence that some forecasters have asymmetric loss functions.
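Tests of unbiasedness and efficiency of point forecasts are commonly carried out with a Mincer-Zarnowitz regression. The sketch below uses simulated data rather than the survey data analysed in the paper, and the HAC lag choice is an assumption.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical inflation outcomes y and survey point forecasts f.
rng = np.random.default_rng(1)
f = rng.normal(loc=2.5, scale=0.5, size=80)
y = f + rng.normal(scale=0.3, size=80)   # outcomes scattered around the forecasts

# Mincer-Zarnowitz regression: y_t = a + b * f_t + u_t.
# Unbiasedness/efficiency corresponds to the joint hypothesis a = 0, b = 1.
res = sm.OLS(y, sm.add_constant(f)).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print(res.params)
print(res.f_test("const = 0, x1 = 1"))
```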

7.
Macroeconomic forecasts are frequently produced, widely published, intensively discussed, and comprehensively used. The formal evaluation of such forecasts has a long research history. Recently, a new angle to the evaluation of forecasts has been addressed, and in this review we analyze some recent developments from that perspective. The literature on forecast evaluation predominantly assumes that macroeconomic forecasts are generated from econometric models. In practice, however, most macroeconomic forecasts, such as those from the IMF, World Bank, OECD, Federal Reserve Board, Federal Open Market Committee (FOMC), and the ECB, are typically based on econometric model forecasts jointly with human intuition. This seemingly inevitable combination renders most of these forecasts biased and, as such, their evaluation becomes nonstandard. In this review, we consider the evaluation of two forecasts in which: (i) the two forecasts are generated from two distinct econometric models; (ii) one forecast is generated from an econometric model and the other is obtained as a combination of a model and intuition; and (iii) the two forecasts are generated from two distinct (but unknown) combinations of different models and intuition. It is shown that alternative tools are needed to compare and evaluate the forecasts in each of these three situations. These alternative techniques are illustrated by comparing the forecasts from the (econometric) Staff of the Federal Reserve Board and the FOMC on inflation, unemployment, and real GDP growth. It is shown that the FOMC does not forecast significantly better than the Staff, and that the intuition of the FOMC does not add significantly in forecasting the actual values of the economic fundamentals. This would seem to belie the purported expertise of the FOMC.
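The comparisons in this review require tools adapted to model-plus-intuition forecasts, but the standard Diebold-Mariano test of equal predictive accuracy is the natural baseline they extend. A minimal sketch with hypothetical forecast errors (the loss function, HAC lag length and data are assumptions for illustration):

```python
import numpy as np
import statsmodels.api as sm

def diebold_mariano(e1, e2, maxlags=4):
    """Diebold-Mariano test of equal squared-error forecast accuracy.

    Regresses the loss differential on a constant with HAC standard errors;
    the t-statistic on the constant is the DM statistic.
    """
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2
    res = sm.OLS(d, np.ones_like(d)).fit(cov_type="HAC", cov_kwds={"maxlags": maxlags})
    return res.tvalues[0], res.pvalues[0]

# Hypothetical errors from two competing inflation forecasts.
rng = np.random.default_rng(2)
e_staff, e_fomc = rng.normal(scale=[[0.4], [0.5]], size=(2, 100))
print(diebold_mariano(e_staff, e_fomc))
```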

8.
Decision makers often observe point forecasts of the same variable computed, for instance, by commercial banks, IMF and the World Bank, but the econometric models used by such institutions are frequently unknown. This paper shows how to use the information available on point forecasts to compute optimal density forecasts. Our idea builds upon the combination of point forecasts under general loss functions and unknown forecast error distributions. We use real‐time data to forecast the density of US inflation. The results indicate that the proposed method materially improves the real‐time accuracy of density forecasts vis‐à‐vis those from the (unknown) individual econometric models.

9.
We propose new scoring rules based on conditional and censored likelihood for assessing the predictive accuracy of competing density forecasts over a specific region of interest, such as the left tail in financial risk management. These scoring rules can be interpreted in terms of Kullback-Leibler divergence between weighted versions of the density forecast and the true density. Existing scoring rules based on weighted likelihood favor density forecasts with more probability mass in the given region, rendering predictive accuracy tests biased toward such densities. Using our novel likelihood-based scoring rules avoids this problem.
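As an illustration only, a censored likelihood score for a normal density forecast evaluated on a left-tail region can be sketched as below. The distributional assumption, tail threshold and data are hypothetical; the scoring rules proposed in the paper apply to general density forecasts and weight functions.

```python
import numpy as np
from scipy import stats

def censored_likelihood_score(y, mu, sigma, r):
    """Censored likelihood score of a normal density forecast N(mu, sigma^2)
    on the left-tail region A = (-inf, r].

    Inside A the realisation contributes its log density; outside A only the
    probability mass assigned to the complement of A is scored.
    Higher scores indicate better forecasts.
    """
    f = stats.norm(loc=mu, scale=sigma)
    in_region = y <= r
    return np.where(in_region, f.logpdf(y), np.log(f.sf(r)))

# Hypothetical portfolio returns and two competing forecasts, scored on the 5% tail region.
rng = np.random.default_rng(3)
y = rng.standard_t(df=5, size=500)
r = np.quantile(y, 0.05)
print("wide forecast  :", censored_likelihood_score(y, 0.0, 1.3, r).mean())
print("narrow forecast:", censored_likelihood_score(y, 0.0, 0.8, r).mean())
```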

10.
Some recent papers have demonstrated that combining bagging (bootstrap aggregating) with exponential smoothing methods can produce highly accurate forecasts and improve the forecast accuracy relative to traditional methods. We therefore propose a new approach that combines the bagging, exponential smoothing and clustering methods. The existing methods use bagging to generate and aggregate groups of forecasts in order to reduce the variance. However, none of them consider the effect of covariance among the group of forecasts, even though it could have a dramatic impact on the variance of the group, and therefore on the forecast accuracy. The proposed approach, referred to here as Bagged.Cluster.ETS, aims to reduce the covariance effect by using partitioning around medoids (PAM) to produce clusters of similar forecasts, then selecting several forecasts from each cluster to create a group with a reduced variance. This approach was tested on various time series from the M3 and CIF 2016 competitions. The empirical results show a substantial reduction in forecast error in terms of both sMAPE and MASE.
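The sketch below is a heavily simplified illustration of the idea: refit an exponential smoothing model on bootstrapped series, cluster the resulting forecast paths, and average one representative per cluster. It uses a crude residual bootstrap and k-means with a closest-to-centre representative as stand-ins for the paper's bootstrap scheme and PAM medoids, so it should be read as a sketch of the mechanics rather than the Bagged.Cluster.ETS procedure itself.

```python
import numpy as np
from sklearn.cluster import KMeans
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(4)
y = 10 + 0.1 * np.arange(120) + rng.normal(scale=1.0, size=120)  # hypothetical monthly series
h, n_boot, n_clusters = 12, 30, 5

# 1. Bagging: refit exponential smoothing on bootstrapped versions of the series.
fit0 = ExponentialSmoothing(y, trend="add").fit()
resid = y - fit0.fittedvalues
forecasts = []
for _ in range(n_boot):
    y_star = fit0.fittedvalues + rng.choice(resid, size=len(y), replace=True)
    forecasts.append(ExponentialSmoothing(y_star, trend="add").fit().forecast(h))
forecasts = np.array(forecasts)

# 2. Clustering: group similar forecast paths and keep one representative per cluster.
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(forecasts)
representatives = []
for k in range(n_clusters):
    members = forecasts[labels == k]
    centre = members.mean(axis=0)
    representatives.append(members[np.argmin(((members - centre) ** 2).sum(axis=1))])

# 3. Final forecast: average of the representatives, aiming at a lower-covariance group.
print(np.mean(representatives, axis=0))
```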

11.
Since Quenouille's influential work on multiple time series, much progress has been made towards the goal of parameter reduction and model fit. Relatively less attention has been paid to the systematic evaluation of out-of-sample forecast performance of multivariate time series models. In this paper, we update the hog data set studied by Quenouille (and other researchers who followed him). We re-estimate his model with extended observations (1867–1966), and generate recursive one- to four-steps-ahead forecasts for the period of 1967 through 2000. These forecasts are compared to forecasts from an unrestricted vector autoregression, a reduced rank regression model, an index model and a cointegration-based error correction model. The error correction model that takes into account both nonstationarity of the data and rank reduction performs best at all four forecasting horizons. However, differences among competing models are statistically insignificant in most cases. No model consistently encompasses the others at all four horizons.

12.
To improve the predictability of crude oil futures market returns, this paper proposes a new combination approach based on principal component analysis (PCA). The PCA combination approach combines individual forecasts given by all PCA subset regression models that use all potential predictor subsets to construct PCA indexes. The proposed method can not only guard against over-fitting by employing the PCA technique but also reduce forecast variance due to extensive forecast combinations, thus benefiting from both the combination of information and the combination of forecasts. Showing impressive out-of-sample forecasting performance, the PCA combination approach outperforms a benchmark model and many related competing models. Furthermore, a mean–variance investor can realize sizeable utility gains by using the PCA combination forecasts relative to the competing forecasts from an asset allocation perspective.
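A stylised sketch of the idea, with simulated predictors rather than the oil-market data, is given below: one first principal component and one predictive regression per predictor subset, followed by an equal-weighted average of the subset forecasts. The equal weighting, data and variable names are assumptions for illustration and may differ from the paper's exact procedure.

```python
import numpy as np
from itertools import combinations
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
T, K = 200, 4
X = rng.normal(size=(T, K))                                          # hypothetical predictors
y = 0.3 * X[:, 0] + 0.2 * X[:, 2] + rng.normal(scale=1.0, size=T)    # hypothetical returns

X_in, y_in, x_new = X[:-1], y[1:], X[-1:]   # align predictors with next-period returns

# One forecast per predictor subset: extract the first principal component of the subset,
# regress the return on it, and forecast with the latest observation.
subset_forecasts = []
for size in range(1, K + 1):
    for cols in combinations(range(K), size):
        idx = list(cols)
        pca = PCA(n_components=1).fit(X_in[:, idx])
        factor = pca.transform(X_in[:, idx])
        model = LinearRegression().fit(factor, y_in)
        subset_forecasts.append(model.predict(pca.transform(x_new[:, idx]))[0])

# Combination of forecasts: equal-weighted average across all subset models.
print("combined forecast:", np.mean(subset_forecasts))
```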

13.
A desirable property of a forecast is that it encompasses competing predictions, in the sense that the accuracy of the preferred forecast cannot be improved through linear combination with a rival prediction. In this paper, we investigate the impact of the uncertainty associated with estimating model parameters in‐sample on the encompassing properties of out‐of‐sample forecasts. Specifically, using examples of non‐nested econometric models, we show that forecasts from the true (but estimated) data generating process (DGP) do not encompass forecasts from competing mis‐specified models in general, particularly when the number of in‐sample observations is small. Following this result, we also examine the scope for achieving gains in accuracy by combining the forecasts from the DGP and mis‐specified models.

14.
This paper develops a Bayesian variant of global vector autoregressive (B‐GVAR) models to forecast an international set of macroeconomic and financial variables. We propose a set of hierarchical priors and compare the predictive performance of B‐GVAR models in terms of point and density forecasts for one‐quarter‐ahead and four‐quarter‐ahead forecast horizons. We find that forecasts can be improved by employing a global framework and hierarchical priors which induce country‐specific degrees of shrinkage on the coefficients of the GVAR model. Forecasts from various B‐GVAR specifications tend to outperform forecasts from a naive univariate model, a global model without shrinkage on the parameters and country‐specific vector autoregressions.

15.
In this work, we propose a novel framework for density forecast combination by constructing time-varying weights based on time-varying features. Our framework estimates weights in the forecast combination via Bayesian log predictive scores, in which the optimal forecast combination is determined by time series features from historical information. In particular, we use an automatic Bayesian variable selection method to identify the importance of different features. To this end, our approach has better interpretability compared to other black-box forecasting combination schemes. We apply our framework to stock market data and M3 competition data. Based on our structure, a simple maximum-a-posteriori scheme outperforms benchmark methods, and Bayesian variable selection can further enhance the accuracy for both point forecasts and density forecasts.

16.
We propose a new way of selecting among model forms in automated exponential smoothing routines, consequently enhancing their predictive power. The procedure, referred to here as treating, operates by selectively subsetting the ensemble of competing models based on information from their prediction intervals. By the same token, we set forth a pruning strategy to improve the accuracy of both point forecasts and prediction intervals in forecast combination methods. The proposed approaches are applied to automated exponential smoothing routines and Bagging algorithms, respectively, to demonstrate their potential. An empirical experiment is conducted on a wide range of series from the M-Competitions. The results show that the proposed approaches are simple, require little additional computational cost, and substantially improve forecasting accuracy for both point forecasts and prediction intervals, outperforming important benchmarks and recently developed forecast combination methods.

17.
This paper is motivated by the recent interest in the use of Bayesian VARs for forecasting, even in cases where the number of dependent variables is large. In such cases factor methods have been traditionally used, but recent work using a particular prior suggests that Bayesian VAR methods can forecast better. In this paper, we consider a range of alternative priors which have been used with small VARs, discuss the issues which arise when they are used with medium and large VARs and examine their forecast performance using a US macroeconomic dataset containing 168 variables. We find that Bayesian VARs do tend to forecast better than factor methods and provide an extensive comparison of the strengths and weaknesses of various approaches. Typically, we find that the simple Minnesota prior forecasts well in medium and large VARs, which makes this prior attractive relative to computationally more demanding alternatives. Our empirical results show the importance of using forecast metrics based on the entire predictive density, instead of relying solely on those based on point forecasts.
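As background, one common parameterisation of the Minnesota prior's first two moments can be written down directly. The sketch below uses hypothetical residual standard deviations and generic shrinkage hyperparameters, not the settings of the paper, and reflects just one of several parameterisations found in the literature.

```python
import numpy as np

def minnesota_prior(sigmas, n_lags, lam1=0.2, lam2=0.5):
    """Prior means and standard deviations for VAR coefficients under a basic
    Minnesota prior (random-walk prior mean, lag-decaying shrinkage).

    sigmas : residual standard deviations of univariate AR models, one per variable
    """
    n = len(sigmas)
    prior_mean = np.zeros((n, n * n_lags))
    prior_sd = np.zeros((n, n * n_lags))
    for i in range(n):                      # equation i
        for lag in range(1, n_lags + 1):
            for j in range(n):              # coefficient on lag of variable j
                col = (lag - 1) * n + j
                if lag == 1 and i == j:
                    prior_mean[i, col] = 1.0          # random-walk prior on the own first lag
                if i == j:
                    prior_sd[i, col] = lam1 / lag     # own lags shrink with the lag length
                else:
                    prior_sd[i, col] = lam1 * lam2 * sigmas[i] / (lag * sigmas[j])
    return prior_mean, prior_sd

mean, sd = minnesota_prior(sigmas=[0.9, 1.4, 0.6], n_lags=2)
print(sd.round(3))
```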

18.
This paper develops a testing framework for comparing the predictive accuracy of competing multivariate density forecasts with different predictive copulas, focusing on specific parts of the copula support. The tests are framed in the context of the Kullback–Leibler Information Criterion, using (out-of-sample) conditional likelihood and censored likelihood in order to focus the evaluation on the region of interest. Monte Carlo simulations document that the resulting test statistics have satisfactory size and power properties for realistic sample sizes. In an empirical application to daily changes of yields on government bonds of the G7 countries we obtain insights into why the Student-t and Clayton mixture copula outperforms the other copulas considered; mixing in the Clayton copula with the t-copula is of particular importance to obtain high forecast accuracy in periods of jointly falling yields.

19.
We consider a method for producing multivariate density forecasts that satisfy moment restrictions implied by economic theory, such as Euler conditions. The method starts from a base forecast that might not satisfy the theoretical restrictions and forces it to satisfy the moment conditions using exponential tilting. Although exponential tilting has been considered before in a Bayesian context (Robertson et al. 2005), our main contributions are: (1) to adapt the method to a classical inferential context with out-of-sample evaluation objectives and parameter estimation uncertainty; and (2) to formally discuss the conditions under which the method delivers improvements in forecast accuracy. An empirical illustration which incorporates Euler conditions into forecasts produced by Bayesian vector autoregressions shows that the improvements in accuracy can be sizable and significant.
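A compact sketch of exponential tilting applied to draws from a base forecast illustrates the mechanics: reweight the draws so that a chosen moment condition holds under the tilted distribution. The moment function and base distribution below are hypothetical, and the sketch ignores the estimation uncertainty and out-of-sample evaluation issues treated in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def exponential_tilt(draws, g):
    """Reweight draws from a base density forecast so the tilted distribution
    satisfies the moment condition E[g(X)] = 0.

    Weights are proportional to exp(lambda * g(x_i)), with lambda chosen to
    minimise the (empirical) cumulant generating function of g.
    """
    gx = np.column_stack([np.atleast_1d(g(x)) for x in draws])   # (moments, draws)
    def objective(lam):
        return np.log(np.mean(np.exp(lam @ gx)))
    lam = minimize(objective, x0=np.zeros(gx.shape[0]), method="BFGS").x
    w = np.exp(lam @ gx)
    return w / w.sum()

# Hypothetical example: tilt a N(0.5, 1) base forecast so that its mean becomes zero.
rng = np.random.default_rng(6)
draws = rng.normal(loc=0.5, scale=1.0, size=5000)
w = exponential_tilt(draws, g=lambda x: x - 0.0)
print("tilted mean:", np.sum(w * draws))   # close to the imposed value of 0
```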

20.
This study investigates the performance of a composite forecast of inflation for the period 1969:I–1992:IV. This composite forecast is generated by combining the forecasts of four methods commonly used to measure expected inflation. Initially, the results of conditional efficiency tests suggest that a composite forecast can improve performance by encompassing a wider information set. The evidence, from the comparison of various forecast series, shows that the composite forecast improves on the performance of the four individual forecasts and an alternative composite forecast in terms of accuracy and rationality criteria.
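A generic illustration of a least-squares (Granger-Ramanathan style) combination of several point forecasts is sketched below with simulated data; it is not the paper's exact procedure, dataset or weighting scheme, and the normalisation of the weights to sum to one is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 96  # hypothetical quarterly sample
true_infl = rng.normal(loc=4.0, scale=1.0, size=T)
# Four hypothetical individual inflation forecasts with different error variances.
individual = true_infl + rng.normal(scale=[[0.6], [0.8], [1.0], [1.2]], size=(4, T))

# Composite forecast: combination weights estimated by least squares of the outcome
# on the individual forecasts, normalised to sum to one.
F = individual.T                                  # T x 4 matrix of forecasts
w, *_ = np.linalg.lstsq(F, true_infl, rcond=None)
w = w / w.sum()
composite = F @ w

rmse = lambda f: np.sqrt(np.mean((true_infl - f) ** 2))
print("individual RMSEs:", [round(rmse(f), 3) for f in individual])
print("composite RMSE  :", round(rmse(composite), 3))
```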
