Similar Literature
20 similar records found
1.
Motivated by the common finding that linear autoregressive models often forecast better than models that incorporate additional information, this paper presents analytical, Monte Carlo and empirical evidence on the effectiveness of combining forecasts from nested models. In our analytics, the unrestricted model is true, but a subset of the coefficients is treated as being local-to-zero. This approach captures the practical reality that the predictive content of variables of interest is often low. We derive mean square error-minimizing weights for combining the restricted and unrestricted forecasts. Monte Carlo and empirical analyses verify the practical effectiveness of our combination approach.
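The restricted/unrestricted combination this abstract describes can be sketched as below. The data, the two forecast series, and the grid search are hypothetical stand-ins: the paper derives analytic MSE-minimizing weights under local-to-zero asymptotics, whereas this sketch simply picks the convex weight with the smallest in-sample MSE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: y is the target; f_r and f_u stand in for forecasts from a
# restricted (AR-only) model and an unrestricted (AR + extra predictors) model.
T = 200
y = rng.normal(size=T)
f_u = y + rng.normal(scale=1.0, size=T)        # unbiased but noisy
f_r = 0.5 * y + rng.normal(scale=0.8, size=T)  # biased but less noisy

def combination_weight(y, f_r, f_u):
    """Weight on the restricted forecast minimizing in-sample MSE of the
    convex combination lam * f_r + (1 - lam) * f_u (grid search)."""
    grid = np.linspace(0.0, 1.0, 101)
    mse = [np.mean((y - (lam * f_r + (1 - lam) * f_u)) ** 2) for lam in grid]
    return grid[int(np.argmin(mse))]

lam = combination_weight(y, f_r, f_u)
f_c = lam * f_r + (1 - lam) * f_u
mse_c = np.mean((y - f_c) ** 2)
```

Because the grid contains the endpoints 0 and 1, the combined forecast can never do worse in-sample than either individual forecast.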

2.
We investigate the added value of combining density forecasts focused on a specific region of support. We develop forecast combination schemes that assign weights to individual predictive densities based on the censored likelihood scoring rule and the continuous ranked probability scoring rule (CRPS), and compare these to weighting schemes based on the log score and to the equally weighted scheme. We apply this approach in the context of measuring downside risk in equity markets using recently developed volatility models, including HEAVY, realized GARCH and GAS models, applied to daily returns on the S&P 500, DJIA, FTSE and Nikkei indexes from 2000 until 2013. The results show that combined density forecasts based on optimizing the censored likelihood scoring rule significantly outperform pooling based on equal weights, on optimizing the CRPS, or on optimizing the log score. In addition, 99% Value-at-Risk estimates improve when weights are based on the censored likelihood scoring rule.

3.
Least squares model averaging by Mallows criterion   (total citations: 1; self-citations: 0; citations by others: 1)
This paper is in response to a recent paper by Hansen (2007) who proposed an optimal model average estimator with weights selected by minimizing a Mallows criterion. The main contribution of Hansen’s paper is a demonstration that the Mallows criterion is asymptotically equivalent to the squared error, so the model average estimator that minimizes the Mallows criterion also minimizes the squared error in large samples. We are concerned with two assumptions that accompany Hansen’s approach. The first is the assumption that the approximating models are strictly nested in a way that depends on the ordering of regressors. Often there is no clear basis for the ordering and the approach does not permit non-nested models which are more realistic from a practical viewpoint. Second, for the optimality result to hold the model weights are required to lie within a special discrete set. In fact, Hansen noted both difficulties and called for extensions of the proof techniques. We provide an alternative proof which shows that the result on the optimality of the Mallows criterion in fact holds for continuous model weights and under a non-nested set-up that allows any linear combination of regressors in the approximating models that make up the model average estimator. These results provide a stronger theoretical basis for the use of the Mallows criterion in model averaging by strengthening existing findings.
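A minimal sketch of Mallows-type model averaging with continuous weights, in the spirit of the criterion discussed above: the averaged fit's residual sum of squares plus a penalty of twice the estimated error variance times the trace of the averaged hat matrix. The data and the two candidate models are hypothetical, and the continuous weight is found by grid search rather than the papers' formal optimization.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 150
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 0.8 * x1 + 0.2 * x2 + rng.normal(size=n)

def hat_matrix(X):
    # Projection matrix X (X'X)^{-1} X'
    return X @ np.linalg.solve(X.T @ X, X.T)

# Two candidate approximating models of different dimension.
X_small = np.column_stack([np.ones(n), x1])
X_large = np.column_stack([np.ones(n), x1, x2])
P1, P2 = hat_matrix(X_small), hat_matrix(X_large)

# Error variance estimated from the largest model, as is standard for
# Mallows-type criteria.
resid = y - P2 @ y
sigma2 = resid @ resid / (n - X_large.shape[1])

def mallows(w):
    """Mallows criterion of the model average with weight w on the small model:
    RSS of the averaged fit + 2 * sigma2 * trace of the averaged hat matrix."""
    P = w * P1 + (1 - w) * P2
    e = y - P @ y
    return e @ e + 2.0 * sigma2 * np.trace(P)

grid = np.linspace(0.0, 1.0, 201)
w_star = grid[int(np.argmin([mallows(w) for w in grid]))]
```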

4.
A desirable property of a forecast is that it encompasses competing predictions, in the sense that the accuracy of the preferred forecast cannot be improved through linear combination with a rival prediction. In this paper, we investigate the impact of the uncertainty associated with estimating model parameters in-sample on the encompassing properties of out-of-sample forecasts. Specifically, using examples of non-nested econometric models, we show that forecasts from the true (but estimated) data generating process (DGP) do not encompass forecasts from competing mis-specified models in general, particularly when the number of in-sample observations is small. Following this result, we also examine the scope for achieving gains in accuracy by combining the forecasts from the DGP and mis-specified models.

5.
We consider the properties of weighted linear combinations of prediction models, or linear pools, evaluated using the log predictive scoring rule. Although exactly one model has limiting posterior probability, an optimal linear combination typically includes several models with positive weights. We derive several interesting results: for example, a model with positive weight in a pool may have zero weight if some other models are deleted from that pool. The results are illustrated using S&P 500 returns with six prediction models. In this example models that are clearly inferior by the usual scoring criteria have positive weights in optimal linear pools.
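Optimizing a two-model linear pool under the log predictive scoring rule can be sketched as follows. The "models" here are hypothetical stand-ins (a Gaussian and a Student-t predictive density evaluated at the realizations), and the optimal weight is found by grid search over the average log score of the pool.

```python
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(2)
y = rng.standard_t(df=5, size=500)  # fat-tailed "returns"

# Predictive density of each hypothetical model, evaluated at the realizations.
p_normal = norm.pdf(y)      # Gaussian predictive density
p_t = t.pdf(y, df=5)        # Student-t predictive density

def pool_log_score(w):
    """Average log predictive score of the pool w * p_t + (1 - w) * p_normal."""
    return np.mean(np.log(w * p_t + (1 - w) * p_normal))

grid = np.linspace(0.0, 1.0, 101)
w_opt = grid[int(np.argmax([pool_log_score(w) for w in grid]))]
```

The pool's score at the optimized weight is, by construction, at least as high as either model's individual log score (the endpoints of the grid).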

6.
This paper considers the problem of forecasting under continuous and discrete structural breaks and proposes weighting observations to obtain optimal forecasts in the MSFE sense. We derive optimal weights for one-step-ahead forecasts. Under continuous breaks, our approach largely recovers exponential smoothing weights. Under discrete breaks, we provide analytical expressions for optimal weights in models with a single regressor, and asymptotically valid weights for models with more than one regressor. It is shown that in these cases the optimal weight is the same across observations within a given regime and differs only across regimes. In practice, where information on structural breaks is uncertain, a forecasting procedure based on robust optimal weights is proposed. The relative performance of our proposed approach is investigated using Monte Carlo experiments and an empirical application to forecasting real GDP using the yield curve across nine industrial economies.
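The exponential-smoothing-style weights that the continuous-break case recovers can be illustrated with exponentially weighted least squares. The drifting-coefficient data-generating process and the decay factor below are hypothetical choices, not the paper's derived optimal weights.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 300
# Hypothetical continuous-break DGP: the slope beta_t follows a random walk.
beta = 1.0 + np.cumsum(0.02 * rng.normal(size=T))
x = rng.normal(size=T)
y = beta * x + 0.5 * rng.normal(size=T)

def exp_weighted_beta(y, x, lam):
    """Slope estimate from least squares with exponentially decaying
    observation weights lam**(T-1-t), so recent observations count most.
    lam = 1 recovers ordinary (equal-weight) least squares."""
    T = len(y)
    w = lam ** np.arange(T - 1, -1, -1)
    return np.sum(w * x * y) / np.sum(w * x * x)

b_recent = exp_weighted_beta(y, x, lam=0.95)  # down-weights old regimes
b_flat = exp_weighted_beta(y, x, lam=1.0)     # ordinary OLS benchmark
```

Under a drifting coefficient, the down-weighted estimate tracks the end-of-sample slope, which is what matters for a one-step-ahead forecast.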

7.
We propose new methods for evaluating predictive densities. The methods include Kolmogorov–Smirnov and Cramér–von Mises-type tests for the correct specification of predictive densities robust to dynamic mis-specification. The novelty is that the tests can detect mis-specification in the predictive densities even if it appears only over a fraction of the sample, due to the presence of instabilities. Our results indicate that our tests are well sized and have good power in detecting mis-specification in predictive densities, even when it is time-varying. An application to density forecasts of the Survey of Professional Forecasters demonstrates the usefulness of the proposed methodologies.  
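The basic ingredient of such tests is the probability integral transform (PIT): under a correctly specified predictive density the PITs are i.i.d. uniform. The sketch below applies a plain Kolmogorov–Smirnov test to PITs from a correctly and an incorrectly specified Gaussian forecast; the paper's actual tests add robustness to dynamic mis-specification and instabilities, which this toy example does not capture.

```python
import numpy as np
from scipy.stats import norm, kstest

rng = np.random.default_rng(4)
# Realizations truly drawn from N(0, 1.5^2).
y = 1.5 * rng.normal(size=400)

# PITs under a mis-specified forecast density N(0, 1) and the correct N(0, 1.5^2).
pit_bad = norm.cdf(y, loc=0.0, scale=1.0)
pit_good = norm.cdf(y, loc=0.0, scale=1.5)

# Under correct specification the PITs are U(0,1); KS tests that hypothesis.
stat_bad, p_bad = kstest(pit_bad, "uniform")
stat_good, p_good = kstest(pit_good, "uniform")
```

The mis-specified forecast produces PITs piled up in the tails, which the KS statistic picks up.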

8.
Statistical post-processing techniques are now used widely for correcting systematic biases and errors in the calibration of ensemble forecasts obtained from multiple runs of numerical weather prediction models. A standard approach is the ensemble model output statistics (EMOS) method, which results in a predictive distribution that is given by a single parametric law, with parameters that depend on the ensemble members. This article assesses the merits of combining multiple EMOS models based on different parametric families. In four case studies with wind speed and precipitation forecasts from two ensemble prediction systems, we investigate the performances of state-of-the-art forecast combination methods and propose a computationally efficient approach for determining linear pool combination weights. We study the performance of forecast combination compared to that of the theoretically superior but cumbersome estimation of a full mixture model, and assess which degree of flexibility of the forecast combination approach yields the best practical results for post-processing applications.
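A common scoring rule for evaluating such combined predictive distributions is the CRPS, which has a well-known closed form for a Gaussian forecast (the formula below is the standard one from the proper-scoring-rules literature; how this article weights its pool members is a separate question).

```python
import numpy as np
from scipy.stats import norm

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a N(mu, sigma^2) predictive distribution at
    observation y: sigma * [ z(2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi) ],
    with z = (y - mu) / sigma. Lower is better."""
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                    + 2.0 * norm.pdf(z)
                    - 1.0 / np.sqrt(np.pi))
```

The score is smallest when the observation falls at the forecast mean and grows both with forecast-observation distance and with unnecessary spread, which is why it rewards sharp, calibrated forecasts.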

9.
We propose a parametric block wild bootstrap approach to compute density forecasts for various types of mixed-data sampling (MIDAS) regressions. First, Monte Carlo simulations show that predictive densities for the various MIDAS models derived from the block wild bootstrap approach are more accurate in terms of coverage rates than predictive densities derived from either a residual-based bootstrap approach or by drawing errors from a normal distribution. This result holds whether the data-generating errors are normally independently distributed, serially correlated, heteroskedastic or a mixture of normal distributions. Second, we evaluate density forecasts for quarterly US real output growth in an empirical exercise, exploiting information from typical monthly and weekly series. We show that the block wild bootstrapping approach, applied to the various MIDAS regressions, produces predictive densities for US real output growth that are well calibrated. Moreover, relative accuracy, measured in terms of the logarithmic score, improves for the various MIDAS specifications as more information becomes available. Copyright © 2016 John Wiley & Sons, Ltd.
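The core resampling step of a block wild bootstrap can be sketched as follows: residuals are split into consecutive blocks and each block is flipped in sign by a single Rademacher draw, preserving within-block dependence. This is only the generic building block; the paper's parametric version for MIDAS regressions involves additional model structure not shown here.

```python
import numpy as np

def block_wild_bootstrap(resid, block_len, rng):
    """One block wild bootstrap draw: multiply each consecutive block of
    residuals by a single Rademacher (+1/-1) variable, so dependence
    within a block is preserved while signs are randomized across blocks."""
    T = len(resid)
    out = resid.copy()
    for start in range(0, T, block_len):
        sign = rng.choice([-1.0, 1.0])
        out[start:start + block_len] *= sign
    return out

rng = np.random.default_rng(7)
resid = rng.normal(size=100)
boot = block_wild_bootstrap(resid, block_len=10, rng=rng)
```

Repeating the draw many times and re-simulating the fitted model with each bootstrap residual series yields the predictive density.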

10.
In this work, we propose a novel framework for density forecast combination by constructing time-varying weights based on time-varying features. Our framework estimates weights in the forecast combination via Bayesian log predictive scores, in which the optimal forecast combination is determined by time series features from historical information. In particular, we use an automatic Bayesian variable selection method to identify the importance of different features. To this end, our approach has better interpretability compared to other black-box forecasting combination schemes. We apply our framework to stock market data and M3 competition data. Based on our structure, a simple maximum-a-posteriori scheme outperforms benchmark methods, and Bayesian variable selection can further enhance the accuracy for both point forecasts and density forecasts.

11.
This paper considers forecasts with distribution functions that may vary through time. The forecast is achieved by time varying combinations of individual forecasts. We derive theoretical worst case bounds for general algorithms based on multiplicative updates of the combination weights. The bounds are useful for studying properties of forecast combinations when data are non-stationary and there is no unique best model.
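A multiplicative-update scheme of the kind analyzed here can be sketched with the classic Hedge-style rule: after each period, every model's weight is multiplied by exp(-eta * loss) and the weights are renormalized. The loss matrix and learning rate below are hypothetical; the paper's contribution is worst-case bounds for such algorithms, not this particular implementation.

```python
import numpy as np

def hedge_combine(losses, eta=0.5):
    """Multiplicative-weights combination. `losses` is a (T, K) array of
    per-period losses for K forecasting models. Returns the (T, K) array of
    combination weights used at each period (before that period's loss is seen)."""
    T, K = losses.shape
    w = np.full(K, 1.0 / K)          # start from equal weights
    weights = np.empty((T, K))
    for t in range(T):
        weights[t] = w
        w = w * np.exp(-eta * losses[t])   # exponential down-weighting by loss
        w = w / w.sum()                    # renormalize to a convex combination
    return weights

rng = np.random.default_rng(5)
losses = rng.uniform(size=(100, 3))
losses[:, 0] *= 0.5   # model 0 is systematically better
W = hedge_combine(losses)
```

Because the update compounds, weight concentrates exponentially fast on the model with the smallest cumulative loss, while never committing irrevocably, which is what makes worst-case guarantees possible under non-stationarity.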

12.
We evaluate conditional predictive densities for US output growth and inflation using a number of commonly-used forecasting models that rely on large numbers of macroeconomic predictors. More specifically, we evaluate how well conditional predictive densities based on the commonly-used normality assumption fit actual realizations out-of-sample. Our focus on predictive densities acknowledges the possibility that, although some predictors can cause point forecasts to either improve or deteriorate, they might have the opposite effect on higher moments. We find that normality is rejected for most models in some dimension according to at least one of the tests we use. Interestingly, however, combinations of predictive densities appear to be approximated correctly by a normal density: the simple, equal average when predicting output growth, and the Bayesian model average when predicting inflation.

13.
Least-squares forecast averaging   (total citations: 2; self-citations: 0; citations by others: 2)
This paper proposes forecast combination based on the method of Mallows Model Averaging (MMA). The method selects forecast weights by minimizing a Mallows criterion. This criterion is an asymptotically unbiased estimate of both the in-sample mean-squared error (MSE) and the out-of-sample one-step-ahead mean-squared forecast error (MSFE). Furthermore, the MMA weights are asymptotically mean-square optimal in the absence of time-series dependence. We show how to compute MMA weights in forecasting settings, and investigate the performance of the method in simple but illustrative simulation environments. We find that the MMA forecasts have low MSFE and have much lower maximum regret than other feasible forecasting methods, including equal weighting, BIC selection, weighted BIC, AIC selection, weighted AIC, Bates–Granger combination, predictive least squares, and Granger–Ramanathan combination.

14.
We develop a Bayesian median autoregressive (BayesMAR) model for time series forecasting. The proposed method utilizes time-varying quantile regression at the median, favorably inheriting the robustness of median regression in contrast to the widely used mean-based methods. Motivated by a working Laplace likelihood approach in Bayesian quantile regression, BayesMAR adopts a parametric model bearing the same structure as autoregressive models by altering the Gaussian error to Laplace, leading to a simple, robust, and interpretable modeling strategy for time series forecasting. We estimate model parameters by Markov chain Monte Carlo. Bayesian model averaging is used to account for model uncertainty, including the uncertainty in the autoregressive order, in addition to a Bayesian model selection approach. The proposed methods are illustrated using simulations and real data applications. An application to U.S. macroeconomic data forecasting shows that BayesMAR leads to favorable and often superior predictive performance compared to the selected mean-based alternatives under various loss functions that encompass both point and probabilistic forecasts. The proposed methods are generic and can be used to complement a rich class of methods that build on autoregressive models.

15.
This paper proposes the use of forecast combination to improve predictive accuracy in forecasting the U.S. business cycle index, as published by the Business Cycle Dating Committee of the NBER. It focuses on one-step-ahead out-of-sample monthly forecasts, utilising the well-established coincident-indicator and yield-curve models, allowing for dynamics and real-time data revisions. Forecast combinations use log-score and quadratic-score based weights, which change over time. This paper finds that forecast accuracy improves when combining the probability forecasts of both the coincident indicators model and the yield curve model, compared to each model's own forecasting performance.

16.
We compare alternative forecast pooling methods and 58 forecasts from linear, time-varying and non-linear models, using a very large dataset of about 500 macroeconomic variables for the countries in the European Monetary Union. On average, combination methods work well but single non-linear models can outperform them for several series. The performance of pooled forecasts, and of non-linear models, improves when focusing on a subset of unstable series, but the gains are minor. Finally, on average over the EMU countries, the pooled forecasts behave well for industrial production growth, unemployment and inflation, but they are often beaten by non-linear models for each country and variable.

17.
It has been documented that random walk outperforms most economic structural and time series models in out-of-sample forecasts of the conditional mean dynamics of exchange rates. In this paper, we study whether random walk has similar dominance in out-of-sample forecasts of the conditional probability density of exchange rates given that the probability density forecasts are often needed in many applications in economics and finance. We first develop a nonparametric portmanteau test for optimal density forecasts of univariate time series models in an out-of-sample setting and provide simulation evidence on its finite sample performance. Then we conduct a comprehensive empirical analysis on the out-of-sample performances of a wide variety of nonlinear time series models in forecasting the intraday probability densities of two major exchange rates—Euro/Dollar and Yen/Dollar. It is found that some sophisticated time series models that capture time-varying higher order conditional moments, such as Markov regime-switching models, have better density forecasts for exchange rates than random walk or modified random walk with GARCH and Student-t innovations. This finding dramatically differs from that on mean forecasts and suggests that sophisticated time series models could be useful in out-of-sample applications involving the probability density.

18.
A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method makes use of sequences of importance weighted Expectation Maximization steps in order to efficiently construct a mixture of Student-t densities that accurately approximates the target distribution (typically a posterior distribution, of which we only require a kernel), in the sense that the Kullback–Leibler divergence between target and mixture is minimized. We label this approach Mixture of t by Importance Sampling weighted Expectation Maximization (MitISEM). The constructed mixture is used as a candidate density for quick and reliable application of either Importance Sampling (IS) or the Metropolis–Hastings (MH) method. We also introduce three extensions of the basic MitISEM approach. First, we propose a method for applying MitISEM in a sequential manner, so that the candidate distribution for posterior simulation is updated as new data become available. Our results show that the computational effort is reduced enormously, while the quality of the approximation remains almost unchanged. This sequential approach can be combined with a tempering approach, which facilitates simulation from densities with multiple modes that are far apart. Second, we introduce a permutation-augmented MitISEM approach. This is useful for importance or Metropolis–Hastings sampling from posterior distributions in mixture models without the requirement of imposing identification restrictions on the parameters of the model's mixture regimes. Third, we propose a partial MitISEM approach, which aims at approximating the joint distribution by estimating a product of marginal and conditional distributions. This division can substantially reduce the dimension of the approximation problem, which facilitates the application of adaptive importance sampling for posterior simulation in more complex models with larger numbers of parameters. Our results indicate that the proposed methods can substantially reduce the computational burden in econometric models such as DCC or mixture GARCH models and a mixture instrumental variables model.

19.
In this paper, we assess the possibility of producing unbiased forecasts for fiscal variables in the Euro area by comparing a set of procedures that rely on different information sets and econometric techniques. In particular, we consider autoregressive moving average models, vector autoregressions, small-scale semistructural models at the national and Euro area level, institutional forecasts (Organization for Economic Co-operation and Development), and pooling. Our small-scale models are characterized by the joint modelling of fiscal and monetary policy using simple rules, combined with equations for the evolution of all the relevant fundamentals for the Maastricht Treaty and the Stability and Growth Pact. We rank models on the basis of their forecasting performance using the mean square and mean absolute error criteria at different horizons. Overall, simple time-series methods and pooling work well and are able to deliver unbiased forecasts, or slightly upward-biased forecasts for the debt–GDP dynamics. This result is mostly due to the short sample available, the robustness of simple methods to structural breaks, and to the difficulty of modelling the joint behaviour of several variables in a period of substantial institutional and economic changes. A bootstrap experiment highlights that, even when the data are generated using the estimated small-scale multi-country model, simple time-series models can produce more accurate forecasts, because of their parsimonious specification.

20.
In a data-rich environment, forecasting economic variables amounts to extracting and organizing useful information from a large number of predictors. So far, the dynamic factor model and its variants have been the most successful models for such exercises. In this paper, we investigate a category of LASSO-based approaches and evaluate their predictive abilities for forecasting twenty important macroeconomic variables. These alternative models can handle hundreds of data series simultaneously, and extract useful information for forecasting. We also show, both analytically and empirically, that combining forecasts from LASSO-based models with those from dynamic factor models can reduce the mean square forecast error (MSFE) further. Our three main findings can be summarized as follows. First, for most of the variables under investigation, all of the LASSO-based models outperform dynamic factor models in the out-of-sample forecast evaluations. Second, by extracting information and formulating predictors at economically meaningful block levels, the new methods greatly enhance the interpretability of the models. Third, once forecasts from a LASSO-based approach are combined with those from a dynamic factor model by forecast combination techniques, the combined forecasts are significantly better than either dynamic factor model forecasts or the naïve random walk benchmark.
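The MSFE gain from combining two forecast families rests on a simple variance-reduction argument: averaging two unbiased forecasts with (roughly) independent errors roughly halves the error variance. The sketch below uses simulated stand-ins for a LASSO-based forecast and a factor-model forecast rather than the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(6)
T = 2000
y = rng.normal(size=T)

# Hypothetical stand-ins: both forecasts are unbiased for y, with independent
# errors of similar size (mimicking two methodologically distinct models).
f_lasso = y + rng.normal(scale=1.0, size=T)
f_factor = y + rng.normal(scale=1.0, size=T)

def msfe(f):
    """Mean square forecast error against the realizations y."""
    return np.mean((y - f) ** 2)

# Equal-weight forecast combination.
f_comb = 0.5 * (f_lasso + f_factor)
```

With independent errors of equal variance, the combined error variance is half the individual one, so the combination beats both parents in MSFE.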


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号