Similar Articles
20 similar articles found (search time: 15 ms).
1.
In this paper, we address the question of which subset of time series should be selected among a given set in order to forecast another series. We evaluate the quality of the forecasts in terms of Mean Squared Error. We propose a family of criteria to estimate the optimal subset. Consistency results are proved, both in the weak (in probability) and strong (almost sure) sense. We present the results of a Monte Carlo experiment and a real data example in which the criteria are compared to hypothesis tests such as those of Diebold and Mariano (1995) and Giacomini and White (2006).
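A minimal sketch of the general idea of ranking predictor subsets by estimated out-of-sample Mean Squared Error (Python, simulated data). The brute-force search and the single estimation/evaluation split are illustrative assumptions, not the consistency-backed criteria proposed in the paper.

    # Rank candidate predictor subsets by pseudo out-of-sample MSE.
    # Synthetic data; illustrative only, not the paper's criteria.
    from itertools import combinations
    import numpy as np

    rng = np.random.default_rng(0)
    T = 300
    X = rng.standard_normal((T, 4))                              # candidate predictor series
    y = 0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.standard_normal(T)   # target series

    split = 200                                                  # estimation / evaluation split
    results = {}
    for k in range(1, X.shape[1] + 1):
        for subset in combinations(range(X.shape[1]), k):
            cols = list(subset)
            Z = np.column_stack([np.ones(T), X[:, cols]])
            beta, *_ = np.linalg.lstsq(Z[:split], y[:split], rcond=None)
            err = y[split:] - Z[split:] @ beta                   # pseudo out-of-sample errors
            results[subset] = np.mean(err ** 2)                  # estimated MSE of this subset

    best = min(results, key=results.get)
    print("selected subset:", best, "MSE:", round(results[best], 3))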

2.
Volatility forecast comparison using imperfect volatility proxies
The use of a conditionally unbiased, but imperfect, volatility proxy can lead to undesirable outcomes in standard methods for comparing conditional variance forecasts. We motivate our study with analytical results on the distortions caused by some widely used loss functions, when used with standard volatility proxies such as squared returns, the intra-daily range or realised volatility. We then derive necessary and sufficient conditions on the functional form of the loss function for the ranking of competing volatility forecasts to be robust to the presence of noise in the volatility proxy, and derive some useful special cases of this class of “robust” loss functions. The methods are illustrated with an application to the volatility of returns on IBM over the period 1993 to 2003.
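The sketch below illustrates how two members of this robust class, MSE and QLIKE, rank competing variance forecasts when only a noisy proxy (squared returns) is observed. The simulated variance process and the two forecasts are assumptions for illustration, not the paper's empirical setup.

    # Compare two variance forecasts with a noisy proxy (squared returns)
    # under the MSE and QLIKE losses; simulated data for illustration only.
    import numpy as np

    rng = np.random.default_rng(1)
    T = 5000
    true_var = 0.5 + 0.9 * rng.chisquare(3, T) / 3        # latent conditional variance (assumed DGP)
    returns = rng.standard_normal(T) * np.sqrt(true_var)
    proxy = returns ** 2                                   # conditionally unbiased but noisy proxy

    h1 = true_var * 1.0                                    # "good" forecast
    h2 = true_var * 1.3                                    # biased competitor

    def mse_loss(proxy, h):
        return (proxy - h) ** 2

    def qlike_loss(proxy, h):
        return np.log(h) + proxy / h

    for name, loss in [("MSE", mse_loss), ("QLIKE", qlike_loss)]:
        d = loss(proxy, h1).mean() - loss(proxy, h2).mean()
        print(f"{name}: mean loss difference (h1 - h2) = {d:.4f}")   # negative favours h1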

3.
Least squares model averaging by Mallows criterion
This paper is in response to a recent paper by Hansen (2007), who proposed an optimal model average estimator with weights selected by minimizing a Mallows criterion. The main contribution of Hansen’s paper is a demonstration that the Mallows criterion is asymptotically equivalent to the squared error, so the model average estimator that minimizes the Mallows criterion also minimizes the squared error in large samples. We are concerned with two assumptions that accompany Hansen’s approach. The first is the assumption that the approximating models are strictly nested in a way that depends on the ordering of regressors. Often there is no clear basis for the ordering, and the approach does not permit non-nested models, which are more realistic from a practical viewpoint. Second, for the optimality result to hold, the model weights are required to lie within a special discrete set. In fact, Hansen noted both difficulties and called for extensions of the proof techniques. We provide an alternative proof which shows that the optimality of the Mallows criterion in fact holds for continuous model weights and under a non-nested set-up that allows any linear combination of regressors in the approximating models that make up the model average estimator. These results provide a stronger theoretical basis for the use of the Mallows criterion in model averaging.
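A minimal sketch of model averaging with continuous weights chosen by a Mallows-type criterion, C(w) = ||y - mu_hat(w)||^2 + 2*sigma2*k(w), minimised over the weight simplex. The candidate regressor sets, the plug-in sigma2 from the largest model, and the SLSQP optimiser are illustrative choices, not the paper's exact construction.

    # Continuous Mallows model-averaging weights over (possibly non-nested) models.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    n = 200
    X = rng.standard_normal((n, 3))
    y = X @ np.array([1.0, 0.5, 0.0]) + rng.standard_normal(n)

    models = [[0], [0, 1], [1, 2]]                       # candidate regressor sets (assumed)
    fits, ks = [], []
    for cols in models:
        Z = X[:, cols]
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        fits.append(Z @ beta)
        ks.append(len(cols))
    fits, ks = np.column_stack(fits), np.array(ks)

    # Plug-in error variance from the largest model (a common practical choice).
    resid_big = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    sigma2 = resid_big @ resid_big / (n - X.shape[1])

    def mallows(w):
        mu = fits @ w
        return np.sum((y - mu) ** 2) + 2.0 * sigma2 * (ks @ w)

    w0 = np.full(len(models), 1.0 / len(models))
    res = minimize(mallows, w0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * len(models),
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    print("Mallows weights:", np.round(res.x, 3))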

4.
Bayesian averaging, prediction and nonnested model selection
This paper studies the asymptotic relationship between Bayesian model averaging and post-selection frequentist predictors in both nested and nonnested models. We derive conditions under which their difference is of a smaller order of magnitude than the inverse of the square root of the sample size in large samples. This result depends crucially on the relation between posterior odds and frequentist model selection criteria. Weak conditions are given under which consistent model selection is feasible, regardless of whether models are nested or nonnested and regardless of whether models are correctly specified or not, in the sense that the best model with the fewest parameters is selected with probability converging to 1. Under these conditions, Bayesian posterior odds and BICs are consistent for selecting among nested models, but not for selecting among nonnested and possibly overlapping models. These findings are important for applied researchers who frequently use model selection tools in empirical investigations of model predictions.
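As a reminder of the link between posterior odds and frequentist criteria mentioned above, the sketch below computes BICs for two nonnested Gaussian regressions and the implied large-sample approximation to the posterior odds, exp(-(BIC1 - BIC2)/2). The data and models are purely illustrative assumptions.

    # BIC for two candidate Gaussian regressions and approximate posterior odds.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 500
    x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
    y = 1.0 + 0.6 * x1 + rng.standard_normal(n)

    def bic(y, Z):
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        sigma2 = resid @ resid / n
        loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)    # Gaussian log-likelihood at the MLE
        return -2 * loglik + Z.shape[1] * np.log(n)

    Z1 = np.column_stack([np.ones(n), x1])      # model 1 (uses x1)
    Z2 = np.column_stack([np.ones(n), x2])      # model 2 (uses x2, nonnested with model 1)
    b1, b2 = bic(y, Z1), bic(y, Z2)
    print("BIC1, BIC2:", round(b1, 1), round(b2, 1))
    print("approx. posterior odds M1:M2 =", np.exp(-0.5 * (b1 - b2)))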

5.
Model averaging by jackknife criterion in models with dependent data
The past decade has witnessed a growing literature on frequentist model averaging. For the most part, the asymptotic optimality of existing frequentist model averaging estimators has been established under i.i.d. errors. Recently, Hansen and Racine [Hansen, B.E., Racine, J., 2012. Jackknife model averaging. Journal of Econometrics 167, 38–46] developed a jackknife model averaging (JMA) estimator, which has an important advantage over its competitors in that it achieves the lowest possible asymptotic squared error under heteroscedastic errors. In this paper, we broaden Hansen and Racine’s scope of analysis to encompass models with (i) a non-diagonal error covariance structure, and (ii) lagged dependent variables, thus allowing for dependent data. We show that under these set-ups, the JMA estimator is asymptotically optimal by a criterion equivalent to that used by Hansen and Racine. A Monte Carlo study demonstrates the finite sample performance of the JMA estimator in a variety of model settings.
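A minimal sketch of a jackknife (leave-one-out) model-averaging criterion in the simpler i.i.d. regression setting: leave-one-out residuals e_i/(1 - h_ii) are formed for each candidate model, and simplex weights minimise the averaged squared leave-one-out error. The dependent-data extensions discussed in the paper are not reproduced here; the candidate models and data are assumptions.

    # Jackknife (leave-one-out) model-averaging weights.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)
    n = 300
    X = rng.standard_normal((n, 4))
    y = X @ np.array([0.8, 0.4, 0.0, 0.0]) + rng.standard_normal(n)

    models = [[0], [0, 1], [0, 1, 2, 3]]                  # candidate regressor sets (assumed)
    loo_resid = []
    for cols in models:
        Z = X[:, cols]
        H = Z @ np.linalg.inv(Z.T @ Z) @ Z.T              # hat matrix of the candidate model
        e = y - H @ y
        loo_resid.append(e / (1.0 - np.diag(H)))          # leave-one-out residuals
    E = np.column_stack(loo_resid)

    cv = lambda w: np.mean((E @ w) ** 2)                  # jackknife criterion
    w0 = np.full(len(models), 1.0 / len(models))
    res = minimize(cv, w0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * len(models),
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    print("JMA weights:", np.round(res.x, 3))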

6.
Motivated by the common finding that linear autoregressive models often forecast better than models that incorporate additional information, this paper presents analytical, Monte Carlo and empirical evidence on the effectiveness of combining forecasts from nested models. In our analytical framework, the unrestricted model is true, but a subset of the coefficients is treated as being local‐to‐zero. This approach captures the practical reality that the predictive content of variables of interest is often low. We derive mean square error‐minimizing weights for combining the restricted and unrestricted forecasts. Monte Carlo and empirical analyses verify the practical effectiveness of our combination approach.
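The sketch below estimates an MSE-minimising weight for combining a restricted and an unrestricted forecast from their realised forecast errors, using the classical Bates–Granger formula. It only illustrates the kind of combination studied above; the paper's local-to-zero derivation is not reproduced, and the simulated error processes are assumptions.

    # MSE-minimising combination weight from realised forecast errors.
    import numpy as np

    rng = np.random.default_rng(5)
    T = 250
    e_r = rng.standard_normal(T)                        # errors of the restricted forecast
    e_u = 0.6 * e_r + 0.9 * rng.standard_normal(T)      # errors of the unrestricted forecast

    V = np.cov(e_r, e_u)                                # 2x2 error covariance matrix
    lam = (V[1, 1] - V[0, 1]) / (V[0, 0] + V[1, 1] - 2 * V[0, 1])   # weight on the restricted forecast
    lam = min(max(lam, 0.0), 1.0)                       # keep the weight inside [0, 1]

    combined = lam * e_r + (1 - lam) * e_u
    print(f"estimated weight on the restricted forecast: {lam:.2f}")
    print("MSEs (restricted, unrestricted, combined):",
          np.round([np.mean(e_r ** 2), np.mean(e_u ** 2), np.mean(combined ** 2)], 3))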

7.
This paper considers forecasts with distribution functions that may vary through time. The forecast is achieved by time varying combinations of individual forecasts. We derive theoretical worst case bounds for general algorithms based on multiplicative updates of the combination weights. The bounds are useful for studying properties of forecast combinations when data are non-stationary and there is no unique best model.
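A minimal sketch of a multiplicative-update rule for time-varying combination weights: each period every forecaster's weight is scaled by exp(-eta * loss) and the weights are renormalised. The learning rate eta, the squared-error loss and the simulated forecasts are illustrative choices, not the specific algorithms analysed in the paper.

    # Multiplicative weight updates for combining individual forecasts over time.
    import numpy as np

    rng = np.random.default_rng(6)
    T, K = 200, 3
    y = np.cumsum(rng.standard_normal(T)) * 0.1
    forecasts = y[:, None] + rng.standard_normal((T, K)) * np.array([0.3, 0.6, 1.0])

    eta = 0.5                                            # learning rate (assumed)
    w = np.full(K, 1.0 / K)
    combined_err = np.empty(T)
    for t in range(T):
        combined = forecasts[t] @ w                      # combined forecast for period t
        combined_err[t] = y[t] - combined
        losses = (y[t] - forecasts[t]) ** 2              # realised losses of each forecaster
        w = w * np.exp(-eta * losses)                    # multiplicative update
        w = w / w.sum()                                  # renormalise to the simplex

    print("final weights:", np.round(w, 3))
    print("combined MSE:", round(np.mean(combined_err ** 2), 4))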

8.
A popular macroeconomic forecasting strategy utilizes many models to hedge against instabilities of unknown timing; see (among others) Stock and Watson (2004), Clark and McCracken (2010), and Jore et al. (2010). Existing studies of this forecasting strategy exclude dynamic stochastic general equilibrium (DSGE) models, despite the widespread use of these models by monetary policymakers. In this paper, we use the linear opinion pool to combine inflation forecast densities from many vector autoregressions (VARs) and a policymaking DSGE model. The DSGE model receives a substantial weight in the pool (at short horizons) provided the VAR components exclude structural breaks; in this case, however, the inflation forecast densities exhibit calibration failure. Allowing for structural breaks in the VARs reduces the weight on the DSGE model considerably, but produces well-calibrated forecast densities for inflation.
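A minimal sketch of a linear opinion pool: the combined predictive density is a weighted sum of component densities, with weights here derived from recursive log scores. The two Gaussian "models" stand in for the VARs and the DSGE model, and all data are simulated; the weighting scheme is an illustrative assumption.

    # Linear opinion pool of two component predictive densities.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(7)
    T = 300
    infl = 2.0 + 0.8 * rng.standard_normal(T)            # pseudo inflation outcomes

    # Component predictive densities (means/scales are illustrative assumptions).
    dens = [norm(loc=2.0, scale=0.8), norm(loc=2.5, scale=1.2)]

    log_scores = np.zeros(len(dens))
    pool_logscore = 0.0
    for t in range(T):
        # Weights from cumulative log scores up to t-1 (equal weights at t = 0).
        w = np.exp(log_scores - log_scores.max())
        w = w / w.sum()
        pool_pdf = sum(wi * d.pdf(infl[t]) for wi, d in zip(w, dens))
        pool_logscore += np.log(pool_pdf)
        log_scores += np.array([np.log(d.pdf(infl[t])) for d in dens])

    print("final weights:", np.round(w, 3))
    print("average pool log score:", round(pool_logscore / T, 3))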

9.
Unit‐root testing can be a preliminary step in model development, an intermediate step, or an end in itself. Some researchers have questioned the value of any unit‐root and cointegration testing, arguing that restrictions based on theory are at least as effective. Such confusion is unsatisfactory. What is needed is a set of principles that limit and define the role of the model builders’ tacit knowledge. In a forecasting context, we enumerate the various possible model selection strategies and, based on simulation and empirical evidence, recommend using these tests to improve the specification of an initial general vector autoregression model.

10.
This article compares the predictive ability of the factor models of Stock and Watson (2002a) and Forni, Hallin, Lippi and Reichlin (2005) using a ‘large’ panel of macroeconomic variables for the United States. We propose a nesting procedure of comparison that clarifies and partially overturns the results of similar exercises in the literature. Our main conclusion is that, with the dataset at hand, the two methods perform similarly and produce highly collinear forecasts.
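A minimal sketch of one of the two approaches being compared, a static principal-components (diffusion-index) forecast in the spirit of Stock and Watson (2002a): factors are extracted from a standardised panel and the h-step-ahead target is regressed on them. The panel size, factor count and horizon are assumptions; the dynamic factor method of Forni et al. is not sketched.

    # Principal-components factors from a large panel and an h-step-ahead forecast.
    import numpy as np

    rng = np.random.default_rng(8)
    T, N, r, h = 200, 50, 2, 1
    F = rng.standard_normal((T, r))                        # latent factors (simulated)
    X = F @ rng.standard_normal((r, N)) + rng.standard_normal((T, N))
    y = F[:, 0] + 0.5 * rng.standard_normal(T)             # target driven by factor 1

    Xs = (X - X.mean(0)) / X.std(0)                        # standardise the panel
    U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
    Fhat = U[:, :r] * np.sqrt(T)                           # estimated factors

    # Regress y_{t+h} on a constant and current factors, then forecast.
    Z = np.column_stack([np.ones(T - h), Fhat[:-h]])
    beta, *_ = np.linalg.lstsq(Z, y[h:], rcond=None)
    forecast = np.r_[1.0, Fhat[-1]] @ beta
    print("one-step-ahead factor forecast:", round(forecast, 3))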

11.
A new method, called the Relevant Transformation of the Inputs Network Approach, is proposed as a tool for model building. It is designed around flexibility (with nonlinear transformations of the predictors of interest), selective search within the range of possible models, out‐of‐sample forecasting ability and computational simplicity. In tests on simulated data, it shows a high rate of successful retrieval of the data-generating process, which increases with the sample size, and good performance relative to alternative procedures. A telephone service demand model is built to show how the procedure applies to real data.

12.
We propose new methods for evaluating predictive densities. The methods include Kolmogorov–Smirnov and Cramér–von Mises-type tests for the correct specification of predictive densities robust to dynamic mis-specification. The novelty is that the tests can detect mis-specification in the predictive densities even if it appears only over a fraction of the sample, due to the presence of instabilities. Our results indicate that our tests are well sized and have good power in detecting mis-specification in predictive densities, even when it is time-varying. An application to density forecasts of the Survey of Professional Forecasters demonstrates the usefulness of the proposed methodologies.
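A minimal sketch of the building blocks: probability integral transforms (PITs) of one-step-ahead density forecasts and a Kolmogorov–Smirnov statistic for uniformity. The naive critical values reported by scipy are illustrative only; the tests proposed in the paper use critical values robust to dynamic mis-specification and instabilities. The mis-specified N(0,1) forecast and the simulated data are assumptions.

    # PITs of density forecasts and a naive Kolmogorov-Smirnov uniformity check.
    import numpy as np
    from scipy.stats import norm, kstest

    rng = np.random.default_rng(9)
    T = 400
    y = rng.standard_normal(T) * 1.3                      # realisations (true scale 1.3)
    pred_mean, pred_std = 0.0, 1.0                        # (mis-specified) N(0, 1) density forecast

    pits = norm.cdf(y, loc=pred_mean, scale=pred_std)     # z_t = F_t(y_t)
    stat, pval = kstest(pits, "uniform")                  # naive KS test of U(0, 1)
    print(f"KS statistic = {stat:.3f}, naive p-value = {pval:.3f}")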

13.
Understanding models’ forecasting performance
We propose a new methodology to identify the sources of models’ forecasting performance. The methodology decomposes the models’ forecasting performance into asymptotically uncorrelated components that measure instabilities in the forecasting performance, predictive content, and over-fitting. The empirical application shows the usefulness of the new methodology for understanding the causes of the poor forecasting ability of economic models for exchange rate determination.

14.
We provide an extensive evaluation of the predictive performance of the US yield curve for US gross domestic product growth by using new tests for forecast breakdown, in addition to a variety of in‐sample and out‐of‐sample evaluation procedures. Empirical research over the past decades has uncovered a strong predictive relationship between the yield curve and output growth, whose stability has recently been questioned. We document the existence of a forecast breakdown during the Burns–Miller and Volcker monetary policy regimes, whereas during the early part of the Greenspan era the yield curve emerged as a more reliable predictor of future economic activity.

15.
This paper presents new methods for comparing the accuracy of estimators of the quadratic variation of a price process. I provide conditions under which the relative accuracy of competing estimators can be consistently estimated (as T → ∞), and show that forecast evaluation tests may be adapted to the problem of ranking these estimators. The proposed methods avoid making specific assumptions about microstructure noise, and facilitate comparisons of estimators that would be difficult using methods from the extant literature, such as those based on different sampling schemes. An application to high frequency IBM data between 1996 and 2007 illustrates the new methods.

16.
We introduce a new class of models that has both stochastic volatility and moving average errors, where the conditional mean has a state space representation. Having a moving average component, however, means that the errors in the measurement equation are no longer serially independent, and estimation becomes more difficult. We develop a posterior simulator that builds upon recent advances in precision-based algorithms for estimating these new models. In an empirical application involving US inflation, we find that these moving average stochastic volatility models provide better in-sample fit and out-of-sample forecast performance than the standard variants with only stochastic volatility.

17.
It is a feature of competitive markets with forward-looking participants that a good’s benefit and its production cost are equalized in equilibrium and that no resources are wasted during the adjustment process. For housing markets, the evidence on whether they meet this standard of allocative efficiency is mixed. Based on a unique data set with rich information on prices and cost, we examine the market for single-family houses in Germany’s capital, Berlin. At the aggregate market level, we find that prices and cost tend to equalize in the long run. Short-run adjustment appears to be sufficiently fast and properly anticipated to prevent systematic excess profit opportunities. At the cross-sectional level of individual houses, we find evidence that resources are allocated efficiently between different market segments. Taken together, our results indicate that the market in Berlin is efficient.

18.
We propose new scoring rules based on conditional and censored likelihood for assessing the predictive accuracy of competing density forecasts over a specific region of interest, such as the left tail in financial risk management. These scoring rules can be interpreted in terms of Kullback-Leibler divergence between weighted versions of the density forecast and the true density. Existing scoring rules based on weighted likelihood favor density forecasts with more probability mass in the given region, rendering predictive accuracy tests biased toward such densities. Using our novel likelihood-based scoring rules avoids this problem.
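A minimal sketch of a censored-likelihood score for a left-tail region y ≤ r: the score uses log f(y) when the outcome falls in the region and log(1 − F(r)) otherwise. The Gaussian density forecasts and the threshold r are illustrative assumptions, not the paper's empirical application.

    # Censored-likelihood score for a left-tail region of interest.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(10)
    T = 1000
    y = rng.standard_normal(T)                            # realised returns (simulated)
    r = np.quantile(y, 0.10)                              # left-tail threshold (assumed)

    def censored_log_score(y, r, loc, scale):
        in_tail = y <= r
        score = np.where(in_tail,
                         norm.logpdf(y, loc, scale),      # log density inside the region
                         norm.logsf(r, loc, scale))       # log(1 - F(r)) outside the region
        return score.mean()

    # Two competing Gaussian density forecasts (parameters are illustrative).
    print("forecast A:", round(censored_log_score(y, r, 0.0, 1.0), 4))
    print("forecast B:", round(censored_log_score(y, r, 0.0, 1.5), 4))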

19.
This paper conducts a broad-based comparison of iterated and direct multi-period forecasting approaches applied to both univariate and multivariate models in the form of parsimonious factor-augmented vector autoregressions. To account for serial correlation in the residuals of the multi-period direct forecasting models, we propose a new SURE-based estimation method and modified Akaike information criteria for model selection. Empirical analysis of the 170 variables studied by Marcellino, Stock and Watson (2006) shows that information in factors helps improve forecasting performance for most types of economic variables, although it can also lead to larger biases. It also shows that SURE estimation and finite-sample modifications to the Akaike information criterion can improve the performance of the direct multi-period forecasts.
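A minimal sketch contrasting iterated and direct h-step forecasts from a univariate autoregression (plain OLS, no factors, and none of the SURE or modified-AIC refinements proposed in the paper). The AR(1) data-generating process and the horizon are assumptions.

    # Iterated versus direct h-step-ahead forecasts from a simple autoregression.
    import numpy as np

    rng = np.random.default_rng(11)
    T, h, phi = 400, 4, 0.7
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = phi * y[t - 1] + rng.standard_normal()

    # Iterated: estimate a one-step AR(1), then iterate it forward h periods.
    X1 = np.column_stack([np.ones(T - 1), y[:-1]])
    c1, a1 = np.linalg.lstsq(X1, y[1:], rcond=None)[0]
    f_iter = y[-1]
    for _ in range(h):
        f_iter = c1 + a1 * f_iter

    # Direct: regress y_{t+h} on y_t and forecast in one step.
    Xh = np.column_stack([np.ones(T - h), y[:-h]])
    ch, ah = np.linalg.lstsq(Xh, y[h:], rcond=None)[0]
    f_direct = ch + ah * y[-1]

    print(f"iterated {h}-step forecast: {f_iter:.3f}")
    print(f"direct   {h}-step forecast: {f_direct:.3f}")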

20.
Ploberger and Phillips (Econometrica, Vol. 71, pp. 627–673, 2003) proved a result that provides a bound on how close a fitted empirical model can get to the true model when the model is represented by a parameterized probability measure on a finite dimensional parameter space. The present note extends that result to cases where the parameter space is infinite dimensional. The results have implications for model choice in infinite dimensional problems and highlight some of the difficulties, including technical difficulties, presented by models of infinite dimension. Some implications for forecasting are considered and some applications are given, including the empirically relevant case of vector autoregression (VAR) models of infinite order.
