Similar Documents
20 similar documents found (search time: 15 ms)
1.
We propose new methods for evaluating predictive densities. The methods include Kolmogorov–Smirnov and Cramér–von Mises-type tests for the correct specification of predictive densities that are robust to dynamic mis-specification. The novelty is that the tests can detect mis-specification in the predictive densities even if it appears only over a fraction of the sample, due to the presence of instabilities. Our results indicate that the tests are correctly sized and have good power in detecting mis-specification in predictive densities, even when it is time-varying. An application to density forecasts of the Survey of Professional Forecasters demonstrates the usefulness of the proposed methodologies.
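Tests of this kind build on the classical observation that, under a correctly specified predictive density, the probability integral transforms (PITs) of the realizations are i.i.d. uniform on (0,1). A minimal sketch of the full-sample KS and CvM checks on PIT uniformity, not the instability-robust variants proposed in the paper; the data and the forecast CDF below are simulated placeholders, and the reported p-values assume i.i.d. PITs with no estimated parameters:

```python
# Minimal sketch: test PIT uniformity with KS and Cramer-von Mises statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(size=500)                          # realizations (simulated here)
forecast_cdf = stats.norm(loc=0, scale=1).cdf     # candidate predictive CDF

pit_series = forecast_cdf(y)                      # PITs: F_hat(y_t)

ks = stats.kstest(pit_series, "uniform")          # Kolmogorov-Smirnov vs U(0,1)
cvm = stats.cramervonmises(pit_series, "uniform") # Cramer-von Mises vs U(0,1)
print(f"KS:  stat={ks.statistic:.3f}, p={ks.pvalue:.3f}")
print(f"CvM: stat={cvm.statistic:.3f}, p={cvm.pvalue:.3f}")
```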

2.
This paper considers finite-sample structural change tests in the multivariate linear regression model, with application to energy demand models, a setting in which commonly used structural change tests remain only asymptotic. As in Dufour and Kiviet [1996. Exact tests for structural change in first-order dynamic models. Journal of Econometrics 70, 39–68], we account for intervening nuisance parameters through a two-stage maximized Monte Carlo test procedure. Our contributions can be classified into five categories: (i) we extend tests for which a finite-sample theory has been supplied for Gaussian distributions to the non-Gaussian context; (ii) we show that the Bai et al. [1998. Testing and dating common breaks in multivariate time series. The Review of Economic Studies 65 (3), 395–432] test severely over-rejects and propose exact variants of this test; (iii) we consider predictive break test approaches which generalize tests in Dufour [1980. Dummy variables and predictive tests for structural change. Economics Letters 6, 241–247] and Dufour and Kiviet [1996]; (iv) we propose exact (non-Bonferroni-based) extensions of the multivariate outlier test of Wilks [1963. Multivariate statistical outliers. Sankhya Series A 25, 407–426] to models with covariates; (v) we apply these tests to the energy demand system analyzed by Arsenault et al. [1995. A total energy demand model of Québec: forecasting properties. Energy Economics 17 (2), 163–171]. For two of the six industrial sectors analyzed over the 1962–2000 period, break tests together with further goodness-of-fit and diagnostic tests allow us to identify (and correct) specification problems arising from historical regulatory changes or (possibly random) industry-specific effects. The procedures we propose have potentially useful applications in statistics, econometrics and finance (e.g. event studies).

3.
We propose a Bayesian combination approach for multivariate predictive densities which relies upon a distributional state space representation of the combination weights. Several specifications of multivariate time-varying weights are introduced, with a particular focus on weight dynamics driven by the past performance of the predictive densities and the use of learning mechanisms. In the proposed approach the model set can be incomplete, meaning that all models can be individually misspecified. A Sequential Monte Carlo method is proposed to approximate the filtering and predictive densities. The combination approach is assessed using statistical and utility-based performance measures for evaluating density forecasts of simulated data, US macroeconomic time series and surveys of stock market prices. Simulation results indicate that, for a set of linear autoregressive models, the combination strategy is successful in selecting, with probability close to one, the true model when the model set is complete, and that it is able to detect parameter instability when the model set includes the true model that has generated subsamples of the data. Also, substantial uncertainty appears in the weights when predictors are similar; residual uncertainty is reduced when the model set is complete; and learning reduces this uncertainty further. For the macro series we find that incompleteness of the model set is relatively large in the 1970s, at the beginning of the 1980s and during the recent financial crisis, and lower during the Great Moderation; the predicted probabilities of recession align closely with the NBER business cycle dating; and the model weights have substantial uncertainty attached. With respect to returns on the S&P 500 series, we find that an investment strategy using a combination of predictions from professional forecasters and from a white noise model puts more weight on the white noise model at the beginning of the 1990s and switches to giving more weight to the professional forecasts over time. Information on the complete predictive distribution, and not just on some of its moments, turns out to be very important, above all during turbulent times such as the recent financial crisis. More generally, the proposed distributional state space representation offers great flexibility in combining densities.
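The learning mechanism the abstract alludes to can be illustrated with a much simpler deterministic scheme: weights proportional to the exponential of each model's discounted cumulative log score. This is only a sketch of the performance-driven weighting idea, not the paper's distributional state-space/SMC machinery; the `log_scores` array is a hypothetical placeholder:

```python
# Minimal sketch: combination weights from discounted past log scores.
import numpy as np

def learning_weights(log_scores: np.ndarray, discount: float = 0.95) -> np.ndarray:
    """Return (T x K) weights; row t uses only log scores up to time t-1."""
    T, K = log_scores.shape
    w = np.full((T, K), 1.0 / K)                 # start from equal weights
    cum = np.zeros(K)
    for t in range(1, T):
        cum = discount * cum + log_scores[t - 1] # discounted cumulative score
        z = cum - cum.max()                      # stabilize the exponentials
        w[t] = np.exp(z) / np.exp(z).sum()
    return w

rng = np.random.default_rng(1)
log_scores = rng.normal(size=(100, 3))           # simulated placeholder scores
print(learning_weights(log_scores)[-1])          # final-period weights
```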

4.
A popular macroeconomic forecasting strategy utilizes many models to hedge against instabilities of unknown timing; see (among others) Stock and Watson (2004), Clark and McCracken (2010), and Jore et al. (2010). Existing studies of this forecasting strategy exclude dynamic stochastic general equilibrium (DSGE) models, despite the widespread use of these models by monetary policymakers. In this paper, we use the linear opinion pool to combine inflation forecast densities from many vector autoregressions (VARs) and a policymaking DSGE model. The DSGE model receives substantial weight in the pool (at short horizons) provided the VAR components exclude structural breaks. In this case, the inflation forecast densities exhibit calibration failure. Allowing for structural breaks in the VARs reduces the weight on the DSGE model considerably, but produces well-calibrated forecast densities for inflation.
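The linear opinion pool itself is simply a convex combination of component densities. A minimal sketch, with two normal forecast densities standing in for a VAR and a DSGE component; all weights and parameters are illustrative, not those estimated in the paper:

```python
# Minimal sketch: log score of a linear opinion pool of normal densities.
import numpy as np
from scipy import stats

def pool_logscore(y, means, sds, weights):
    """Log predictive score of the pooled density at realization y."""
    dens = np.array([stats.norm(m, s).pdf(y) for m, s in zip(means, sds)])
    return np.log(np.dot(weights, dens))         # pool = weighted density sum

# two components: a "VAR" and a "DSGE" inflation density (illustrative)
print(pool_logscore(y=2.1, means=[2.0, 2.4], sds=[0.5, 0.8], weights=[0.6, 0.4]))
```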

5.
We introduce quasi-likelihood ratio tests for one-sided multivariate hypotheses to evaluate the null that a parsimonious model performs as well as a small number of models that nest the benchmark. The limiting distributions of the test statistics are non-standard. For critical values we consider: (i) bootstrapping, and (ii) simulations assuming normality of the mean square prediction error differences. The proposed tests have good size and power properties compared with existing equal and superior predictive ability tests for multiple model comparison. We apply our tests to study the predictive ability of a Phillips curve-type model for US core inflation.
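For orientation, a sketch of the familiar pairwise Diebold–Mariano test of equal mean square prediction error, which the multivariate one-sided tests above generalize; it is not the paper's QLR test (whose limits are non-standard for nested models), and the error series here are simulated placeholders:

```python
# Minimal sketch: pairwise Diebold-Mariano test with a Newey-West variance.
import numpy as np
from scipy import stats

def dm_test(e1, e2, lags: int = 4):
    """DM statistic for H0: equal MSPE of two forecast-error series."""
    d = e1**2 - e2**2                            # squared-error loss differential
    T = len(d)
    dbar = d.mean()
    u = d - dbar
    lrv = u @ u / T                              # gamma_0
    for j in range(1, lags + 1):                 # Bartlett-weighted autocovariances
        gamma = u[j:] @ u[:-j] / T
        lrv += 2 * (1 - j / (lags + 1)) * gamma
    stat = dbar / np.sqrt(lrv / T)
    return stat, 2 * stats.norm.sf(abs(stat))    # two-sided normal p-value

rng = np.random.default_rng(2)
e1, e2 = rng.normal(size=200), rng.normal(size=200)
print(dm_test(e1, e2))
```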

6.
We evaluate conditional predictive densities for US output growth and inflation using a number of commonly used forecasting models that rely on large numbers of macroeconomic predictors. More specifically, we evaluate how well conditional predictive densities based on the commonly used normality assumption fit actual realizations out-of-sample. Our focus on predictive densities acknowledges the possibility that, although some predictors can improve or worsen point forecasts, they might have the opposite effect on higher moments. We find that normality is rejected for most models in some dimension according to at least one of the tests we use. Interestingly, however, combinations of predictive densities appear to be approximated correctly by a normal density: the simple equal-weight average when predicting output growth, and the Bayesian model average when predicting inflation.
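One standard way to probe normality of a predictive density, in the Berkowitz spirit, is to push the PITs through the inverse normal CDF: under correct specification the result is i.i.d. N(0,1). A minimal sketch under simulated data (note that with a N(0,1) forecast the transform collapses to the data itself; with any other forecast CDF it does not); this is only one of several possible tests, not necessarily those used in the paper:

```python
# Minimal sketch: inverse-normal transform of PITs, then simple normality checks.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
y = rng.standard_t(df=5, size=500)           # fat-tailed realizations
pit = stats.norm.cdf(y)                      # PIT under a N(0,1) forecast density
z = stats.norm.ppf(pit)                      # should be i.i.d. N(0,1) if correct

print(stats.ttest_1samp(z, 0.0))             # mean zero?
print(stats.jarque_bera(z))                  # normal skewness and kurtosis?
```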

7.
A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method makes use of sequences of importance-weighted Expectation Maximization steps to efficiently construct a mixture of Student-t densities that accurately approximates the target distribution (typically a posterior distribution, of which we only require a kernel), in the sense that the Kullback–Leibler divergence between target and mixture is minimized. We label this approach Mixture of t by Importance Sampling weighted Expectation Maximization (MitISEM). The constructed mixture is used as a candidate density for quick and reliable application of either Importance Sampling (IS) or the Metropolis–Hastings (MH) method. We also introduce three extensions of the basic MitISEM approach. First, we propose a method for applying MitISEM sequentially, so that the candidate distribution for posterior simulation is updated whenever new data become available. Our results show that the computational effort is reduced enormously, while the quality of the approximation remains almost unchanged. This sequential approach can be combined with a tempering approach, which facilitates simulation from densities with multiple modes that are far apart. Second, we introduce a permutation-augmented MitISEM approach, which is useful for importance or Metropolis–Hastings sampling from posterior distributions in mixture models without having to impose identification restrictions on the parameters of the model's mixture regimes. Third, we propose a partial MitISEM approach, which aims to approximate the joint distribution by estimating a product of marginal and conditional distributions. This division can substantially reduce the dimension of the approximation problem, which facilitates the application of adaptive importance sampling for posterior simulation in more complex models with larger numbers of parameters. Our results indicate that the proposed methods can substantially reduce the computational burden in econometric models such as DCC or mixture GARCH models and a mixture instrumental variables model.
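A minimal sketch of the importance sampling step that a MitISEM-style candidate feeds, using a single Student-t candidate rather than the fitted mixture, against an illustrative bimodal target kernel (the IS-weighted EM construction of the mixture is exactly the part this sketch omits; `log_kernel` and all parameters are assumptions, not from the paper):

```python
# Minimal sketch: importance sampling from a fat-tailed Student-t candidate.
import numpy as np
from scipy import stats

def log_kernel(x):                               # unnormalized bimodal target
    return np.logaddexp(stats.norm(-2, 0.7).logpdf(x),
                        stats.norm(3, 1.0).logpdf(x))

cand = stats.t(df=4, loc=0, scale=4)             # Student-t candidate density
rng = np.random.default_rng(4)
draws = cand.rvs(size=20_000, random_state=rng)

logw = log_kernel(draws) - cand.logpdf(draws)    # log importance weights
w = np.exp(logw - logw.max())
w /= w.sum()                                     # self-normalized weights

print("IS estimate of E[x]:", np.dot(w, draws))
print("effective sample size:", 1.0 / np.sum(w**2))  # candidate-quality check
```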

8.
In this paper we construct output gap and inflation predictions using a variety of dynamic stochastic general equilibrium (DSGE) sticky price models. Predictive density accuracy tests related to the test discussed in Corradi and Swanson [Journal of Econometrics (2005a), forthcoming], as well as predictive accuracy tests due to Diebold and Mariano [Journal of Business and Economic Statistics (1995), Vol. 13, pp. 253–263] and West [Econometrica (1996), Vol. 64, pp. 1067–1084], are used to compare the alternative models. A number of simple time-series prediction models (such as autoregressive and vector autoregressive (VAR) models) are additionally used as strawman models. Given that DSGE model restrictions are routinely nested within VAR models, the addition of our strawman models allows us to indirectly assess the usefulness of imposing the theoretical restrictions implied by DSGE models on unrestricted econometric models. With respect to predictive density evaluation, our results suggest that the standard sticky price model discussed in Calvo [Journal of Monetary Economics (1983), Vol. XII, pp. 383–398] is not outperformed by the same model augmented either with information or with indexation when used to predict the output gap. On the other hand, there are clear gains to using the more recent models when predicting inflation. Results based on mean square forecast error analysis are less clear-cut: although the standard sticky price model fares best at our longest forecast horizon of 3 years, it performs relatively poorly at shorter horizons. When the strawman time-series models are added to the picture, we find that the DSGE models still fare very well, often coming out ahead in our forecast competitions, suggesting that theoretical macroeconomic restrictions yield useful additional information for forming macroeconomic forecasts.

9.
We propose non-nested hypothesis tests for conditional moment restriction models based on the method of generalized empirical likelihood (GEL). By utilizing the implied GEL probabilities from a sequence of unconditional moment restrictions that contains information equivalent to the conditional moment restrictions, we construct Kolmogorov–Smirnov and Cramér–von Mises-type moment encompassing tests. The advantages of our tests over Otsu and Whang's (2011) tests are that (i) they are free from smoothing parameters, (ii) they can be applied to weakly dependent data, and (iii) they allow non-smooth moment functions. We derive the null distributions, the validity of a bootstrap procedure, and the local and global power properties of our tests. Simulation results show that our tests have reasonable size and power performance in finite samples.

10.
11.
This paper presents a general statistical framework for estimation, testing and comparison of asset pricing models using the unconstrained distance measure of Hansen and Jagannathan (1997). The limiting results cover both linear and nonlinear models that could be correctly specified or misspecified. We propose modified versions of the existing model selection tests and new pivotal specification and model comparison tests with improved finite-sample properties. In addition, we provide formal tests of multiple model comparison. The excellent size and power properties of the proposed tests are demonstrated using simulated data from linear and nonlinear asset pricing models.

12.
We propose a parametric block wild bootstrap approach to computing density forecasts for various types of mixed-data sampling (MIDAS) regressions. First, Monte Carlo simulations show that predictive densities for the various MIDAS models derived from the block wild bootstrap approach are more accurate, in terms of coverage rates, than predictive densities derived from either a residual-based bootstrap approach or from drawing errors from a normal distribution. This result holds whether the data-generating errors are normally and independently distributed, serially correlated, heteroskedastic, or a mixture of normal distributions. Second, we evaluate density forecasts for quarterly US real output growth in an empirical exercise, exploiting information from typical monthly and weekly series. We show that the block wild bootstrap approach, applied to the various MIDAS regressions, produces predictive densities for US real output growth that are well calibrated. Moreover, relative accuracy, measured in terms of the logarithmic score, improves for the various MIDAS specifications as more information becomes available.
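The core resampling idea of a block wild bootstrap is to multiply every residual within a block by one common external draw, preserving within-block dependence while breaking dependence across bootstrap replications. A minimal sketch of one such residual draw; the paper's parametric variant additionally re-estimates the MIDAS regression on each bootstrap sample, which is omitted here, and all names are illustrative:

```python
# Minimal sketch: one block wild bootstrap draw of a residual series.
import numpy as np

def block_wild_bootstrap(resid: np.ndarray, block_len: int,
                         rng: np.random.Generator) -> np.ndarray:
    """Multiply each block of residuals by a single external N(0,1) draw."""
    T = resid.size
    n_blocks = int(np.ceil(T / block_len))
    xi = rng.standard_normal(n_blocks)           # one draw per block
    scale = np.repeat(xi, block_len)[:T]         # broadcast draw over its block
    return resid * scale

rng = np.random.default_rng(5)
resid = rng.normal(size=120)                     # placeholder fitted residuals
resid_star = block_wild_bootstrap(resid, block_len=8, rng=rng)
# y_star = fitted_values + resid_star            # then re-estimate the model
```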

13.
In empirical studies, the probit and logit models are often used without checks of their competing distributional specifications, and it is rare for econometric tests to focus on this issue; Santos Silva [Journal of Applied Econometrics (2001), Vol. 16, pp. 577–597] is an important recent exception. Using the conditional moment test principle, we discuss a wide class of non-nested tests that can easily be applied to distinguish between competing distributions for binary response models. This class of tests includes the test of Santos Silva (2001) for the same task as a particular example and provides other useful alternatives. We also compare the performance of these tests by Monte Carlo simulation.

14.
We propose new scoring rules based on conditional and censored likelihood for assessing the predictive accuracy of competing density forecasts over a specific region of interest, such as the left tail in financial risk management. These scoring rules can be interpreted in terms of the Kullback–Leibler divergence between weighted versions of the density forecast and the true density. Existing scoring rules based on weighted likelihood favor density forecasts with more probability mass in the given region, rendering predictive accuracy tests biased toward such densities. Using our novel likelihood-based scoring rules avoids this problem.
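A minimal sketch of conditional and censored likelihood scores of this type for a left-tail region {y <= r}, written out for a normal density forecast with weight function w(y) = 1{y <= r} and forecast CDF F: the conditional score is w(y) log(f(y)/F(r)) and the censored score is w(y) log f(y) + (1 - w(y)) log(1 - F(r)). All parameters below are illustrative, and the exact form in the paper may differ:

```python
# Minimal sketch: conditional and censored likelihood tail scores.
import numpy as np
from scipy import stats

def tail_scores(y, mu, sigma, r):
    f = stats.norm(mu, sigma)                    # normal density forecast
    w = float(y <= r)                            # indicator weight, left tail
    mass = f.cdf(r)                              # forecast probability of region
    s_cl = w * (f.logpdf(y) - np.log(mass))      # conditional likelihood score
    s_csl = w * f.logpdf(y) + (1 - w) * np.log1p(-mass)  # censored score
    return s_cl, s_csl

print(tail_scores(y=-1.8, mu=0.0, sigma=1.0, r=-1.64))   # a tail observation
print(tail_scores(y=0.5, mu=0.0, sigma=1.0, r=-1.64))    # outside the region
```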

15.
In this paper, we introduce several test statistics for testing the null hypothesis of a random walk (with or without drift) against models that accommodate a smooth nonlinear shift in the level, the dynamic structure and the trend. We derive analytical limiting distributions for all the tests. The power performance of the tests is compared with that of the unit-root tests of Phillips and Perron [Biometrika (1988), Vol. 75, pp. 335–346] and Leybourne, Newbold and Vougas [Journal of Time Series Analysis (1998), Vol. 19, pp. 83–97]. In the presence of a gradual change in the deterministics and in the dynamics, our tests are superior in terms of power.

16.
Likelihoods and posteriors of instrumental variable (IV) regression models with strong endogeneity and/or weak instruments may exhibit rather non-elliptical contours in the parameter space. This may seriously affect inference based on Bayesian credible sets. When approximating posterior probabilities and marginal densities using Monte Carlo integration methods such as importance sampling or Markov chain Monte Carlo procedures, the speed of the algorithm and the quality of the results depend greatly on the choice of the importance or candidate density. Such a density has to be 'close' to the target density in order to yield accurate results with numerically efficient sampling. For this purpose we introduce neural networks, which seem to be natural importance or candidate densities, as they have a universal approximation property and are easy to sample from. A key step in the proposed class of methods is the construction of a neural network that approximates the target density. The methods are tested on a set of illustrative IV regression models. The results indicate the possible usefulness of the neural network approach.

17.
Many forecasts are conditional in nature. For example, a number of central banks routinely report forecasts conditional on particular paths of policy instruments. Even though conditional forecasting is common, there has been little work on methods for evaluating conditional forecasts. This paper provides analytical, Monte Carlo and empirical evidence on tests of predictive ability for conditional forecasts from estimated models. In the empirical analysis, we examine conditional forecasts obtained with a VAR in the variables included in the DSGE model of Smets and Wouters (American Economic Review 2007; 97: 586–606). Throughout the analysis, we focus on tests of bias, efficiency and equal accuracy applied to conditional forecasts from VAR models.

18.
The paper proposes a novel inference procedure for long-horizon predictive regression with persistent regressors, allowing the autoregressive roots to lie in a wide vicinity of unity. The invalidity of conventional tests when regressors are persistent has led to a large literature dealing with inference in predictive regressions with local-to-unity regressors. Magdalinos and Phillips (2009b) recently developed a new framework of extended IV procedures (IVX) that enables robust chi-square testing for a wider class of persistent regressors. We extend this robust procedure to an even wider parameter space in the vicinity of unity and apply the methods to long-horizon predictive regression. Existing methods in this model, which rely on simulated critical values obtained by inverting tests under local-to-unity conditions, cannot easily be extended beyond the scalar regressor case or to wider autoregressive parametrizations. In contrast, the methods developed here lead to standard chi-square tests, allow for multivariate regressors, and include predictive processes whose roots may lie in a wide vicinity of unity. As such, they have many potential applications in predictive regression. In addition to asymptotics under the null hypothesis of no predictability, the paper investigates validity under the alternative, showing how balance in the regression may be achieved through the use of localizing coefficients and developing local asymptotic power properties under such alternatives. These results help to explain some of the empirical difficulties that have been encountered in establishing the predictability of stock returns.

19.
We propose a class of distribution-free rank-based tests for the null hypothesis of a unit root. This class is indexed by the choice of a reference density g, which need not coincide with the unknown actual innovation density f. The validity of these tests, in terms of exact finite-sample size, is guaranteed, irrespective of the actual underlying density, by distribution-freeness. These tests are locally and asymptotically optimal under a particular asymptotic scheme, for which we provide a complete analysis of asymptotic relative efficiencies. Rather than stressing asymptotic optimality, however, we emphasize finite-sample performance, which also depends, quite heavily, on initial values. It appears that our rank-based tests significantly outperform the traditional Dickey–Fuller tests, as well as the more recent procedures proposed by Elliott et al. (1996), Ng and Perron (2001), and Elliott and Müller (2006), for a broad range of initial values and for heavy-tailed innovation densities. Thus, they provide a useful complement to existing techniques.

20.
To verify whether data are missing at random (MAR), we would need to observe the missing data. There are only two exceptions: when the relationship between the probability of responding and the missing variables is either imposed by introducing untestable assumptions or recovered using additional data sources. In this paper, we briefly review estimation and test procedures for selectivity in panel data. Furthermore, by extending the MAR definition from a static setting to the case of dynamic panel data models, we prove that some tests for selectivity do not verify the MAR condition.
