Similar Documents
A total of 20 similar documents were found.
1.
In this paper we construct output gap and inflation predictions using a variety of dynamic stochastic general equilibrium (DSGE) sticky price models. Predictive density accuracy tests related to the test discussed in Corradi and Swanson [Journal of Econometrics (2005a), forthcoming], as well as predictive accuracy tests due to Diebold and Mariano [Journal of Business and Economic Statistics (1995), Vol. 13, pp. 253–263] and West [Econometrica (1996), Vol. 64, pp. 1067–1084], are used to compare the alternative models. A number of simple time-series prediction models (such as autoregressive and vector autoregressive (VAR) models) are additionally used as strawman models. Given that DSGE model restrictions are routinely nested within VAR models, the addition of our strawman models allows us to indirectly assess the usefulness of imposing the theoretical restrictions implied by DSGE models on unrestricted econometric models. With respect to predictive density evaluation, our results suggest that the standard sticky price model discussed in Calvo [Journal of Monetary Economics (1983), Vol. 12, pp. 383–398] is not outperformed by the same model augmented either with information or indexation, when used to predict the output gap. On the other hand, there are clear gains to using the more recent models when predicting inflation. Results based on mean square forecast error analysis are less clear-cut: although the standard sticky price model fares best at our longest forecast horizon of 3 years, it performs relatively poorly at shorter horizons. When the strawman time-series models are added to the picture, we find that the DSGE models still fare very well, often winning our forecast competitions, suggesting that theoretical macroeconomic restrictions yield useful additional information for forming macroeconomic forecasts.
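
As a concrete illustration of the Diebold–Mariano comparison invoked above, here is a minimal Python sketch under squared-error loss, using the usual truncated-kernel long-run variance for h-step forecasts; the error series and horizon are hypothetical stand-ins, not the paper's data.

```python
import numpy as np
from scipy.stats import norm

def diebold_mariano(e1, e2, h=1):
    """DM test of equal predictive accuracy under squared-error loss.
    e1, e2: forecast errors from two models; h: forecast horizon."""
    d = e1**2 - e2**2                      # loss differential
    T = d.size
    d_c = d - d.mean()
    lrv = d_c @ d_c / T                    # variance plus h-1 autocovariances
    for k in range(1, h):
        lrv += 2.0 * (d_c[k:] @ d_c[:-k]) / T
    dm = d.mean() / np.sqrt(lrv / T)
    return dm, 2.0 * norm.sf(abs(dm))      # statistic, two-sided p-value

# hypothetical forecast errors from a DSGE model and a VAR benchmark
rng = np.random.default_rng(0)
e_dsge, e_var = rng.normal(size=200), 1.1 * rng.normal(size=200)
print(diebold_mariano(e_dsge, e_var, h=4))
```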

2.
This paper deals with the finite-sample performance of a set of unit-root tests for cross-correlated panels. Most of the available macroeconomic time series cover short time periods. The lack of information, in terms of time observations, implies that univariate tests are not powerful enough to reject the null of a unit root, while panel tests, by exploiting the large number of cross-sectional units, have been shown to be a promising way of increasing the power of unit-root tests. We investigate the finite-sample properties of recently proposed panel unit-root tests for cross-sectionally correlated panels. Specifically, the size and power of the tests of Choi [Econometric Theory and Practice: Frontiers of Analysis and Applied Research: Essays in Honor of Peter C. B. Phillips, Cambridge University Press, Cambridge (2001)], Bai and Ng [Econometrica (2004), Vol. 72, p. 1127], Moon and Perron [Journal of Econometrics (2004), Vol. 122, p. 81], and Phillips and Sul [Econometrics Journal (2003), Vol. 6, p. 217] are analysed in a Monte Carlo simulation study. In summary, Moon and Perron's tests show good size and power for different values of T and N and across model specifications. Focusing on Bai and Ng's procedure, the simulation study highlights that the pooled Dickey–Fuller generalized least squares test provides higher power than the pooled augmented Dickey–Fuller test for the analysis of the non-stationary properties of the idiosyncratic components. Choi's tests are strongly oversized when the common factor influences the cross-sectional units heterogeneously.
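
A minimal sketch of how such a size experiment can be set up: the pooled t-ratio below deliberately ignores cross-section dependence, and the critical value is purely illustrative rather than a valid one, so the exercise only demonstrates the Monte Carlo design, not any of the tests studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def naive_pooled_t(y):
    """Pooled first-order t-ratio for a unit root, ignoring cross-section dependence."""
    num = sum(yi[:-1] @ np.diff(yi) for yi in y)
    den = sum(yi[:-1] @ yi[:-1] for yi in y)
    rho = num / den
    resid = np.concatenate([np.diff(yi) - rho * yi[:-1] for yi in y])
    return rho / np.sqrt(resid.var() / den)

def simulate_panel(N=20, T=50, load_scale=0.0):
    """Unit-root panel whose innovations share a common factor with heterogeneous loadings."""
    lam = load_scale * rng.normal(size=N)            # heterogeneous factor loadings
    f = rng.normal(size=T)                           # common shock
    e = rng.normal(size=(N, T)) + np.outer(lam, f)   # cross-correlated innovations
    return e.cumsum(axis=1)                          # each unit is I(1) under the null

reps, crit = 1000, -1.645                            # crit is illustrative only
size_indep = np.mean([naive_pooled_t(simulate_panel()) < crit for _ in range(reps)])
size_dep = np.mean([naive_pooled_t(simulate_panel(load_scale=2.0)) < crit for _ in range(reps)])
print(f"rejection rate without factor: {size_indep:.3f}; with common factor: {size_dep:.3f}")
```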

3.
The problem of testing non-nested regression models that include lagged values of the dependent variable as regressors is discussed. It is argued that it is essential to test for error autocorrelation if ordinary least squares and the associated J and F tests are to be used. A heteroskedasticity-robust joint test against a combination of the artificial alternatives used for autocorrelation and non-nested hypothesis tests is proposed. Monte Carlo results indicate that implementing this joint test using a wild bootstrap method leads to a well-behaved procedure and gives better control of finite-sample significance levels than asymptotic critical values.
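
The wild bootstrap step can be sketched generically: residuals from the null model have their signs flipped by Rademacher draws, which preserves heteroskedasticity of unknown form. The statistic below is a hypothetical artificial-regression statistic standing in for the joint test; all names are illustrative.

```python
import numpy as np

def wild_bootstrap_pvalue(stat_fn, y, X, B=999, seed=0):
    """Wild (Rademacher) bootstrap p-value; large values of stat_fn reject.
    y, X are the data and regressors of the null model; stat_fn builds the
    artificial alternative internally."""
    rng = np.random.default_rng(seed)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted, resid = X @ beta, y - X @ beta
    t_obs = stat_fn(y, X)
    hits = sum(
        stat_fn(fitted + resid * rng.choice([-1.0, 1.0], size=y.size), X) >= t_obs
        for _ in range(B)
    )
    return (1 + hits) / (B + 1)

def reset_style_stat(y, X):
    """Hypothetical artificial-regression statistic: coefficient on the squared fit."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    z = (X @ b) ** 2
    g, *_ = np.linalg.lstsq(np.column_stack([X, z]), y - X @ b, rcond=None)
    return abs(g[-1])

rng = np.random.default_rng(7)
X = np.column_stack([np.ones(120), rng.normal(size=120)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=120)
print(wild_bootstrap_pvalue(reset_style_stat, y, X))
```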

4.
How to measure and model volatility is an important issue in finance. Recent research uses high-frequency intraday data to construct ex post measures of daily volatility. This paper uses a Bayesian model-averaging approach to forecast realized volatility. Candidate models include autoregressive and heterogeneous autoregressive specifications based on the logarithm of realized volatility, realized power variation, realized bipower variation, a jump and an asymmetric term. Applied to equity and exchange rate volatility over several forecast horizons, Bayesian model averaging provides very competitive density forecasts and modest improvements in point forecasts compared to benchmark models. We discuss the reasons for this, including the importance of using realized power variation as a predictor. Bayesian model averaging provides further improvements to density forecasts when we move away from linear models and average over specifications that allow for GARCH effects in the innovations to log-volatility.
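
The model-averaging step itself is simple once each candidate model's log predictive likelihoods are in hand; a minimal sketch with equal prior model probabilities (an assumption of this illustration, and hypothetical inputs throughout):

```python
import numpy as np

def bma_weights(log_pred_lik):
    """Model weights proportional to exponentiated cumulative log predictive
    likelihoods; rows are models, columns are forecast periods."""
    cum = log_pred_lik.sum(axis=1)
    w = np.exp(cum - cum.max())        # subtract the max for numerical stability
    return w / w.sum()

# hypothetical one-step log scores for three realized-volatility models, 100 days
rng = np.random.default_rng(2)
lpl = rng.normal(loc=[[-1.0], [-1.05], [-1.2]], scale=0.1, size=(3, 100))
w = bma_weights(lpl)
rv_forecasts = np.array([0.012, 0.014, 0.011])   # hypothetical next-day forecasts
print(w, w @ rv_forecasts)                       # combined point forecast
```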

5.
6.
This paper proposes two new panel unit root tests based on the truncated product method of Zaykin et al. (2002). The first assumes constant correlation between P-values and the second uses a sieve bootstrap to allow for general forms of cross-section dependence in the panel units. Monte Carlo simulation shows that both tests have reasonably good size and remain powerful even when some of the P-values are very large. The proposed tests are applied to a panel of real GDP and inflation density forecasts, resulting in evidence that professional forecasters may not update their forecast precision in an optimal Bayesian way.
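
The truncated product statistic underlying both tests multiplies only the P-values at or below a truncation point; its null distribution under independence can be simulated directly, as this sketch shows (the constant-correlation and sieve-bootstrap versions proposed in the paper relax the independence assumption made here).

```python
import numpy as np

def truncated_product_pvalue(pvals, tau=0.05, reps=100_000, seed=0):
    """Truncated product method in the spirit of Zaykin et al. (2002):
    W = product of all p_i <= tau; null distribution simulated assuming
    independent uniform P-values."""
    pvals = np.asarray(pvals)
    w_obs = np.prod(np.where(pvals <= tau, pvals, 1.0))
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=(reps, pvals.size))
    w_sim = np.prod(np.where(u <= tau, u, 1.0), axis=1)
    return np.mean(w_sim <= w_obs)     # small W is evidence against the joint null

# hypothetical unit-root P-values from ten panel units
print(truncated_product_pvalue([0.30, 0.02, 0.60, 0.01, 0.80, 0.04, 0.50, 0.70, 0.20, 0.90]))
```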

7.
This paper considers tests of the effectiveness of a policy intervention, defined as a change in the parameters of a policy rule, in the context of a macroeconometric dynamic stochastic general equilibrium (DSGE) model. We consider two types of intervention: first, the standard case of a parameter change that does not alter the steady state; and second, one that does alter the steady state, e.g. the target rate of inflation. We consider two types of test: a multi-horizon test, where the post-intervention policy horizon, H, is small and fixed, and a mean policy effect test where H is allowed to increase without bound. The multi-horizon test requires Gaussian errors, but the mean policy effect test does not. It is shown that neither of these two tests is consistent, in the sense that the power of the tests does not tend to unity as H → ∞, unless the intervention alters the steady state. This follows directly from the fact that DSGE variables are measured as deviations from the steady state, and the effects of a policy change on the target variables decay exponentially fast. We investigate the size and power of the proposed mean effect test by simulating a standard three-equation New Keynesian DSGE model. The simulation results are in line with our theoretical findings and show that in all applications the tests have the correct size, but unless the intervention alters the steady state, their power does not go to unity with H.

8.
Dickey and Fuller [Econometrica (1981), Vol. 49, pp. 1057–1072] suggested unit-root tests for an autoregressive model with a linear trend conditional on an initial observation. This paper considers a slightly different model with a random initial value in which nuisance parameters can easily be eliminated by an invariant reduction of the model. We show that invariance arguments can also be used when comparing power within a conditional model. In the context of the conditional model, the Dickey–Fuller test is shown to be more stringent than a number of unit-root tests motivated by models with a random initial value. The power of the Dickey–Fuller test can be improved by making assumptions about the initial value. The practitioner therefore has to trade off robustness against power, as assumptions about the initial value are hard to test but can give more power.
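
The conditional Dickey–Fuller test with a linear trend that anchors this comparison is readily computed; a minimal sketch using statsmodels (the simulated trending series is hypothetical, and maxlag=0 gives the plain rather than augmented test):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
y = np.cumsum(rng.normal(size=200)) + 0.05 * np.arange(200)   # I(1) with drift

# Dickey-Fuller regression with constant and linear trend ('ct')
stat, pval, *_ = adfuller(y, maxlag=0, regression="ct", autolag=None)
print(stat, pval)
```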

9.
Heteroskedasticity and autocorrelation consistent (HAC) estimation commonly involves the use of prewhitening filters based on simple autoregressive models. In such applications, small-sample bias in the estimation of autoregressive coefficients is transmitted to the recolouring filter, leading to HAC variance estimates that can be badly biased. The present paper provides an analysis of these issues using asymptotic expansions and simulations. The approach we recommend involves the use of recursive demeaning procedures that mitigate the effects of small-sample autoregressive bias. Moreover, a commonly used restriction rule on the prewhitening estimates (that first-order autoregressive coefficient estimates, or largest eigenvalues, greater than 0.97 be replaced by 0.97) adversely interferes with the power of unit-root and Kwiatkowski, Phillips, Schmidt and Shin [Journal of Econometrics (1992), Vol. 54, pp. 159–178] (KPSS) tests. We provide a new boundary condition rule that improves the size and power properties of these tests. Some illustrations of the effects of these adjustments on the size and power of KPSS testing are given. Using prewhitened HAC estimates and the new boundary condition rule, the KPSS test is consistent, in contrast to KPSS testing that uses conventional prewhitened HAC estimates [Lee, Economics Letters (1996), Vol. 51, pp. 131–137].
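
For orientation, here is a minimal sketch of AR(1) prewhitening and recolouring with the conventional 0.97 cap described above; the paper's improved boundary condition rule is not reproduced here, and the bandwidth choice is a common rule of thumb rather than the authors' setting.

```python
import numpy as np

def prewhitened_hac(u, cap=0.97):
    """AR(1)-prewhitened Bartlett long-run variance with the conventional cap."""
    u = np.asarray(u, dtype=float)
    rho = (u[:-1] @ u[1:]) / (u[:-1] @ u[:-1])   # prewhitening coefficient
    rho = np.clip(rho, -cap, cap)                # conventional boundary restriction
    v = u[1:] - rho * u[:-1]                     # prewhitened residuals
    T = v.size
    bw = int(4 * (T / 100) ** (2 / 9))           # Newey-West rule-of-thumb bandwidth
    lrv = v @ v / T
    for k in range(1, bw + 1):
        lrv += 2 * (1 - k / (bw + 1)) * (v[k:] @ v[:-k]) / T
    return lrv / (1 - rho) ** 2                  # recolouring step

rng = np.random.default_rng(4)
u = np.zeros(300)
for t in range(1, 300):                          # hypothetical AR(1) errors
    u[t] = 0.8 * u[t - 1] + rng.normal()
print(prewhitened_hac(u))
```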

10.
In this paper, we use Monte Carlo (MC) testing techniques for testing linearity against smooth transition models. The MC approach allows us to introduce a new test that differs in two respects from the tests existing in the literature. First, the test is exact in the sense that the probability of rejecting the null when it is true is always less than or equal to the nominal size of the test. Secondly, the test is not based on an auxiliary regression obtained by replacing the model under the alternative with approximations based on a Taylor expansion. We also apply MC testing methods to size-correct the test proposed by Luukkonen, Saikkonen and Teräsvirta [Biometrika (1988), Vol. 75, p. 491]. The results show that the power loss implied by the auxiliary regression-based test is non-existent compared with a supremum-based test, but is more substantial when compared with the three other tests under consideration.
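
The exactness claim rests on the classical Monte Carlo test construction: with N simulated draws of the statistic under the null, the p-value (1 + #{S_i >= S_obs})/(N + 1) yields a test of size exactly alpha whenever alpha(N + 1) is an integer and the statistic is continuous. A minimal sketch with a hypothetical null distribution:

```python
import numpy as np

def mc_pvalue(stat_obs, simulate_stat, N=999, seed=0):
    """Monte Carlo (Dwass/Barnard) p-value from N null draws of the statistic."""
    rng = np.random.default_rng(seed)
    sims = np.array([simulate_stat(rng) for _ in range(N)])
    return (1 + np.sum(sims >= stat_obs)) / (N + 1)

# hypothetical: observed statistic 3.9 against a chi-squared(1) null
print(mc_pvalue(3.9, lambda rng: rng.chisquare(1)))
```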

11.
This paper examines the asymptotic and finite-sample properties of tests of equal forecast accuracy when the models being compared are overlapping in the sense of Vuong [Econometrica (1989), Vol. 57, pp. 307–333]. Two models are overlapping when the true model contains just a subset of variables common to the larger sets of variables included in the competing forecasting models. We consider an out-of-sample version of the two-step testing procedure recommended by Vuong, but also show that an exact one-step procedure is sometimes applicable. When the models are overlapping, we provide a simple-to-use fixed-regressor wild bootstrap that can be used to conduct valid inference. Monte Carlo simulations generally support the theoretical results: the two-step procedure is conservative, while the one-step procedure can be accurately sized when appropriate. We conclude with an empirical application comparing the predictive content of credit spreads with growth in real stock prices for forecasting US real gross domestic product growth.

12.
Based on the series long-run variance estimator, we propose a new class of over-identification tests that are robust to heteroscedasticity and autocorrelation of unknown forms. We show that when the number of terms used in the series long-run variance estimator is fixed, the conventional J statistic, after a simple correction, is asymptotically F-distributed. We apply the idea of the F-approximation to the conventional kernel-based J tests. Simulations show that the J tests based on the finite-sample-corrected J statistic and the F-approximation have virtually no size distortion, and yet are as powerful as the standard J tests.

13.
Baumeister and Kilian [Journal of Business and Economic Statistics (2015), Vol. 33, pp. 338–351] combine forecasts from six empirical models to predict real oil prices. In this paper, we broadly reproduce their main economic findings, employing their preferred measures of the real oil price and other real-time variables. Mindful of the importance of Brent crude oil as a global price benchmark, we extend consideration to the North Sea-based measure and update the evaluation sample to 2017:12. We model the oil price futures curve using a factor-based Nelson–Siegel specification estimated in real time to fill in missing values for oil price futures in the raw data. We find that the combined forecasts for Brent are as effective as for other oil price measures. The extended sample using the oil price measures adopted by Baumeister and Kilian yields results similar to those reported in their paper. Also, the futures-based model improves forecast accuracy at longer horizons.
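
The curve-fitting step can be sketched as follows: with the decay parameter held fixed, the Nelson–Siegel model is linear in its three factors, so missing contracts are filled in by ordinary least squares; the maturities, prices and decay value below are hypothetical, not the paper's real-time data.

```python
import numpy as np

def ns_loadings(m, lam):
    """Nelson-Siegel loadings (level, slope, curvature) at maturities m."""
    x = lam * m
    slope = (1.0 - np.exp(-x)) / x
    return np.column_stack([np.ones_like(m), slope, slope - np.exp(-x)])

def nelson_siegel_fit(maturities, prices, lam=0.5):
    """Least-squares fit for fixed lam; returns factors and an interpolator."""
    m = np.asarray(maturities, dtype=float)
    beta, *_ = np.linalg.lstsq(ns_loadings(m, lam), np.asarray(prices, dtype=float), rcond=None)
    return beta, lambda mm: ns_loadings(np.asarray(mm, dtype=float), lam) @ beta

# hypothetical futures prices at 1, 2, 3, 6 and 12 months; fill in month 9
beta, curve = nelson_siegel_fit([1, 2, 3, 6, 12], [62.0, 62.4, 62.7, 63.1, 63.3])
print(curve([9.0]))
```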

14.
In this paper, we introduce several test statistics for testing the null hypothesis of a random walk (with or without drift) against models that accommodate a smooth nonlinear shift in the level, the dynamic structure and the trend. We derive analytical limiting distributions for all the tests. The power performance of the tests is compared with that of the unit-root tests of Phillips and Perron [Biometrika (1988), Vol. 75, pp. 335–346] and Leybourne, Newbold and Vougas [Journal of Time Series Analysis (1998), Vol. 19, pp. 83–97]. In the presence of a gradual change in the deterministics and in the dynamics, our tests are superior in terms of power.

15.
Although out-of-sample forecast performance is often deemed to be the 'gold standard' of evaluation, it is not in fact a good yardstick for evaluating models in general. The arguments are illustrated with reference to a recent paper by Carruth, Hooker and Oswald [Review of Economics and Statistics (1998), Vol. 80, pp. 621–628], who suggest that the good dynamic forecasts of their model support the efficiency-wage theory on which it is based.

16.
We study the problem of testing hypotheses on the parameters of one- and two-factor stochastic volatility (SV) models, allowing for the possible presence of non-regularities such as singular moment conditions and unidentified parameters, which can lead to non-standard asymptotic distributions. We focus on the development of simulation-based exact procedures, whose level can be controlled in finite samples, as well as on large-sample procedures that remain valid under non-regular conditions. We consider Wald-type, score-type and likelihood-ratio-type tests based on a simple moment estimator, which can be easily simulated. We also propose a C(α)-type test which is very easy to implement and exhibits relatively good size and power properties. Besides the usual linear restrictions on the SV model coefficients, the problems studied include testing homoskedasticity against an SV alternative (which involves singular moment conditions under the null hypothesis) and testing the null hypothesis of one factor driving the dynamics of the volatility process against two factors (which raises identification difficulties). Three ways of implementing the tests based on alternative statistics are compared: asymptotic critical values (when available), a local Monte Carlo (or parametric bootstrap) test procedure, and a maximized Monte Carlo (MMC) procedure. The size and power properties of the proposed tests are examined in a simulation experiment. The results indicate that the C(α)-based tests (built upon the simple moment estimator available in closed form) have good size and power properties for regular hypotheses, while Monte Carlo tests are much more reliable than those based on asymptotic critical values. Further, in cases where the parametric bootstrap appears to fail (for example, in the presence of identification problems), the MMC procedure easily controls the level of the tests. Moreover, MMC-based tests exhibit relatively good power despite the conservative nature of the procedure. Finally, we present an application to a time series of returns on the Standard and Poor's Composite Price Index.
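
The MMC idea mentioned above can be sketched compactly: the Monte Carlo p-value is computed at each point of a grid over the nuisance parameters consistent with the null, and the maximum is reported, which controls the level regardless of the true nuisance value (up to grid coarseness). The statistic and grid below are hypothetical.

```python
import numpy as np

def mmc_pvalue(stat_obs, simulate_stat, nuisance_grid, N=99, seed=0):
    """Maximized Monte Carlo p-value over a nuisance-parameter grid."""
    rng = np.random.default_rng(seed)
    pvals = []
    for theta in nuisance_grid:
        sims = np.array([simulate_stat(theta, rng) for _ in range(N)])
        pvals.append((1 + np.sum(sims >= stat_obs)) / (N + 1))
    return max(pvals)

def sim_stat(theta, rng, T=100):
    """Hypothetical statistic whose null distribution depends on an AR coefficient."""
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = theta * y[t - 1] + rng.normal()
    return abs(y.mean()) * np.sqrt(T)

print(mmc_pvalue(2.1, sim_stat, np.linspace(0.0, 0.9, 10)))
```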

17.
We consider tests of forecast encompassing for probability forecasts, for both quadratic and logarithmic scoring rules. We propose test statistics for the null of forecast encompassing, present the limiting distributions of the test statistics, and investigate the impact of estimating the forecasting models' parameters on these distributions. The small-sample performance is investigated, in terms of small numbers of forecasts and model estimation sample sizes. We show the usefulness of the tests for the evaluation of recession probability forecasts from logit models with different leading indicators as explanatory variables, and for evaluating survey-based probability forecasts.
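
For reference, the two scoring rules, and the encompassing question of whether a rival forecast adds anything, can be sketched directly; the recession probabilities and outcomes below are hypothetical.

```python
import numpy as np

def quadratic_score(p, y):
    """Quadratic (Brier) score for probability forecasts; lower is better here."""
    return np.mean((p - y) ** 2)

def log_score(p, y, eps=1e-12):
    """Negative average logarithmic score; eps guards against log(0)."""
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

y = np.array([0, 0, 1, 1, 0, 1, 0, 0])                   # hypothetical recessions
p1 = np.array([0.1, 0.2, 0.7, 0.6, 0.3, 0.8, 0.2, 0.1])  # model 1 forecasts
p2 = np.array([0.2, 0.3, 0.5, 0.5, 0.4, 0.6, 0.3, 0.2])  # model 2 forecasts
print(quadratic_score(p1, y), log_score(p1, y))

# encompassing in spirit: the loss-minimizing weight on the rival forecast
lams = np.linspace(0, 1, 101)
best = lams[np.argmin([quadratic_score((1 - l) * p1 + l * p2, y) for l in lams])]
print("weight on model 2:", best)   # the encompassing null corresponds to weight 0
```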

18.
Decision makers often observe point forecasts of the same variable computed, for instance, by commercial banks, the IMF and the World Bank, but the econometric models used by such institutions are frequently unknown. This paper shows how to use the information available in point forecasts to compute optimal density forecasts. Our idea builds upon the combination of point forecasts under general loss functions and unknown forecast error distributions. We use real-time data to forecast the density of US inflation. The results indicate that the proposed method materially improves the real-time accuracy of density forecasts vis-à-vis those from the (unknown) individual econometric models.

19.
Robust methods for instrumental variable inference have received considerable attention recently. Their analysis has raised a variety of problematic issues such as size/power trade-offs resulting from weak or many instruments. We show that information reduction methods provide a useful and practical solution to this and related problems. Formally, we propose factor-based modifications to three popular weak-instrument-robust statistics, and illustrate their validity asymptotically and in finite samples. Results are derived using asymptotic settings that are commonly used in both the factor and the weak-instrument literature. For the Anderson–Rubin statistic, we also provide analytical finite-sample results that do not require any underlying factor structure. An illustrative Monte Carlo study reveals the following. Factor-based tests control size regardless of instrument and factor quality. All factor-based tests are systematically more powerful than their standard counterparts. With informative instruments, and in contrast to standard tests: (i) the power of factor-based tests is not affected by the number of instruments k, even when k is large; and (ii) a weak factor structure does not cost power. An empirical study of a New Keynesian macroeconomic model suggests that our factor-based methods can bridge a number of gaps between structural and statistical modeling.

20.
In a cross-section where the initial distribution of observations differs from the steady-state distribution and initial values matter, convergence is best measured in terms of σ-convergence over a fixed time period. For this setting, we propose a new, simple Wald test for conditional σ-convergence. According to our Monte Carlo simulations, this test performs well and its power is comparable with that of the available tests of unconditional convergence. We apply two versions of the test to conditional convergence in the size of European manufacturing firms. The null hypothesis of no convergence is rejected for all country groups, most individual economies, and the younger firms in our sample of 49,646 firms.
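
A stripped-down stand-in for such a σ-convergence test, using the influence-function representation of the change in cross-sectional variance for the same firms observed at two dates (the paper's Wald test and conditioning are not reproduced; the data are simulated):

```python
import numpy as np

def sigma_convergence_t(x0, xT):
    """t-ratio for a fall in cross-sectional variance between periods 0 and T;
    asymptotically standard normal, with convergence indicated by t << 0."""
    d = (xT - xT.mean()) ** 2 - (x0 - x0.mean()) ** 2   # per-firm contribution
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))

# hypothetical log firm sizes with shrinking dispersion
rng = np.random.default_rng(5)
x0 = rng.normal(size=1000)
xT = 0.6 * x0 + rng.normal(scale=0.6, size=1000)        # variance ~0.72 < 1
print(sigma_convergence_t(x0, xT))
```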
