Similar documents
20 similar documents retrieved (search time: 906 ms)
1.
Panel unit‐root and no‐cointegration tests that rely on cross‐sectional independence of the panel units experience severe size distortions when this assumption is violated, as has, for example, been shown by Banerjee, Marcellino and Osbat [Econometrics Journal (2004), Vol. 7, pp. 322–340; Empirical Economics (2005), Vol. 30, pp. 77–91] via Monte Carlo simulations. Several studies have recently addressed this issue for panel unit‐root tests using a common factor structure to model the cross‐sectional dependence, but not much work has been done yet for panel no‐cointegration tests. This paper proposes a model for panel no‐cointegration using an unobserved common factor structure, following the study by Bai and Ng [Econometrica (2004), Vol. 72, pp. 1127–1177] for panel unit roots. We distinguish two important cases: (i) the case when the non‐stationarity in the data is driven by a reduced number of common stochastic trends, and (ii) the case where common and idiosyncratic stochastic trends are both present in the data. We discuss the homogeneity restrictions on the cointegrating vectors resulting from the presence of common factor cointegration. Furthermore, we study the asymptotic behaviour of some existing residual‐based panel no‐cointegration tests, as suggested by Kao [Journal of Econometrics (1999), Vol. 90, pp. 1–44] and Pedroni [Econometric Theory (2004a), Vol. 20, pp. 597–625]. Under the data‐generating processes (DGPs) used, the test statistics are no longer asymptotically normal, and convergence occurs at rate T rather than at the faster rate obtained for independent panels. We then examine the possibilities of testing for various forms of no‐cointegration by extracting the common factors and individual components from the observed data directly and then testing for no‐cointegration using residual‐based panel tests applied to the defactored data.
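The final step described in this abstract — extracting the common factors from the observed data and then working with the defactored series — can be sketched with a principal‐components defactoring step. The `defactor` helper and the simulated panel below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def defactor(X, r):
    """Strip r common factors from a T x N panel X by principal components,
    returning the estimated idiosyncratic components (illustrative helper,
    not the paper's implementation)."""
    T, N = X.shape
    Xc = X - X.mean(axis=0)                    # demean each unit
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    F = U[:, :r] * np.sqrt(T)                  # estimated common factors
    Lam = Xc.T @ F / T                         # estimated factor loadings
    return Xc - F @ Lam.T                      # defactored data

rng = np.random.default_rng(0)
T, N = 200, 10
f = np.cumsum(rng.standard_normal(T))          # one common stochastic trend
X = np.outer(f, rng.standard_normal(N)) + rng.standard_normal((T, N))
E = defactor(X, r=1)

# Average absolute correlation with the common trend, before and after
c_raw = np.mean([abs(np.corrcoef(f, X[:, i])[0, 1]) for i in range(N)])
c_def = np.mean([abs(np.corrcoef(f, E[:, i])[0, 1]) for i in range(N)])
print(c_def < c_raw)
```

Residual‐based no‐cointegration tests would then be applied to the defactored series rather than to the raw panel.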

2.
In this paper, we introduce several test statistics testing the null hypothesis of a random walk (with or without drift) against models that accommodate a smooth nonlinear shift in the level, the dynamic structure and the trend. We derive analytical limiting distributions for all the tests. The power performance of the tests is compared with that of the unit‐root tests by Phillips and Perron [Biometrika (1988), Vol. 75, pp. 335–346], and Leybourne, Newbold and Vougas [Journal of Time Series Analysis (1998), Vol. 19, pp. 83–97]. In the presence of a gradual change in the deterministics and in the dynamics, our tests are superior in terms of power.

3.
In this paper, we develop a set of new persistence change tests which are similar in spirit to those of Kim [Journal of Econometrics (2000) Vol. 95, pp. 97–116], Kim et al. [Journal of Econometrics (2002) Vol. 109, pp. 389–392] and Busetti and Taylor [Journal of Econometrics (2004) Vol. 123, pp. 33–66]. While the existing tests are based on ratios of sub‐sample Kwiatkowski et al. [Journal of Econometrics (1992) Vol. 54, pp. 159–178]‐type statistics, our proposed tests are based on the corresponding functions of sub‐sample implementations of the well‐known maximal recursive‐estimates and re‐scaled range fluctuation statistics. Our statistics are used to test the null hypothesis that a time series displays constant trend stationarity [I(0)] behaviour against the alternative of a change in persistence either from trend stationarity to difference stationarity [I(1)], or vice versa. Representations for the limiting null distributions of the new statistics are derived and both finite‐sample and asymptotic critical values are provided. The consistency of the tests against persistence change processes is also demonstrated. Numerical evidence suggests that our proposed tests provide a useful complement to the extant persistence change tests. An application of the tests to US inflation rate data is provided.
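The ratio idea behind such persistence change tests can be sketched in stylized form: compute a stationarity statistic on sub‐samples either side of each candidate break point and take an extremum of the ratios. The `kpss_stat` and `max_ratio` helpers below are hypothetical simplifications using a plain KPSS‐type level statistic, not the maximal recursive‐estimates or rescaled‐range statistics actually proposed:

```python
import numpy as np

def kpss_stat(y):
    """KPSS-type level statistic: T^-2 times the sum of squared partial
    sums of the demeaned series, scaled by the sample variance."""
    e = y - y.mean()
    S = np.cumsum(e)
    return (S @ S) / (len(y) ** 2 * e.var())

def max_ratio(y, trim=0.2):
    """Maximal post-break / pre-break statistic ratio over candidate
    break fractions (illustrative, not the authors' exact statistic)."""
    T = len(y)
    taus = range(int(trim * T), int((1 - trim) * T))
    return max(kpss_stat(y[t:]) / kpss_stat(y[:t]) for t in taus)

rng = np.random.default_rng(1)
T = 400
i0 = rng.standard_normal(T)                        # constant I(0) series
change = np.r_[rng.standard_normal(T // 2),        # I(0) then I(1)
               np.cumsum(rng.standard_normal(T // 2))]
r_change = max_ratio(change)
r_const = max_ratio(i0)
print(r_change > 1.0)
```

Under a change from I(0) to I(1) the post‐break statistic diverges while the pre‐break statistic does not, so the ratio grows with the sample size; under constant I(0) behaviour it stays bounded.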

4.
Kim, Belaire‐Franch and Amador [Journal of Econometrics (2002) Vol. 109, pp. 389–392] and Busetti and Taylor [Journal of Econometrics (2004) Vol. 123, pp. 33–66] present different percentiles for the same mean score test statistic. We find that the difference, by a factor of 0.6, is due to systematically different sample analogues. Furthermore, we clarify which sample versions of the mean‐exponential test statistic should be used with which set of critical values. At the same time, we correct some of the limiting distributions found in the literature.

5.
In this paper we construct output gap and inflation predictions using a variety of dynamic stochastic general equilibrium (DSGE) sticky price models. Predictive density accuracy tests related to the test discussed in Corradi and Swanson [Journal of Econometrics (2005a), forthcoming] as well as predictive accuracy tests due to Diebold and Mariano [Journal of Business and Economic Statistics (1995), Vol. 13, pp. 253–263] and West [Econometrica (1996), Vol. 64, pp. 1067–1084] are used to compare the alternative models. A number of simple time‐series prediction models (such as autoregressive and vector autoregressive (VAR) models) are additionally used as strawman models. Given that DSGE model restrictions are routinely nested within VAR models, the addition of our strawman models allows us to indirectly assess the usefulness of imposing theoretical restrictions implied by DSGE models on unrestricted econometric models. With respect to predictive density evaluation, our results suggest that the standard sticky price model discussed in Calvo [Journal of Monetary Economics (1983), Vol. 12, pp. 383–398] is not outperformed by the same model augmented either with information or indexation, when used to predict the output gap. On the other hand, there are clear gains to using the more recent models when predicting inflation. Results based on mean square forecast error analysis are less clear‐cut: although the standard sticky price model fares best at our longest forecast horizon of 3 years, it performs relatively poorly at shorter horizons. When the strawman time‐series models are added to the picture, we find that the DSGE models still fare very well, often winning our forecast competitions, suggesting that theoretical macroeconomic restrictions yield useful additional information for forming macroeconomic forecasts.

6.
In this paper, we extend the heterogeneous panel data stationarity test of Hadri [Econometrics Journal, Vol. 3 (2000) pp. 148–161] to the cases where breaks are taken into account. Four models with different patterns of breaks under the null hypothesis are specified. Two of the models have already been proposed by Carrion‐i‐Silvestre et al. [Econometrics Journal, Vol. 8 (2005) pp. 159–175]. The moments of the statistics corresponding to the four models are derived in closed form via characteristic functions. We also provide the exact moments of a modified statistic that does not asymptotically depend on the location of the break point under the null hypothesis. The cases where the break point is unknown are also considered. For the model with breaks in the level and no time trend and for the model with breaks in the level and in the time trend, Carrion‐i‐Silvestre et al. (2005) showed that the number of breaks and their positions may be allowed to differ across individuals for cases with known and unknown breaks. Their results can easily be extended to the proposed modified statistic. The asymptotic distributions of all the statistics proposed are derived under the null hypothesis and are shown to be normally distributed. We show by simulations that our suggested tests in general perform well in finite samples, with the exception of the modified test. In an empirical application to the consumer prices of 22 OECD countries during the period from 1953 to 2003, we find evidence of stationarity once a structural break and cross‐sectional dependence are accommodated.

7.
Feenstra and Hanson [NBER Working Paper No. 6052 (1997)] propose a procedure to correct the standard errors in a two‐stage regression with generated dependent variables. Their method has subsequently been used in two‐stage mandated wage models [Feenstra and Hanson, Quarterly Journal of Economics (1999) Vol. 114, pp. 907–940; Haskel and Slaughter, The Economic Journal (2001) Vol. 111, pp. 163–187; Review of International Economics (2003) Vol. 11, pp. 630–650] and for the estimation of the sector bias of skill‐biased technological change [Haskel and Slaughter, European Economic Review (2002) Vol. 46, pp. 1757–1783]. Unfortunately, the proposed correction is negatively biased (sometimes even resulting in negative estimated variances) and therefore leads to overestimation of the inferred significance. We present an unbiased correction procedure and apply it to the models reported by Feenstra and Hanson (1999) and Haskel and Slaughter (2002).

8.
In empirical studies, the probit and logit models are often used without checks for their competing distributional specifications. It is also rare for econometric tests to be focused on this issue. Santos Silva [Journal of Applied Econometrics (2001), Vol. 16, pp. 577–597] is an important recent exception. By using the conditional moment test principle, we discuss a wide class of non‐nested tests that can easily be applied to detect the competing distributions for the binary response models. This class of tests includes the test of Santos Silva (2001) for the same task as a particular example and provides other useful alternatives. We also compare the performance of these tests by a Monte Carlo simulation.

9.
This note presents some properties of the stochastic unit‐root processes developed in Granger and Swanson [Journal of Econometrics (1997) Vol. 80, pp. 35–62] and Leybourne, McCabe and Tremayne [Journal of Business & Economic Statistics (1996) Vol. 14, pp. 435–446] that have either not been discussed in the literature or have only been discussed implicitly.

10.
The difference and system generalized method of moments (GMM) estimators are growing in popularity. As implemented in popular software, the estimators easily generate instruments that are numerous and, in system GMM, potentially suspect. A large instrument collection overfits endogenous variables even as it weakens the Hansen test of the instruments’ joint validity. This paper reviews the evidence on the effects of instrument proliferation, and describes and simulates simple ways to control it. It illustrates the dangers by replicating Forbes [American Economic Review (2000) Vol. 90, pp. 869–887] on income inequality and Levine et al. [Journal of Monetary Economics (2000) Vol. 46, pp. 31–77] on financial sector development. Results in both papers appear driven by previously undetected endogeneity.
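The proliferation problem can be made concrete with a simple count: in difference GMM each period can use all sufficiently deep lags of the dependent variable as instruments, so the instrument count grows quadratically in the number of periods T, while the "collapsing" device keeps one column per lag depth. The counting convention below (period t, for t ≥ 3, instruments with lags y_{t−2}, …, y_1) is a simplified assumption for illustration:

```python
def n_instruments(T, collapse=False):
    """Count difference-GMM instrument columns built from lags of y.

    Standard form: one column per (period, lag) pair -> quadratic in T.
    Collapsed form: one column per lag depth -> linear in T.
    (Simplified counting convention, for illustration only.)
    """
    if collapse:
        return T - 2                        # one column per lag depth
    return sum(t - 2 for t in range(3, T + 1))

# With only 10 periods the standard form already has 36 instruments,
# while the collapsed form has 8.
print(n_instruments(10), n_instruments(10, collapse=True))
```

This is why the instrument count can easily approach or exceed the number of cross‐sectional units in moderately long panels, weakening the Hansen test.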

11.
In this article, we derive the local asymptotic power function of the unit‐root test proposed by Breitung [Journal of Econometrics (2002) Vol. 108, pp. 343–363]. Breitung's test is non‐parametric and free of nuisance parameters. We compare the local power curve of Breitung's test with that of the Dickey–Fuller test. This comparison is in fact a quantification of the loss of power that one has to accept when applying a non‐parametric test.

12.
We show that the minimal forward (reverse) recursive unit‐root tests of Banerjee, Lumsdaine and Stock [Journal of Business & Economic Statistics (1992) Vol. 10, pp. 271–288] are consistent against the alternative of a change in persistence from I(0) to I(1) [I(1) to I(0)]. However, these statistics are also shown to diverge for series which are I(0) throughout. Consequently, a rejection by these tests does not necessarily imply a change in persistence. We propose a further test, based on the ratio of these statistics, which is consistent against changes either from I(0) to I(1), or vice versa, yet does not over‐reject against constant I(0) series. Consistent breakpoint estimators are proposed.

13.
Heteroskedasticity and autocorrelation consistent (HAC) estimation commonly involves the use of prewhitening filters based on simple autoregressive models. In such applications, small sample bias in the estimation of autoregressive coefficients is transmitted to the recolouring filter, leading to HAC variance estimates that can be badly biased. The present paper provides an analysis of these issues using asymptotic expansions and simulations. The approach we recommend involves the use of recursive demeaning procedures that mitigate the effects of small‐sample autoregressive bias. Moreover, a commonly used restriction rule on the prewhitening estimates (that first‐order autoregressive coefficient estimates, or largest eigenvalues, greater than 0.97 be replaced by 0.97) adversely interferes with the power of unit‐root and Kwiatkowski, Phillips, Schmidt and Shin [(1992), Journal of Econometrics, Vol. 54, pp. 159–178] (KPSS) tests. We provide a new boundary condition rule that improves the size and power properties of these tests. Some illustrations of the effects of these adjustments on the size and power of KPSS testing are given. Using prewhitened HAC estimates and the new boundary condition rule, the KPSS test is consistent, in contrast to KPSS testing that uses conventional prewhitened HAC estimates [Lee (1996), Economics Letters, Vol. 51, pp. 131–137].
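The prewhitening–recolouring mechanics, and the role of a boundary restriction on the fitted AR(1) coefficient, can be sketched as follows. This sketch uses the conventional 0.97 cap the abstract criticizes, not the paper's new boundary condition rule, and `prewhitened_lrv` is an illustrative helper, not the authors' estimator:

```python
import numpy as np

def prewhitened_lrv(u, cap=0.97):
    """Long-run variance via AR(1) prewhitening: fit rho by OLS, apply
    the conventional boundary cap, whiten, then recolour.  (Illustrative
    sketch; the paper's recommended rule differs.)"""
    rho = (u[1:] @ u[:-1]) / (u[:-1] @ u[:-1])   # OLS AR(1) coefficient
    rho = float(np.clip(rho, -cap, cap))          # boundary restriction
    e = u[1:] - rho * u[:-1]                      # prewhitened residuals
    return e.var() / (1.0 - rho) ** 2             # recolouring step

# AR(1) data with rho = 0.5, unit innovation variance:
# true long-run variance = 1 / (1 - 0.5)^2 = 4
rng = np.random.default_rng(2)
u = np.empty(5000)
u[0] = 0.0
for t in range(1, 5000):
    u[t] = 0.5 * u[t - 1] + rng.standard_normal()
est = prewhitened_lrv(u)
print(3.0 < est < 5.0)
```

When the true root is close to unity, the fitted coefficient piles up at the cap, which is the distortion the abstract's new boundary condition rule is designed to mitigate.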

14.
This paper addresses the concept of multicointegration in a panel data framework and builds upon the panel data cointegration procedures developed in Pedroni [Econometric Theory (2004), Vol. 20, pp. 597–625]. When individuals are either cross‐section independent, or cross‐section dependence can be removed by cross‐section demeaning, our approach can be applied to the wider framework of mixed I(2) and I(1) stochastic processes. The paper also deals with the issue of cross‐section dependence using approximate common‐factor models. Finite sample performance is investigated through Monte Carlo simulations. Finally, we illustrate the use of the procedure investigating an inventories, sales and production relationship for a panel of US industries.

15.
Dickey and Fuller [Econometrica (1981) Vol. 49, pp. 1057–1072] suggested unit‐root tests for an autoregressive model with a linear trend conditional on an initial observation. We analyse a slightly different model with a random initial value in which nuisance parameters can easily be eliminated by an invariant reduction of the model. We show that invariance arguments can also be used when comparing power within a conditional model. In the context of the conditional model, the Dickey–Fuller test is shown to be more stringent than a number of unit‐root tests motivated by models with a random initial value. The power of the Dickey–Fuller test can be improved by making assumptions about the initial value. The practitioner therefore has to trade off robustness against power, as assumptions about initial values are hard to test but can yield more power.

16.
This paper deals with the finite‐sample performance of a set of unit‐root tests for cross‐correlated panels. Most of the available macroeconomic time series cover short time periods. The lack of information, in terms of time observations, implies that univariate tests are not powerful enough to reject the null of a unit root, while panel tests, by exploiting the large number of cross‐sectional units, have been shown to be a promising way of increasing the power of unit‐root tests. We investigate the finite‐sample properties of recently proposed panel unit‐root tests for cross‐sectionally correlated panels. Specifically, the size and power of Choi's [Econometric Theory and Practice: Frontiers of Analysis and Applied Research: Essays in Honor of Peter C. B. Phillips, Cambridge University Press, Cambridge (2001)], Bai and Ng's [Econometrica (2004), Vol. 72, pp. 1127–1177], Moon and Perron's [Journal of Econometrics (2004), Vol. 122, p. 81], and Phillips and Sul's [Econometrics Journal (2003), Vol. 6, p. 217] tests are analysed by a Monte Carlo simulation study. In summary, Moon and Perron's tests show good size and power for different values of T and N and for different model specifications. Focusing on Bai and Ng's procedure, the simulation study highlights that the pooled Dickey–Fuller generalized least squares test provides higher power than the pooled augmented Dickey–Fuller test for the analysis of non‐stationary properties of the idiosyncratic components. Choi's tests are strongly oversized when the common factor influences the cross‐sectional units heterogeneously.

17.
In this paper, we study the degree of business cycle synchronization by means of a small‐sample version of Harding and Pagan's [Journal of Econometrics (2006) Vol. 132, pp. 59–79] generalized method of moments test. We show that the asymptotic version of the test becomes increasingly distorted in small samples as the number of countries grows large. However, a block bootstrapped version of the test can remedy the size distortion when the time series length divided by the number of countries, T/n, is sufficiently large. Applying the technique to a number of business cycle proxies of developed economies, we are unable to reject the null hypothesis of a non‐zero common multivariate synchronization index for certain economically meaningful subsets of these countries.
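The block bootstrap underlying the corrected test resamples blocks of consecutive observations so that short‐run dependence within blocks is preserved. A generic circular moving‐block resampler looks like this (a sketch of the standard technique, not the authors' implementation):

```python
import numpy as np

def block_bootstrap(y, block_len, rng):
    """One circular moving-block bootstrap resample of y: draw random
    block start points, wrap indices around the end of the sample, and
    concatenate until the original length is reached."""
    T = len(y)
    n_blocks = -(-T // block_len)                 # ceiling division
    starts = rng.integers(0, T, size=n_blocks)
    idx = (starts[:, None] + np.arange(block_len)) % T   # circular wrap
    return y[idx].ravel()[:T]

rng = np.random.default_rng(3)
y = np.arange(12.0)
yb = block_bootstrap(y, block_len=4, rng=rng)
print(len(yb) == len(y))
```

Repeating the resampling step and recomputing the test statistic on each resample yields the bootstrap distribution used in place of the distorted asymptotic one.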

18.
The paper describes two automatic model selection algorithms, RETINA and PcGets, briefly discussing how the algorithms work and what their performance claims are. RETINA's Matlab implementation of the code is explained, then the program is compared with PcGets on the data in Perez‐Amaral, Gallo and White (2005, Econometric Theory, Vol. 21, pp. 262–277), ‘A Comparison of Complementary Automatic Modelling Methods: RETINA and PcGets’, and Hoover and Perez (1999, Econometrics Journal, Vol. 2, pp. 167–191), ‘Data Mining Reconsidered: Encompassing and the General‐to‐specific Approach to Specification Search’. Monte Carlo simulation results assess the null and non‐null rejection frequencies of the RETINA and PcGets model selection algorithms in the presence of nonlinear functions.

19.
The inverse normal method, which is used to combine P‐values from a series of statistical tests, requires independence of single test statistics in order to obtain asymptotic normality of the joint test statistic. The paper discusses the modification by Hartung (1999, Biometrical Journal, Vol. 41, pp. 849–855), which is designed to allow for a certain correlation matrix of the transformed P‐values. First, the modified inverse normal method is shown here to be valid with more general correlation matrices. Secondly, a necessary and sufficient condition for (asymptotic) normality is provided, using the copula approach. Thirdly, applications to panels of cross‐correlated time series, stationary as well as integrated, are considered. The behaviour of the modified inverse normal method is quantified by means of Monte Carlo experiments.
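A minimal sketch of the inverse normal method, with a common‐correlation variance adjustment in the spirit of Hartung's modification (the constant‐correlation formula below is a textbook simplification assumed for illustration, not the paper's more general correlation matrices):

```python
from statistics import NormalDist
import math

def inverse_normal(pvals, rho=0.0):
    """Combine p-values by the inverse normal method.  rho is an assumed
    common correlation of the probits (rho = 0 gives the classical
    independent-tests statistic); this is a simplified sketch."""
    nd = NormalDist()
    z = [nd.inv_cdf(p) for p in pvals]        # probit transforms
    n = len(z)
    # Var(sum of z_i) = n + n(n-1)*rho under a common correlation rho
    return sum(z) / math.sqrt(n + n * (n - 1) * rho)

pvals = [0.01, 0.04, 0.02, 0.03]
t_indep = inverse_normal(pvals)               # independence assumed
t_corr = inverse_normal(pvals, rho=0.5)       # allowing correlation
# Positive correlation inflates the variance, shrinking the statistic
# towards zero and making the combined evidence less significant.
print(t_corr > t_indep)
```

Comparing the two statistics against the standard normal critical value shows how ignoring cross‐correlation overstates the joint evidence against the null.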

20.
In this article, we investigate the behaviour of the stationarity tests proposed by Müller [Journal of Econometrics (2005) Vol. 128, pp. 195–213] and Harris et al. [Econometric Theory (2007) Vol. 23, pp. 355–363] under uncertainty over the trend and/or initial condition. As different tests are efficient for different magnitudes of the local trend and initial condition, following Harvey et al. [Journal of Econometrics (2012) Vol. 169, pp. 188–195], we propose a decision rule based on the rejection of the null hypothesis by multiple tests. Additionally, we propose a modification of this decision rule that relies on additional information about the magnitudes of the local trend and/or the initial condition obtained through pre‐testing. The resulting modification has satisfactory size properties under both types of uncertainty.
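A decision rule of this kind can be sketched as a union of rejections: run each test and reject the null if any statistic exceeds its critical value after a common scaling that restores the nominal size of the joint procedure. The scaling constant `xi` below is illustrative, not the tabulated value from the literature:

```python
def union_of_rejections(stats, crit_vals, xi=1.1):
    """Union-of-rejections rule for upper-tail tests: reject if any
    statistic exceeds its critical value scaled up by xi, a constant
    chosen so the joint procedure keeps the nominal size (xi here is
    an illustrative placeholder)."""
    return any(s > xi * cv for s, cv in zip(stats, crit_vals))

# Two tests: only the second rejects after the size-corrected scaling,
# but that is enough for the joint rule to reject.
print(union_of_rejections([0.40, 0.80], [0.46, 0.60]))
```

The pre‐testing modification described in the abstract would, in this framing, first estimate the magnitudes of the local trend and initial condition and then restrict attention to the tests that are efficient in the indicated region.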

