Similar Literature
20 similar articles found (search time: 15 ms)
1.
Bias-corrected prewhitened HAC methods for spurious regression between stationary processes   (Times cited: 1; self-citations: 0; citations by others: 1)
Starting from the finite-sample bias of the traditional prewhitened HAC method, we propose an autoregressive-parameter bias correction and a residual-adjustment method to reduce the bias of prewhitened HAC, and thereby lower the probability of spurious regression between mutually independent stationary processes. A series of Monte Carlo simulations shows the following. First, the corrected prewhitened HAC methods do reduce the spurious-regression probability, and the autoregressive-parameter bias correction reduces it by a much larger margin than the residual adjustment. Second, relative to the homoskedastic case, the prewhitened HAC method delivers a lower spurious-regression probability in regressions with GARCH-type heteroskedasticity. Third, when the data follow an AR(2) process, the prewhitened HAC method has a lower spurious-regression probability than for the corresponding AR(1) process with the same persistence; in regressions on autoregressive processes of order higher than 2, however, the residual-adjusted prewhitened HAC has the advantage. In large samples (T ≥ 500) the spurious-regression probability of the AR-parameter-corrected prewhitened HAC method is close to the nominal test level, whereas the residual-adjusted prewhitened HAC shows a slight downward size distortion.
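The mechanism in item 1 can be illustrated with a small Monte Carlo: regress one stationary AR(1) process on an independent one and count how often a HAC-based t-test spuriously rejects. The sketch below uses a plain Bartlett-kernel (Newey-West) HAC t-statistic, not the paper's prewhitened or bias-corrected variants; the AR coefficient 0.9, the sample size and the bandwidth are illustrative assumptions.

```python
import numpy as np

def newey_west_tstat(y, x, lags):
    """t-statistic for the slope in an OLS regression of y on x,
    using a Bartlett-kernel (Newey-West) HAC variance estimate."""
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - X @ beta
    Xu = X * u[:, None]                          # score contributions
    S = Xu.T @ Xu / n                            # lag-0 covariance term
    for l in range(1, lags + 1):
        w = 1 - l / (lags + 1)                   # Bartlett weight
        G = Xu[l:].T @ Xu[:-l] / n
        S += w * (G + G.T)
    Qinv = np.linalg.inv(X.T @ X / n)
    V = Qinv @ S @ Qinv / n                      # HAC covariance of beta-hat
    return beta[1] / np.sqrt(V[1, 1])

rng = np.random.default_rng(0)
T, reps, rej = 200, 500, 0
for _ in range(reps):
    e1, e2 = rng.standard_normal((2, T))
    y, x = np.zeros(T), np.zeros(T)
    for t in range(1, T):                        # two mutually independent AR(1) series
        y[t] = 0.9 * y[t - 1] + e1[t]
        x[t] = 0.9 * x[t - 1] + e2[t]
    if abs(newey_west_tstat(y, x, lags=4)) > 1.96:
        rej += 1
print("spurious rejection rate:", rej / reps)    # typically well above the nominal 0.05
```

With persistent series the uncorrected HAC t-test over-rejects the (true) null of no relationship, which is the spurious-regression probability the paper's corrections aim to bring back toward the nominal level.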

2.
Feenstra and Hanson [NBER Working Paper No. 6052 (1997)] propose a procedure to correct the standard errors in a two‐stage regression with generated dependent variables. Their method has subsequently been used in two‐stage mandated wage models [Feenstra and Hanson, Quarterly Journal of Economics (1999) Vol. 114, pp. 907–940; Haskel and Slaughter, The Economic Journal (2001) Vol. 111, pp. 163–187; Review of International Economics (2003) Vol. 11, pp. 630–650] and for the estimation of the sector bias of skill‐biased technological change [Haskel and Slaughter, European Economic Review (2002) Vol. 46, pp. 1757–1783]. Unfortunately, the proposed correction is negatively biased (sometimes even resulting in negative estimated variances) and therefore leads to overestimation of the inferred significance. We present an unbiased correction procedure and apply it to the models reported by Feenstra and Hanson (1999) and Haskel and Slaughter (2002).

3.
Dickey and Fuller [Econometrica (1981) Vol. 49, pp. 1057–1072] suggested unit‐root tests for an autoregressive model with a linear trend conditional on an initial observation. Here, we analyse a slightly different model with a random initial value in which nuisance parameters can easily be eliminated by an invariant reduction of the model. We show that invariance arguments can also be used when comparing power within a conditional model. In the context of the conditional model, the Dickey–Fuller test is shown to be more stringent than a number of unit‐root tests motivated by models with random initial value. The power of the Dickey–Fuller test can be improved by making assumptions about the initial value. The practitioner therefore has to trade off robustness against power, as assumptions about initial values are hard to test but can give more power.
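For reference, the Dickey-Fuller regression with a linear trend that item 3 starts from can be sketched in a few lines. This is the textbook conditional-model test statistic, not the paper's invariant analysis; the lag-free regression and the simulated series are illustrative assumptions.

```python
import numpy as np

def df_trend_stat(y):
    """Dickey-Fuller tau statistic from the regression
    dy_t = mu + b*t + (rho - 1)*y_{t-1} + e_t  (no augmentation lags)."""
    dy = np.diff(y)
    n = len(dy)
    X = np.column_stack([np.ones(n), np.arange(1, n + 1), y[:-1]])
    beta = np.linalg.lstsq(X, dy, rcond=None)[0]
    u = dy - X @ beta
    s2 = u @ u / (n - 3)                             # residual variance
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[2, 2])  # std. error of (rho - 1)
    return beta[2] / se

rng = np.random.default_rng(1)
rw = np.cumsum(rng.standard_normal(300))   # pure random walk (unit root)
# A unit root is not rejected when the statistic exceeds the
# large-sample 5% trend-case critical value of about -3.41.
print(df_trend_stat(rw))
```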

4.
Structural vector autoregressive (SVAR) models have emerged as a dominant research strategy in empirical macroeconomics, but suffer from the large number of parameters employed and the resulting estimation uncertainty associated with their impulse responses. In this paper, we propose general‐to‐specific (Gets) model selection procedures to overcome these limitations. It is shown that single‐equation procedures are generally efficient for the reduction of recursive SVAR models. The small‐sample properties of the proposed reduction procedure (as implemented using PcGets) are evaluated in a realistic Monte Carlo experiment. The impulse responses generated by the selected SVAR are found to be more precise and accurate than those of the unrestricted VAR. The proposed reduction strategy is then applied to the US monetary system considered by Christiano, Eichenbaum and Evans (Review of Economics and Statistics, Vol. 78, pp. 16–34, 1996). The results are consistent with the Monte Carlo evidence and question the validity of the impulse responses generated by the full system.

5.
In this paper, we develop a set of new persistence change tests which are similar in spirit to those of Kim [Journal of Econometrics (2000) Vol. 95, pp. 97–116], Kim et al. [Journal of Econometrics (2002) Vol. 109, pp. 389–392] and Busetti and Taylor [Journal of Econometrics (2004) Vol. 123, pp. 33–66]. While the existing tests are based on ratios of sub‐sample Kwiatkowski et al. [Journal of Econometrics (1992) Vol. 54, pp. 159–178]‐type statistics, our proposed tests are based on the corresponding functions of sub‐sample implementations of the well‐known maximal recursive‐estimates and re‐scaled range fluctuation statistics. Our statistics are used to test the null hypothesis that a time series displays constant trend stationarity [I(0)] behaviour against the alternative of a change in persistence either from trend stationarity to difference stationarity [I(1)], or vice versa. Representations for the limiting null distributions of the new statistics are derived and both finite‐sample and asymptotic critical values are provided. The consistency of the tests against persistence change processes is also demonstrated. Numerical evidence suggests that our proposed tests provide a useful complement to the extant persistence change tests. An application of the tests to US inflation rate data is provided.

6.
Lanne and Saikkonen [Oxford Bulletin of Economics and Statistics (2011a) Vol. 73, pp. 581–592] show that the generalized method of moments (GMM) estimator is inconsistent when the instruments are lags of variables that admit a non‐causal autoregressive representation. This article argues that this inconsistency depends on distributional assumptions that do not always hold. In particular, under rational expectations the GMM estimator is found to be consistent. This result is derived in a linear context and illustrated by simulation of a nonlinear asset pricing model.

7.
In this paper we construct output gap and inflation predictions using a variety of dynamic stochastic general equilibrium (DSGE) sticky price models. Predictive density accuracy tests related to the test discussed in Corradi and Swanson [Journal of Econometrics (2005a), forthcoming], as well as predictive accuracy tests due to Diebold and Mariano [Journal of Business and Economic Statistics (1995), Vol. 13, pp. 253–263] and West [Econometrica (1996), Vol. 64, pp. 1067–1084], are used to compare the alternative models. A number of simple time‐series prediction models (such as autoregressive and vector autoregressive (VAR) models) are additionally used as strawman models. Given that DSGE model restrictions are routinely nested within VAR models, the addition of our strawman models allows us to indirectly assess the usefulness of imposing theoretical restrictions implied by DSGE models on unrestricted econometric models. With respect to predictive density evaluation, our results suggest that the standard sticky price model discussed in Calvo [Journal of Monetary Economics (1983), Vol. 12, pp. 383–398] is not outperformed by the same model augmented either with information or indexation, when used to predict the output gap. On the other hand, there are clear gains to using the more recent models when predicting inflation. Results based on mean square forecast error analysis are less clear‐cut: although the standard sticky price model fares best at our longest forecast horizon of 3 years, it performs relatively poorly at shorter horizons. When the strawman time‐series models are added to the picture, we find that the DSGE models still fare very well, often coming out on top in our forecast competitions, suggesting that theoretical macroeconomic restrictions yield useful additional information for forming macroeconomic forecasts.

8.
We re‐examine studies of cross‐country growth regressions by Levine and Renelt (American Economic Review, Vol. 82, 1992, pp. 942–963) and Sala‐i‐Martin (American Economic Review, Vol. 87, 1997a, pp. 178–183; Economics Department, Columbia University, 1997b). In a realistic Monte Carlo experiment, their variants of Edward Leamer's extreme‐bounds analysis are compared with a cross‐sectional version of the general‐to‐specific search methodology associated with the LSE approach to econometrics. Levine and Renelt's method has low size and low power, while Sala‐i‐Martin's method has high size and high power. The general‐to‐specific methodology is shown to have near‐nominal size and high power. Sala‐i‐Martin's method and the general‐to‐specific method are then applied to the actual data from Sala‐i‐Martin's original study.

9.
In this paper, we introduce several test statistics testing the null hypothesis of a random walk (with or without drift) against models that accommodate a smooth nonlinear shift in the level, the dynamic structure and the trend. We derive analytical limiting distributions for all the tests. The power performance of the tests is compared with that of the unit‐root tests by Phillips and Perron [Biometrika (1988), Vol. 75, pp. 335–346], and Leybourne, Newbold and Vougas [Journal of Time Series Analysis (1998), Vol. 19, pp. 83–97]. In the presence of a gradual change in the deterministics and in the dynamics, our tests are superior in terms of power.

10.
In this article, we study the size distortions of the KPSS test for stationarity when serial correlation is present and samples are small‐ and medium‐sized. It is argued that two distinct sources of the size distortions can be identified. The first source is the finite‐sample distribution of the long‐run variance estimator used in the KPSS test, while the second source is the serial correlation not captured by the long‐run variance estimator because the truncation lag parameter is chosen too small. When the relative importance of the two sources is studied, it is found that the size of the KPSS test can be reasonably well controlled if the finite‐sample distribution of the KPSS test statistic, conditional on the time‐series dimension and the truncation lag parameter, is used. Hence, finite‐sample critical values, which can be applied to reduce the size distortions of the KPSS test, are supplied. When the power of the test is studied, it is found that the price paid for the increased size control is a lower raw power against a non‐stationary alternative hypothesis.
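Item 10's remedy, finite-sample critical values conditional on the sample size and the truncation lag, can be sketched by simulating the KPSS statistic under an i.i.d. null. The statistic below is the standard level-stationarity KPSS with a Bartlett long-run variance; the chosen T, lag and replication count are illustrative assumptions.

```python
import numpy as np

def kpss_level(y, lags):
    """Level-stationarity KPSS statistic with a Bartlett-kernel
    long-run variance estimate."""
    e = y - y.mean()
    n = len(e)
    lrv = e @ e / n
    for l in range(1, lags + 1):
        lrv += 2 * (1 - l / (lags + 1)) * (e[l:] @ e[:-l]) / n
    S = np.cumsum(e)                        # partial sums of demeaned data
    return (S @ S) / (n ** 2 * lrv)

# Simulated 5% critical value conditional on T and the truncation lag,
# under an i.i.d. null; the asymptotic 5% value is 0.463.
rng = np.random.default_rng(3)
T, lags = 50, 4
stats = [kpss_level(rng.standard_normal(T), lags) for _ in range(2000)]
print(np.quantile(stats, 0.95))
```

Comparing the simulated quantile with the asymptotic 0.463 shows how much the finite-sample distribution, for a given T and lag, departs from the limit used by the standard test.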

11.
In this article, we derive the local asymptotic power function of the unit root test proposed by Breitung [Journal of Econometrics (2002) Vol. 108, pp. 343–363]. Breitung's test is a non‐parametric test and is free of nuisance parameters. We compare the local power curve of Breitung's test with that of the Dickey–Fuller test. This comparison is in fact a quantification of the loss of power that one has to accept when applying a non‐parametric test.

12.
This paper deals with the estimation of the long-run variance of a stationary sequence. We extend the usual Bartlett-kernel heteroskedasticity and autocorrelation consistent (HAC) estimator to deal with long memory and antipersistence. We then derive asymptotic expansions for this estimator and the memory and autocorrelation consistent (MAC) estimator introduced by Robinson [Robinson, P. M., 2005. Robust covariance matrix estimation: HAC estimates with long memory/antipersistence correction. Econometric Theory 21, 171–180]. We offer a theoretical explanation for the sensitivity of HAC to the bandwidth choice, a feature which has been observed in the special case of short memory. Using these analytical results, we determine the MSE-optimal bandwidth rates for each estimator. We analyze by simulations the finite-sample performance of HAC and MAC estimators, and the coverage probabilities for the studentized sample mean, giving practical recommendations for the choice of bandwidths.

13.
We introduce a modified conditional logit model that takes account of uncertainty associated with mis‐reporting in revealed preference experiments estimating willingness‐to‐pay (WTP). Like Hausman et al. [Journal of Econometrics (1998) Vol. 87, pp. 239–269], our model captures the extent and direction of uncertainty by respondents. Using a Bayesian methodology, we apply our model to a choice modelling (CM) data set examining UK consumer preferences for non‐pesticide food. We compare the results of our model with the Hausman model. WTP estimates are produced for different groups of consumers and we find that modified estimates of WTP, that take account of mis‐reporting, are substantially revised downwards. We find a significant proportion of respondents mis‐reporting in favour of the non‐pesticide option. Finally, with this data set, Bayes factors suggest that our model is preferred to the Hausman model.

14.
Panel unit‐root and no‐cointegration tests that rely on cross‐sectional independence of the panel units experience severe size distortions when this assumption is violated, as has, for example, been shown by Banerjee, Marcellino and Osbat [Econometrics Journal (2004), Vol. 7, pp. 322–340; Empirical Economics (2005), Vol. 30, pp. 77–91] via Monte Carlo simulations. Several studies have recently addressed this issue for panel unit‐root tests using a common factor structure to model the cross‐sectional dependence, but not much work has been done yet for panel no‐cointegration tests. This paper proposes a model for panel no‐cointegration using an unobserved common factor structure, following the study by Bai and Ng [Econometrica (2004), Vol. 72, pp. 1127–1177] for panel unit roots. We distinguish two important cases: (i) the case when the non‐stationarity in the data is driven by a reduced number of common stochastic trends, and (ii) the case where we have common and idiosyncratic stochastic trends present in the data. We discuss the homogeneity restrictions on the cointegrating vectors resulting from the presence of common factor cointegration. Furthermore, we study the asymptotic behaviour of some existing residual‐based panel no‐cointegration tests, as suggested by Kao [Journal of Econometrics (1999), Vol. 90, pp. 1–44] and Pedroni [Econometric Theory (2004a), Vol. 20, pp. 597–625]. Under the data‐generating processes (DGP) used, the test statistics are no longer asymptotically normal, and convergence occurs at rate T rather than at the faster rate attained for independent panels. We then examine the possibilities of testing for various forms of no‐cointegration by extracting the common factors and individual components from the observed data directly and then testing for no‐cointegration using residual‐based panel tests applied to the defactored data.

15.
This note presents some properties of the stochastic unit‐root processes developed in Granger and Swanson [Journal of Econometrics (1997) Vol. 80, pp. 35–62] and Leybourne, McCabe and Tremayne [Journal of Business & Economic Statistics (1996) Vol. 14, pp. 435–446] that have not been discussed in the literature, or only discussed implicitly.

16.
A dynamic multi-level factor model with possible stochastic time trends is proposed. In the model, long-range dependence and short memory dynamics are allowed in global and local common factors as well as model innovations. Estimation of global and local common factors is performed on the prewhitened series, for which the prewhitening parameter is estimated semiparametrically from the cross-sectional and local average of the observable series. Employing canonical correlation analysis and a sequential least-squares algorithm on the prewhitened series, the resulting multi-level factor estimates have centered asymptotic normal distributions under certain rate conditions depending on the bandwidth and cross-section size. Asymptotic results for common components are also established. The selection of the number of global and local factors is discussed. The methodology is shown to lead to good small-sample performance via Monte Carlo simulations. The method is then applied to the Nord Pool electricity market for the analysis of price comovements among different regions within the power grid. The global factor is identified to be the system price, and fractional cointegration relationships are found between local prices and the system price, motivating a long-run equilibrium relationship. Two forecasting exercises are then discussed.

17.
Samples with overlapping observations are used for the study of uncovered interest rate parity, the predictability of long‐run stock returns and the credibility of exchange rate target zones. This paper quantifies the biases in parameter estimation and size distortions of hypothesis tests of overlapping linear and polynomial autoregressions, which have been used in target‐zone applications. We show that both estimation bias and size distortions of hypothesis tests are generally larger if the amount of overlap is larger, the sample size is smaller, and the autoregressive root of the data‐generating process is closer to unity. In particular, the estimates are biased in a way that makes it more likely that the predictions of the Bertola–Svensson model will be supported. Size distortions of various tests also turn out to be substantial even when using a heteroskedasticity and autocorrelation‐consistent covariance matrix.
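The estimation bias that item 17 quantifies can be seen in a minimal simulation of an overlapping long-horizon regression: k-period changes of a near-unit-root AR(1) regressed on the level. The AR coefficient, overlap length and sample size below are illustrative assumptions, not the paper's designs.

```python
import numpy as np

rng = np.random.default_rng(2)
T, k, reps = 100, 12, 1000
slopes = []
for _ in range(reps):
    e = rng.standard_normal(T + k)
    y = np.zeros(T + k)
    for t in range(1, T + k):
        y[t] = 0.95 * y[t - 1] + e[t]     # near-unit-root AR(1)
    d = y[k:] - y[:-k]                    # k-period overlapping changes
    x = y[:-k] - y[:-k].mean()
    slopes.append((x @ (d - d.mean())) / (x @ x))  # OLS slope of d on the level
# Small-sample mean of the long-horizon slope vs. its population value rho^k - 1
print(np.mean(slopes), 0.95 ** k - 1)
```

The gap between the Monte Carlo mean and the population slope is the small-sample bias; it grows with the overlap k, shrinks with T, and widens as the root approaches unity, in line with the abstract.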

18.
Kim, Belaire‐Franch and Amador [Journal of Econometrics (2002) Vol. 109, pp. 389–392] and Busetti and Taylor [Journal of Econometrics (2004) Vol. 123, pp. 33–66] present different percentiles for the same mean score test statistic. We find that the difference, a factor of 0.6, is due to systematically different sample analogues. Furthermore, we clarify which sample versions of the mean‐exponential test statistic should be correctly used with which set of critical values. At the same time, we correct some of the limiting distributions found in the literature.

19.
This paper proposes a test of the null hypothesis of stationarity that is robust to the presence of fat-tailed errors. The test statistic is a modified version of the so-called KPSS statistic. The modified statistic uses the "sign" of the data minus the sample median, whereas KPSS used deviations from means. This "indicator" KPSS statistic has the same limit distribution as the standard KPSS statistic under the null, without relying on assumptions about moments, but a different limit distribution under unit root alternatives. The indicator test has lower power than standard KPSS when tails are thin, but higher power when tails are fat.
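Item 19's "indicator" transform is simple to state in code: replace the series by the sign of its deviation from the sample median and apply the usual KPSS statistic. The sketch below assumes a Cauchy (fat-tailed) example series and an arbitrary lag choice; it is an illustration of the transform, not the paper's full testing procedure.

```python
import numpy as np

def kpss_level(y, lags):
    """Level-stationarity KPSS statistic with a Bartlett-kernel
    long-run variance estimate."""
    e = y - y.mean()
    n = len(e)
    lrv = e @ e / n
    for l in range(1, lags + 1):
        lrv += 2 * (1 - l / (lags + 1)) * (e[l:] @ e[:-l]) / n
    S = np.cumsum(e)                      # partial sums of demeaned data
    return (S @ S) / (n ** 2 * lrv)

rng = np.random.default_rng(4)
y = rng.standard_cauchy(400)              # stationary but fat-tailed series
ind = np.sign(y - np.median(y))           # "indicator" transform: sign of deviation from median
print(kpss_level(y, 5), kpss_level(ind, 5))
```

The indicator series is bounded by construction, so its statistic does not depend on the existence of moments of the original data, which is the source of the robustness claimed in the abstract.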

20.
In this article, we investigate the behaviour of stationarity tests proposed by Müller [Journal of Econometrics (2005) Vol. 128, pp. 195–213] and Harris et al. [Econometric Theory (2007) Vol. 23, pp. 355–363] under uncertainty over the trend and/or initial condition. As different tests are efficient for different magnitudes of local trend and initial condition, following Harvey et al. [Journal of Econometrics (2012) Vol. 169, pp. 188–195], we propose a decision rule based on the rejections of the null hypothesis across the multiple tests. Additionally, we propose a modification of this decision rule, relying on additional information about the magnitudes of the local trend and/or the initial condition that is obtained through pre‐testing. The resulting modification has satisfactory size properties under both types of uncertainty.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号