Similar articles
Found 20 similar articles (search time: 843 ms)
1.
In this paper, we propose a simple extension to the panel case of the covariate-augmented Dickey–Fuller (CADF) test for unit roots developed in Hansen (1995). The panel test we propose is based on a P-value combination approach that takes cross-section dependence into account. We show that the test has good size properties and gives power gains with respect to other popular panel approaches. An empirical application to international data, testing the purchasing power parity (PPP) hypothesis, is carried out for illustration.
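The P-value combination idea behind such panel tests can be illustrated with a Fisher-type statistic. This is a minimal sketch, not the authors' dependence-adjusted procedure: it assumes independent cross-section units, and the unit-level p-values below are hypothetical.

```python
import math

def fisher_combination(pvalues):
    """Fisher-type combination statistic: -2 * sum(log p_i).
    Under the null and independence across units, it is
    chi-square distributed with 2N degrees of freedom."""
    return -2.0 * sum(math.log(p) for p in pvalues)

# Hypothetical unit-level CADF p-values for N = 5 countries
pvals = [0.04, 0.20, 0.11, 0.35, 0.08]
stat = fisher_combination(pvals)

# chi-square(10) 5% critical value is about 18.31
reject = stat > 18.307
```

Cross-section dependence breaks the chi-square calibration, which is why the panel version in the abstract must adjust the combination step.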

2.
This article presents a formal explanation of the forecast combination puzzle, that simple combinations of point forecasts are repeatedly found to outperform sophisticated weighted combinations in empirical applications. The explanation lies in the effect of finite-sample error in estimating the combining weights. A small Monte Carlo study and a reappraisal of an empirical study by Stock and Watson [Federal Reserve Bank of Richmond Economic Quarterly (2003) Vol. 89/3, pp. 71–90] support this explanation. The Monte Carlo evidence, together with a large-sample approximation to the variance of the combining weight, also supports the popular recommendation to ignore forecast error covariances in estimating the weight.
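The mechanics of the puzzle can be reproduced in a few lines: with a short evaluation sample, the estimated "optimal" weight carries sampling error that the simple average avoids. A minimal sketch on simulated data (the two forecasts, the sample size and the error variances are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 40                                   # short sample: weight-estimation error matters
y = rng.normal(size=T)                   # target series
f1 = y + rng.normal(scale=1.0, size=T)   # two unbiased competing forecasts
f2 = y + rng.normal(scale=1.2, size=T)

# Simple average: no weights to estimate
simple = 0.5 * (f1 + f2)

# Weight estimated from past squared errors, ignoring the error covariance
# (the recommendation the article's large-sample argument supports)
e1, e2 = y - f1, y - f2
w = np.mean(e2**2) / (np.mean(e1**2) + np.mean(e2**2))
weighted = w * f1 + (1 - w) * f2
```

Repeating this over many replications and comparing out-of-sample mean squared errors is what the article's Monte Carlo study does more carefully.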

3.
This paper considers estimation of factor-augmented panel data regression models. One of the most popular approaches towards this end is the common correlated effects (CCE) estimator of Pesaran (2006, Estimation and inference in large heterogeneous panels with a multifactor error structure, Econometrica, 74, 967–1012). For the pooled version of this estimator to be consistent, either the number of observables must be larger than the number of unobserved common factors, or the factor loadings must be distributed independently of each other. This is a problem in the typical application involving only a small number of regressors and/or correlated loadings. The current paper proposes a simple extension to the CCE procedure by which both requirements can be relaxed. The CCE approach is based on taking the cross-section average of the observables as an estimator of the common factors. The idea put forth in the current paper is to consider not only the average but also other cross-section combinations. Asymptotic properties of the resulting combination-augmented CCE (C3E) estimator are provided and tested in small samples using both simulated and real data.
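The core CCE mechanics, augmenting each unit's regression with cross-section averages as factor proxies, can be sketched as follows. This is a stylized mean-group version on simulated data, not the paper's C3E extension; the data-generating process (one factor, true slope 0.5, loadings centred at one) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 30, 50
f = rng.normal(size=T)                          # one unobserved common factor
lam = 1.0 + 0.5 * rng.normal(size=N)            # loadings in the regressor
x = rng.normal(size=(N, T)) + np.outer(lam, f)
beta = 0.5
y = beta * x + np.outer(rng.normal(size=N), f) + rng.normal(scale=0.3, size=(N, T))

# CCE: cross-section averages of the observables proxy the common factor
ybar, xbar = y.mean(axis=0), x.mean(axis=0)

def cce_unit(yi, xi):
    """Unit-level regression augmented with the cross-section averages."""
    Z = np.column_stack([xi, np.ones(T), ybar, xbar])
    coef, *_ = np.linalg.lstsq(Z, yi, rcond=None)
    return coef[0]

b_hat = np.mean([cce_unit(y[i], x[i]) for i in range(N)])  # mean-group CCE
```

The paper's point is that when the averages alone cannot span the factor space, other cross-section combinations can be added in the same way.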

4.
Within the inferential context of predicting a distribution of potential outcomes P[y(t)] under a uniform treatment assignment tT, this paper deals with partial identification of the α-quantile of the distribution of interest Qα[y(t)] under relatively weak and credible monotonicity-type assumptions on the individual response functions and the population selection process. On the theoretical side, the paper adds to the existing results on non-parametric bounds on quantiles with no prior information and under monotone treatment response (MTR) by introducing and studying the identifying properties of α-quantile monotone treatment selection (α-QMTS), α-quantile monotone instrumental variables (α-QMIV) and their combinations. The main result parallels that for the mean; MTR and α-QMTS aid identification in a complementary fashion, so that combining them greatly increases identification power. The theoretical results are illustrated through an empirical application on the Italian returns to educational qualifications. Bounds on several quantiles of ln(wage) under different qualifications and on quantile treatment effects (QTE) are estimated and compared with parametric quantile regression (α-QR) and α-IVQR estimates from the same sample. Remarkably, the α-QMTS & MTR upper bounds on the α-QTE of a college degree versus elementary education imply smaller year-by-year returns than the corresponding α-IVQR point estimates. Copyright © 2010 John Wiley & Sons, Ltd.

5.
This article considers the problem of testing for cross-section independence in limited dependent variable panel data models. It derives a Lagrange multiplier (LM) test and shows that, in terms of the generalized residuals of Gourieroux et al. (1987), it reduces to the LM test of Breusch and Pagan (1980). Because of the tendency of the LM test to over-reject in panels with large N (cross-section dimension), we also consider the application of the cross-section dependence test (CD) proposed by Pesaran (2004). In Monte Carlo experiments it emerges that for most combinations of N and T the CD test is correctly sized, whereas the validity of the LM test requires T (time series dimension) to be quite large relative to N. We illustrate the cross-sectional independence tests with an application to a probit panel data model of roll-call votes in the US Congress and find that the votes display a significant degree of cross-section dependence.
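Pesaran's CD statistic is simple to compute from the pairwise correlations of residuals. A sketch under the standard formula CD = sqrt(2T / (N(N−1))) · Σ_{i<j} ρ̂_ij, applied to simulated independent and factor-driven panels (both panels are invented for illustration):

```python
import numpy as np

def pesaran_cd(resid):
    """Pesaran (2004) CD statistic from an N x T residual array.
    Under cross-section independence it is approximately N(0, 1)."""
    N, T = resid.shape
    R = np.corrcoef(resid)                 # N x N pairwise correlations
    iu = np.triu_indices(N, k=1)           # upper-triangle pairs i < j
    return np.sqrt(2.0 * T / (N * (N - 1))) * R[iu].sum()

rng = np.random.default_rng(2)
cd_indep = pesaran_cd(rng.normal(size=(20, 60)))       # independent units

f = rng.normal(size=60)                                # common factor
cd_dep = pesaran_cd(rng.normal(size=(20, 60)) + f)     # cross-sectionally dependent
```

For the probit model in the abstract, the same statistic would be applied to generalized residuals rather than raw ones.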

6.
Dickey and Fuller [Econometrica (1981) Vol. 49, pp. 1057–1072] suggested unit-root tests for an autoregressive model with a linear trend conditional on an initial observation. This paper analyses a slightly different model with a random initial value, in which nuisance parameters can easily be eliminated by an invariant reduction of the model. We show that invariance arguments can also be used when comparing power within a conditional model. In the context of the conditional model, the Dickey–Fuller test is shown to be more stringent than a number of unit-root tests motivated by models with a random initial value. The power of the Dickey–Fuller test can be improved by making assumptions about the initial value. The practitioner therefore has to trade off robustness against power, as assumptions about initial values are hard to test but can yield more power.
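For reference, the Dickey–Fuller regression with a linear trend can be sketched directly. This is a textbook implementation (Δy_t regressed on a constant, a trend and y_{t−1}), not the invariant tests discussed in the abstract; the simulated random walk is purely illustrative.

```python
import numpy as np

def df_trend_stat(y):
    """t-statistic on rho in  dy_t = c + b*t + rho*y_{t-1} + e_t
    (Dickey-Fuller test with a linear trend)."""
    dy = np.diff(y)
    ylag = y[:-1]
    trend = np.arange(1, len(y))
    X = np.column_stack([np.ones_like(ylag), trend, ylag])
    coef, rss = np.linalg.lstsq(X, dy, rcond=None)[:2]
    s2 = rss[0] / (len(dy) - X.shape[1])          # residual variance
    XtX_inv = np.linalg.inv(X.T @ X)
    return coef[2] / np.sqrt(s2 * XtX_inv[2, 2])  # t-ratio on ylag

rng = np.random.default_rng(3)
rw = np.cumsum(rng.normal(size=200))   # random walk: the null is true
stat = df_trend_stat(rw)
# compare with the Dickey-Fuller tau (trend) 5% critical value, about -3.43
```

Rejection requires the statistic to fall below the non-standard Dickey–Fuller critical value, not the normal one.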

7.
We re-examine studies of cross-country growth regressions by Levine and Renelt (American Economic Review, Vol. 82, 1992, pp. 942–963) and Sala-i-Martin (American Economic Review, Vol. 87, 1997a, pp. 178–183; Economics Department, Columbia University, 1997b). In a realistic Monte Carlo experiment, their variants of Edward Leamer's extreme-bounds analysis are compared with a cross-sectional version of the general-to-specific search methodology associated with the LSE approach to econometrics. Levine and Renelt's method has low size and low power, while Sala-i-Martin's method has high size and high power. The general-to-specific methodology is shown to have near nominal size and high power. Sala-i-Martin's method and the general-to-specific method are then applied to the actual data from Sala-i-Martin's original study.

8.
The well-known lack of power of unit-root tests has often been attributed to the short length of macroeconomic variables and also to data-generating processes (DGPs) departing from the I(1)–I(0) models. This paper shows that by using long spans of annual real gross national product (GNP) and GNP per capita (133 years), high power can be achieved, leading to the rejection of both the unit-root and the trend-stationary hypothesis. More flexible representations are then considered, namely, processes containing structural breaks (SB) and fractional orders of integration (FI). Economic justification for the presence of these features in GNP is provided. It is shown that both FI and SB formulations are in general preferred to the autoregressive integrated moving average (ARIMA) [I(1) or I(0)] formulations. As a novelty in this literature, new techniques are applied to discriminate between FI and SB. It turns out that the FI specification is preferred, implying that GNP and GNP per capita are non-stationary, highly persistent but mean-reverting series. Finally, it is shown that the results are robust when breaks in the deterministic component are allowed for in the FI model. Some macroeconomic implications of these findings are also discussed.

9.
This paper deals with the finite-sample performance of a set of unit-root tests for cross-correlated panels. Most of the available macroeconomic time series cover short time periods. The lack of information, in terms of time observations, implies that univariate tests are not powerful enough to reject the null of a unit root, while panel tests, by exploiting the large number of cross-sectional units, have been shown to be a promising way of increasing the power of unit-root tests. We investigate the finite-sample properties of recently proposed panel unit-root tests for cross-sectionally correlated panels. Specifically, the size and power of Choi's [Econometric Theory and Practice: Frontiers of Analysis and Applied Research: Essays in Honor of Peter C. B. Phillips, Cambridge University Press, Cambridge (2001)], Bai and Ng's [Econometrica (2004), Vol. 72, p. 1127], Moon and Perron's [Journal of Econometrics (2004), Vol. 122, p. 81], and Phillips and Sul's [Econometrics Journal (2003), Vol. 6, p. 217] tests are analysed by a Monte Carlo simulation study. In summary, Moon and Perron's tests show good size and power for different values of T and N, and model specifications. Focusing on Bai and Ng's procedure, the simulation study highlights that the pooled Dickey–Fuller generalized least squares test provides higher power than the pooled augmented Dickey–Fuller test for the analysis of non-stationary properties of the idiosyncratic components. Choi's tests are strongly oversized when the common factor influences the cross-sectional units heterogeneously.

10.
Hinkley (1977) derived two tests for testing the mean of a normal distribution with known coefficient of variation (c.v.) for right alternatives. They are the locally most powerful (LMP) test and the conditional test based on the ancillary statistic for μ. In this paper, the likelihood ratio (LR) and Wald tests are derived for the one- and two-sided alternatives, as well as the two-sided version of the LMP test. The performances of these tests are compared with those of the classical t, sign and Wilcoxon signed rank tests. The latter three tests do not use the information on the c.v. Normal approximation is used to approximate the null distribution of the test statistics except for the t test. Simulation results indicate that all the tests maintain the type I error rates, that is, the attained level is close to the nominal level of significance of the tests. The power functions of the tests are estimated through simulation. The power comparison indicates that for one-sided alternatives the LMP test is the best test, whereas for the two-sided alternatives the LR or the Wald test is the best test. The t, sign and Wilcoxon signed rank tests have lower power than the LMP, LR and Wald tests at various alternative values of μ. The power difference is quite large in several simulation configurations. Further, it is observed that the t, sign and Wilcoxon signed rank tests have considerably lower power even for alternatives far from the null hypothesis when the c.v. is large. To study the sensitivity of the tests to violation of the normality assumption, the type I error rates are estimated on observations from lognormal, gamma and uniform distributions. The newly derived tests maintain the type I error rates for moderate values of the c.v.

11.
In this article, we derive the local asymptotic power function of the unit-root test proposed by Breitung [Journal of Econometrics (2002) Vol. 108, pp. 343–363]. Breitung's test is non-parametric and free of nuisance parameters. We compare the local power curve of Breitung's test with that of the Dickey–Fuller test. This comparison is in fact a quantification of the loss of power that one has to accept when applying a non-parametric test.

12.
In this paper, we introduce several test statistics testing the null hypothesis of a random walk (with or without drift) against models that accommodate a smooth nonlinear shift in the level, the dynamic structure and the trend. We derive analytical limiting distributions for all the tests. The power performance of the tests is compared with that of the unit-root tests by Phillips and Perron [Biometrika (1988), Vol. 75, pp. 335–346], and Leybourne, Newbold and Vougas [Journal of Time Series Analysis (1998), Vol. 19, pp. 83–97]. In the presence of a gradual change in the deterministics and in the dynamics, our tests are superior in terms of power.

13.
Two different approaches seek to resolve the 'puzzling' slow convergence to purchasing power parity (PPP) reported in the literature [see Rogoff (1996), Journal of Economic Literature, Vol. 34]. On the one hand, there are models that consider a non-linear adjustment of the real exchange rate to PPP induced by transaction costs. Such costs imply the presence of a transaction band within which adjustment is too costly to be undertaken. On the other hand, there are models that relax the 'classical' PPP assumption of constant equilibrium real exchange rates. A prominent theory put forward by Balassa (1964, Journal of Political Economy, Vol. 72) and Samuelson (1964, Review of Economics and Statistics, Vol. 46), the BS effect, suggests that a non-constant real exchange rate equilibrium is induced by different productivity growth rates between countries. This paper reconciles these two approaches by considering an exponential smooth transition-in-deviation non-linear adjustment mechanism towards non-constant equilibrium real exchange rates within the EMS (European Monetary System) and effective rates. The equilibrium is proxied, in a theoretically appealing manner, using deterministic trends and the relative price of non-tradables to proxy for BS effects. The empirical results provide further support for the hypothesis that real exchange rates are well described by symmetric, nonlinear processes. Furthermore, the half-life of shocks in such models is found to be dramatically shorter than that obtained in linear models.
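The exponential smooth transition mechanism is easy to visualize: the adjustment weight G(z) = 1 − exp(−γz²) stays near zero inside the transaction band and approaches one for large deviations from equilibrium. A minimal sketch (γ = 1 is an arbitrary illustrative value, not a calibrated parameter):

```python
import numpy as np

def estar_weight(z, gamma=1.0):
    """Exponential smooth transition function G(z) = 1 - exp(-gamma * z^2):
    close to 0 near equilibrium (weak adjustment), close to 1 far away."""
    return 1.0 - np.exp(-gamma * z**2)

# Adjustment speed grows smoothly and symmetrically with the deviation z
small, large = estar_weight(0.1), estar_weight(2.0)
```

Because G depends on z only through z², the implied adjustment is symmetric, matching the abstract's finding that symmetric nonlinear processes describe the data well.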

14.
In systems industries, combinations of components are consumed together to generate user benefits. Arrangements among component providers sometimes limit consumers' ability to mix-and-match components, and such exclusive arrangements have been highly controversial. We examine the competitive and welfare effects of exclusive arrangements among system components in a model of relatively differentiated applications that run on relatively undifferentiated platforms. We show that there is no "One-Market-Power-Rent Theorem." Specifically, exclusive deals with providers of differentiated applications can raise platforms' margins without reducing applications' margins, so that overall industry profits rise. Hence, for a given set of components and prices, exclusive arrangements can reduce consumer welfare by limiting consumer choice and raising equilibrium prices. In some cases, however, exclusivity can raise consumer welfare by increasing the equilibrium number of platforms, which leads to lower prices relative to the monopoly outcome that would prevail absent exclusivity.

15.
This paper extends the cross-sectionally augmented panel unit-root test (CIPS) developed by Pesaran et al. (2013, Journal of Econometrics, Vol. 175, pp. 94–115) to allow for smooth structural changes in the deterministic terms, modelled by a Fourier function. The proposed statistic is called the break augmented CIPS (BCIPS) statistic. We show that the non-standard limiting distribution of the (truncated) BCIPS statistic exists and tabulate its critical values. Monte Carlo experiments indicate that the size and power of the BCIPS statistic are generally satisfactory as long as the number of time periods, T, is not less than fifty. The BCIPS test is then applied to examine the validity of long-run purchasing power parity.
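The Fourier terms used to capture smooth breaks in the deterministic component are just a pair of low-frequency sine and cosine regressors. A sketch of how such terms might be constructed (a single frequency k = 1 and an arbitrary sample size; the paper's test embeds them in the CIPS regressions):

```python
import numpy as np

def fourier_terms(T, k=1):
    """Single-frequency Fourier terms approximating smooth breaks in the
    deterministic component: sin(2*pi*k*t/T) and cos(2*pi*k*t/T)."""
    t = np.arange(1, T + 1)
    return np.column_stack([np.sin(2 * np.pi * k * t / T),
                            np.cos(2 * np.pi * k * t / T)])

F = fourier_terms(100, k=1)   # two deterministic regressors for T = 100
```

A low frequency is deliberate: it lets the deterministic part drift gradually, approximating breaks of unknown number and timing without estimating break dates.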

16.
In this paper we develop a model for the conditional inflated multivariate density of integer count variables on an n-dimensional integer domain. Our modelling framework is based on a copula approach and can be used for a broad set of applications where the primary characteristics of the data are: (i) discrete domain; (ii) the tendency to cluster at certain outcome values; and (iii) contemporaneous dependence. These kinds of properties can be found for high- or ultra-high-frequency data describing the trading process on financial markets. We present a straightforward sampling method for such an inflated multivariate density through the application of an independence Metropolis–Hastings sampling algorithm. We demonstrate the power of our approach by modelling the conditional bivariate density of bid and ask quote changes in a high-frequency setup. We show how to derive the implied conditional discrete density of the bid–ask spread, taking quote clusterings (at multiples of 5 ticks) into account. Copyright © 2009 John Wiley & Sons, Ltd.
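An independence Metropolis–Hastings sampler for an inflated discrete density can be sketched on a toy target. The support, the inflation point and the uniform proposal below are invented for illustration and are far simpler than the paper's copula-based bivariate density:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical inflated discrete target on {-3, ..., 3}: extra mass at 0
support = np.arange(-3, 4)
target = np.exp(-0.5 * support**2).astype(float)
target[3] += 2.0                    # inflate the zero outcome (index 3)
target /= target.sum()

proposal = np.full(7, 1 / 7)        # independence proposal: uniform on support

def imh_sample(n_draws):
    """Independence Metropolis-Hastings on a finite discrete support."""
    idx = 3                          # start the chain at 0
    out = np.empty(n_draws, dtype=int)
    for i in range(n_draws):
        cand = rng.integers(7)       # draw from the proposal, ignoring the state
        accept = target[cand] * proposal[idx] / (target[idx] * proposal[cand])
        if rng.random() < accept:
            idx = cand
        out[i] = support[idx]
    return out

draws = imh_sample(20000)
zero_freq = np.mean(draws == 0)      # should approach the inflated mass at 0
```

Because the proposal does not depend on the current state, only the target's relative mass enters the acceptance ratio, which is what makes this sampler convenient for inflated densities.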

17.
Robust methods for instrumental variable inference have received considerable attention recently. Their analysis has raised a variety of problematic issues such as size/power trade-offs resulting from weak or many instruments. We show that information reduction methods provide a useful and practical solution to this and related problems. Formally, we propose factor-based modifications to three popular weak-instrument-robust statistics, and illustrate their validity asymptotically and in finite samples. Results are derived using asymptotic settings that are commonly used in both the factor and weak-instrument literature. For the Anderson–Rubin statistic, we also provide analytical finite-sample results that do not require any underlying factor structure. An illustrative Monte Carlo study reveals the following. Factor-based tests control size regardless of instrument and factor quality. All factor-based tests are systematically more powerful than their standard counterparts. With informative instruments, and in contrast to standard tests: (i) the power of factor-based tests is not affected by the number of instruments k, even when k is large; and (ii) a weak factor structure does not cost power. An empirical study on a New Keynesian macroeconomic model suggests that our factor-based methods can bridge a number of gaps between structural and statistical modeling. Copyright © 2015 John Wiley & Sons, Ltd.

18.
Heteroskedasticity and autocorrelation consistent (HAC) estimation commonly involves the use of prewhitening filters based on simple autoregressive models. In such applications, small-sample bias in the estimation of autoregressive coefficients is transmitted to the recolouring filter, leading to HAC variance estimates that can be badly biased. The present paper provides an analysis of these issues using asymptotic expansions and simulations. The approach we recommend involves the use of recursive demeaning procedures that mitigate the effects of small-sample autoregressive bias. Moreover, a commonly used restriction rule on the prewhitening estimates (that first-order autoregressive coefficient estimates, or largest eigenvalues, >0.97 be replaced by 0.97) adversely interferes with the power of unit-root and Kwiatkowski, Phillips, Schmidt and Shin [(1992) Journal of Econometrics, Vol. 54, pp. 159–178] (KPSS) tests. We provide a new boundary condition rule that improves the size and power properties of these tests. Some illustrations of the effects of these adjustments on the size and power of KPSS testing are given. Using prewhitened HAC estimates and the new boundary condition rule, the KPSS test is consistent, in contrast to KPSS testing that uses conventional prewhitened HAC estimates [Lee, J. S. (1996) Economics Letters, Vol. 51, pp. 131–137].
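AR(1) prewhitening and recolouring, with a cap on the autoregressive coefficient in the spirit of the restriction rule described above, can be sketched as follows. This is a minimal illustration, not the paper's recursive-demeaning procedure; the 0.97 cap mirrors the conventional rule, and the AR(1) error process is simulated for illustration.

```python
import numpy as np

def prewhitened_lrv(u, bound=0.97):
    """Long-run variance via AR(1) prewhitening and recolouring.
    The AR coefficient is capped at `bound` (the conventional boundary rule)
    to keep the recolouring factor 1/(1-rho)^2 from exploding."""
    rho = (u[:-1] @ u[1:]) / (u[:-1] @ u[:-1])   # OLS AR(1) coefficient
    rho = np.clip(rho, -bound, bound)
    e = u[1:] - rho * u[:-1]                     # prewhitened residuals
    return np.var(e) / (1.0 - rho) ** 2          # recolour

rng = np.random.default_rng(5)
# AR(1) errors with rho = 0.5: true long-run variance = 1/(1 - 0.5)^2 = 4
u = np.zeros(2000)
for t in range(1, 2000):
    u[t] = 0.5 * u[t - 1] + rng.normal()
lrv = prewhitened_lrv(u)
```

The sensitivity of the recoloured estimate to small biases in rho near the cap is exactly the problem the paper's new boundary condition rule addresses.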

19.
20.
With cointegration tests often being oversized under time-varying error variance, it is possible, if not likely, to confuse error variance non-stationarity with cointegration. This paper takes an instrumental variable (IV) approach to establish individual-unit test statistics for no cointegration that are robust to variance non-stationarity. The sign of a fitted departure from long-run equilibrium is used as an instrument when estimating an error-correction model. The resulting IV-based test is shown to follow a chi-square limiting null distribution irrespective of the variance pattern of the data-generating process. In spite of this, the test proposed here has, unlike previous work relying on instrumental variables, competitive local power against sequences of local alternatives in 1/T-neighbourhoods of the null. The standard limiting null distribution motivates using the single-unit tests in a multiple-testing approach for cointegration in multi-country data sets, combining P-values from individual units. Simulations suggest good performance of the single-unit and multiple-testing procedures under various plausible designs of cross-sectional correlation and cross-unit cointegration in the data. An application to the equilibrium relationship between short- and long-term interest rates illustrates the dramatic differences between results of robust and non-robust tests.
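The sign-of-deviation instrument can be illustrated in a stripped-down error-correction regression. This sketch assumes a known cointegrating slope and omits the short-run dynamics, the test statistic itself and the multiple-testing layer; the simulated cointegrated pair is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)
T = 300
x = np.cumsum(rng.normal(size=T))       # I(1) driver
y = 1.0 * x + rng.normal(size=T)        # cointegrated with unit slope
u = y - x                               # equilibrium deviation (slope known here)

# Error-correction regression dy_t = alpha * u_{t-1} + e_t, instrumenting
# u_{t-1} with its sign -- the instrument's scale is unaffected by
# time-varying error variance
dy = np.diff(y)
ulag = u[:-1]
z = np.sign(ulag)
alpha_iv = (z @ dy) / (z @ ulag)        # simple just-identified IV estimator
```

A significantly negative alpha indicates error correction, i.e. evidence against the no-cointegration null; the paper builds a variance-robust chi-square test around this idea.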


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号