Similar Literature
20 similar records found.
1.
We consider pseudo-panel data models constructed from repeated cross sections in which the number of individuals per group is large relative to the number of groups and time periods. First, we show that, when time-invariant group fixed effects are neglected, the OLS estimator does not converge in probability to a constant but rather to a random variable. Second, we show that, while the fixed-effects (FE) estimator is consistent, the usual t statistic is not asymptotically normally distributed, and we propose a new robust t statistic whose asymptotic distribution is standard normal. Third, we propose efficient GMM estimators using the orthogonality conditions implied by grouping and we provide t tests that are valid even in the presence of time-invariant group effects. Our Monte Carlo results show that the proposed GMM estimator is more precise than the FE estimator and that our new t test has good size and is powerful.

2.
We consider the issue of cross-sectional aggregation in nonstationary and heterogeneous panels where each unit cointegrates. We derive asymptotic properties of the aggregate estimate, and necessary and sufficient conditions for cointegration to hold in the aggregate relationship. We then analyze the case when cointegration does not carry through the aggregation process, and we investigate whether the violation of the formal conditions for perfect aggregation can still lead to an aggregate equation that is observationally equivalent to a cointegrated relationship. We derive a measure of the degree of noncointegration of the aggregate relationship and we explore its asymptotic properties. We propose a valid bootstrap approximation of the test. A Monte Carlo exercise evaluates size and power properties of the bootstrap test.

3.
This paper addresses the issue of optimal inference for parameters that are partially identified in models with moment inequalities. There currently exists a variety of inferential methods for use in this setting. However, the question of choosing optimally among contending procedures is unresolved. In this paper, I first consider a canonical large deviations criterion for optimality and show that inference based on the empirical likelihood ratio statistic is optimal. Second, I introduce a new empirical likelihood bootstrap that provides a valid resampling method for moment inequality models and overcomes the implementation challenges that arise as a result of non-pivotal limit distributions. Lastly, I analyze the finite sample properties of the proposed framework using Monte Carlo simulations. The simulation results are encouraging.

4.
This paper proposes exact distribution-free permutation tests for the specification of a non-linear regression model against one or more possibly non-nested alternatives. The new tests may be validly applied to a wide class of models, including models with endogenous regressors and lag structures. These tests build on the well-known J test developed by Davidson and MacKinnon [1981. Several tests for model specification in the presence of alternative hypotheses. Econometrica 49, 781–793] and their exactness holds under broader assumptions than those underlying the conventional J test. The J-type test statistics are used with a randomization or Monte Carlo resampling technique which yields an exact and computationally inexpensive inference procedure. A simulation experiment confirms the theoretical results and also shows the performance of the new procedure under violations of the maintained assumptions. The test procedure developed is illustrated by an application to inflation dynamics.

5.
We investigate the behavior of various standard and modified F, likelihood ratio (LR), and Lagrange multiplier (LM) tests in linear homoskedastic regressions, adapting an alternative asymptotic framework in which the number of regressors and possibly restrictions grows proportionately to the sample size. When the restrictions are not numerous, the rescaled classical test statistics are asymptotically chi-squared, irrespective of whether there are many or few regressors. However, when the restrictions are numerous, standard asymptotic versions of classical tests are invalid. We propose and analyze asymptotically valid versions of the classical tests, including those that are robust to the numerosity of regressors and restrictions. The local power of all asymptotically valid tests under consideration turns out to be equal. The "exact" F test that appeals to critical values of the F distribution is also asymptotically valid and robust to the numerosity of regressors and restrictions.

6.
A new way of computing the Tail Area Influence Function (TAIF) exactly is proposed and a new finite sample robustness measure, based on the TAIF, is introduced. The main properties of this robustness measure are also studied, for both finite and asymptotic sample sizes. Next, a very accurate approximation to the finite sample power function of a test is obtained; this is based on the TAIF plus an iterative procedure. The results are valid when there are no nuisance parameters.

7.
The technique of Monte Carlo (MC) tests [Dwass (1957, Annals of Mathematical Statistics 28, 181–187); Barnard (1963, Journal of the Royal Statistical Society, Series B 25, 294)] provides a simple method for building exact tests from statistics whose finite sample distribution is intractable but can be simulated (when no nuisance parameter is involved). We extend this method in two ways: first, by allowing for MC tests based on exchangeable possibly discrete test statistics; second, by generalizing it to statistics whose null distribution involves nuisance parameters [maximized MC (MMC) tests]. Simplified asymptotically justified versions of the MMC method are also proposed: these provide a simple way of improving standard asymptotics and dealing with nonstandard asymptotics.
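The basic MC test idea in the abstract above (no nuisance parameters) can be sketched as follows. This is a minimal hypothetical illustration of the Dwass/Barnard counting rule, not the paper's implementation; the statistic and sample size are made up.

```python
import numpy as np

def mc_test_pvalue(observed_stat, simulate_stat, n_rep=99, seed=0):
    """Monte Carlo test p-value (no nuisance parameters).

    simulate_stat(rng) must return one draw of the test statistic under
    the null. For a continuous statistic, rejecting when this p-value is
    at most alpha is exact whenever (n_rep + 1) * alpha is an integer.
    """
    rng = np.random.default_rng(seed)
    sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
    # Counting rule: rank the observed statistic among the simulated ones
    return (1 + np.sum(sims >= observed_stat)) / (n_rep + 1)

def t_stat(z):
    # |t|-type statistic for H0: mean = 0
    return abs(z.mean()) * np.sqrt(len(z)) / z.std(ddof=1)

# Toy example: data generated under the null
data = np.random.default_rng(42).normal(size=30)
pval = mc_test_pvalue(t_stat(data), lambda r: t_stat(r.normal(size=30)))
```

With 99 replications the p-value is by construction a multiple of 1/100, which is what makes the exactness argument work at conventional levels.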

8.
This paper studies estimation of panel cointegration models with cross-sectional dependence generated by unobserved global stochastic trends. The standard least squares estimator is, in general, inconsistent owing to the spuriousness induced by the unobservable I(1) trends. We propose two iterative procedures that jointly estimate the slope parameters and the stochastic trends. The resulting estimators are referred to as the CupBC (continuously-updated and bias-corrected) and CupFM (continuously-updated and fully-modified) estimators, respectively. We establish their consistency and derive their limiting distributions. Both are asymptotically unbiased and (mixed) normal and permit inference to be conducted using standard test statistics. The estimators are also valid when there are mixed stationary and non-stationary factors, as well as when the factors are all stationary.

9.
The inverse normal method, which is used to combine P‐values from a series of statistical tests, requires independence of single test statistics in order to obtain asymptotic normality of the joint test statistic. The paper discusses the modification by Hartung (1999, Biometrical Journal, Vol. 41, pp. 849–855), which is designed to allow for a certain correlation matrix of the transformed P‐values. First, the modified inverse normal method is shown here to be valid with more general correlation matrices. Secondly, a necessary and sufficient condition for (asymptotic) normality is provided, using the copula approach. Thirdly, applications to panels of cross‐correlated time series, stationary as well as integrated, are considered. The behaviour of the modified inverse normal method is quantified by means of Monte Carlo experiments.
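The baseline (independence) version of the inverse normal method that the abstract above starts from can be sketched in a few lines; Hartung's modification replaces the sqrt(k) denominator by a term involving an estimated common correlation of the transformed P-values, which is not sketched here. The p-values below are made-up illustrative inputs.

```python
from statistics import NormalDist

def inverse_normal_combine(pvals):
    """Inverse normal (Stouffer) combination of k independent one-sided
    p-values: t_i = Phi^{-1}(p_i), Z = sum(t_i) / sqrt(k), and the
    combined p-value is Phi(Z); small values are evidence against H0."""
    nd = NormalDist()
    t = [nd.inv_cdf(p) for p in pvals]
    z = sum(t) / len(t) ** 0.5
    return nd.cdf(z)

combined = inverse_normal_combine([0.01, 0.04, 0.20])  # ≈ 0.0023
```

Because each Phi^{-1}(p_i) is standard normal under the null, Z is standard normal only when the t_i are independent, which is exactly the assumption the paper relaxes.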

10.
In this paper, we study the degree of business cycle synchronization by means of a small sample version of the Harding and Pagan [Journal of Econometrics (2006) Vol. 132, pp. 59–79] Generalized Method of Moments test. We show that the asymptotic version of the test gets increasingly distorted in small samples when the number of countries grows large. However, a block bootstrapped version of the test can remedy the size distortion when the time series length divided by the number of countries T/n is sufficiently large. Applying the technique to a number of business cycle proxies of developed economies, we are unable to reject the null hypothesis of a non‐zero common multivariate synchronization index for certain economically meaningful subsets of these countries.

11.
This note provides a warning against careless use of the generalized method of moments (GMM) with time series data. We show that if time series follow non‐causal autoregressive processes, their lags are not valid instruments, and the GMM estimator is inconsistent. Moreover, endogeneity of the instruments may not be revealed by the J‐test of overidentifying restrictions that may be inconsistent and has, in general, low finite‐sample power. Our explicit results pertain to a simple linear regression, but they can easily be generalized. Our empirical results indicate that non‐causality is quite common among economic variables, making these problems highly relevant.

12.
Monte Carlo evidence has made it clear that asymptotic tests based on generalized method of moments (GMM) estimation have disappointing size. The problem is exacerbated when the moment conditions are serially correlated. Several block bootstrap techniques have been proposed to correct the problem, including Hall and Horowitz (1996) and Inoue and Shintani (2006). We propose an empirical likelihood block bootstrap procedure to improve inference where models are characterized by nonlinear moment conditions that are serially correlated of possibly infinite order. Combining the ideas of Kitamura (1997) and Brown and Newey (2002), the parameters of a model are initially estimated by GMM; these estimates are then used to compute the empirical likelihood probability weights of the blocks of moment conditions. The probability weights serve as the multinomial distribution used in resampling. The first-order asymptotic validity of the proposed procedure is proven, and a series of Monte Carlo experiments show it may improve test sizes over conventional block bootstrapping.

13.
A nonparametric, residual-based stationary bootstrap procedure is proposed for unit root testing in a time series. The procedure generates a pseudoseries which mimics the original, but ensures the presence of a unit root. Unlike many others in the literature, the proposed test is valid for a wide class of weakly dependent processes and is not based on parametric assumptions on the data-generating process. Large sample theory is developed and asymptotic validity is shown via a bootstrap functional central limit theorem. The case of a least squares statistic is discussed in detail, including simulations to investigate the procedure's finite sample performance.
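The generic stationary-bootstrap resampling step that the procedure above builds on (Politis and Romano's scheme with random geometric block lengths) can be sketched as follows. The paper's procedure applies resampling of this kind to residuals and then rebuilds a series with a unit root imposed; only the resampling step is illustrated here, with made-up data.

```python
import numpy as np

def stationary_bootstrap_sample(x, p=0.1, rng=None):
    """One pseudo-series from the stationary bootstrap: block starts are
    uniform on {0, ..., n-1}, block lengths are Geometric(p) with mean
    1/p, and indexing wraps around the end of the series."""
    rng = rng or np.random.default_rng()
    n = len(x)
    idx = np.empty(n, dtype=int)
    t = 0
    while t < n:
        start = rng.integers(n)       # new block start
        length = int(rng.geometric(p))  # block length ~ Geometric(p)
        for j in range(min(length, n - t)):
            idx[t + j] = (start + j) % n  # wrap-around indexing
        t += length
    return x[idx]

# Toy illustration on a random-walk series
series = np.cumsum(np.random.default_rng(0).normal(size=200))
pseudo = stationary_bootstrap_sample(series, p=0.1, rng=np.random.default_rng(1))
```

The random block lengths are what make the resampled pseudo-series strictly stationary conditional on the data, unlike fixed-block schemes.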

14.
We propose a finite sample approach to some of the most common limited dependent variables models. The method rests on the maximized Monte Carlo (MMC) test technique proposed by Dufour [1998. Monte Carlo tests with nuisance parameters: a general approach to finite-sample inference and nonstandard asymptotics. Journal of Econometrics, this issue]. We provide a general way for implementing tests and confidence regions. We show that the decision rule associated with a MMC test may be written as a Mixed Integer Programming problem. The branch-and-bound algorithm yields a global maximum in finite time. An appropriate choice of the statistic yields a consistent test, while fulfilling the level constraint for any sample size. The technique is illustrated with numerical data for the logit model.

15.
This paper considers the problem of constructing confidence sets for the date of a single break in a linear time series regression. We establish analytically and by small sample simulation that the current standard method in econometrics for constructing such confidence intervals has a coverage rate far below nominal levels when breaks are of moderate magnitude. Given that breaks of moderate magnitude are a theoretically and empirically relevant phenomenon, we proceed to develop an appropriate alternative. We suggest constructing confidence sets by inverting a sequence of tests. Each of the tests maintains a specific break date under the null hypothesis, and rejects when a break occurs elsewhere. By inverting a certain variant of a locally best invariant test, we ensure that the asymptotic critical value does not depend on the maintained break date. A valid confidence set can hence be obtained by assessing which of the sequence of test statistics exceeds a single number.
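The test-inversion construction in the abstract above reduces to a very simple rule once the statistics and the single critical value are in hand. The sketch below is generic and hypothetical: the dates, statistics, and critical value are placeholders, not the paper's locally best invariant test.

```python
def confidence_set_by_inversion(stat_by_date, critical_value):
    """Test-inversion confidence set for a break date: a date belongs to
    the set iff the test maintaining that date under the null does not
    reject, i.e. its statistic does not exceed the single
    (date-invariant) critical value."""
    return sorted(d for d, s in stat_by_date.items() if s <= critical_value)

# Toy illustration with made-up statistics for candidate dates 10..14
stats = {10: 4.1, 11: 1.2, 12: 0.3, 13: 0.9, 14: 3.8}
cs = confidence_set_by_inversion(stats, critical_value=2.0)  # [11, 12, 13]
```

The point of a date-invariant critical value is exactly this: one number screens the whole sequence of statistics, so no date-by-date critical value tables are needed.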

16.
Perron [Perron, P., 1989. The great crash, the oil price shock and the unit root hypothesis. Econometrica 57, 1361–1401] introduced a variety of unit root tests that are valid when a break in the trend function of a time series is present. The motivation was to devise testing procedures that were invariant to the magnitude of the shift in level and/or slope. In particular, if a change is present it is allowed under both the null and alternative hypotheses. This analysis was carried under the assumption of a known break date. The subsequent literature aimed to devise testing procedures valid in the case of an unknown break date. However, in doing so, most of the literature and, in particular the commonly used test of Zivot and Andrews [Zivot, E., Andrews, D.W.K., 1992. Further evidence on the great crash, the oil price shock and the unit root hypothesis. Journal of Business and Economic Statistics 10, 251–270], assumed that if a break occurs, it does so only under the alternative hypothesis of stationarity. This is undesirable since (a) it imposes an asymmetric treatment when allowing for a break, so that the test may reject when the noise is integrated but the trend is changing; (b) if a break is present, this information is not exploited to improve the power of the test. In this paper, we propose a testing procedure that addresses both issues. It allows a break under both the null and alternative hypotheses and, when a break is present, the limit distribution of the test is the same as in the case of a known break date, thereby allowing increased power while maintaining the correct size. Simulation experiments confirm that our procedure offers an improvement over commonly used methods in small samples.

17.
This paper analyzes many weak moment asymptotics under the possibility of similar moments. The possibility of highly related moments arises when there are many of them. Knight and Fu (2000) designate the issue of similar regressors as the "nearly singular" design in the least squares case. In the nearly singular design, the sample variance converges to a singular limit term. However, Knight and Fu (2000) assume that on the nullspace of the limit term, the difference between the sample variance and the singular matrix converges in probability to a positive definite matrix when multiplied by an appropriate rate. We consider specifically the Continuous Updating Estimator (CUE) with many weak moments under nearly singular design. We show that the nearly singular design affects the form of the limit under the many weak moment asymptotics introduced by Newey and Windmeijer (2009a). However, the estimator is still consistent and the Wald test has the standard χ² limit.

18.
This paper investigates the theoretical accuracy of the Barro [Barro, R.J., 1974. Are government bonds net wealth? Journal of Political Economy, 82, 1095–1117] debt neutrality proposition. We first identify a discrepancy between the transversality condition of a social planning problem and the one of altruistically linked overlapping generations. Then, this discrepancy is exploited to construct public debt policies which affect the competitive equilibrium allocation even when bequests are strictly positive in all periods: a violation of Ricardian equivalence.

19.
This paper proposes a new testing procedure for detecting error cross section dependence after estimating a linear dynamic panel data model with regressors using the generalised method of moments (GMM). The test is valid when the cross-sectional dimension of the panel is large relative to the time series dimension. Importantly, our approach allows one to examine whether any error cross section dependence remains after including time dummies (or after transforming the data in terms of deviations from time-specific averages), which will be the case under heterogeneous error cross section dependence. Finite sample simulation-based results suggest that our tests perform well, particularly the version based on the [Blundell, R., Bond, S., 1998. Initial conditions and moment restrictions in dynamic panel data models. Journal of Econometrics 87, 115–143] system GMM estimator. In addition, it is shown that the system GMM estimator, based only on partial instruments consisting of the regressors, can be a reliable alternative to the standard GMM estimators under heterogeneous error cross section dependence. The proposed tests are applied to employment equations using UK firm data and the results show little evidence of heterogeneous error cross section dependence.

20.
Journal of Econometrics, 2002, 108(1): 133–156
By combining two alternative formulations of a test statistic with two alternative resampling schemes we obtain four different bootstrap tests. In the context of static linear regression models two of these are shown to have serious size and power problems, whereas the remaining two are adequate and in fact equivalent. The equivalence between the two valid implementations is shown to break down in dynamic regression models. Then, the procedure based on the test statistic approach performs best, at least in the AR(1)-model. Similar finite-sample phenomena are illustrated in the ARMA(1,1)-model through a small-scale Monte Carlo study and an empirical example.
