Similar Articles
20 similar articles retrieved (search time: 46 ms)
1.
This note provides a warning against careless use of the generalized method of moments (GMM) with time series data. We show that if time series follow non-causal autoregressive processes, their lags are not valid instruments, and the GMM estimator is inconsistent. Moreover, endogeneity of the instruments may not be revealed by the J-test of overidentifying restrictions, which may be inconsistent and, in general, has low finite-sample power. Our explicit results pertain to a simple linear regression, but they can easily be generalized. Our empirical results indicate that non-causality is quite common among economic variables, making these problems highly relevant.
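To make the mechanics concrete, here is a minimal numpy sketch of the setup the note warns about: estimating a simple linear regression by two-step GMM with lagged series as instruments, then computing the Hansen J-test of overidentifying restrictions. The DGP, lag choices, and sample size are all hypothetical; under non-causality the lags would cease to be valid instruments.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
T = 500

# Hypothetical DGP: y_t = beta*x_t + u_t with x_t a (causal) AR(1) process.
beta = 1.0
e = rng.standard_normal(T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + e[t]
y = beta * x + rng.standard_normal(T)

# Instruments: two lags of x -- valid here, but invalid if x were non-causal.
Z = np.column_stack([x[1:-1], x[:-2]])     # x_{t-1}, x_{t-2}
X = x[2:, None]
Y = y[2:]
n = len(Y)

def gmm(X, Y, Z, W):
    """Linear GMM estimator with weight matrix W."""
    A = X.T @ Z @ W @ Z.T @ X
    return np.linalg.solve(A, X.T @ Z @ W @ Z.T @ Y)

W1 = np.linalg.inv(Z.T @ Z / n)            # first step: 2SLS weight
b1 = gmm(X, Y, Z, W1)
g = Z * (Y - X @ b1)[:, None]              # moment contributions z_t * u_t
W2 = np.linalg.inv(g.T @ g / n)            # second step: optimal weight
b2 = gmm(X, Y, Z, W2)

gbar = (Z * (Y - X @ b2)[:, None]).mean(axis=0)
J = n * gbar @ W2 @ gbar                   # Hansen J-test of overidentification
print("beta_hat =", b2[0], " J =", J, " p =", chi2.sf(J, df=Z.shape[1] - 1))
```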

2.
In the linear instrumental variables model, we provide theoretical and Monte Carlo evidence for the size distortion of a two-stage hypothesis test that uses a test of overidentifying restrictions (OR) in the first stage. We derive a lower bound for the asymptotic size of the two-stage test. The lower bound is given by the asymptotic size of a test that rejects the null hypothesis when two conditions are met: the test of OR used in the first stage does not reject, and the test in the second stage rejects. This lower bound can be as large as 1 − εP, where εP is the pretest nominal size, for a parameter space that allows for local non-exogeneity of the instruments but rules out weak instruments.
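A hypothetical Monte Carlo sketch of the two-stage procedure the paper analyzes: a Sargan test of overidentifying restrictions in the first stage, followed by a t-test on the structural coefficient only when the first stage does not reject. All design constants (sample size, the local exogeneity violation delta) are illustrative.

```python
import numpy as np
from scipy.stats import chi2, norm

rng = np.random.default_rng(1)
n, R, delta = 200, 2000, 0.15      # delta: illustrative local exogeneity violation
rejections = 0
for _ in range(R):
    Z = rng.standard_normal((n, 2))
    u = rng.standard_normal(n)
    x = Z @ np.array([1.0, 1.0]) + rng.standard_normal(n)   # strong instruments
    y = u + delta * Z[:, 0]        # true beta = 0; first instrument mildly invalid
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    xh = Pz @ x
    b = (xh @ y) / (xh @ x)        # 2SLS
    e = y - x * b
    S = (e @ Pz @ e) / (e @ e / n)                 # stage 1: Sargan OR test
    pass_or = chi2.sf(S, df=1) > 0.05              # proceed only if OR not rejected
    se = np.sqrt((e @ e / n) / (xh @ x))           # stage 2: t-test of beta = 0
    reject = abs(b / se) > norm.ppf(0.975)
    rejections += pass_or and reject
print("two-stage rejection rate:", rejections / R, " (nominal 5%)")
```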

3.
We propose a unit root test for panels with cross-sectional dependency. We allow a general dependency structure among the innovations that generate the data for each of the cross-sectional units. Each unit may have a different sample size, so unbalanced panels are also permitted in our framework. Yet the test is asymptotically normal and does not require any tabulation of critical values. Our test is based on nonlinear IV estimation of the usual augmented Dickey–Fuller type regression for each cross-sectional unit, using as instruments nonlinear transformations of the lagged levels. The actual test statistic is simply defined as a standardized sum of individual IV t-ratios. We show in the paper that such a standardized sum of individual IV t-ratios has a limit normal distribution as long as each unit has a large number of time series observations and the panels are asymptotically balanced in a very weak sense. The number of cross-sectional units may be arbitrarily small or large. In particular, the usual sequential asymptotics, upon which most of the available asymptotic theories for panel unit root models heavily rely, are not required. Finite sample performance of our test is examined via a set of simulations and compared with that of other commonly used panel unit root tests. Our test generally performs better than the existing tests in terms of both finite sample size and power. We apply our nonlinear IV method to test the purchasing power parity hypothesis in panels.
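A stylized sketch of the test's construction, under simplifying assumptions (no augmentation lags, a single hypothetical scale constant in the instrument-generating transform): each unit contributes an IV t-ratio from a Dickey–Fuller regression instrumented by a nonlinear transform of the lagged level, and the statistic is the standardized sum of these t-ratios.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def iv_t_ratio(y, c=3.0):
    """IV t-ratio for rho in the Dickey-Fuller regression dy_t = rho*y_{t-1} + e_t,
    instrumenting y_{t-1} by the nonlinear transform y_{t-1}*exp(-c|y_{t-1}|/s).
    No augmentation lags -- a simplification of the actual procedure."""
    dy, ylag = np.diff(y), y[:-1]
    s = ylag.std()
    f = ylag * np.exp(-c * np.abs(ylag) / s)      # nonlinear IV of the lagged level
    rho = (f @ dy) / (f @ ylag)
    e = dy - rho * ylag
    se = np.sqrt((e @ e / len(e)) * (f @ f)) / np.abs(f @ ylag)
    return rho / se

# Hypothetical unbalanced panel of unit-root series (different sample sizes).
N = 10
t_ratios = [iv_t_ratio(np.cumsum(rng.standard_normal(rng.integers(150, 300))))
            for _ in range(N)]
S = np.sum(t_ratios) / np.sqrt(N)          # standardized sum of IV t-ratios
print("S =", S, "  one-sided p-value =", norm.cdf(S))   # reject unit root if small
```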

4.
This paper develops a modified version of the Sargan test of overidentifying restrictions [Sargan, J.D., 1958. The estimation of economic relationships using instrumental variables. Econometrica 26 (3), 393–415], and shows that it is numerically equivalent, up to sign, to the test statistic of Hahn and Hausman [Hahn, J., Hausman, J., 2002. A new specification test for the validity of instrumental variables. Econometrica 70 (1), 163–189]. The modified Sargan test is constructed such that its asymptotic distribution under the null hypothesis of correct specification is standard normal when the number of instruments increases with the sample size. The equivalence result is useful for understanding what the Hahn–Hausman test detects and what its power properties are.
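The following sketch shows the generic idea of standardizing an overidentifying restrictions statistic so it is asymptotically standard normal when the number of instruments grows. The mean-variance centering (S − q)/√(2q) of an approximate χ²(q) variable used here is an illustrative standardization, not the paper's exact modification.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n, K = 1000, 50                            # many (valid) instruments

Z = rng.standard_normal((n, K))
u = rng.standard_normal(n)
x = Z.sum(axis=1) / np.sqrt(K) + 0.5 * u + rng.standard_normal(n)
y = 1.0 * x + u

Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)
xh = Pz @ x
b = (xh @ y) / (xh @ x)                    # 2SLS
e = y - x * b
S = (e @ Pz @ e) / (e @ e / n)             # Sargan statistic, approx chi2(K-1)

q = K - 1
z_stat = (S - q) / np.sqrt(2 * q)          # mean-variance centering of chi2(q)
print("standardized Sargan =", z_stat, " two-sided p =", 2 * norm.sf(abs(z_stat)))
```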

5.
This paper studies the Hodges and Lehmann (1956) optimality of tests in a general setup. Tests are compared by the exponential rates at which their power functions, evaluated at a fixed alternative, converge to one while the asymptotic sizes remain bounded by some constant. We present two sets of sufficient conditions for a test to be Hodges–Lehmann optimal. These new conditions extend the scope of Hodges–Lehmann optimality analysis to setups that cannot be covered by other conditions in the literature. The general result is illustrated by our applications of interest: testing for moment conditions and overidentifying restrictions. In particular, we show that (i) the empirical likelihood test does not necessarily satisfy existing conditions for optimality but does satisfy our new conditions; and (ii) the generalized method of moments (GMM) test and the generalized empirical likelihood (GEL) tests are Hodges–Lehmann optimal under mild primitive conditions. These results support the belief that Hodges–Lehmann optimality is a weak asymptotic requirement.

6.
We propose new spanning tests that assess whether the initial and additional assets share the economically meaningful cost- and mean-representing portfolios. We prove their asymptotic equivalence to existing tests under local alternatives. We also show that, unlike two-step or iterated procedures, single-step methods such as continuously updated GMM yield a numerically identical overidentifying restrictions test, so there is arguably a single spanning test. To prove these results, we extend optimal GMM inference to deal with singularities in the long-run second moment matrix of the influence functions. Finally, we test for spanning using size and book-to-market sorted US stock portfolios.
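For intuition, here is a regression-based spanning check in the Huberman–Kandel tradition: spanning of a test asset by K benchmark assets amounts to two linear restrictions, alpha = 0 and the betas summing to one. This Wald version is a simplified stand-in for the paper's GMM tests on cost- and mean-representing portfolios; all returns below are simulated.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(10)
T, K = 600, 3

# Simulated returns: K initial assets R1, one additional asset R2 that is spanned.
R1 = 0.005 + 0.04 * rng.standard_normal((T, K))
R2 = R1 @ np.array([0.5, 0.3, 0.2]) + 0.01 * rng.standard_normal(T)

# Huberman-Kandel regression: R2 = alpha + R1 @ beta + eps.
X = np.column_stack([np.ones(T), R1])
coef, *_ = np.linalg.lstsq(X, R2, rcond=None)
e = R2 - X @ coef
V = np.linalg.inv(X.T @ X) * (e @ e / (T - K - 1))   # homoskedastic covariance

# Spanning = two linear restrictions: alpha = 0 and sum(beta) = 1.
Rmat = np.zeros((2, K + 1))
Rmat[0, 0] = 1.0                           # picks out alpha
Rmat[1, 1:] = 1.0                          # sums the betas
d = Rmat @ coef - np.array([0.0, 1.0])
W = d @ np.linalg.solve(Rmat @ V @ Rmat.T, d)        # Wald statistic
print("Wald =", W, " p =", chi2.sf(W, df=2))
```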

7.
A nearly-singular design relaxes the nonsingularity assumption on the limit weight matrix in GMM, and on the limit variance matrix of the first-order conditions in GEL. The sample versions of these matrices are nonsingular, but they are assumed to converge to a singular matrix in large samples. This can result in size distortions for the overidentifying restrictions test and large bias for the estimators. A nearly-singular design may occur when the instruments entering these matrices are similar. We derive the large sample theory for GMM and GEL estimators under a nearly-singular design. The rate of convergence of the estimators is slower than root-n.

8.
This paper focuses on the estimation of a finite dimensional parameter in a linear model where the number of instruments is very large or infinite. In order to improve the small sample properties of standard instrumental variable (IV) estimators, we propose three modified IV estimators based on three different ways of inverting the covariance matrix of the instruments. These inverses involve a regularization or smoothing parameter. It should be stressed that no restriction on the number of instruments is needed and that all the instruments are used in the estimation. We show that the three estimators are asymptotically normal and attain the semiparametric efficiency bound. Higher-order analysis of the MSE reveals that the bias of the modified estimators does not depend on the number of instruments. Finally, we suggest a data-driven method for selecting the regularization parameter. Interestingly, our regularization techniques lead to a consistent nonparametric estimation of the optimal instrument.
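A minimal sketch of one of the regularized inverses the paper considers, Tikhonov (ridge) regularization of the instrument covariance matrix. The design, the declining first-stage coefficients, and the grid of regularization parameters are hypothetical; the paper's data-driven choice of the parameter is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(4)
n, K = 500, 200                            # many instruments

Z = rng.standard_normal((n, K))
u = rng.standard_normal(n)
x = Z @ (0.8 ** np.arange(K)) + 0.5 * u + rng.standard_normal(n)
y = 1.0 * x + u                            # true beta = 1

def tikhonov_iv(y, x, Z, alpha):
    """IV estimator with a Tikhonov (ridge) regularized inverse of the
    instrument covariance matrix -- one of the three schemes in the paper."""
    n, K = Z.shape
    Kn = Z.T @ Z / n                       # sample covariance of instruments
    xh = Z @ np.linalg.solve(Kn + alpha * np.eye(K), Z.T @ x / n)
    return (xh @ y) / (xh @ x)

for alpha in (0.001, 0.01, 0.1):           # illustrative grid, not data-driven
    print(f"alpha={alpha}: beta_hat = {tikhonov_iv(y, x, Z, alpha):.4f}")
```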

9.
This paper proposes a testing strategy for the null hypothesis that a multivariate linear rational expectations (LRE) model has a unique stable solution (determinacy) against the alternative of multiple stable solutions (indeterminacy). The testing problem is addressed by a misspecification-type approach in which the overidentifying restrictions test, obtained from estimating the system of Euler equations of the LRE model by the generalized method of moments, is combined with a likelihood-based test for the cross-equation restrictions that the model places on its reduced form solution under determinacy. The resulting test has no power against a particular class of indeterminate equilibria, hence non-rejection of the null hypothesis cannot be interpreted conclusively as evidence of determinacy. On the other hand, this test (i) circumvents the nonstandard inferential problem generated by the presence of auxiliary parameters that appear under indeterminacy and are not identifiable under determinacy, (ii) does not involve inequality parametric restrictions and hence the use of nonstandard inference, (iii) is consistent against dynamic misspecification of the LRE model, and (iv) is computationally simple. Monte Carlo simulations show that the suggested testing strategy delivers reasonable size coverage and power against dynamic misspecification in finite samples. An empirical illustration focuses on the determinacy/indeterminacy of a New Keynesian monetary business cycle model of the US economy.

10.
We consider the following problem. There is a structural equation of interest that contains an explanatory variable that theory predicts is endogenous. There are one or more instrumental variables that credibly are exogenous with regard to this structural equation, but which have limited explanatory power for the endogenous variable. Further, there are one or more potentially 'strong' instruments, which have much more explanatory power but which may not be exogenous. Hausman (1978) provided a test for the exogeneity of the second instrument when none of the instruments are weak. Here, we focus on how the standard Hausman test performs in the presence of weak instruments, using the Staiger–Stock asymptotics. It is natural to conjecture that the standard version of the Hausman test would be invalid in the weak instrument case, which we confirm. However, we provide a version of the Hausman test that is valid even in the presence of weak instruments and illustrate how to implement the test in the presence of heteroskedasticity. We show that the situation we analyze occurs in several important economic examples. Our Monte Carlo experiments show that our procedure works relatively well in finite samples. We should note that our test is not consistent, although we believe it is impossible to construct a consistent test with weak instruments.
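A sketch of the standard (strong-instrument) Hausman contrast that the paper takes as its starting point: an estimator using only the credibly exogenous instrument is compared with the efficient estimator that also uses the suspect instrument. The design below is hypothetical, with the suspect instrument deliberately made endogenous; the paper's weak-instrument-robust version is not implemented here.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(5)
n = 400

z1 = rng.standard_normal(n)                 # credibly exogenous, modest strength
z2 = rng.standard_normal(n)                 # strong but possibly not exogenous
u = 0.3 * z2 + rng.standard_normal(n)       # here z2 is in fact endogenous
x = 0.3 * z1 + 1.0 * z2 + u + rng.standard_normal(n)
y = 1.0 * x + u

def iv_beta_var(y, x, Z):
    """2SLS coefficient and homoskedastic variance for a single regressor."""
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    xh = Pz @ x
    b = (xh @ y) / (xh @ x)
    e = y - x * b
    return b, (e @ e / len(e)) / (xh @ x)

b1, v1 = iv_beta_var(y, x, z1[:, None])                # consistent either way
b2, v2 = iv_beta_var(y, x, np.column_stack([z1, z2]))  # efficient under H0 only
H = (b1 - b2) ** 2 / (v1 - v2)              # v1 >= v2 asymptotically under H0
print("Hausman stat =", H, " p =", chi2.sf(H, df=1))
```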

11.
In this paper, tests for neglected heterogeneity and functional form misspecification of some commonly used parametric distributions are derived within a heterogeneous generalized gamma model. It is argued that the conventional test of heterogeneity may not be valid when the underlying hazard function is misspecified. Hence, if the estimated hazard function is deemed restrictive, tests for functional form misspecification should accompany any test of heterogeneity. An empirical illustration based on Kennan's (1985) model of strikes is used to show that incorrect inferences may be drawn, as in a number of previous analyses, if the relevant restrictions are not tested jointly.

12.
A simple econometric test for rational expectations in the case in which unobservable, rationally expected variables appear in a structural equation is presented. Using McCallum's instrumental variable estimator as a base, we obtain a test for rational expectations per se and a joint test of rational expectations together with hypotheses about the structural equation. The new test is shown to be a reinterpretation of Basmann's test of overidentifying restrictions. As an illustration, the hypothesis that the forward exchange rate is the rationally expected future spot exchange rate is tested and rejected.

13.
This paper surveys the state of the art in the econometrics of regression models with many instruments or many regressors based on alternative – namely, dimension – asymptotics. We list critical results of dimension asymptotics that lead to better approximations of properties of familiar and alternative estimators and tests when the instruments and/or regressors are numerous. Then, we consider the problem of estimation and inference in the basic linear instrumental variables regression setup with many strong instruments. We describe the failures of conventional estimation and inference, as well as alternative tools that restore consistency and validity. We then add various other features to the basic model such as heteroskedasticity, instrument weakness, etc., in each case providing a review of the existing tools for proper estimation and inference. Subsequently, we consider a related but different problem of estimation and testing in a linear mean regression with many regressors. We also describe various extensions and connections to other settings, such as panel data models, spatial models, time series models, and so on. Finally, we provide practical guidance regarding which tools are most suitable to use in various situations when many instruments and/or regressors turn out to be an issue.

14.
In this paper, we develop two cointegration tests for two varying-coefficient cointegrating regression models, respectively. Our test statistics are residual based. We derive the asymptotic distributions of the test statistics under the null hypothesis of cointegration and show that the tests are consistent against the alternative hypotheses. We also propose a wild bootstrap procedure, coupled with the continuous moving block bootstrap method proposed in Paparoditis and Politis (2001) and Phillips (2010), to rectify the severe distortions found in simulations when the sample size is small. We apply the proposed test statistics to examine the purchasing power parity (PPP) hypothesis between the US and Canada. In contrast to the existing results from linear cointegration tests, our varying-coefficient cointegration test does not reject that PPP holds between the US and Canada.
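A toy sketch of a residual-based wild bootstrap with block-constant multipliers, a simplified stand-in for the continuous moving block scheme cited above. To keep it short, the bootstrap resamples the residual statistic directly rather than re-estimating the (varying-coefficient) regression on each draw, and the KPSS-type statistic is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def coint_stat(e):
    """Toy KPSS-type residual statistic: large values signal no cointegration."""
    n = len(e)
    return (np.cumsum(e - e.mean()) ** 2).sum() / (n ** 2 * e.var())

def block_wild(e, block_len, rng):
    """Wild multipliers held constant within consecutive blocks -- a simplified
    stand-in for the continuous moving block scheme cited in the abstract."""
    n_blocks = -(-len(e) // block_len)     # ceiling division
    return np.repeat(rng.standard_normal(n_blocks), block_len)[: len(e)]

# Hypothetical cointegrated pair; regress y on x, bootstrap the residual statistic.
n = 200
x = np.cumsum(rng.standard_normal(n))
y = 2.0 * x + rng.standard_normal(n)
e = y - ((x @ y) / (x @ x)) * x            # cointegrating residuals

stat0 = coint_stat(e)
boot = np.array([coint_stat(block_wild(e, 10, rng) * e) for _ in range(499)])
print("stat =", stat0, " bootstrap p-value =", (boot >= stat0).mean())
```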

15.
It is well known that size adjustments based on bootstrapping the t-statistic perform poorly when instruments are weakly correlated with the endogenous explanatory variable. In this paper, we provide a theoretical proof that guarantees the validity of the bootstrap for the score statistic. This theory does not follow from standard results, since the score statistic is not a smooth function of sample means and some parameters are not consistently estimable when the instruments are uncorrelated with the explanatory variable.
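For concreteness, here is a hedged sketch of bootstrapping an LM/score-type statistic for the structural coefficient in a deliberately weak-instrument design. The pairs bootstrap recentered at the full-sample 2SLS estimate is one standard scheme; the paper's exact statistic and resampling plan may differ.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200

Z = rng.standard_normal((n, 2))
u = rng.standard_normal(n)
x = Z @ np.array([0.1, 0.1]) + 0.8 * u + rng.standard_normal(n)  # weak first stage
y = 0.5 * x + u
b0 = 0.5                                   # null hypothesis H0: beta = 0.5 (true)

def score_stat(y, x, Z, b0):
    """LM/score-type statistic for H0: beta = b0 with instruments Z."""
    e0 = y - x * b0
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    return (e0 @ Pz @ e0) / (e0 @ e0 / len(y))

stat0 = score_stat(y, x, Z, b0)

# Pairs bootstrap, recentered at the full-sample 2SLS estimate so that the
# null holds (approximately) in the bootstrap world.
Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)
b_hat = (x @ Pz @ y) / (x @ Pz @ x)
boot = np.empty(499)
for b in range(499):
    idx = rng.integers(0, n, size=n)
    boot[b] = score_stat(y[idx], x[idx], Z[idx], b_hat)

crit = np.quantile(boot, 0.95)
print("score stat =", stat0, " bootstrap 5% crit =", crit, " reject:", stat0 > crit)
```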

16.
Precedence-type tests based on order statistics are simple and efficient nonparametric tests that are very useful in the context of life-testing, and they have been studied quite extensively in the literature; see Balakrishnan and Ng (Precedence-type tests and applications. Wiley, Hoboken, 2006). In this paper, we consider precedence-type tests based on record values and specifically develop the record precedence test, the record maximal precedence test, and the record-rank-sum test. We derive their exact null distributions and tabulate some critical values. Then, under the general Lehmann alternative, we derive the exact power functions of these tests and discuss their power under the location-shift alternative. We also establish that the record precedence test is the uniformly most powerful test against the one-parameter family of Lehmann alternatives. Finally, we discuss the situation in which there is an insufficient number of records to apply the record precedence test, and we close with some concluding remarks.
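A small sketch of the ingredients of the record precedence test: extract the upper records from one sequence and count how many observations of the other sample precede each record. The samples, the number of records used, and the exponential life-distributions are hypothetical, and exact critical values would come from the null distributions tabulated in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

def upper_records(seq):
    """Upper record values: the first observation, then every later observation
    exceeding all previous ones."""
    recs, cur = [], -np.inf
    for v in seq:
        if v > cur:
            recs.append(v)
            cur = v
    return np.array(recs)

# Hypothetical two-sample life-test: records from the Y-sequence, and counts
# of X observations preceding (falling below) each of the first r records.
x = rng.exponential(1.0, size=30)          # X-sample
y = rng.exponential(1.5, size=30)          # Y-sequence, possibly shifted
recs = upper_records(y)
r = min(4, len(recs))                      # number of records actually used
counts = np.sum(x[:, None] < recs[None, :r], axis=0)
print("record values:", recs[:r])
print("X-observations preceding each record:", counts)
# Extreme counts relative to the exact null tables indicate a sample difference.
```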

17.
Evaluating employee integrity: Moral and methodological problems
This paper reviews the research on proprietary paper-and-pencil tests of integrity or honesty, which have effectively supplanted polygraph examinations in evaluating the moral attributes of employees and applicants. Moral integrity is a complex notion that encompasses more than conventional honesty and is difficult to operationalize as a psychological trait or construct. Integrity test questions are largely derived from polygraph interrogations, and the tests are validated against polygraph results. The field studies reviewed and an exploratory test cast doubt on the ability of these paper-and-pencil instruments to meet standards of construct validity. Other studies show promise of predictive validity in some situations. Unfortunately, the research designs used to substantiate the predictive power of integrity tests failed to hold other workplace influences constant. In light of these findings, and because of potential infringements on privacy and equal opportunity, employers are urged to exercise caution in the use of these tests until further independent research is reported.

18.
Labour Economics, 2006, 13(1): 19–34
We use sibling data on wages, schooling, and aptitude test scores from the 1979 National Longitudinal Survey of Youth (NLSY79) to obtain OLS, family fixed effects, and fixed effects instrumental variable estimates of the return to schooling for a large sample of non-twin siblings. Following recent studies that use identical twin samples, we use sibling-reported schooling as an instrument for self-reported schooling. Controlling for aptitude test scores has a substantial impact on estimated returns to schooling even within families, and there is a large return to test scores that is comparable in size within and between families. We also find that the return to schooling is higher for older brothers than for younger brothers, and for women than for men. Finally, because the NLSY79 contains multiple sibling reports of education for the same individual, we are able to test, and reject, the overidentifying restrictions for the validity of sibling-reported schooling as an instrumental variable.
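A compact simulation sketch of the identification strategy: within-family differencing removes the family effect, and the differenced sibling report of schooling instruments the differenced self report, undoing the attenuation caused by reporting error. All parameter values (the 0.08 return, the error variances) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(9)
F = 1000                                   # hypothetical sibling pairs

a = rng.standard_normal(F)                 # unobserved family effect ("ability")
S = 12 + a[:, None] + rng.normal(0, 2, (F, 2))      # true schooling, 2 siblings
self_rep = S + rng.normal(0, 1.5, (F, 2))  # self-reported schooling, with error
sib_rep = S + rng.normal(0, 1.5, (F, 2))   # sibling-reported schooling
w = 0.08 * S + a[:, None] + rng.normal(0, 0.3, (F, 2))   # log wages

# Family fixed effects: difference across siblings, eliminating the family effect.
dw = w[:, 0] - w[:, 1]
dS_self = self_rep[:, 0] - self_rep[:, 1]
dS_sib = sib_rep[:, 0] - sib_rep[:, 1]

fe_ols = (dS_self @ dw) / (dS_self @ dS_self)   # attenuated by reporting error
fe_iv = (dS_sib @ dw) / (dS_sib @ dS_self)      # sibling report as instrument
print("FE-OLS return:", fe_ols, "  FE-IV return:", fe_iv)   # true value: 0.08
```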

19.
Measurement error regression models are factor analysis models: the latent 'correct' regressors are the factors. There is, however, no statistical method common to the two, because the covariance elements that are known identifying constituents of the factor model are unknown in the regression model. Instead, the idea that the data come from the same regression model, as with panel data, but can be split into two or more groups, each with its own regressor-generating process, is shown to supply credible restrictions. We generalize and compare the relevant identifiability criteria and the corresponding asymptotically efficient estimators, which are recursive in the number of overidentifying restrictions.

20.
The paper considers the estimation of the coefficients of a single equation in the presence of dummy instruments. We derive pseudo ML and GMM estimators based on moment restrictions induced either by the structural form or by the reduced form of the model. The performance of the estimators is evaluated for the non-Gaussian case, and we allow for heteroskedasticity. The asymptotic distributions are based on parameter sequences in which the number of instruments increases at the same rate as the sample size. Relaxing the usual Gaussian assumption is shown to affect the normal asymptotic distributions. As a result, recently suggested specification tests for the validity of instruments also depend on Gaussianity. Monte Carlo simulations confirm the accuracy of the asymptotic approach.
