Similar Documents
20 similar documents retrieved (search time: 637 ms)
1.
In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR). We focus on tests for serial dependence and ARCH effects with possibly non-Gaussian errors. The tests are based on properly standardized multivariate residuals to ensure invariance to error covariances. The procedures proposed provide: (i) exact variants of standard multivariate portmanteau tests for serial correlation as well as ARCH effects, and (ii) exact versions of the diagnostics presented by Shanken (1990) which are based on combining univariate specification tests. Specifically, we combine tests across equations using a Monte Carlo (MC) test method so that Bonferroni-type bounds can be avoided. The procedures considered are evaluated in a simulation experiment: the latter shows that standard asymptotic procedures suffer from serious size problems, while the MC tests suggested display excellent size and power properties, even when the sample size is small relative to the number of equations, with normal or Student-t errors. The tests proposed are applied to the Fama–French three-factor model. Our findings suggest that the i.i.d. error assumption provides an acceptable working framework once we allow for non-Gaussian errors within 5-year sub-periods, whereas temporal instabilities clearly plague the full-sample dataset.
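A minimal sketch of the Monte Carlo test principle used above: simulate the statistic under the null, form the exact p-value with the (1 + number of exceedances)/(N + 1) rule, and combine equations through the minimum p-value so that Bonferroni bounds are unnecessary. The per-equation portmanteau statistic and the i.i.d. normal null draws are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mc_pvalue(stat_obs, stats_sim):
    """Exact Monte Carlo p-value with the (1 + #exceedances)/(N + 1) rule."""
    return (1.0 + np.sum(np.asarray(stats_sim) >= stat_obs)) / (len(stats_sim) + 1.0)

def ljung_box(x, lags=6):
    """Simple univariate portmanteau statistic (illustrative per-equation test)."""
    x = np.asarray(x) - np.mean(x)
    T = len(x)
    acf = np.array([np.sum(x[l:] * x[:-l]) / np.sum(x * x) for l in range(1, lags + 1)])
    return T * (T + 2) * np.sum(acf**2 / (T - np.arange(1, lags + 1)))

def combined_mc_test(resid, stat_fn=ljung_box, n_mc=999, seed=0):
    """Min-p-value combination across equations, evaluated by simulation.

    resid : (T, k) standardized residual matrix; under H0 we redraw i.i.d.
    normal errors (an illustrative choice of the simulated null)."""
    rng = np.random.default_rng(seed)
    T, k = resid.shape
    obs = np.array([stat_fn(resid[:, j]) for j in range(k)])
    sims = np.array([[stat_fn(col) for col in rng.standard_normal((T, k)).T]
                     for _ in range(n_mc)])                       # (n_mc, k)
    # per-equation MC p-values for observed and simulated statistics
    p_obs = np.array([mc_pvalue(obs[j], sims[:, j]) for j in range(k)])
    p_sim = np.array([[mc_pvalue(sims[r, j], sims[:, j]) for j in range(k)]
                      for r in range(n_mc)])
    # combined statistic: 1 - min p; its own MC p-value is the joint test
    return mc_pvalue(1 - p_obs.min(), 1 - p_sim.min(axis=1))

# Example: residuals from a 5-equation system with 60 observations
p_joint = combined_mc_test(np.random.default_rng(1).standard_normal((60, 5)))
print(f"combined MC p-value: {p_joint:.3f}")
```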

2.
This paper deals with the finite-sample performance of a set of unit-root tests for cross-correlated panels. Most of the available macroeconomic time series cover short time periods. The lack of information, in terms of time observations, implies that univariate tests are not powerful enough to reject the null of a unit root, while panel tests, by exploiting the large number of cross-sectional units, have been shown to be a promising way of increasing the power of unit-root tests. We investigate the finite-sample properties of recently proposed panel unit-root tests for cross-sectionally correlated panels. Specifically, the size and power of Choi's [Econometric Theory and Practice: Frontiers of Analysis and Applied Research: Essays in Honor of Peter C. B. Phillips, Cambridge University Press, Cambridge (2001)], Bai and Ng's [Econometrica (2004), Vol. 72, p. 1127], Moon and Perron's [Journal of Econometrics (2004), Vol. 122, p. 81], and Phillips and Sul's [Econometrics Journal (2003), Vol. 6, p. 217] tests are analysed in a Monte Carlo simulation study. In summary, Moon and Perron's tests show good size and power for different values of T and N and for different model specifications. Focusing on Bai and Ng's procedure, the simulation study highlights that the pooled Dickey–Fuller generalized least squares test provides higher power than the pooled augmented Dickey–Fuller test for the analysis of non-stationary properties of the idiosyncratic components. Choi's tests are strongly oversized when the common factor influences the cross-sectional units heterogeneously.

3.
Microeconomic data often have within-cluster dependence, which affects standard error estimation and inference. When the number of clusters is small, asymptotic tests can be severely oversized. In the instrumental variables (IV) model, the potential presence of weak instruments further complicates hypothesis testing. We use wild bootstrap methods to improve inference in two empirical applications with these characteristics. Building from estimating equations and residual bootstraps, we identify variants robust to the presence of weak instruments and a small number of clusters. They reduce absolute size bias significantly and demonstrate that the wild bootstrap should join the standard toolkit in IV and cluster-dependent models.
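The following is a rough sketch of the restricted wild cluster bootstrap-t for a single OLS coefficient with cluster-robust standard errors; the paper's IV variants build on the same cluster-level Rademacher resampling idea. All names and the toy data-generating process are illustrative assumptions.

```python
import numpy as np

def cluster_cov(X, u, cluster, XtX_inv):
    """Cluster-robust (CR0) covariance: bread * meat * bread."""
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(cluster):
        s = X[cluster == g].T @ u[cluster == g]
        meat += np.outer(s, s)
    return XtX_inv @ meat @ XtX_inv

def wild_cluster_boot_p(y, X, cluster, coef=1, b0=0.0, B=999, seed=0):
    """Wild cluster bootstrap-t p-value for H0: beta[coef] = b0.

    Residuals come from the null-imposed (restricted) fit and are flipped
    with cluster-level Rademacher weights."""
    rng = np.random.default_rng(seed)
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    V = cluster_cov(X, y - X @ beta, cluster, XtX_inv)
    t_obs = (beta[coef] - b0) / np.sqrt(V[coef, coef])

    # restricted fit: impose beta[coef] = b0 and re-estimate the rest
    Xr = np.delete(X, coef, axis=1)
    yr = y - X[:, coef] * b0
    br = np.linalg.lstsq(Xr, yr, rcond=None)[0]
    ur = yr - Xr @ br
    groups = np.unique(cluster)

    t_boot = np.empty(B)
    for b in range(B):
        w = rng.choice([-1.0, 1.0], size=len(groups))        # Rademacher weights
        ustar = ur * w[np.searchsorted(groups, cluster)]
        ystar = X[:, coef] * b0 + Xr @ br + ustar             # data generated under H0
        bstar = XtX_inv @ X.T @ ystar
        Vstar = cluster_cov(X, ystar - X @ bstar, cluster, XtX_inv)
        t_boot[b] = (bstar[coef] - b0) / np.sqrt(Vstar[coef, coef])
    return np.mean(np.abs(t_boot) >= abs(t_obs))

# toy data: 10 clusters, intercept + one regressor with cluster random effects
rng = np.random.default_rng(2)
cl = np.repeat(np.arange(10), 30)
x = rng.standard_normal(300)
y = 0.5 * x + rng.standard_normal(10)[cl] + rng.standard_normal(300)
p = wild_cluster_boot_p(y, np.column_stack([np.ones(300), x]), cl)
print(f"wild cluster bootstrap p-value for beta_x = 0: {p:.3f}")
```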

4.
We consider the following problem. There is a structural equation of interest that contains an explanatory variable that theory predicts is endogenous. There are one or more instrumental variables that are credibly exogenous with regard to this structural equation, but which have limited explanatory power for the endogenous variable. Further, there are one or more potentially 'strong' instruments, which have much more explanatory power but which may not be exogenous. Hausman (1978) provided a test for the exogeneity of the second instrument when none of the instruments are weak. Here, we focus on how the standard Hausman test performs in the presence of weak instruments, using the Staiger–Stock asymptotics. It is natural to conjecture that the standard version of the Hausman test would be invalid in the weak instrument case, which we confirm. However, we provide a version of the Hausman test that is valid even in the presence of weak IV and illustrate how to implement the test in the presence of heteroskedasticity. We show that the situation we analyze occurs in several important economic examples. Our Monte Carlo experiments show that our procedure works relatively well in finite samples. We should note that our test is not consistent, although we believe that it is impossible to construct a consistent test with weak instruments.
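For reference, a sketch of the standard (strong-instrument) Hausman contrast between OLS and 2SLS that the paper shows to be invalid under weak instruments; the weak-instrument-robust version proposed in the paper is not reproduced here. Using the OLS error variance in both covariance matrices and a pseudo-inverse for the contrast are conventions assumed for this sketch.

```python
import numpy as np
from scipy import stats

def hausman_ols_vs_2sls(y, X, Z):
    """Classic Hausman contrast H = d' [V_iv - V_ols]^+ d, with d = b_2sls - b_ols.

    Valid only with strong instruments and homoskedastic errors; degrees of
    freedom taken as the rank of the covariance difference."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    b_ols = XtX_inv @ X.T @ y
    sigma2 = np.sum((y - X @ b_ols) ** 2) / (n - k)

    Pi = np.linalg.lstsq(Z, X, rcond=None)[0]       # first-stage coefficients
    Xhat = Z @ Pi
    XPX_inv = np.linalg.inv(X.T @ Xhat)             # (X' Pz X)^{-1}, Pz idempotent
    b_iv = XPX_inv @ Xhat.T @ y

    d = b_iv - b_ols
    V_diff = sigma2 * (XPX_inv - XtX_inv)
    H = float(d @ np.linalg.pinv(V_diff) @ d)
    df = np.linalg.matrix_rank(V_diff)
    return H, 1 - stats.chi2.cdf(H, df)

# toy example: one endogenous regressor, two fairly strong instruments, intercept
rng = np.random.default_rng(3)
n = 500
z = rng.standard_normal((n, 2))
v = rng.standard_normal(n)
x = z @ np.array([0.8, 0.5]) + v
y = 1.0 + 2.0 * x + 0.7 * v + rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
print(hausman_ols_vs_2sls(y, X, Z))
```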

5.
This note provides a warning against careless use of the generalized method of moments (GMM) with time series data. We show that if time series follow non-causal autoregressive processes, their lags are not valid instruments, and the GMM estimator is inconsistent. Moreover, endogeneity of the instruments may not be revealed by the J-test of overidentifying restrictions, which may itself be inconsistent and, in general, has low finite-sample power. Our explicit results pertain to a simple linear regression, but they can easily be generalized. Our empirical results indicate that non-causality is quite common among economic variables, making these problems highly relevant.

6.
Instrumental variable (IV) methods for regression are well established. More recently, methods have been developed for statistical inference when the instruments are weakly correlated with the endogenous regressor, so that estimators are biased and no longer asymptotically normally distributed. This paper extends such inference to the case where two separate samples are used to implement instrumental variables estimation. We also relax the restrictive assumptions of a homoskedastic error structure and equal moments of exogenous covariates across the two samples, commonly employed in the two-sample IV literature for strong IV inference. Monte Carlo experiments show good size properties of the proposed tests regardless of the strength of the instruments. We apply the proposed methods to two seminal empirical studies that adopt the two-sample IV framework.
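A minimal sketch of the basic two-sample 2SLS point estimator (first stage estimated in one sample, second stage in the other); the weak-instrument-robust and heteroskedasticity-robust tests developed in the paper are not shown, and the toy design is purely illustrative.

```python
import numpy as np

def ts2sls(y2, Z2, x1, Z1):
    """Two-sample 2SLS: estimate the first stage x ~ Z in sample 1, predict the
    endogenous regressor in sample 2, then run the second stage there."""
    pi_hat = np.linalg.lstsq(Z1, x1, rcond=None)[0]   # first stage from sample 1
    xhat2 = Z2 @ pi_hat                               # imputed regressor in sample 2
    X2 = np.column_stack([np.ones(len(y2)), xhat2])
    return np.linalg.lstsq(X2, y2, rcond=None)[0]     # structural intercept and slope

# toy example: the two samples share the instrument process but not the units
rng = np.random.default_rng(4)
n1, n2 = 400, 300
Z1 = np.column_stack([np.ones(n1), rng.standard_normal(n1)])
Z2 = np.column_stack([np.ones(n2), rng.standard_normal(n2)])
x1 = Z1 @ np.array([0.2, 0.9]) + rng.standard_normal(n1)
x2 = Z2 @ np.array([0.2, 0.9]) + rng.standard_normal(n2)   # unobserved in practice
y2 = 1.0 + 1.5 * x2 + rng.standard_normal(n2)
print(ts2sls(y2, Z2, x1, Z1))   # roughly [1.0, 1.5]
```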

7.
In two recent papers, Enders and Lee (2009) and Becker, Enders and Lee (2006) provide Lagrange multiplier and ordinary least squares de-trended unit root tests, and stationarity tests, respectively, which incorporate a Fourier approximation element in the deterministic component. Such an approach can prove useful in providing robustness against a variety of breaks in the deterministic trend function of unknown form and number. In this article, we generalize the unit root testing procedure based on local generalized least squares (GLS) de-trending proposed by Elliott, Rothenberg and Stock (1996) to allow for a Fourier approximation to the unknown deterministic component in the same way. We show that the resulting unit root tests possess good finite-sample size and power properties, and the test statistics have stable non-standard distributions, despite the curious result that their limiting null distributions exhibit asymptotic rank deficiency.
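A sketch of the idea of combining local GLS de-trending with a Fourier approximation: quasi-difference the data and a deterministic set containing a single-frequency sine/cosine pair, de-trend, and run an ADF-type regression on the de-trended series. The constant-only case, the value of the local-to-unity parameter and the lag order are illustrative assumptions, and the statistic must be compared with the test's own non-standard critical values, not the Dickey–Fuller ones.

```python
import numpy as np

def fourier_gls_adf(y, k=1, cbar=-7.0, p=1):
    """GLS-detrend y on [constant, sin, cos] Fourier terms, then run an ADF
    regression (no deterministics) on the detrended series.

    k    : single Fourier frequency
    cbar : local-to-unity parameter of the quasi-difference (illustrative)
    p    : number of lagged differences in the ADF regression
    Returns the ADF t-statistic on the lagged detrended level."""
    T = len(y)
    t = np.arange(1, T + 1)
    Z = np.column_stack([np.ones(T),
                         np.sin(2 * np.pi * k * t / T),
                         np.cos(2 * np.pi * k * t / T)])
    a = 1 + cbar / T
    yq = np.concatenate([[y[0]], y[1:] - a * y[:-1]])      # quasi-differences
    Zq = np.vstack([Z[0], Z[1:] - a * Z[:-1]])
    beta = np.linalg.lstsq(Zq, yq, rcond=None)[0]
    u = y - Z @ beta                                       # GLS-detrended series

    du = np.diff(u)
    rows = np.arange(p, len(du))
    X = np.column_stack([u[p:-1]] + [du[rows - j] for j in range(1, p + 1)])
    dep = du[rows]
    b = np.linalg.lstsq(X, dep, rcond=None)[0]
    e = dep - X @ b
    s2 = e @ e / (len(dep) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
    return b[0] / se                                       # t-statistic on u_{t-1}

rng = np.random.default_rng(5)
y = np.cumsum(rng.standard_normal(200))                    # pure random walk (H0 true)
print(f"Fourier-GLS ADF statistic: {fourier_gls_adf(y):.3f}")
```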

8.
In this paper, we use Monte Carlo (MC) testing techniques for testing linearity against smooth transition models. The MC approach allows us to introduce a new test that differs in two respects from the tests existing in the literature. First, the test is exact in the sense that the probability of rejecting the null when it is true is always less than or equal to the nominal size of the test. Second, the test is not based on an auxiliary regression obtained by replacing the model under the alternative by approximations based on a Taylor expansion. We also apply MC testing methods for size-correcting the test proposed by Luukkonen, Saikkonen and Teräsvirta (Biometrika, Vol. 75, 1988, p. 491). The results show that the power loss implied by the auxiliary regression-based test is non-existent compared with a supremum-based test but is more substantial when compared with the three other tests under consideration.
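For comparison, a sketch of the auxiliary-regression LM-type linearity test in the spirit of Luukkonen, Saikkonen and Teräsvirta (1988), based on a third-order Taylor expansion of the transition function. The exact MC variant and the size correction discussed above are not reproduced, and the choice of transition variable and AR order are assumptions of this sketch.

```python
import numpy as np
from scipy import stats

def lst_linearity_lm(y, p=1, d=1):
    """LM-type linearity test against an STAR alternative via the third-order
    Taylor auxiliary regression (one common variant of the LST 1988 test).

    y : series; p : AR order under H0; d : delay of the transition variable.
    Returns (LM statistic, asymptotic chi-square p-value with 3p dof)."""
    y = np.asarray(y, dtype=float)
    m = max(p, d)
    T = len(y) - m
    dep = y[m:]
    lags = np.column_stack([y[m - j:len(y) - j] for j in range(1, p + 1)])
    s = y[m - d:len(y) - d]                      # transition variable y_{t-d}
    X0 = np.column_stack([np.ones(T), lags])     # linear AR(p) under H0
    X1 = np.column_stack([X0] + [lags * s[:, None] ** r for r in (1, 2, 3)])

    def ssr(X):
        b = np.linalg.lstsq(X, dep, rcond=None)[0]
        e = dep - X @ b
        return e @ e

    ssr0, ssr1 = ssr(X0), ssr(X1)
    lm = T * (ssr0 - ssr1) / ssr0
    return lm, 1 - stats.chi2.cdf(lm, 3 * p)

# example: linear AR(1) data, so the test should not reject on average
rng = np.random.default_rng(6)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()
print(lst_linearity_lm(y, p=1, d=1))
```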

9.
Members' shares in co-operative entities are financial instruments with particular characteristics. In this paper we analyse the relation between firm leverage and systematic risk to provide empirical evidence on the economic substance of members' shares in co-operatives. We have studied the characteristics of members' shares in six European countries: France, Germany, Italy, Portugal, Spain and the United Kingdom. We have also conducted tests on co-operatives in these countries over the period 1993–2005. The study reports that, in global terms, the economic substance of the redeemable part of equity in co-operatives is not the same across countries. Therefore, if accounting standard setters want to develop a global standard for co-operatives, a recommendation derived from this study would be to follow a probabilistic model to classify the redeemable part of co-operative financial instruments where the entity does not have the unconditional right to refuse redemption, or to report this part as an intermediate item with characteristics of both debt and equity.

10.
The presence of weak instruments translates into a nearly singular problem in a control function representation. A norm-type regularization is therefore proposed to implement the 2SLS estimation and address the weak instrument problem. The regularization, with a regularized parameter O(n), allows us to obtain the Rothenberg (1984) type of higher-order approximation of the 2SLS estimator in the weak instrument asymptotic framework. The proposed regularized parameter yields the regularized concentration parameter O(n), which is used as a standardizing factor in the higher-order approximation. We also show that the proposed regularization consequently reduces the finite-sample bias. A number of existing estimators that address finite-sample bias in the presence of weak instruments, especially Fuller's limited information maximum likelihood estimator, are compared with our proposed estimator in a simple Monte Carlo exercise.
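The specific norm in this abstract did not survive extraction. As one common illustration of regularizing a nearly singular first-stage problem, the sketch below uses a ridge (Tikhonov) penalty on the 2SLS projection matrix; this is an assumed stand-in, not necessarily the paper's estimator, and the penalty value is arbitrary.

```python
import numpy as np

def ridge_2sls(y, x, Z, lam):
    """2SLS with a ridge-regularized projection P_lam = Z (Z'Z + lam I)^{-1} Z'.

    A sketch of regularized IV under weak instruments (scalar endogenous
    regressor, variables assumed demeaned); the penalty form and the choice
    of lam are illustrative, not the paper's proposal."""
    ZtZ = Z.T @ Z
    P = Z @ np.linalg.solve(ZtZ + lam * np.eye(Z.shape[1]), Z.T)
    xP = P @ x
    return float((xP @ y) / (xP @ x))            # (x' P x)^{-1} x' P y

# weak-instrument toy design: many instruments, tiny first-stage coefficients
rng = np.random.default_rng(7)
n, m = 200, 10
Z = rng.standard_normal((n, m))
v = rng.standard_normal(n)
x = Z @ np.full(m, 0.05) + v
y = 1.0 * x + 0.8 * v + rng.standard_normal(n)
print(ridge_2sls(y - y.mean(), x - x.mean(), Z, lam=5.0))
```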

11.
In this article, we investigate the behaviour of a number of methods for estimating the co-integration rank in VAR systems characterized by heteroskedastic innovation processes. In particular, we compare the efficacy of the most widely used information criteria, such as the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), with the commonly used sequential approach of Johansen [Likelihood-based Inference in Cointegrated Vector Autoregressive Models (1996)] based around the use of either asymptotic or wild bootstrap-based likelihood ratio type tests. Complementing recent work done for the latter in Cavaliere, Rahbek and Taylor [Econometric Reviews (2014) forthcoming], we establish the asymptotic properties of the procedures based on information criteria in the presence of heteroskedasticity (conditional or unconditional) of a quite general and unknown form. The relative finite-sample properties of the different methods are investigated by means of a Monte Carlo simulation study. For the simulation DGPs considered in the analysis, we find that the BIC-based procedure and the bootstrap sequential test procedure deliver the best overall performance in terms of their frequency of selecting the correct co-integration rank across different values of the co-integration rank, sample size, stationary dynamics and models of heteroskedasticity. Of these, the wild bootstrap procedure is perhaps the more reliable overall as it avoids a significant tendency seen in the BIC-based method to over-estimate the co-integration rank in relatively small sample sizes.

12.
Survey statisticians use either approximate or optimisation-based methods to stratify finite populations. Examples of the former are the cumrootf (Dalenius & Hodges, 1957) and geometric (Gunning & Horgan, 2004) methods, while examples of the latter are the Sethi (1963) and Kozak (2004) algorithms. The approximate procedures result in inflexible stratum boundaries; this lack of flexibility results in non-optimal boundaries. On the other hand, optimisation-based methods provide stratum boundaries that can simultaneously account for (i) a chosen allocation scheme, (ii) the overall sample size or required reliability of the estimator of a studied parameter and (iii) the presence or absence of a take-all stratum. Given these additional conditions, optimisation-based methods will result in optimal boundaries. The only disadvantage of these methods is their complexity. However, in the second decade of the 21st century, this complexity does not actually pose a problem. We illustrate how these two groups of methods differ by comparing their efficiency for two artificial populations and a real population. Our final point is that statistical offices should prefer optimisation-based methods over approximate stratification methods; such a decision will help them either save much public money or, if funds are already allocated to a survey, obtain more precise estimates of national statistics.
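A sketch of the cumulative-root-frequency (Dalenius and Hodges) rule mentioned above, which also illustrates why approximate methods yield inflexible boundaries: they depend only on the histogram of the study variable, not on the allocation scheme or target precision. The number of histogram classes is a tuning assumption.

```python
import numpy as np

def cum_root_f_boundaries(x, n_strata, n_classes=100):
    """Approximate stratum boundaries by the cumulative-root-frequency rule:
    split the cumulative sqrt(frequency) scale into n_strata equal intervals."""
    freq, edges = np.histogram(x, bins=n_classes)
    csf = np.cumsum(np.sqrt(freq))
    targets = csf[-1] * np.arange(1, n_strata) / n_strata
    idx = np.searchsorted(csf, targets)
    return edges[idx + 1]                          # interior boundaries only

# example: a skewed population (e.g. establishment sizes)
rng = np.random.default_rng(8)
pop = rng.lognormal(mean=10, sigma=1.2, size=5000)
print(cum_root_f_boundaries(pop, n_strata=4))
```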

13.
Incomplete correlated 2 × 2 tables are common in some infectious disease studies and two-step treatment studies in which one of the comparative measures of interest is the risk ratio (RR). This paper investigates two-stage tests of whether K RRs are homogeneous and whether the common RR equals an arbitrary prespecified constant. On the assumption that the K RRs are equal, this paper proposes four asymptotic test statistics (the Wald-type, logarithmic-transformation-based, score-type and likelihood ratio statistics) to test whether the common RR equals a prespecified value. Sample size formulae based on the hypothesis-testing method and the confidence-interval method are proposed for the second stage of the test. Simulation results show that sample sizes based on the score-type test and the logarithmic-transformation-based test are more accurate in achieving the predesigned power than those based on the Wald-type test. The score-type test performs best of the four tests in terms of type I error rate. A real example is used to illustrate the proposed methods.
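As a simplified illustration, the sketch below computes Wald-type and logarithmic-transformation statistics for H0: RR = r0 from a single complete 2 × 2 table using delta-method variances; the paper's statistics for K incomplete correlated tables, and its score and likelihood ratio versions, are more involved and are not reproduced.

```python
import numpy as np
from scipy import stats

def rr_tests(x1, n1, x2, n2, r0=1.0):
    """Wald-type and log-transformation tests of H0: RR = r0 for one complete
    2x2 table (x1 events out of n1 versus x2 events out of n2)."""
    p1, p2 = x1 / n1, x2 / n2
    rr = p1 / p2
    var_log = (1 - p1) / (n1 * p1) + (1 - p2) / (n2 * p2)   # delta-method var of log RR
    z_log = (np.log(rr) - np.log(r0)) / np.sqrt(var_log)
    z_wald = (rr - r0) / (rr * np.sqrt(var_log))            # delta method on the RR scale
    pv = lambda z: 2 * (1 - stats.norm.cdf(abs(z)))
    return {"log": (z_log, pv(z_log)), "wald": (z_wald, pv(z_wald))}

print(rr_tests(x1=30, n1=100, x2=20, n2=120, r0=1.0))
```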

14.
This paper studies the ability of a general class of habit-based asset pricing models to match the conditional moment restrictions implied by asset pricing theory. We treat the functional form of the habit as unknown, and estimate it along with the rest of the model's finite dimensional parameters. Using quarterly data on consumption growth, asset returns and instruments, our empirical results indicate that the estimated habit function is nonlinear, that habit formation is better described as internal rather than external, and that the estimated time-preference parameter and the power utility parameter are sensible. In addition, the estimated habit function generates a positive stochastic discount factor (SDF) proxy and performs well in explaining cross-sectional stock return data. We find that an internal habit SDF proxy can explain a cross-section of size and book-market sorted portfolio equity returns better than (i) the Fama and French (1993) three-factor model, (ii) the Lettau and Ludvigson (2001b) scaled consumption CAPM model, (iii) an external habit SDF proxy, (iv) the classic CAPM, and (v) the classic consumption CAPM.

15.
This paper introduces a rank-based test for the instrumental variables regression model that dominates the Anderson–Rubin test in terms of finite sample size and asymptotic power in certain circumstances. The test has correct size for any distribution of the errors with weak or strong instruments. The test has noticeably higher power than the Anderson–Rubin test when the error distribution has thick tails and comparable power otherwise. Like the Anderson–Rubin test, the rank tests considered here perform best, relative to other available tests, in exactly identified models.
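For context, a sketch of the classical Anderson–Rubin statistic that the rank-based test is benchmarked against; the rank-based analogue itself is not shown. The F reference distribution assumes homoskedastic errors, and the toy design is illustrative.

```python
import numpy as np
from scipy import stats

def anderson_rubin(y, X, Z, beta0):
    """Anderson-Rubin statistic for H0: beta = beta0 in y = X beta + u with
    instruments Z; weak-instrument robust, F(k, n - k) reference under
    homoskedastic (normal) errors."""
    n, k = Z.shape
    u0 = y - X @ np.atleast_1d(beta0)
    Pu = Z @ np.linalg.lstsq(Z, u0, rcond=None)[0]   # projection of u0 on Z
    num = u0 @ Pu                                    # u0' Pz u0
    den = u0 @ u0 - num                              # u0' Mz u0
    ar = (n - k) / k * num / den
    return ar, 1 - stats.f.cdf(ar, k, n - k)

# toy just-identified design with a weak-ish instrument; H0 is true here
rng = np.random.default_rng(9)
n = 250
z = rng.standard_normal(n)
v = rng.standard_normal(n)
x = 0.3 * z + v
y = 1.0 * x + 0.6 * v + rng.standard_normal(n)
print(anderson_rubin(y, x[:, None], z[:, None], beta0=1.0))
```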

16.
We propose a unit root test for panels with cross-sectional dependency. We allow a general dependency structure among the innovations that generate the data for each of the cross-sectional units. Each unit may have a different sample size, and therefore unbalanced panels are also permitted in our framework. Yet the test is asymptotically normal and does not require any tabulation of critical values. Our test is based on nonlinear IV estimation of the usual augmented Dickey–Fuller type regression for each cross-sectional unit, using as instruments nonlinear transformations of the lagged levels. The actual test statistic is simply defined as a standardized sum of individual IV t-ratios. We show in the paper that such a standardized sum of individual IV t-ratios has a limiting normal distribution as long as the panels have a large number of individual time series observations and are asymptotically balanced in a very weak sense. We may have the number of cross-sectional units arbitrarily small or large. In particular, the usual sequential asymptotics, upon which most of the available asymptotic theories for panel unit root models heavily rely, are not required. Finite sample performance of our test is examined via a set of simulations and compared with that of other commonly used panel unit root tests. Our test generally performs better than the existing tests in terms of both finite sample size and power. We apply our nonlinear IV method to test for the purchasing power parity hypothesis in panels.
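A rough sketch of the nonlinear IV idea: for each unit, estimate a Dickey–Fuller-type regression by IV, instrumenting the lagged level with an integrable transformation of itself, and standardize the sum of the resulting t-ratios by the square root of the number of units. The ADF(0) specification, the particular transformation and its scaling constant are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np
from scipy import stats

def nonlinear_iv_unit_root_t(y, K=3.0):
    """Per-unit IV t-ratio for the unit-root null, instrumenting the lagged
    level with y_{t-1} * exp(-c |y_{t-1}|), c = K / (sqrt(T) * sd(diff y))."""
    dy, ylag = np.diff(y), y[:-1]
    T = len(dy)
    c = K / (np.sqrt(T) * np.std(dy))
    w = ylag * np.exp(-c * np.abs(ylag))          # nonlinear (integrable) instrument
    alpha = (w @ dy) / (w @ ylag)                 # IV estimate of the DF coefficient
    resid = dy - alpha * ylag
    sigma = np.sqrt(resid @ resid / (T - 1))
    return alpha * (w @ ylag) / (sigma * np.sqrt(w @ w))

def panel_nonlinear_iv_test(Y):
    """Standardized sum of individual IV t-ratios; asymptotically N(0, 1)
    under the joint unit-root null (left-tailed test)."""
    t = np.array([nonlinear_iv_unit_root_t(col) for col in Y.T])
    S = t.sum() / np.sqrt(len(t))
    return S, stats.norm.cdf(S)

# example: N = 10 independent random walks, T = 150 (the null is true)
rng = np.random.default_rng(10)
Y = np.cumsum(rng.standard_normal((150, 10)), axis=0)
print(panel_nonlinear_iv_test(Y))
```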

17.
In this paper we propose a simulation-based technique to investigate the finite sample performance of likelihood ratio (LR) tests for the nonlinear restrictions that arise when a class of forward-looking (FL) models typically used in monetary policy analysis is evaluated with vector autoregressive (VAR) models. We consider 'one-shot' tests to evaluate the FL model under the rational expectations hypothesis and sequences of tests obtained under the adaptive learning hypothesis. The analysis is based on a comparison between the unrestricted and restricted VAR likelihoods, and the p-values associated with the LR test statistics are computed by Monte Carlo simulation. We also address the case where the variables of the FL model can be approximated as non-stationary cointegrated processes. Application to the 'hybrid' New Keynesian Phillips Curve (NKPC) in the euro area shows that (i) the forward-looking component of inflation dynamics is much larger than the backward-looking component and (ii) the sequence of restrictions implied by the cointegrated NKPC under learning dynamics is not rejected over the monitoring period 1984–2005.

18.
Bootstrap-based methods for bias-correcting the first-stage parameter estimates used in some recently developed bootstrap implementations of co-integration rank tests are investigated. The procedure constructs estimates of the bias in the original parameter estimates by using the average bias in the corresponding parameter estimates taken across a large number of auxiliary bootstrap replications. A number of possible implementations of this procedure are discussed and concrete recommendations made on the basis of finite sample performance evaluated by Monte Carlo simulation methods. The results show that bootstrap-based bias-correction methods can significantly improve the small sample performance of the bootstrap co-integration rank tests.
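A minimal sketch of the bootstrap bias-correction principle in a scalar AR(1) setting: the bias of the original estimate is approximated by the average deviation of the auxiliary bootstrap replications from it, and then subtracted off. The recursive residual bootstrap and the AR(1) example are illustrative simplifications of the first-stage VAR parameter setting in the paper.

```python
import numpy as np

def ar1_ols(y):
    """OLS estimate of rho in y_t = rho * y_{t-1} + e_t (no intercept, sketch)."""
    return float(y[:-1] @ y[1:] / (y[:-1] @ y[:-1]))

def bootstrap_bias_corrected_ar1(y, B=499, seed=0):
    """Bias-correct rho_hat using the average bias across recursive residual
    bootstrap replications: rho_bc = rho_hat - (mean(rho*_b) - rho_hat)."""
    rng = np.random.default_rng(seed)
    rho = ar1_ols(y)
    resid = y[1:] - rho * y[:-1]
    resid = resid - resid.mean()
    boot = np.empty(B)
    for b in range(B):
        e = rng.choice(resid, size=len(y) - 1, replace=True)
        ystar = np.empty(len(y))
        ystar[0] = y[0]
        for t in range(1, len(y)):                  # recursive bootstrap sample
            ystar[t] = rho * ystar[t - 1] + e[t - 1]
        boot[b] = ar1_ols(ystar)
    return rho, 2 * rho - boot.mean()               # raw and bias-corrected estimates

rng = np.random.default_rng(11)
y = np.zeros(60)
for t in range(1, 60):
    y[t] = 0.9 * y[t - 1] + rng.standard_normal()   # small sample, rho close to one
print(bootstrap_bias_corrected_ar1(y))
```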

19.
This paper derives a procedure for simulating continuous non-normal distributions with specified L-moments and L-correlations in the context of power method polynomials of order three. It is demonstrated that the proposed procedure has computational advantages over the traditional product-moment procedure in terms of solving for intermediate correlations. Simulation results also demonstrate that the proposed L-moment-based procedure is an attractive alternative to the traditional procedure when distributions with more severe departures from normality are considered. Specifically, estimates of L-skew and L-kurtosis are superior to the conventional estimates of skew and kurtosis in terms of both relative bias and relative standard error. Further, the L-correlation is also demonstrated to be less biased and more stable than the Pearson correlation. It is also shown how the proposed L-moment-based procedure can be extended to the larger class of power method distributions associated with polynomials of order five.
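A sketch of how sample L-moments, L-skew and L-kurtosis can be computed from probability-weighted moments of the ordered sample; the power-method polynomial fitting and the L-correlation machinery of the paper are not reproduced here.

```python
import numpy as np

def sample_l_moments(x):
    """Unbiased sample L-moments l1..l4 plus L-skew (tau3) and L-kurtosis (tau4),
    computed from probability-weighted moments of the order statistics."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2                # l1, l2, L-skew, L-kurtosis

# example: the standard normal has L-skew 0 and L-kurtosis about 0.1226
rng = np.random.default_rng(12)
print(sample_l_moments(rng.standard_normal(100_000)))
```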

20.
With cointegration tests often being oversized under time-varying error variance, it is possible, if not likely, to confuse error variance non-stationarity with cointegration. This paper takes an instrumental variable (IV) approach to establish individual-unit test statistics for no cointegration that are robust to variance non-stationarity. The sign of a fitted departure from long-run equilibrium is used as an instrument when estimating an error-correction model. The resulting IV-based test is shown to follow a chi-square limiting null distribution irrespective of the variance pattern of the data-generating process. In spite of this, the test proposed here has, unlike previous work relying on instrumental variables, competitive local power against sequences of local alternatives in 1/T-neighbourhoods of the null. The standard limiting null distribution motivates using the single-unit tests in a multiple-testing approach for cointegration in multi-country data sets by combining p-values from individual units. Simulations suggest good performance of the single-unit and multiple testing procedures under various plausible designs of cross-sectional correlation and cross-unit cointegration in the data. An application to the equilibrium relationship between short- and long-term interest rates illustrates the dramatic differences between the results of robust and non-robust tests.
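A heavily simplified sketch of the sign-instrument idea: estimate the error-correction coefficient by IV, using the sign of the fitted equilibrium error as the instrument, and compare the resulting Wald statistic with a chi-square(1) distribution. Deterministic terms, lag augmentation and the exact standardization of the paper are omitted, and the toy design with a variance shift is purely illustrative.

```python
import numpy as np
from scipy import stats

def sign_iv_no_cointegration_test(y, x):
    """Fit the long-run regression y_t = a + b x_t by OLS, form the equilibrium
    error ecm_t, then estimate
        dy_t = alpha * ecm_{t-1} + gamma * dx_t + e_t
    by IV, instrumenting ecm_{t-1} with its sign.  Returns the Wald statistic
    for H0: alpha = 0 and a chi-square(1) p-value."""
    T = len(y)
    D = np.column_stack([np.ones(T), x])
    ecm = y - D @ np.linalg.lstsq(D, y, rcond=None)[0]
    dy, dx, e1 = np.diff(y), np.diff(x), ecm[:-1]
    X = np.column_stack([e1, dx])                   # regressors
    Z = np.column_stack([np.sign(e1), dx])          # sign of e1 instruments e1
    ZX_inv = np.linalg.inv(Z.T @ X)
    beta = ZX_inv @ Z.T @ dy                        # just-identified IV estimate
    u = dy - X @ beta
    V = (u @ u / (len(dy) - 2)) * ZX_inv @ Z.T @ Z @ ZX_inv.T
    wald = beta[0] ** 2 / V[0, 0]
    return wald, 1 - stats.chi2.cdf(wald, 1)

# example: two cointegrated series with a one-time shift in the error variance
rng = np.random.default_rng(13)
T = 300
x = np.cumsum(rng.standard_normal(T))
scale = np.where(np.arange(T) < T // 2, 1.0, 3.0)   # variance non-stationarity
y = 0.5 + 1.0 * x + scale * rng.standard_normal(T) * 0.7
print(sign_iv_no_cointegration_test(y, x))
```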
