Similar Documents
20 similar documents found (search time: 15 ms)
1.
Ordinary least squares estimation of an impulse‐indicator coefficient is inconsistent, but its variance can be consistently estimated. Although the ratio of the inconsistent estimator to its standard error has a t‐distribution, that test is inconsistent: one solution is to form an index of indicators. We provide Monte Carlo evidence that including a plethora of indicators need not distort model selection, permitting the use of many dummies in a general‐to‐specific framework. Although White's (1980) heteroskedasticity test is incorrectly sized in that context, we suggest an easy alteration. Finally, a possible modification to impulse ‘intercept corrections’ is considered.
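As an illustrative sketch (not the paper's own code), the t‐ratio on a single impulse indicator can be computed by appending an observation‐specific dummy to an OLS regression; the data‐generating process below is invented for the example:

```python
import numpy as np

def impulse_indicator_t(y, X, t_star):
    """t-ratio of an impulse indicator: a dummy equal to 1 only at
    observation t_star. The coefficient estimate is inconsistent
    (it converges to the single disturbance at t_star), but its
    variance is consistently estimated, so the t-ratio is usable."""
    n = len(y)
    d = np.zeros(n)
    d[t_star] = 1.0
    Z = np.column_stack([X, d])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    sigma2 = resid @ resid / (n - Z.shape[1])
    cov = sigma2 * np.linalg.inv(Z.T @ Z)
    return beta[-1] / np.sqrt(cov[-1, -1])

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)
t_stat = impulse_indicator_t(y, X, t_star=100)  # no break at t = 100
```

With no break in the data, the t‐ratio behaves like a draw from a t‐distribution under the null.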

2.
This article considers the problem of testing for cross‐section independence in limited dependent variable panel data models. It derives a Lagrangian multiplier (LM) test and shows that, in terms of the generalized residuals of Gourieroux et al. (1987), it reduces to the LM test of Breusch and Pagan (1980). Because of the tendency of the LM test to over‐reject in panels with large N (cross‐section dimension), we also consider the application of the cross‐section dependence (CD) test proposed by Pesaran (2004). In Monte Carlo experiments it emerges that for most combinations of N and T the CD test is correctly sized, whereas the validity of the LM test requires T (time series dimension) to be quite large relative to N. We illustrate the cross‐sectional independence tests with an application to a probit panel data model of roll‐call votes in the US Congress and find that the votes display a significant degree of cross‐section dependence.
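A minimal sketch of the CD statistic the abstract mentions, computed from a T × N matrix of (generalized) residuals; under cross‐section independence it is asymptotically standard normal. The simulated data are purely illustrative:

```python
import numpy as np

def pesaran_cd(resid):
    """Pesaran (2004) CD statistic from a (T x N) residual array:
    CD = sqrt(2T / (N(N-1))) * sum_{i<j} rho_hat_ij, where rho_hat_ij
    is the pairwise sample correlation of the residuals of units i, j.
    Asymptotically N(0, 1) under cross-section independence."""
    T, N = resid.shape
    corr = np.corrcoef(resid, rowvar=False)
    iu = np.triu_indices(N, k=1)
    return np.sqrt(2.0 * T / (N * (N - 1))) * corr[iu].sum()

rng = np.random.default_rng(1)
u = rng.normal(size=(120, 10))   # independent units: null is true
cd_stat = pesaran_cd(u)
```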

3.
The problem of testing non‐nested regression models that include lagged values of the dependent variable as regressors is discussed. It is argued that it is essential to test for error autocorrelation if ordinary least squares and the associated J and F tests are to be used. A heteroskedasticity‐robust joint test against a combination of the artificial alternatives used for autocorrelation and non‐nested hypothesis tests is proposed. Monte Carlo results indicate that implementing this joint test using a wild bootstrap method leads to a well‐behaved procedure and gives better control of finite sample significance levels than asymptotic critical values.
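For orientation, the basic (non‐robust) J test works as sketched below: fit the rival model, then test the significance of its fitted values inside the null model. The abstract's proposal additionally combines this with an autocorrelation alternative and uses a wild bootstrap, neither of which is shown here; the data are invented:

```python
import numpy as np

def ols(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

def j_test(y, X1, X2):
    """Davidson-MacKinnon J test of H0: model 1 (regressors X1)
    against a non-nested alternative with regressors X2. Fit the
    alternative, add its fitted values to model 1; the t-ratio on
    those fitted values is the J statistic (approx. N(0,1) under H0)."""
    b2, _ = ols(y, X2)
    Z = np.column_stack([X1, X2 @ b2])
    b, resid = ols(y, Z)
    n, k = Z.shape
    sigma2 = resid @ resid / (n - k)
    se = np.sqrt(sigma2 * np.linalg.inv(Z.T @ Z)[-1, -1])
    return b[-1] / se

rng = np.random.default_rng(2)
n = 300
x1, x2 = rng.normal(size=(2, n))
y = 1.0 + 2.0 * x1 + rng.normal(size=n)   # model 1 is the true model
X1 = np.column_stack([np.ones(n), x1])
X2 = np.column_stack([np.ones(n), x2])
j_stat = j_test(y, X1, X2)
```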

4.
We provide an accessible introduction to graph‐theoretic methods for causal analysis. Building on the work of Swanson and Granger (Journal of the American Statistical Association, Vol. 92, pp. 357–367, 1997), and generalizing to a larger class of models, we show how to apply graph‐theoretic methods to selecting the causal order for a structural vector autoregression (SVAR). We evaluate the PC (causal search) algorithm in a Monte Carlo study. The PC algorithm uses tests of conditional independence to select among the possible causal orders – or at least to reduce the admissible causal orders to a narrow equivalence class. Our findings suggest that graph‐theoretic methods may prove to be a useful tool in the analysis of SVARs.
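The conditional‐independence tests at the heart of the PC algorithm are, for Gaussian data, typically implemented as Fisher z‐tests of partial correlations; a minimal sketch with an invented three‐variable chain:

```python
import numpy as np

def partial_corr(data, i, j, cond):
    """Partial correlation of columns i and j given the columns in
    cond, via the inverse of the relevant correlation submatrix."""
    idx = [i, j] + list(cond)
    P = np.linalg.inv(np.corrcoef(data[:, idx], rowvar=False))
    return -P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])

def fisher_z_stat(r, n, k):
    """Fisher z-transform statistic for H0: partial corr = 0, with n
    observations and k conditioning variables; approx. N(0, 1)."""
    z = 0.5 * np.log((1 + r) / (1 - r))
    return np.sqrt(n - k - 3) * z

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)
y = x + rng.normal(size=n)
z = y + rng.normal(size=n)          # causal chain x -> y -> z
data = np.column_stack([x, y, z])
r_xz_given_y = partial_corr(data, 0, 2, [1])   # should be near zero
stat = fisher_z_stat(r_xz_given_y, n, k=1)
```

In the chain, x and z are dependent unconditionally but independent given y, which is exactly the pattern the PC algorithm exploits to prune edges.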

5.
We re‐examine studies of cross‐country growth regressions by Levine and Renelt (American Economic Review, Vol. 82, 1992, pp. 942–963) and Sala‐i‐Martin (American Economic Review, Vol. 87, 1997a, pp. 178–183; Economics Department, Columbia University, 1997b). In a realistic Monte Carlo experiment, their variants of Edward Leamer's extreme‐bounds analysis are compared with a cross‐sectional version of the general‐to‐specific search methodology associated with the LSE approach to econometrics. Levine and Renelt's method has low size and low power, while Sala‐i‐Martin's method has high size and high power. The general‐to‐specific methodology is shown to have near nominal size and high power. Sala‐i‐Martin's method and the general‐to‐specific method are then applied to the actual data from Sala‐i‐Martin's original study.
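A stylized version of extreme‐bounds analysis (a sketch, not Levine and Renelt's exact protocol): re‐estimate the regression over many control subsets and track the extreme ends of the ±2 standard‐error intervals for the focus coefficient. All data below are invented:

```python
import numpy as np
from itertools import combinations

def extreme_bounds(y, focus, controls, k=2):
    """Leamer-style extreme bounds: regress y on the focus variable
    plus every size-k subset of controls, and record the extreme
    lower and upper ends of beta_hat +/- 2*se across all runs.
    The focus variable is called 'robust' if [lo, hi] has one sign."""
    lo, hi = np.inf, -np.inf
    n = len(y)
    for subset in combinations(range(controls.shape[1]), k):
        Z = np.column_stack([np.ones(n), focus, controls[:, subset]])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        s2 = resid @ resid / (n - Z.shape[1])
        se = np.sqrt(s2 * np.linalg.inv(Z.T @ Z)[1, 1])
        lo = min(lo, beta[1] - 2 * se)
        hi = max(hi, beta[1] + 2 * se)
    return lo, hi

rng = np.random.default_rng(4)
n = 200
focus = rng.normal(size=n)
controls = rng.normal(size=(n, 5))      # irrelevant controls
y = 1.0 + 0.8 * focus + rng.normal(size=n)
lo, hi = extreme_bounds(y, focus, controls)
```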

6.
We propose new information criteria for impulse response function matching estimators (IRFMEs). These estimators yield sampling distributions of the structural parameters of dynamic stochastic general equilibrium (DSGE) models by minimizing the distance between sample and theoretical impulse responses. First, we propose an information criterion to select only the responses that produce consistent estimates of the true but unknown structural parameters: the Valid Impulse Response Selection Criterion (VIRSC). The criterion is especially useful for mis-specified models. Second, we propose a criterion to select the impulse responses that are most informative about DSGE model parameters: the Relevant Impulse Response Selection Criterion (RIRSC). These criteria can be used in combination to select the subset of valid impulse response functions with minimal dimension that yields asymptotically efficient estimators. The criteria are general enough to apply to impulse responses estimated by VARs, local projections, and simulation methods. We show that the use of our criteria significantly affects estimates and inference about key parameters of two well-known new Keynesian DSGE models. Monte Carlo evidence indicates that the criteria yield gains in terms of finite sample bias as well as offering test statistics whose behavior is better approximated by first-order asymptotic theory. Thus, our criteria improve existing methods used to implement IRFMEs.
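The core of an impulse response function matching estimator is a minimum‐distance step; the toy example below (an invented AR(1) "model" whose IRF at horizon h is θ^h, with grid search standing in for a proper optimizer) is only meant to fix ideas, not to represent the paper's estimators or criteria:

```python
import numpy as np

def irf_match(irf_hat, W, irf_model, grid):
    """Minimum-distance step of IRF matching: choose the structural
    parameter minimizing (g(theta) - g_hat)' W (g(theta) - g_hat),
    where g_hat stacks the sample impulse responses."""
    def dist(theta):
        d = irf_model(theta) - irf_hat
        return d @ W @ d
    return min(grid, key=dist)

rng = np.random.default_rng(7)
horizons = np.arange(8)
theta0 = 0.6                                              # true value
irf_hat = theta0 ** horizons + 0.01 * rng.normal(size=8)  # "sample" IRF
grid = np.linspace(0.0, 0.95, 96)                         # step 0.01
theta_hat = irf_match(irf_hat, np.eye(8), lambda th: th ** horizons, grid)
```

The selection criteria in the abstract decide *which* responses enter `irf_hat`; the minimization itself is unchanged.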

7.
8.
This paper addresses the concept of multicointegration in a panel data framework and builds upon the panel data cointegration procedures developed in Pedroni [Econometric Theory (2004), Vol. 20, pp. 597–625]. When individuals are cross‐section independent, or when cross‐section dependence can be removed by cross‐section demeaning, our approach can be applied to the wider framework of mixed I(2) and I(1) stochastic processes. The paper also deals with the issue of cross‐section dependence using approximate common‐factor models. Finite sample performance is investigated through Monte Carlo simulations. Finally, we illustrate the use of the procedure by investigating the relationship between inventories, sales and production for a panel of US industries.

9.
This paper proposes two new panel unit root tests based on the truncated product method of Zaykin et al. (2002). The first assumes constant correlation between P‐values and the second uses a sieve bootstrap to allow for general forms of cross‐section dependence in the panel units. Monte Carlo simulation shows that both tests have reasonably good size and are powerful even when some of the P‐values are very large. The proposed tests are applied to a panel of real GDP and inflation density forecasts, resulting in evidence that professional forecasters may not update their forecast precision in an optimal Bayesian way.
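The underlying truncated product statistic is simple: multiply together only the P‐values below a truncation point τ. A sketch of the i.i.d. core (the paper's tests replace the independent‐uniform null with a constant‐correlation or sieve‐bootstrap null, which is not shown):

```python
import numpy as np

def truncated_product(pvals, tau=0.05, n_sim=20000, seed=0):
    """Zaykin et al. (2002) truncated product method: the statistic is
    W = product of all p-values <= tau. Its null distribution under
    independent uniform p-values is approximated by simulation here,
    returning a combined p-value."""
    pvals = np.asarray(pvals)
    w_obs = np.prod(np.where(pvals <= tau, pvals, 1.0))
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=(n_sim, len(pvals)))
    w_sim = np.prod(np.where(u <= tau, u, 1.0), axis=1)
    return np.mean(w_sim <= w_obs)

# two strong rejections among five otherwise large p-values
p_combined = truncated_product([0.001, 0.02, 0.40, 0.75, 0.90])
```

Because large P‐values are truncated to 1, they cannot dilute the evidence from the small ones, which is why the method retains power in the situation the abstract describes.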

10.
We analyze the asymptotic distributions associated with the seasonal unit root tests of Hylleberg et al. (1990) for quarterly data when the innovations follow a moving average process. Although both the t‐ and F‐type tests suffer from scale and shift effects compared with the presumed null distributions when a fixed order of autoregressive augmentation is applied, these effects disappear when the order of augmentation is sufficiently large. However, as found by Burridge and Taylor (2001) for the autoregressive case, individual t‐ratio tests at the semi‐annual frequency are not pivotal even with high orders of augmentation, although the corresponding joint F‐type statistic is pivotal. Monte Carlo simulations verify the importance of the order of augmentation for finite samples generated by seasonally integrated moving average processes.
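The HEGY filters for quarterly data can be sketched as below (a minimal construction of the transformed series only; the test regression, its lagged regressors, and the autoregressive augmentation the abstract discusses are omitted):

```python
import numpy as np

def lag(x, k):
    """x_{t-k}, aligned so that t runs over 4, ..., len(x)-1."""
    return x[4 - k: len(x) - k]

def hegy_series(y):
    """HEGY (Hylleberg et al., 1990) filtered series for quarterly y:
    y1 = (1+L+L^2+L^3) y isolates the zero-frequency root,
    y2 = -(1-L+L^2-L^3) y the semi-annual (Nyquist) root,
    y3 = -(1-L^2) y the annual-frequency pair of roots, and
    y4 = (1-L^4) y is the regressand of the test regression."""
    y = np.asarray(y, dtype=float)
    y1 = lag(y, 0) + lag(y, 1) + lag(y, 2) + lag(y, 3)
    y2 = -(lag(y, 0) - lag(y, 1) + lag(y, 2) - lag(y, 3))
    y3 = -(lag(y, 0) - lag(y, 2))
    y4 = lag(y, 0) - lag(y, 4)
    return y4, y1, y2, y3

# a seasonal random walk: (1 - L^4) y_t = e_t
rng = np.random.default_rng(5)
e = rng.normal(size=204)
y = np.zeros(204)
for t in range(4, 204):
    y[t] = y[t - 4] + e[t]
y4, y1, y2, y3 = hegy_series(y)
```

For the seasonal random walk, y4 recovers the innovations exactly, as the annual difference (1 − L⁴) removes all four seasonal unit roots at once.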

11.
In this paper, we use Monte Carlo (MC) testing techniques for testing linearity against smooth transition models. The MC approach allows us to introduce a new test that differs in two respects from the tests existing in the literature. First, the test is exact in the sense that the probability of rejecting the null when it is true is always less than or equal to the nominal size of the test. Secondly, the test is not based on an auxiliary regression obtained by replacing the model under the alternative by approximations based on a Taylor expansion. We also apply MC testing methods for size correcting the test proposed by Luukkonen, Saikkonen and Teräsvirta (Biometrika, Vol. 75, 1988, p. 491). The results show that the power loss implied by the auxiliary regression‐based test is non‐existent compared with a supremum‐based test but is more substantial when compared with the three other tests under consideration.
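The exactness property rests on the standard Monte Carlo test construction; a generic sketch, with a toy chi‐square statistic standing in for the linearity statistic:

```python
import numpy as np

def mc_pvalue(stat_obs, simulate_stat, n_rep=99, seed=0):
    """Monte Carlo test: with n_rep statistics simulated under the
    null, p = (1 + #{sim >= obs}) / (n_rep + 1). When the null
    distribution is free of nuisance parameters, the test is exact:
    P(p <= alpha) <= alpha in finite samples whenever
    alpha * (n_rep + 1) is an integer."""
    rng = np.random.default_rng(seed)
    sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)

# toy check: the observed statistic is itself drawn under the null
rng0 = np.random.default_rng(42)
null_stat = lambda rng: rng.chisquare(2)
p = mc_pvalue(null_stat(rng0), null_stat, n_rep=99)
```

With 99 replications the p‐value is a multiple of 1/100, so a 5% test compares p against 0.05 without any asymptotic approximation.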

12.
This paper deals with the finite‐sample performance of a set of unit‐root tests for cross‐correlated panels. Most of the available macroeconomic time series cover short time periods. The lack of information, in terms of time observations, implies that univariate tests are not powerful enough to reject the null of a unit‐root, while panel tests, by exploiting the large number of cross‐sectional units, have been shown to be a promising way of increasing the power of unit‐root tests. We investigate the finite sample properties of recently proposed panel unit‐root tests for cross‐sectionally correlated panels. Specifically, the size and power of Choi's [Econometric Theory and Practice: Frontiers of Analysis and Applied Research: Essays in Honor of Peter C. B. Phillips, Cambridge University Press, Cambridge (2001)], Bai and Ng's [Econometrica (2004), Vol. 72, p. 1127], Moon and Perron's [Journal of Econometrics (2004), Vol. 122, p. 81], and Phillips and Sul's [Econometrics Journal (2003), Vol. 6, p. 217] tests are analysed by a Monte Carlo simulation study. In summary, Moon and Perron's tests show good size and power for different values of T and N, and model specifications. Focusing on Bai and Ng's procedure, the simulation study highlights that the pooled Dickey–Fuller generalized least squares test provides higher power than the pooled augmented Dickey–Fuller test for the analysis of non‐stationary properties of the idiosyncratic components. Choi's tests are strongly oversized when the common factor influences the cross‐sectional units heterogeneously.

13.
In this paper, we propose several finite‐sample specification tests for multivariate linear regressions (MLR). We focus on tests for serial dependence and ARCH effects with possibly non‐Gaussian errors. The tests are based on properly standardized multivariate residuals to ensure invariance to error covariances. The procedures proposed provide: (i) exact variants of standard multivariate portmanteau tests for serial correlation as well as ARCH effects, and (ii) exact versions of the diagnostics presented by Shanken (1990) which are based on combining univariate specification tests. Specifically, we combine tests across equations using a Monte Carlo (MC) test method so that Bonferroni‐type bounds can be avoided. The procedures considered are evaluated in a simulation experiment: the latter shows that standard asymptotic procedures suffer from serious size problems, while the MC tests suggested display excellent size and power properties, even when the sample size is small relative to the number of equations, with normal or Student‐t errors. The tests proposed are applied to the Fama–French three‐factor model. Our findings suggest that the i.i.d. error assumption provides an acceptable working framework once we allow for non‐Gaussian errors within 5‐year sub‐periods, whereas temporal instabilities clearly plague the full‐sample dataset.

14.
This paper extends the cross‐sectionally augmented panel unit‐root test (CIPS) developed by Pesaran et al. (2013, Journal of Econometrics, Vol. 175, pp. 94–115) to allow for smooth structural changes in the deterministic terms, modelled by a Fourier function. The proposed statistic is called the break augmented CIPS (BCIPS) statistic. We show that the non‐standard limiting distribution of the (truncated) BCIPS statistic exists and tabulate its critical values. Monte Carlo experiments indicate that the size and power of the BCIPS statistic are generally satisfactory as long as the number of time periods, T, is not less than fifty. The BCIPS test is then applied to examine the validity of long‐run purchasing power parity.
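The smooth breaks are captured by augmenting the deterministic term with a single‐frequency Fourier pair; a sketch of the regressor matrix only (the cross‐sectional augmentation of CIPS itself is not shown):

```python
import numpy as np

def fourier_deterministics(T, k=1, intercept=True, trend=False):
    """Deterministic regressors with a single-frequency Fourier
    component, sin(2*pi*k*t/T) and cos(2*pi*k*t/T), used to model
    smooth structural change in the deterministic term; k is the
    Fourier frequency."""
    t = np.arange(1, T + 1)
    cols = []
    if intercept:
        cols.append(np.ones(T))
    if trend:
        cols.append(t.astype(float))
    cols.append(np.sin(2 * np.pi * k * t / T))
    cols.append(np.cos(2 * np.pi * k * t / T))
    return np.column_stack(cols)

D = fourier_deterministics(100, k=1)   # intercept + one Fourier pair
```

Because sine and cosine over full periods are orthogonal to each other (and sum to zero), the pair adds a flexible, smooth shift without disturbing the other deterministic regressors.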

15.
Empirical count data are often zero‐inflated and overdispersed. Currently, there is no software package that allows adequate imputation of these data. We present multiple‐imputation routines for these kinds of count data based on a Bayesian regression approach, or alternatively based on a bootstrap approach, that work as add‐ons for the popular multiple imputation by chained equations (mice) software in R (van Buuren and Groothuis‐Oudshoorn, Journal of Statistical Software, Vol. 45, 2011, p. 1). We demonstrate in a Monte Carlo simulation that our procedures are superior to currently available count data procedures. It is emphasized that thorough modeling is essential to obtain plausible imputations and that model mis‐specifications can bias parameter estimates and standard errors quite noticeably. Finally, the strengths and limitations of our procedures are discussed, and fruitful avenues for future theory and software development are outlined.

16.
Feenstra and Hanson [NBER Working Paper No. 6052 (1997)] propose a procedure to correct the standard errors in a two‐stage regression with generated dependent variables. Their method has subsequently been used in two‐stage mandated wage models [Feenstra and Hanson, Quarterly Journal of Economics (1999) Vol. 114, pp. 907–940; Haskel and Slaughter, The Economic Journal (2001) Vol. 111, pp. 163–187; Review of International Economics (2003) Vol. 11, pp. 630–650] and for the estimation of the sector bias of skill‐biased technological change [Haskel and Slaughter, European Economic Review (2002) Vol. 46, pp. 1757–1783]. Unfortunately, the proposed correction is negatively biased (sometimes even resulting in negative estimated variances) and therefore leads to overestimation of the inferred significance. We present an unbiased correction procedure and apply it to the models reported by Feenstra and Hanson (1999) and Haskel and Slaughter (2002).

17.
This paper considers estimation and inference in linear panel regression models with lagged dependent variables and/or other weakly exogenous regressors when N (the cross‐section dimension) is large relative to T (the time series dimension). It allows for fixed and time effects (FE‐TE) and derives a general formula for the bias of the FE‐TE estimator which generalizes the well‐known Nickell bias formula derived for the pure autoregressive dynamic panel data models. It shows that in the presence of weakly exogenous regressors inference based on the FE‐TE estimator will result in size distortions unless N/T is sufficiently small. To deal with the bias and size distortion of the FE‐TE estimator the use of a half‐panel jackknife FE‐TE estimator is considered and its asymptotic distribution is derived. It is shown that the bias of the half‐panel jackknife FE‐TE estimator is of order T⁻², and for valid inference it is only required that N/T³ → 0 as N, T → ∞ jointly. Extension to unbalanced panel data models is also provided. The theoretical results are illustrated with Monte Carlo evidence. It is shown that the FE‐TE estimator can suffer from large size distortions when N>T, with the half‐panel jackknife FE‐TE estimator showing little size distortion. The use of the half‐panel jackknife FE‐TE estimator is illustrated with two empirical applications from the literature.
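The half‐panel jackknife can be sketched in a few lines: estimate on the full sample and on the two half‐panels (each with its own fixed effects), then combine. The AR(1) panel below is an invented example in which the downward Nickell bias is visible and largely removed; time effects and weakly exogenous regressors from the paper are omitted:

```python
import numpy as np

def fe_estimator(y, x):
    """Within (fixed-effects) slope for y_it = a_i + b*x_it + u_it,
    with y and x given as (N, T) arrays."""
    yd = y - y.mean(axis=1, keepdims=True)
    xd = x - x.mean(axis=1, keepdims=True)
    return (xd * yd).sum() / (xd * xd).sum()

def half_panel_jackknife(y, x):
    """Half-panel jackknife FE estimator: 2*b_full minus the average
    of the two half-sample estimates, which removes the O(1/T) bias
    term (the remaining bias is of order 1/T^2)."""
    h = y.shape[1] // 2
    b_full = fe_estimator(y, x)
    b_a = fe_estimator(y[:, :h], x[:, :h])
    b_b = fe_estimator(y[:, h:], x[:, h:])
    return 2.0 * b_full - 0.5 * (b_a + b_b)

# dynamic panel y_it = a_i + rho * y_{i,t-1} + e_it with rho = 0.5
rng = np.random.default_rng(6)
N, T, rho = 200, 20, 0.5
alpha = rng.normal(size=N)
y = np.zeros((N, T + 1))
y[:, 0] = alpha / (1 - rho)               # start at the long-run mean
for t in range(1, T + 1):
    y[:, t] = alpha + rho * y[:, t - 1] + rng.normal(size=N)
b_fe = fe_estimator(y[:, 1:], y[:, :-1])  # biased toward zero
b_jk = half_panel_jackknife(y[:, 1:], y[:, :-1])
```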

18.
Graph‐theoretic methods of causal search based on the ideas of Pearl (2000), Spirtes et al. (2000), and others have been applied by a number of researchers to economic data, particularly by Swanson and Granger (1997) to the problem of finding a data‐based contemporaneous causal order for the structural vector autoregression, rather than, as is typically done, assuming a weakly justified Choleski order. Demiralp and Hoover (2003) provided Monte Carlo evidence that such methods were effective, provided that signal strengths were sufficiently high. Unfortunately, in applications to actual data, such Monte Carlo simulations are of limited value, as the causal structure of the true data‐generating process is necessarily unknown. In this paper, we present a bootstrap procedure that can be applied to actual data (i.e. without knowledge of the true causal structure). We show with an applied example and a simulation study that the procedure is an effective tool for assessing our confidence in causal orders identified by graph‐theoretic search algorithms.

19.
We suggest a strategy to evaluate members of a class of New‐Keynesian models of a small open economy. As an example, we estimate a modified version of the model in Svensson [Journal of International Economics (2000) Vol. 50, pp. 155–183] and compare its impulse response and variance decomposition functions with those of a structural vector autoregression (VAR) model. The focus is on responses to foreign rather than to domestic shocks, which facilitates identification. Some results are that US shocks account for large shares of the variance of Canadian variables, that little of this influence is due to real exchange rate movements, and that Canadian monetary policy is not adequately described by a Taylor rule.

20.
In this article, we investigate the behaviour of a number of methods for estimating the co‐integration rank in VAR systems characterized by heteroskedastic innovation processes. In particular, we compare the efficacy of the most widely used information criteria, such as the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), with the commonly used sequential approach of Johansen [Likelihood‐based Inference in Cointegrated Vector Autoregressive Models (1996)] based around the use of either asymptotic or wild bootstrap‐based likelihood ratio type tests. Complementing recent work done for the latter in Cavaliere, Rahbek and Taylor [Econometric Reviews (2014) forthcoming], we establish the asymptotic properties of the procedures based on information criteria in the presence of heteroskedasticity (conditional or unconditional) of a quite general and unknown form. The relative finite‐sample properties of the different methods are investigated by means of a Monte Carlo simulation study. For the simulation DGPs considered in the analysis, we find that the BIC‐based procedure and the bootstrap sequential test procedure deliver the best overall performance in terms of their frequency of selecting the correct co‐integration rank across different values of the co‐integration rank, sample size, stationary dynamics and models of heteroskedasticity. Of these, the wild bootstrap procedure is perhaps the more reliable overall as it avoids a significant tendency seen in the BIC‐based method to over‐estimate the co‐integration rank in relatively small sample sizes.
