Similar Literature
20 similar records found (search time: 245 ms)
1.
The cluster robust variance estimator (CRVE) relies on the number of clusters being sufficiently large. Monte Carlo evidence suggests that the ‘rule of 42’ does not hold for unbalanced clusters. Rejection frequencies are higher for datasets with 50 clusters whose sizes are proportional to US state populations than for datasets with 50 balanced clusters. Using critical values based on the wild cluster bootstrap performs much better. However, this procedure fails when only a small number of clusters is treated. We explain why CRVE t statistics and the wild bootstrap fail in this case, study the ‘effective number’ of clusters and simulate placebo laws with dummy variable regressors. Copyright © 2016 John Wiley & Sons, Ltd.
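The mechanics can be sketched as follows. This is a simplified restricted wild cluster bootstrap (Rademacher weights, null imposed on the residuals), not the authors' exact procedure; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def crve(X, resid, clusters):
    """Cluster-robust variance estimator: per-cluster score outer
    products sandwiched between (X'X)^{-1} terms."""
    XtX_inv = np.linalg.inv(X.T @ X)
    k = X.shape[1]
    meat = np.zeros((k, k))
    for g in np.unique(clusters):
        sg = X[clusters == g].T @ resid[clusters == g]
        meat += np.outer(sg, sg)
    return XtX_inv @ meat @ XtX_inv

def wild_cluster_bootstrap_p(y, X, clusters, coef_idx=1, B=399):
    """Wild cluster bootstrap p-value for H0: beta_j = 0, with one
    Rademacher draw per cluster and the null imposed."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    V = crve(X, y - X @ beta, clusters)
    t_obs = beta[coef_idx] / np.sqrt(V[coef_idx, coef_idx])
    # Restricted fit: drop the tested regressor to impose the null
    Xr = np.delete(X, coef_idx, axis=1)
    br = np.linalg.lstsq(Xr, y, rcond=None)[0]
    ur = y - Xr @ br
    groups = np.unique(clusters)
    idx = np.searchsorted(groups, clusters)  # cluster label -> group index
    t_star = np.empty(B)
    for b in range(B):
        w = rng.choice([-1.0, 1.0], size=len(groups))
        y_b = Xr @ br + ur * w[idx]
        beta_b = np.linalg.lstsq(X, y_b, rcond=None)[0]
        V_b = crve(X, y_b - X @ beta_b, clusters)
        t_star[b] = beta_b[coef_idx] / np.sqrt(V_b[coef_idx, coef_idx])
    return np.mean(np.abs(t_star) >= np.abs(t_obs))
```

The bootstrap distribution of t* replaces the asymptotic normal critical values that fail with few or unbalanced clusters.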

2.
We present finite sample evidence on different IV estimators available for linear models under weak instruments; explore the application of the bootstrap as a bias reduction technique to attenuate their finite sample bias; and employ three empirical applications to illustrate and provide insights into the relative performance of the estimators in practice. Our evidence indicates that the random‐effects quasi‐maximum likelihood estimator outperforms alternative estimators in terms of median point estimates and coverage rates, followed by the bootstrap bias‐corrected version of LIML and LIML. However, our results also confirm the difficulty of obtaining reliable point estimates in models with weak identification and moderate‐size samples. Copyright © 2007 John Wiley & Sons, Ltd.
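The bias-reduction idea can be illustrated generically: the nonparametric bootstrap estimate of bias is subtracted from the original estimate. The estimator below is a stand-in, not LIML; the function and its arguments are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_bias_correct(estimator, data, n_boot=200):
    """Bootstrap bias correction: theta_bc = 2*theta_hat - mean(theta*).
    'estimator' is any scalar-valued function of the data rows."""
    theta_hat = estimator(data)
    n = len(data)
    theta_star = np.array([
        estimator(data[rng.integers(0, n, size=n)])  # resample rows with replacement
        for _ in range(n_boot)
    ])
    # mean(theta*) - theta_hat estimates the bias; subtract it off
    return 2.0 * theta_hat - theta_star.mean()
```

For an unbiased estimator the correction is negligible; for a biased one it shifts the estimate by the bootstrap's bias estimate.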

3.
The wild bootstrap is studied in the context of regression models with heteroskedastic disturbances. We show that, in one very specific case, perfect bootstrap inference is possible, and that a substantial reduction in the error in the rejection probability of a bootstrap test is available much more generally. However, the version of the wild bootstrap with this desirable property lacks the skewness correction afforded by the currently most popular version of the wild bootstrap. Simulation experiments show that this does not prevent the preferred version from having the smallest error in rejection probability in small and medium-sized samples.
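The two weight schemes at issue can be sketched directly: Rademacher weights match the second moment only (no skewness correction), while Mammen's two-point weights also match the third moment.

```python
import numpy as np

rng = np.random.default_rng(0)

def wild_weights(n, scheme="rademacher"):
    """Wild bootstrap auxiliary weights with E[w] = 0 and E[w^2] = 1.
    'rademacher': +/-1 with equal probability (no skewness correction).
    'mammen': two-point distribution with E[w^3] = 1 as well."""
    if scheme == "rademacher":
        return rng.choice([-1.0, 1.0], size=n)
    if scheme == "mammen":
        phi = (1 + np.sqrt(5)) / 2          # golden ratio
        p = phi / np.sqrt(5)                # P(w = 1 - phi)
        u = rng.random(n)
        return np.where(u < p, 1.0 - phi, phi)
    raise ValueError(f"unknown scheme: {scheme}")
```

In a wild bootstrap the resampled disturbances are the original residuals multiplied pointwise by one of these weight draws, which preserves each observation's heteroskedasticity.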

4.
The paper investigates the usefulness of bootstrap methods for small sample inference in cointegrating regression models. It discusses the standard bootstrap, the recursive bootstrap, the moving block bootstrap and the stationary bootstrap methods. Some guidelines for bootstrap data generation and test statistics to consider are provided, and the simulation evidence presented suggests that the bootstrap methods, when properly implemented, can provide significant improvement over asymptotic inference.
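One of the methods listed, the moving block bootstrap, is easy to sketch for a univariate series: overlapping blocks of fixed length are resampled and concatenated until a series of the original length is rebuilt. A minimal illustration, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def moving_block_bootstrap(x, block_len, n_boot):
    """Resample overlapping blocks of length block_len from x and
    concatenate them, truncating to the original length."""
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts_max = n - block_len + 1          # valid block starting points
    out = np.empty((n_boot, n))
    for b in range(n_boot):
        starts = rng.integers(0, starts_max, size=n_blocks)
        series = np.concatenate([x[s:s + block_len] for s in starts])
        out[b] = series[:n]
    return out
```

Blocks preserve short-range dependence up to the block length, which is what makes the scheme suitable for serially correlated data; the stationary bootstrap differs mainly in drawing random (geometric) block lengths.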

5.
Robust methods for instrumental variable inference have received considerable attention recently. Their analysis has raised a variety of problematic issues such as size/power trade‐offs resulting from weak or many instruments. We show that information reduction methods provide a useful and practical solution to this and related problems. Formally, we propose factor‐based modifications to three popular weak‐instrument‐robust statistics, and illustrate their validity asymptotically and in finite samples. Results are derived using asymptotic settings that are commonly used in both the factor and weak‐instrument literature. For the Anderson–Rubin statistic, we also provide analytical finite‐sample results that do not require any underlying factor structure. An illustrative Monte Carlo study reveals the following. Factor‐based tests control size regardless of instrument and factor quality. All factor‐based tests are systematically more powerful than standard counterparts. With informative instruments and in contrast to standard tests: (i) the power of factor‐based tests is not affected by the number of instruments k, even when k is large; and (ii) a weak factor structure does not cost power. An empirical study on a New Keynesian macroeconomic model suggests that our factor‐based methods can bridge a number of gaps between structural and statistical modeling. Copyright © 2015 John Wiley & Sons, Ltd.

6.
This article studies inference in a multivariate trend model when the volatility process is nonstationary. Within a quite general framework, we analyze four classes of tests based on least squares estimation, one of which is robust to both weak serial correlation and nonstationary volatility. The existing multivariate trend tests, which either use non-robust standard errors or rely on non-standard distribution theory, are generally non-pivotal, involving the unknown time-varying volatility function in the limit. Two-step residual-based i.i.d. bootstrap and wild bootstrap procedures are proposed for the robust tests and are shown to be asymptotically valid. Simulations demonstrate the effects of nonstationary volatility on the trend tests and the good behavior of the robust tests in finite samples.

7.
Instrumental variable (IV) methods for regression are well established. More recently, methods have been developed for statistical inference when the instruments are weakly correlated with the endogenous regressor, so that estimators are biased and no longer asymptotically normally distributed. This paper extends such inference to the case where two separate samples are used to implement instrumental variables estimation. We also relax the restrictive assumptions of homoskedastic error structure and equal moments of exogenous covariates across two samples commonly employed in the two‐sample IV literature for strong IV inference. Monte Carlo experiments show good size properties of the proposed tests regardless of the strength of the instruments. We apply the proposed methods to two seminal empirical studies that adopt the two‐sample IV framework.

8.
In this paper, we propose a fixed design wild bootstrap procedure to test parameter restrictions in vector autoregressive models that is robust to conditionally heteroskedastic error terms. The wild bootstrap does not require any parametric specification of the volatility process and takes contemporaneous error correlation implicitly into account. Via a Monte Carlo investigation, empirical size and power properties of the method are illustrated for the case of white noise under the null hypothesis. We compare the bootstrap approach with standard ordinary least squares (OLS)-based, weighted least squares (WLS) and quasi-maximum likelihood (QML) approaches. In terms of empirical size, the proposed method outperforms competing approaches and achieves size-adjusted power close to WLS or QML inference. A White correction of standard OLS inference is satisfactory only in large samples. We investigate the case of Granger causality in a bivariate system of inflation expectations in France and the United Kingdom. Our evidence suggests that the former are Granger causal for the latter, while for the reverse relation Granger non-causality cannot be rejected.

9.
This paper presents an inference approach for dependent data in time series, spatial, and panel data applications. The method involves constructing t and Wald statistics using a cluster covariance matrix estimator (CCE). We use an approximation that takes the number of clusters/groups as fixed and the number of observations per group to be large. The resulting limiting distributions of the t and Wald statistics are standard t and F distributions where the number of groups plays the role of sample size. Using a small number of groups is analogous to the ‘fixed-b’ asymptotics of Kiefer and Vogelsang (2002, 2005) (KV) for heteroskedasticity and autocorrelation consistent inference. We provide simulation evidence that demonstrates that the procedure substantially outperforms conventional inference procedures.
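The key practical difference from conventional cluster-robust inference is the reference distribution: the same t statistic is compared with a Student t distribution with G - 1 degrees of freedom, G being the number of groups. A minimal sketch with illustrative names:

```python
import numpy as np

def cluster_t_stat(y, X, clusters, coef_idx):
    """OLS t statistic based on a cluster covariance estimator.
    Under fixed-G asymptotics, compare it with a Student t
    distribution with G - 1 degrees of freedom, not the normal."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - X @ beta
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = sum(
        np.outer(X[clusters == g].T @ u[clusters == g],
                 X[clusters == g].T @ u[clusters == g])
        for g in np.unique(clusters)
    )
    V = XtX_inv @ meat @ XtX_inv
    return beta[coef_idx] / np.sqrt(V[coef_idx, coef_idx])
```

For example, with G = 10 groups the 5% two-sided critical value is t with 9 degrees of freedom, about 2.262, rather than 1.96.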

10.
Through Monte Carlo experiments the effects of a feedback mechanism on the accuracy in finite samples of ordinary and bootstrap inference procedures are examined in stable first- and second-order autoregressive distributed-lag models with non-stationary weakly exogenous regressors. The Monte Carlo is designed to mimic situations that are relevant when a weakly exogenous policy variable affects (and is affected by) the outcome of agents’ behaviour. In the parameterizations we consider, it is found that small-sample problems undermine ordinary first-order asymptotic inference procedures irrespective of the presence and importance of a feedback mechanism. We examine several residual-based bootstrap procedures, each of them designed to reduce one or several specific types of bootstrap approximation error. Surprisingly, the bootstrap procedure which only incorporates the conditional model overcomes the small sample problems reasonably well. Often (but not always) better results are obtained if the bootstrap also resamples the marginal model for the policymakers’ behaviour.

11.
In this article, we investigate the behaviour of a number of methods for estimating the co‐integration rank in VAR systems characterized by heteroskedastic innovation processes. In particular, we compare the efficacy of the most widely used information criteria, such as the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), with the commonly used sequential approach of Johansen [Likelihood‐based Inference in Cointegrated Vector Autoregressive Models (1996)] based around the use of either asymptotic or wild bootstrap‐based likelihood ratio type tests. Complementing recent work done for the latter in Cavaliere, Rahbek and Taylor [Econometric Reviews (2014) forthcoming], we establish the asymptotic properties of the procedures based on information criteria in the presence of heteroskedasticity (conditional or unconditional) of a quite general and unknown form. The relative finite‐sample properties of the different methods are investigated by means of a Monte Carlo simulation study. For the simulation DGPs considered in the analysis, we find that the BIC‐based procedure and the bootstrap sequential test procedure deliver the best overall performance in terms of their frequency of selecting the correct co‐integration rank across different values of the co‐integration rank, sample size, stationary dynamics and models of heteroskedasticity. Of these, the wild bootstrap procedure is perhaps the more reliable overall, as it avoids a significant tendency seen in the BIC‐based method to over‐estimate the co‐integration rank in relatively small sample sizes.

12.
We consider the following problem. There is a structural equation of interest that contains an explanatory variable that theory predicts is endogenous. There are one or more instrumental variables that are credibly exogenous with regard to this structural equation, but which have limited explanatory power for the endogenous variable. Further, there are one or more potentially ‘strong’ instruments, which have much more explanatory power but which may not be exogenous. Hausman (1978) provided a test for the exogeneity of the second instrument when none of the instruments are weak. Here, we focus on how the standard Hausman test does in the presence of weak instruments using the Staiger–Stock asymptotics. It is natural to conjecture that the standard version of the Hausman test would be invalid in the weak instrument case, which we confirm. However, we provide a version of the Hausman test that is valid even in the presence of weak IV and illustrate how to implement the test in the presence of heteroskedasticity. We show that the situation we analyze occurs in several important economic examples. Our Monte Carlo experiments show that our procedure works relatively well in finite samples. We should note that our test is not consistent, although we believe that it is impossible to construct a consistent test with weak instruments.
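For reference, the standard Hausman-type contrast the paper starts from can be sketched as follows: compare the IV estimate using only the credibly exogenous instrument(s) with the estimate that adds the suspect instrument. This is the textbook homoskedastic version, which, as the abstract notes, is invalid under weak instruments; all names are illustrative.

```python
import numpy as np

def iv_beta_var(y, x, Z):
    """2SLS for a single endogenous regressor x with instrument
    matrix Z; returns the estimate and its homoskedastic variance."""
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto span(Z)
    x_hat = P @ x
    beta = (x_hat @ y) / (x_hat @ x)
    u = y - beta * x
    s2 = (u @ u) / len(y)
    return beta, s2 / (x_hat @ x)           # x_hat'x = x_hat'x_hat

def hausman_stat(y, x, z_exog, z_all):
    """Contrast the exogenous-only estimate (consistent, inefficient)
    with the all-instruments estimate (efficient under the null that
    the extra instrument is exogenous)."""
    b1, v1 = iv_beta_var(y, x, z_exog)
    b2, v2 = iv_beta_var(y, x, z_all)
    # Variance of the contrast is v1 - v2 under the null; guard the
    # denominator against small-sample negativity
    return (b1 - b2) ** 2 / max(v1 - v2, 1e-12)
```

Under the null with strong instruments the statistic is approximately chi-squared with one degree of freedom; the paper's point is that this approximation breaks down when z_exog is weak.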

13.
A new procedure is proposed for modelling nonlinearity of a smooth transition form, by allowing the transition variable to be a weighted function of lagged observations. This function depends on two unknown parameters and requires specification of the maximum lag only. Nonlinearity testing for this specification uses a search over a plausible set of weight function parameters, combined with bootstrap inference. Finite‐sample results show that the recommended wild bootstrap heteroskedasticity‐robust testing procedure performs well, for both homoskedastic and heteroskedastic data‐generating processes. Forecast comparisons relative to linear models and other nonlinear specifications of the smooth transition form confirm that the new WSTR model delivers good performance. Copyright © 2010 John Wiley & Sons, Ltd.
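The building blocks can be sketched numerically: a logistic transition function, and a transition variable formed as a weighted combination of lags. The exponential-decay weight function below is an illustrative choice, not the paper's exact two-parameter specification.

```python
import numpy as np

def logistic_transition(s, gamma, c):
    """Logistic smooth transition function G(s; gamma, c) in [0, 1];
    gamma controls smoothness, c the location of the transition."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

def weighted_transition_variable(y_lags, p1, p2):
    """Hypothetical two-parameter weighting of the lagged observations
    (rows of y_lags are observations, columns are lags 1..m),
    normalized so the weights sum to one."""
    m = y_lags.shape[1]
    w = np.exp(-p1 * np.arange(1, m + 1) ** p2)
    w /= w.sum()
    return y_lags @ w
```

In the model, the regime weight on each observation is G applied to the weighted transition variable, so the data determine both how fast the transition occurs (gamma) and which lags drive it (p1, p2).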

14.
Journal of Econometrics, 2005, 128(1): 165–193
We analyze OLS-based tests of long-run relationships, weak exogeneity and short-run dynamics in conditional error correction models. Unweighted sums of single-equation test statistics are used for hypothesis testing in pooled systems. When model errors are (conditionally) heteroskedastic, tests of weak exogeneity and short-run dynamics are affected by nuisance parameters. Similarly, at the pooled level the advocated test statistics are no longer pivotal in the presence of cross-sectional error correlation. We prove that the wild bootstrap provides asymptotically valid critical values under both conditional heteroskedasticity and cross-sectional error correlation. A Monte Carlo study reveals that in small samples the bootstrap outperforms first-order asymptotic approximations in terms of empirical size even if the asymptotic distribution of the test statistic does not depend on nuisance parameters. In contrast to feasible GLS methods, the approach does not require any estimate of the cross-sectional correlation and copes with time-varying patterns of contemporaneous error correlation.

15.
Dynamics of Inventor Networks and the Evolution of Technology Clusters
Clusters are important drivers of regional economic growth. Although their benefits are well recognized, research into their evolution is still ongoing. Most real‐world clusters seem to have emerged spontaneously without deliberate policy interventions, each cluster having its own evolutionary path. Since there is a significant gap in our understanding of the forces driving their evolution, this study uses a quantitative approach to investigate the role of inventor collaboration networks in it. Inventor collaboration networks for 30 top‐performing American metropolitan clusters were constructed on the basis of patent co‐authorship data. The selected clusters operate in hi‐tech fields: information technology, communications equipment and the biopharmaceutical industry. Starting from a widely accepted hypothesis that the ‘small‐world’ structure is an optimal one for knowledge spillovers and promotes innovation effectively, the authors statistically tested the impact of ‘small‐world’ network properties on cluster innovation performance proxied by patent output. The results suggest that the effect of the small‐world structure is not as significant as theorists hypothesized, not all clusters benefit from the presence of inventor collaboration networks, and cluster performance can be affected by policy interventions. Our analyses also suggest that cluster typology moderates the impact of inventor network properties on cluster innovation performance.

16.
There has been considerable and controversial research over the past two decades into how successfully random effects misspecification in mixed models (i.e. assuming normality for the random effects when the true distribution is non‐normal) can be diagnosed and what its impacts are on estimation and inference. However, much of this research has focused on fixed effects inference in generalised linear mixed models. In this article, motivated by the increasing number of applications of mixed models where interest is on the variance components, we study the effects of random effects misspecification on random effects inference in linear mixed models, for which there is considerably less literature. Our findings are surprising and contrary to general belief: for point estimation, maximum likelihood estimation of the variance components under misspecification is consistent, although in finite samples both the bias and mean squared error can be substantial. For inference, we show through theory and simulation that under misspecification, standard likelihood ratio tests of truly non‐zero variance components can suffer from severely inflated type I errors, and confidence intervals for the variance components can exhibit considerable undercoverage. Furthermore, neither of these problems vanishes asymptotically as the number of clusters or the cluster size increases. These results have major implications for random effects inference, especially if the true random effects distribution is heavier tailed than the normal. Fortunately, simple graphical and goodness‐of‐fit measures of the random effects predictions appear to have reasonable power at detecting misspecification. We apply linear mixed models to a survey of more than 4,000 high school students within 100 schools and analyse how mathematics achievement scores vary with student attributes and across different schools. The application demonstrates the sensitivity of mixed model inference to the true but unknown random effects distribution.

17.
This paper focuses on the estimation of a finite dimensional parameter in a linear model where the number of instruments is very large or infinite. In order to improve the small sample properties of standard instrumental variable (IV) estimators, we propose three modified IV estimators based on three different ways of inverting the covariance matrix of the instruments. These inverses involve a regularization or smoothing parameter. It should be stressed that no restriction on the number of instruments is needed and that all the instruments are used in the estimation. We show that the three estimators are asymptotically normal and attain the semiparametric efficiency bound. Higher-order analysis of the MSE reveals that the bias of the modified estimators does not depend on the number of instruments. Finally, we suggest a data-driven method for selecting the regularization parameter. Interestingly, our regularization techniques lead to a consistent nonparametric estimation of the optimal instrument.
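One regularized inverse in this spirit is Tikhonov (ridge) regularization of the instruments' covariance matrix. The sketch below is illustrative only, not one of the paper's three estimators; the function name and the fixed choice of alpha are assumptions.

```python
import numpy as np

def ridge_iv(y, x, Z, alpha):
    """IV estimate for a single endogenous regressor x using a
    Tikhonov-regularized inverse of Z'Z/n. 'alpha' is the
    regularization (smoothing) parameter."""
    n, k = Z.shape
    ZtZ = Z.T @ Z / n
    # Regularized projection: Z (Z'Z/n + alpha I)^{-1} Z'/n
    P = Z @ np.linalg.solve(ZtZ + alpha * np.eye(k), Z.T / n)
    x_hat = P @ x                       # smoothed first-stage fit
    return (x_hat @ y) / (x_hat @ x)
```

As alpha shrinks to zero this approaches ordinary 2SLS; a positive alpha keeps the inverse stable when the instrument set is very large or nearly collinear, at the cost of some regularization bias. The paper's data-driven rule would choose alpha; here it is fixed by hand.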

18.
We consider the impact of a break in the innovation volatility process on ratio‐based persistence change tests. We demonstrate that the ratio statistics used do not have pivotal limiting null distributions and that the associated tests display a considerable degree of size distortion, with size approaching unity in some cases. In practice, therefore, on the basis of these tests the practitioner will face difficulty in discriminating between persistence change processes and processes which display a simple volatility break. A wild bootstrap‐based solution to the identified inference problem is proposed and is shown to work well in practice.

19.
Testing for Linearity
The problem of testing for linearity and the number of regimes in the context of self‐exciting threshold autoregressive (SETAR) models is reviewed. We describe least‐squares methods of estimation and inference. The primary complication is that the testing problem is non‐standard, due to the presence of parameters which are only defined under the alternative, so the asymptotic distribution of the test statistics is non‐standard. Simulation methods to calculate asymptotic and bootstrap distributions are presented. As the sampling distributions are quite sensitive to conditional heteroskedasticity in the errors, careful modeling of the conditional variance is necessary for accurate inference on the conditional mean. We illustrate these methods with two applications — annual sunspot means and monthly U.S. industrial production. We find that annual sunspots and monthly industrial production are SETAR(2) processes.
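The least-squares estimation step can be sketched for a two-regime SETAR model with a lag-1 threshold variable: fit OLS separately in each regime and grid-search the threshold that minimizes the total sum of squared residuals. A simplified illustration, not the authors' code:

```python
import numpy as np

def setar2_fit(y, grid):
    """Least-squares fit of y_t = (a1 + b1*y_{t-1})*1{y_{t-1} <= r}
    + (a2 + b2*y_{t-1})*1{y_{t-1} > r} + e_t, with the threshold r
    chosen by grid search. Returns (ssr, r_hat, [coefs_lo, coefs_hi])."""
    y_lag, y_cur = y[:-1], y[1:]
    best = (np.inf, None, None)
    for r in grid:
        lo = y_lag <= r
        if lo.sum() < 3 or (~lo).sum() < 3:
            continue  # require enough observations in each regime
        ssr = 0.0
        coefs = []
        for mask in (lo, ~lo):
            X = np.column_stack([np.ones(mask.sum()), y_lag[mask]])
            b = np.linalg.lstsq(X, y_cur[mask], rcond=None)[0]
            resid = y_cur[mask] - X @ b
            ssr += resid @ resid
            coefs.append(b)
        if ssr < best[0]:
            best = (ssr, r, coefs)
    return best
```

The linearity test statistic compares this minimized SSR with the SSR of the linear AR fit; because r is only defined under the alternative, its null distribution is the non-standard one the abstract describes, hence the bootstrap.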

20.
This paper is a study of the application of Bayesian exponentially tilted empirical likelihood to inference about quantile regressions. In the case of simple quantiles we show the exact form for the likelihood implied by this method and compare it with the Bayesian bootstrap and with Jeffreys' method. For regression quantiles we derive the asymptotic form of the posterior density. We also examine Markov chain Monte Carlo simulations with a proposal density formed from an overdispersed version of the limiting normal density. We show that the algorithm works well even in models with an endogenous regressor when the instruments are not too weak. Copyright © 2009 John Wiley & Sons, Ltd.
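The Bayesian bootstrap comparator mentioned above is easy to sketch for a simple quantile: draw Dirichlet(1, ..., 1) weights on the observed points and read off the weighted quantile. This illustrates the comparator only, not the exponentially tilted empirical likelihood method itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def bayes_boot_quantile(x, q, n_draws=500):
    """Bayesian bootstrap posterior draws for the q-th quantile of x:
    each draw reweights the sample with flat Dirichlet weights and
    returns the corresponding weighted quantile."""
    xs = np.sort(x)
    n = len(xs)
    draws = np.empty(n_draws)
    for d in range(n_draws):
        w = rng.dirichlet(np.ones(n))     # exchangeable, so order is irrelevant
        cdf = np.cumsum(w)
        idx = min(np.searchsorted(cdf, q), n - 1)  # guard against float round-off
        draws[d] = xs[idx]
    return draws
```

The spread of the draws serves as a posterior uncertainty measure for the quantile without any parametric likelihood.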
