Similar Articles
20 similar articles found (search time: 926 ms)
1.
In this paper, we consider GMM estimation of the regression and MRSAR models with SAR disturbances. We derive the best GMM estimator within the class of GMM estimators based on linear and quadratic moment conditions. The best GMM estimator has the merit of computational simplicity and asymptotic efficiency. It is asymptotically as efficient as the ML estimator under normality and asymptotically more efficient than the Gaussian QML estimator otherwise. Monte Carlo studies show that, with moderate-sized samples, the best GMM estimator has its biggest advantage when the disturbances are asymmetrically distributed. When the diagonal elements of the spatial weights matrix have enough variation, incorporating kurtosis of the disturbances in the moment functions will also be helpful.
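As a minimal illustration of the quadratic moment conditions behind such GMM estimators (a sketch, not the paper's estimator): with SAR disturbances u = (I - lambda*W)^(-1) eps, applying (I - lambda*W) at the true lambda recovers the i.i.d. innovations, and a quadratic form eps'P eps with tr(P) = 0 (here P = W) has zero expectation. All names and the weights matrix below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Illustrative spatial weights matrix: circular two-nearest-neighbour pattern.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = 0.5
    W[i, (i + 1) % n] = 0.5

lam_true = 0.4
eps = rng.standard_normal(n)                        # i.i.d. innovations
u = np.linalg.solve(np.eye(n) - lam_true * W, eps)  # SAR(1) disturbances

def quad_moment(lam, u, W):
    """Quadratic moment condition for SAR disturbances.

    e(lam) = (I - lam W) u; since tr(W) = 0 here, E[e' W e] = 0 at the
    true lam, making e' W e / n a valid quadratic moment function.
    """
    e = (np.eye(len(u)) - lam * W) @ u
    return e, e @ W @ e / len(u)

e_true, g_true = quad_moment(lam_true, u, W)  # e_true recovers eps
```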

2.
This paper studies a time-varying coefficient time series model with a time trend function and serially correlated errors to characterize the nonlinearity, nonstationarity, and trending phenomenon. A local linear approach is developed to estimate the time trend and coefficient functions. The asymptotic properties of the proposed estimators, coupled with their comparisons with other methods, are established under the α-mixing conditions and without specifying the error distribution. Further, the asymptotic behaviors of the estimators at the boundaries are examined. The practical problem of implementation is also addressed. In particular, a simple nonparametric version of a bootstrap test is adapted for testing misspecification and stationarity, together with a data-driven method for selecting the bandwidth and a consistent estimate of the standard errors. Finally, results of two Monte Carlo experiments are presented to examine the finite sample performances of the proposed procedures and an empirical example is discussed.
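A minimal local linear smoother of the kind used for the trend and coefficient functions can be sketched as follows (Epanechnikov kernel; bandwidth fixed by hand rather than data-driven as in the paper). A useful check: a local linear fit reproduces an exactly linear trend without error.

```python
import numpy as np

def local_linear(t_grid, t_obs, y, h):
    """Local linear estimate of a trend function at points t_grid.

    At each grid point, fit a kernel-weighted line in (t - t0) and return
    its intercept, which is the fitted value at t0.
    """
    fits = []
    for t0 in t_grid:
        u = (t_obs - t0) / h
        w = np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)  # Epanechnikov
        X = np.column_stack([np.ones_like(t_obs), t_obs - t0])
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)  # weighted normal equations
        fits.append(beta[0])
    return np.array(fits)

# Local linear smoothing reproduces a linear trend exactly.
t = np.linspace(0.0, 1.0, 101)
y_lin = 2.0 + 3.0 * t
fit = local_linear(t, t, y_lin, h=0.2)
```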

3.
Parametric mixture models are commonly used in applied work, especially empirical economics, where these models are often employed to learn, for example, about the proportions of various types in a given population. This paper examines the inference question on the proportions (mixing probability) in a simple mixture model in the presence of nuisance parameters when sample size is large. It is well known that likelihood inference in mixture models is complicated due to (1) lack of point identification, and (2) parameters (for example, mixing probabilities) whose true value may lie on the boundary of the parameter space. These issues cause the profiled likelihood ratio (PLR) statistic to admit asymptotic limits that differ discontinuously depending on how the true density of the data approaches the regions of singularities where there is lack of point identification. This lack of uniformity in the asymptotic distribution suggests that confidence intervals based on pointwise asymptotic approximations might lead to faulty inferences. This paper examines this problem in detail in a finite mixture model and provides possible fixes based on the parametric bootstrap. We examine the performance of this parametric bootstrap in Monte Carlo experiments and apply it to data from Beauty Contest experiments. We also examine small sample inferences and projection methods.
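The parametric bootstrap mechanics can be sketched in a deliberately simple setting, a Bernoulli likelihood ratio test rather than a mixture, so none of the boundary or identification issues the paper studies arise; the core step of simulating the statistic from the model fitted under the null is the same. All names below are illustrative.

```python
import numpy as np

def lr_stat(x, p0):
    """Likelihood ratio statistic for H0: p = p0 in a Bernoulli(p) model."""
    n, s = len(x), int(x.sum())
    p_hat = s / n
    p_hat = min(max(p_hat, 1e-12), 1 - 1e-12)  # guard log(0) at the boundary
    def loglik(p):
        return s * np.log(p) + (n - s) * np.log(1 - p)
    return 2.0 * (loglik(p_hat) - loglik(p0))

rng = np.random.default_rng(1)
p0, n, B = 0.5, 200, 499
x = rng.binomial(1, p0, size=n)
stat = lr_stat(x, p0)

# Parametric bootstrap: simulate from the model *under the null* and
# compare the observed statistic with the simulated distribution.
boot = np.array([lr_stat(rng.binomial(1, p0, size=n), p0) for _ in range(B)])
pval = (1 + np.sum(boot >= stat)) / (B + 1)
```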

4.
Maximum Likelihood (ML) estimation of probit models with correlated errors typically requires high-dimensional truncated integration. Prominent examples of such models are multinomial probit models and binomial panel probit models with serially correlated errors. In this paper we propose to use a generic procedure known as Efficient Importance Sampling (EIS) for the evaluation of likelihood functions for probit models with correlated errors. Our proposed EIS algorithm covers the standard GHK probability simulator as a special case. We perform a set of Monte Carlo experiments in order to illustrate the relative performance of both procedures for the estimation of a multinomial multiperiod probit model. Our results indicate substantial numerical efficiency gains for ML estimates based on the GHK–EIS procedure relative to those obtained by using the GHK procedure.
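The GHK simulator that the EIS algorithm nests can be sketched for an orthant probability P(Y < 0) with Y ~ N(mu, Sigma); this is the standard recursive truncated-normal construction along the Cholesky factor, not the paper's EIS refinement. Function names are illustrative.

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()
Phi = np.vectorize(nd.cdf)
Phi_inv = np.vectorize(nd.inv_cdf)

def ghk(mu, Sigma, R=20000, seed=0):
    """GHK simulator for P(Y < 0), Y ~ N(mu, Sigma).

    Writes Y = mu + L eta with L the Cholesky factor and draws each eta_j
    from a normal truncated so that the j-th inequality holds; the product
    of the truncation probabilities estimates the orthant probability.
    """
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Sigma)
    d = len(mu)
    prob = np.ones(R)
    eta = np.zeros((R, d))
    for j in range(d):
        # Upper truncation point for the j-th recursive draw.
        b = (-mu[j] - eta[:, :j] @ L[j, :j]) / L[j, j]
        pj = Phi(b)
        prob *= pj
        u = rng.uniform(size=R)
        eta[:, j] = Phi_inv(np.clip(u * pj, 1e-12, 1 - 1e-12))
    return prob.mean()

# Independent components: P(Y1 < 0, Y2 < 0) = 0.25 exactly.
p_ind = ghk(np.zeros(2), np.eye(2))
# Correlation 0.5: true orthant probability is 1/4 + arcsin(0.5)/(2 pi) = 1/3.
p_cor = ghk(np.zeros(2), np.array([[1.0, 0.5], [0.5, 1.0]]))
```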

5.
The central concern of this paper is to provide, in a time series moment condition framework, practical recommendations for confidence regions for parameters, with coverage probabilities robust to the strength or weakness of identification. To this end we develop Pearson-type test statistics based on GEL implied probabilities formed from general kernel smoothed versions of the moment indicators. We also modify the statistics suggested in Guggenberger and Smith (2008) for a general kernel smoothing function. Importantly for our conclusions, we provide GEL time series counterparts to GMM and GEL conditional likelihood ratio statistics given in Kleibergen (2005) and Smith (2007). Our analysis not only demonstrates that these statistics are asymptotically (conditionally) pivotal under both classical asymptotic theory and the weak instrument asymptotics of Stock and Wright (2000) but also provides asymptotic power results in the weakly identified time series context. Consequently, the empirical null rejection probabilities of the associated tests, and thereby the coverage probabilities of the corresponding confidence regions, should not be greatly affected by the strength or weakness of identification. A comprehensive Monte Carlo study indicates that a number of the tests proposed here represent very competitive choices in comparison with those suggested elsewhere in the literature.

6.
We introduce test statistics based on generalized empirical likelihood methods that can be used to test simple hypotheses involving the unknown parameter vector in moment condition time series models. The test statistics generalize those in Guggenberger and Smith [2005. Generalized empirical likelihood estimators and tests under partial, weak and strong identification. Econometric Theory 21 (4), 667–709] from the i.i.d. to the time series context and are alternatives to those in Kleibergen [2005a. Testing parameters in GMM without assuming that they are identified. Econometrica 73 (4), 1103–1123] and Otsu [2006. Generalized empirical likelihood inference for nonlinear and time series models under weak identification. Econometric Theory 22 (3), 513–527]. The main feature of these tests is that their empirical null rejection probabilities are not affected much by the strength or weakness of identification. More precisely, we show that the statistics are asymptotically distributed as chi-square under both classical asymptotic theory and weak instrument asymptotics of Stock and Wright [2000. GMM with weak identification. Econometrica 68 (5), 1055–1096]. We also introduce a modification to Otsu's (2006) statistic that is computationally more attractive. A Monte Carlo study reveals that the finite-sample performance of the suggested tests is very competitive.
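As a sketch of the implied-probability machinery underlying (G)EL statistics, consider the simplest i.i.d. mean case: the EL dual gives probabilities pi_i = 1/(n(1 + lambda g_i)) with lambda solving the dual first-order condition, and the ELR statistic is exactly zero at the sample mean. This is textbook empirical likelihood, not the paper's kernel-smoothed time series version; names are illustrative.

```python
import numpy as np

def el_lambda(g, iters=50):
    """Solve the 1-D empirical likelihood dual: sum_i g_i/(1 + lam g_i) = 0."""
    lam = 0.0
    for _ in range(iters):
        d = 1.0 + lam * g
        f = np.sum(g / d)
        fp = -np.sum(g**2 / d**2)
        lam -= f / fp  # Newton step
    return lam

def elr(x, mu0):
    """Empirical likelihood ratio statistic for H0: E[X] = mu0."""
    g = x - mu0
    lam = el_lambda(g)
    n = len(x)
    pi = 1.0 / (n * (1.0 + lam * g))        # implied probabilities
    return -2.0 * np.sum(np.log(n * pi)), pi

rng = np.random.default_rng(2)
x = rng.standard_normal(100)
stat_at_mean, pi = elr(x, x.mean())         # at the sample mean, lam = 0
```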

7.
We study the problem of testing hypotheses on the parameters of one- and two-factor stochastic volatility (SV) models, allowing for the possible presence of non-regularities such as singular moment conditions and unidentified parameters, which can lead to non-standard asymptotic distributions. We focus on the development of simulation-based exact procedures, whose level can be controlled in finite samples, as well as on large-sample procedures that remain valid under non-regular conditions. We consider Wald-type, score-type and likelihood-ratio-type tests based on a simple moment estimator, which can be easily simulated. We also propose a C(α)-type test which is very easy to implement and exhibits relatively good size and power properties. Besides usual linear restrictions on the SV model coefficients, the problems studied include testing homoskedasticity against an SV alternative (which involves singular moment conditions under the null hypothesis) and testing the null hypothesis of one factor driving the dynamics of the volatility process against two factors (which raises identification difficulties). Three ways of implementing the tests based on alternative statistics are compared: asymptotic critical values (when available), a local Monte Carlo (or parametric bootstrap) test procedure, and a maximized Monte Carlo (MMC) procedure. The size and power properties of the proposed tests are examined in a simulation experiment. The results indicate that the C(α)-based tests (built upon the simple moment estimator available in closed form) have good size and power properties for regular hypotheses, while Monte Carlo tests are much more reliable than those based on asymptotic critical values. Further, in cases where the parametric bootstrap appears to fail (for example, in the presence of identification problems), the MMC procedure easily controls the level of the tests. Moreover, MMC-based tests exhibit relatively good power performance despite the conservative feature of the procedure. Finally, we present an application to a time series of returns on the Standard and Poor's Composite Price Index.
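The local Monte Carlo test procedure rests on the standard Monte Carlo p-value with the +1 correction, which is exact in level when the statistic is continuous and the draws are i.i.d. under the null. A minimal sketch (function name illustrative):

```python
import numpy as np

def mc_pvalue(stat_obs, sim_stats):
    """Monte Carlo test p-value with the +1 correction: the rank of the
    observed statistic among (observed + N simulated) statistics."""
    N = len(sim_stats)
    return (1 + np.sum(np.asarray(sim_stats) >= stat_obs)) / (N + 1)

# If the observed statistic exceeds all N simulated ones, p = 1/(N + 1);
# if it lies below all of them, p = 1.
sims = np.arange(99) / 100.0       # 99 simulated statistics in [0, 0.98]
p_min = mc_pvalue(1.0, sims)
p_max = mc_pvalue(-1.0, sims)
```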

8.
This paper addresses the issue of optimal inference for parameters that are partially identified in models with moment inequalities. There currently exists a variety of inferential methods for use in this setting. However, the question of choosing optimally among contending procedures is unresolved. In this paper, I first consider a canonical large deviations criterion for optimality and show that inference based on the empirical likelihood ratio statistic is optimal. Second, I introduce a new empirical likelihood bootstrap that provides a valid resampling method for moment inequality models and overcomes the implementation challenges that arise as a result of non-pivotal limit distributions. Lastly, I analyze the finite sample properties of the proposed framework using Monte Carlo simulations. The simulation results are encouraging.

9.
There is a need for tests that are derived from the ordinary least squares (OLS) estimators of regression coefficients and are useful in the presence of unspecified forms of heteroskedasticity and autocorrelation. A method that uses the moving block bootstrap and quasi-estimators in order to derive a consistent estimator of the asymptotic covariance matrix for the OLS estimators and robust significance tests is proposed. The method is shown to be asymptotically valid and Monte Carlo evidence indicates that it is capable of providing good control of significance levels in finite samples and good power compared with two other bootstrap tests.
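The moving block bootstrap resampling step can be sketched as follows: draw overlapping blocks of consecutive observations uniformly at random and concatenate them, so that short-range dependence within blocks is preserved. Block length and names are illustrative; the paper's full procedure (quasi-estimators, covariance estimation) is not reproduced here.

```python
import numpy as np

def moving_block_bootstrap(x, block_len, seed=0):
    """One moving-block bootstrap resample of a series x.

    Draws ceil(n / block_len) overlapping blocks of length block_len,
    concatenates them, and truncates to the original length n.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    pieces = [x[s:s + block_len] for s in starts]
    return np.concatenate(pieces)[:n], starts

x = np.arange(100)
xb, starts = moving_block_bootstrap(x, block_len=10)
```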

10.
This paper proposes a novel procedure to estimate linear models when the number of instruments is large. At the heart of such models is the need to balance the trade-off between attaining asymptotic efficiency, which requires more instruments, and minimizing bias, which is adversely affected by the addition of instruments. Two questions are of central concern: (1) What is the optimal number of instruments to use? (2) Should the instruments receive different weights? This paper contains the following contributions toward resolving these issues. First, I propose a kernel weighted generalized method of moments (GMM) estimator that uses a trapezoidal kernel. This kernel turns out to be attractive for selecting and weighting the moments. Second, I derive the higher-order mean squared error of the kernel weighted GMM estimator and show that the trapezoidal kernel generates a lower asymptotic variance than regular kernels. Finally, Monte Carlo simulations show that in finite samples the kernel weighted GMM estimator performs on par with other estimators that choose optimal instruments and improves upon a GMM estimator that uses all instruments.
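A trapezoidal (flat-top) kernel of the kind used to weight moment conditions can be sketched as below: constant at 1 near the origin, then tapering linearly to 0. The flat-top fraction b is a hypothetical choice and the paper's exact kernel may differ; names are illustrative.

```python
import numpy as np

def trapezoid_kernel(x, b=0.5):
    """Trapezoidal (flat-top) kernel: 1 on [-b, b], linear taper to 0 at |x| = 1."""
    ax = np.abs(np.asarray(x, dtype=float))
    return np.where(ax <= b, 1.0,
                    np.where(ax <= 1.0, (1.0 - ax) / (1.0 - b), 0.0))

# Weight a sequence of L moment conditions (e.g., lagged instruments):
# nearby lags get full weight, distant lags are down-weighted toward zero.
L = 10
weights = trapezoid_kernel(np.arange(1, L + 1) / L)
```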

11.
Even though recent Monte Carlo evidence has shown that the use of bootstrap critical values, instead of asymptotic ones, improves the size of the tests substantially, empirical applications using GMM bootstrap techniques are largely missing. In this paper, the dynamic relationship between local government revenues and expenditures is re-investigated using GMM bootstrapping techniques on a panel of 265 Swedish municipalities over the period 1979–1987. A lag of one year is found in the expenditures equation, while no dynamics are found in the own-source revenues and grants equations. These results, while contrasting sharply with those obtained when asymptotic critical values are used, are well in line with the theoretical explanations given in the literature for dynamic behaviour in the local public sector. Copyright © 2000 John Wiley & Sons, Ltd.

12.
This paper addresses the problem of estimation of a nonparametric regression function from selectively observed data when selection is endogenous. Our approach relies on independence between covariates and selection conditionally on potential outcomes. Endogeneity of regressors is also allowed for. In both the exogenous and endogenous cases, consistent two-step estimation procedures are proposed and their rates of convergence are derived. The pointwise asymptotic distribution of the estimators is established. In addition, bootstrap uniform confidence bands are obtained. Finite sample properties are illustrated in a Monte Carlo simulation study and an empirical application.

13.
Maximum likelihood (ML) estimation of the autoregressive parameter of a dynamic panel data model with fixed effects is inconsistent under fixed time series sample size and large cross section sample size asymptotics. This paper proposes a general, computationally inexpensive method of bias reduction that is based on indirect inference, shows unbiasedness, and analyzes efficiency. Monte Carlo studies show that our procedure achieves substantial bias reductions with only mild increases in variance, thereby substantially reducing root mean square errors. The method is compared with certain consistent estimators and is shown to have superior finite sample properties to the generalized method of moments (GMM) and the bias-corrected ML estimator.

14.
Monte Carlo studies have shown that estimated asymptotic standard errors of the efficient two-step generalized method of moments (GMM) estimator can be severely downward biased in small samples. The weight matrix used in the calculation of the efficient two-step GMM estimator is based on initial consistent parameter estimates. In this paper it is shown that the extra variation due to the presence of these estimated parameters in the weight matrix accounts for much of the difference between the finite sample and the usual asymptotic variance of the two-step GMM estimator, when the moment conditions used are linear in the parameters. This difference can be estimated, resulting in a finite sample corrected estimate of the variance. In a Monte Carlo study of a panel data model it is shown that the corrected variance estimate approximates the finite sample variance well, leading to more accurate inference.

15.
This paper analyzes the higher-order asymptotic properties of generalized method of moments (GMM) estimators for linear time series models using many lags as instruments. A data-dependent moment selection method based on minimizing the approximate mean squared error is developed. In addition, a new version of the GMM estimator based on kernel-weighted moment conditions is proposed. It is shown that kernel-weighted GMM estimators can reduce the asymptotic bias compared to standard GMM estimators. Kernel weighting also helps to simplify the problem of selecting the optimal number of instruments. A feasible procedure similar to optimal bandwidth selection is proposed for the kernel-weighted GMM estimator.

16.
We consider pseudo-panel data models constructed from repeated cross sections in which the number of individuals per group is large relative to the number of groups and time periods. First, we show that, when time-invariant group fixed effects are neglected, the OLS estimator does not converge in probability to a constant but rather to a random variable. Second, we show that, while the fixed-effects (FE) estimator is consistent, the usual t statistic is not asymptotically normally distributed, and we propose a new robust t statistic whose asymptotic distribution is standard normal. Third, we propose efficient GMM estimators using the orthogonality conditions implied by grouping, and we provide t tests that are valid even in the presence of time-invariant group effects. Our Monte Carlo results show that the proposed GMM estimator is more precise than the FE estimator and that our new t test has good size and is powerful.
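The within (fixed-effects) transformation at the heart of the FE estimator can be sketched as follows: demean y and x within each group, then run OLS on the demeaned data. In the noiseless case below it recovers the slope exactly despite the group effects; all names and the single-regressor setup are illustrative.

```python
import numpy as np

def within_estimator(y, x, groups):
    """Fixed-effects (within) estimator for a single regressor:
    demean y and x within each group, then OLS on the demeaned data."""
    y = np.asarray(y, float)
    x = np.asarray(x, float)
    yd, xd = y.copy(), x.copy()
    for g in np.unique(groups):
        m = groups == g
        yd[m] -= y[m].mean()   # removes the group effect from y
        xd[m] -= x[m].mean()
    return (xd @ yd) / (xd @ xd)

# Noiseless check: with group effects alpha_g and slope beta = 2,
# the within estimator recovers beta exactly.
rng = np.random.default_rng(3)
groups = np.repeat(np.arange(5), 20)
x = rng.standard_normal(100)
alpha = np.repeat(rng.standard_normal(5), 20)  # group fixed effects
y = alpha + 2.0 * x
beta_hat = within_estimator(y, x, groups)
```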

17.
This paper considers the generalized empirical likelihood (GEL) method for estimating the parameters of the multivariate stable distribution. The GEL method is considered to be an extension of the generalized method of moments (GMM). Multivariate stable distributions are widely applicable as they can accommodate both skewness and heavy tails. We treat the spectral measure, which summarizes scale and asymmetry, by discretization. In order to estimate all the model parameters simultaneously, we apply an estimating function constructed by equating the empirical and theoretical characteristic functions. The efficacy of the proposed GEL method is demonstrated in Monte Carlo studies. An illustrative example involving daily returns of market indexes is also included.
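The idea of matching empirical and theoretical characteristic functions can be sketched in the Gaussian special case (a stable law with alpha = 2, where the CF exp(-sigma^2 t^2 / 2) is available in closed form and data are easy to simulate). This is a simplified least-squares CF match, not the paper's GEL estimator; the grid of t points and the search grid for sigma are arbitrary illustrative choices.

```python
import numpy as np

def ecf(x, t):
    """Empirical characteristic function of a sample x at points t."""
    return np.exp(1j * np.outer(t, x)).mean(axis=1)

def fit_scale_normal(x, t_grid):
    """Estimate sigma for X ~ N(0, sigma^2) by matching the (real part of
    the) empirical CF to exp(-sigma^2 t^2 / 2) over a grid of candidates."""
    target = ecf(x, t_grid).real
    grid = np.linspace(0.1, 5.0, 491)
    losses = [np.sum((target - np.exp(-0.5 * s**2 * t_grid**2))**2)
              for s in grid]
    return grid[int(np.argmin(losses))]

rng = np.random.default_rng(4)
x = rng.normal(0.0, 2.0, size=20000)
sigma_hat = fit_scale_normal(x, np.linspace(0.1, 1.0, 10))
```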

18.
The problem of testing non-nested regression models that include lagged values of the dependent variable as regressors is discussed. It is argued that it is essential to test for error autocorrelation if ordinary least squares and the associated J and F tests are to be used. A heteroskedasticity-robust joint test against a combination of the artificial alternatives used for autocorrelation and non-nested hypothesis tests is proposed. Monte Carlo results indicate that implementing this joint test using a wild bootstrap method leads to a well-behaved procedure and gives better control of finite sample significance levels than asymptotic critical values.
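The wild bootstrap used to implement such a joint test resamples residuals with random sign flips, so each observation's error magnitude, and hence any heteroskedasticity pattern, is preserved. A minimal sketch of one resample with Rademacher multipliers (names illustrative):

```python
import numpy as np

def wild_bootstrap_sample(fitted, resid, seed=0):
    """One wild bootstrap sample: y* = fitted + v * resid, with i.i.d.
    Rademacher multipliers v in {-1, +1}; |y* - fitted| = |resid|, so the
    heteroskedasticity pattern of the residuals is preserved."""
    rng = np.random.default_rng(seed)
    v = rng.choice([-1.0, 1.0], size=len(resid))
    return fitted + v * resid, v

fitted = np.linspace(0.0, 1.0, 50)
resid = np.linspace(-0.5, 0.5, 50)
y_star, v = wild_bootstrap_sample(fitted, resid)
```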

19.
We propose a parametric block wild bootstrap approach to compute density forecasts for various types of mixed-data sampling (MIDAS) regressions. First, Monte Carlo simulations show that predictive densities for the various MIDAS models derived from the block wild bootstrap approach are more accurate in terms of coverage rates than predictive densities derived from either a residual-based bootstrap approach or by drawing errors from a normal distribution. This result holds whether the data-generating errors are normally independently distributed, serially correlated, heteroskedastic or a mixture of normal distributions. Second, we evaluate density forecasts for quarterly US real output growth in an empirical exercise, exploiting information from typical monthly and weekly series. We show that the block wild bootstrapping approach, applied to the various MIDAS regressions, produces predictive densities for US real output growth that are well calibrated. Moreover, relative accuracy, measured in terms of the logarithmic score, improves for the various MIDAS specifications as more information becomes available. Copyright © 2016 John Wiley & Sons, Ltd.

20.
This paper derives limit distributions of empirical likelihood estimators for models in which inequality moment conditions provide overidentifying information. We show that the use of this information leads to a reduction of the asymptotic mean-squared estimation error and propose asymptotically uniformly valid tests and confidence sets for the parameters of interest. While inequality moment conditions arise in many important economic models, we use a dynamic macroeconomic model as a data generating process and illustrate our methods with instrumental variable estimators of monetary policy rules. The results obtained in this paper extend to conventional GMM estimators.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)