Similar Literature
20 similar articles retrieved (search time: 15 ms)
1.
We consider pseudo-panel data models constructed from repeated cross sections in which the number of individuals per group is large relative to the number of groups and time periods. First, we show that, when time-invariant group fixed effects are neglected, the OLS estimator does not converge in probability to a constant but rather to a random variable. Second, we show that, while the fixed-effects (FE) estimator is consistent, the usual t statistic is not asymptotically normally distributed, and we propose a new robust t statistic whose asymptotic distribution is standard normal. Third, we propose efficient GMM estimators using the orthogonality conditions implied by grouping and we provide t tests that are valid even in the presence of time-invariant group effects. Our Monte Carlo results show that the proposed GMM estimator is more precise than the FE estimator and that our new t test has good size and is powerful.
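As a companion to the estimators compared above, here is a minimal sketch of the fixed-effects (within) benchmark on a pseudo panel of group-by-period cell means built from simulated repeated cross sections. The data-generating design, sample sizes, and variable names are illustrative assumptions; the paper's GMM estimator and robust t statistic are not reproduced.

```python
# Sketch: fixed-effects (within) estimator on a pseudo panel built from
# repeated cross sections. Illustrative only; not the paper's GMM estimator.
import numpy as np

rng = np.random.default_rng(0)
G, T, n_per_cell = 20, 5, 200          # groups, periods, individuals per cell

alpha = rng.normal(size=G)             # time-invariant group effects
beta = 1.5                             # slope of interest

# Build cell (group x period) means from simulated repeated cross sections.
xbar = np.empty((G, T))
ybar = np.empty((G, T))
for g in range(G):
    for t in range(T):
        x = rng.normal(loc=0.5 * t, scale=1.0, size=n_per_cell)
        y = alpha[g] + beta * x + rng.normal(size=n_per_cell)
        xbar[g, t] = x.mean()
        ybar[g, t] = y.mean()

# Within transformation removes the group effects before pooled OLS.
xw = xbar - xbar.mean(axis=1, keepdims=True)
yw = ybar - ybar.mean(axis=1, keepdims=True)
beta_fe = (xw * yw).sum() / (xw ** 2).sum()
print(f"FE estimate of beta: {beta_fe:.3f} (true value {beta})")
```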

2.
This paper studies robust inference for linear panel models with fixed effects in the presence of heteroskedasticity and spatiotemporal dependence of unknown forms. We propose a bivariate kernel covariance estimator that nests existing estimators as special cases. Our estimator improves upon existing estimators in terms of robustness, efficiency, and adaptiveness. For distributional approximations, we consider two types of asymptotics: increasing-smoothing asymptotics and fixed-smoothing asymptotics. Under the former, the Wald statistic based on our covariance estimator converges to a chi-square distribution. Under the latter, the Wald statistic converges to a limiting distribution that can be well approximated by an F distribution. Simulation results show that our proposed testing procedure works well in finite samples.

3.
This paper develops an asymptotic theory for test statistics in linear panel models that are robust to heteroskedasticity, autocorrelation and/or spatial correlation. Two classes of standard errors are analyzed. Both are based on nonparametric heteroskedasticity and autocorrelation consistent (HAC) covariance matrix estimators. The first class is based on averages of HAC estimators across individuals in the cross-section, i.e. “averages of HACs”. This class includes the well-known cluster standard errors analyzed by Arellano (1987) as a special case. The second class is based on the HAC of cross-section averages and was proposed by Driscoll and Kraay (1998). The “HAC of averages” standard errors are robust to heteroskedasticity, serial correlation and spatial correlation, but weak dependence in the time dimension is required. The “averages of HACs” standard errors are robust to heteroskedasticity and serial correlation, including the nonstationary case, but they are not valid in the presence of spatial correlation. The main contribution of the paper is to develop a fixed-b asymptotic theory for statistics based on both classes of standard errors in models with individual and possibly time fixed-effects dummy variables. The asymptotics is carried out for large time sample sizes, for both fixed and large cross-section sample sizes. Extensive simulations show that the fixed-b approximation is usually much better than the traditional normal or chi-square approximation, especially for the Driscoll-Kraay standard errors. The use of fixed-b critical values will lead to more reliable inference in practice, especially for tests of joint hypotheses.
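The two classes of standard errors can be sketched numerically as follows, under an assumed toy panel design: an Arellano-type cluster-robust variance (“averages of HACs”) and a Driscoll-Kraay variance built from a Bartlett-kernel HAC of the cross-sectional score sums (“HAC of averages”). The bandwidth rule is an arbitrary illustrative choice, and the paper's fixed-b critical values are not computed here.

```python
# Sketch of the two classes of standard errors discussed above (names and
# data-generating process are illustrative, not from the paper):
#   - "averages of HACs": Arellano-type cluster-robust variance,
#   - "HAC of averages": Driscoll-Kraay variance with a Bartlett kernel.
import numpy as np

rng = np.random.default_rng(1)
N, T = 50, 40
x = rng.normal(size=(N, T))
alpha = rng.normal(size=(N, 1))
common = rng.normal(size=T)                 # common shock -> spatial correlation
u = 0.8 * common + rng.normal(size=(N, T))
y = alpha + 1.0 * x + u

# Within (fixed-effects) transformation and pooled OLS slope.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
sxx = (xd ** 2).sum()
bhat = (xd * yd).sum() / sxx
score = xd * (yd - bhat * xd)               # s_it = x_it * residual_it

# Cluster-robust ("averages of HACs"): sum scores within each individual.
si = score.sum(axis=1)
v_cluster = si @ si / sxx ** 2

# Driscoll-Kraay ("HAC of averages"): Bartlett-kernel HAC of the
# cross-sectional sums h_t = sum_i s_it.
h = score.sum(axis=0)
M = int(np.floor(4 * (T / 100) ** (2 / 9))) + 1   # simple bandwidth rule
meat = h @ h
for j in range(1, M + 1):
    w = 1 - j / (M + 1)
    meat += 2 * w * (h[j:] @ h[:-j])
v_dk = meat / sxx ** 2

print(f"slope {bhat:.3f}, cluster SE {np.sqrt(v_cluster):.3f}, "
      f"Driscoll-Kraay SE {np.sqrt(v_dk):.3f}")
```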

4.
5.
This paper considers parametric inference in a wide range of structural econometric models. It illustrates how the indirect inference principle can be used for inference in these models. Specifically, we show that an ordinary least squares (OLS) estimation can be used as an auxiliary model, which leads to a method that is similar in spirit to a two-stage least squares (2SLS) estimator. Monte Carlo studies and an empirical analysis of timber sale auctions held in Oregon illustrate the usefulness and feasibility of our approach.
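A toy sketch of the indirect-inference idea with an OLS auxiliary model: the structural slope is chosen so that OLS on simulated data matches OLS on the observed data. The endogenous-regressor design is hypothetical, and the nuisance parts of the model are held fixed at assumed known values to keep the sketch one-dimensional; this is not the paper's estimator for auction models.

```python
# Toy indirect-inference sketch: the auxiliary model is a simple OLS
# regression, and the structural parameter is chosen so that OLS on
# simulated data matches OLS on the observed data. All numbers and the
# simple endogenous-regressor design are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
n, beta_true = 2000, 1.0
z = rng.normal(size=n)
v = rng.normal(size=n)
u = 0.6 * v + rng.normal(size=n)        # endogeneity: u correlated with v
x = 0.8 * z + v
y = beta_true * x + u

def ols_coef(yv, xv):
    X = np.column_stack([np.ones_like(xv), xv])
    return np.linalg.lstsq(X, yv, rcond=None)[0][1]

aux_obs = ols_coef(y, x)                # biased auxiliary statistic

def binding_distance(beta, n_sim=20):
    # Average the auxiliary statistic over simulated data sets, keeping the
    # (assumed known) nuisance parts of the design fixed.
    sims = []
    for s in range(n_sim):
        rs = np.random.default_rng(100 + s)
        zs, vs = rs.normal(size=n), rs.normal(size=n)
        us = 0.6 * vs + rs.normal(size=n)
        xs = 0.8 * zs + vs
        sims.append(ols_coef(beta * xs + us, xs))
    return (np.mean(sims) - aux_obs) ** 2

res = minimize_scalar(binding_distance, bounds=(-2, 4), method="bounded")
print(f"OLS (biased): {aux_obs:.3f}  indirect-inference: {res.x:.3f}")
```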

6.
In this paper we consider parametric deterministic frontier models. For example, the production frontier may be linear in the inputs, and the error is purely one-sided, with a known distribution such as exponential or half-normal. The literature contains many negative results for this model. Schmidt (Rev Econ Stat 58:238–239, 1976) showed that the Aigner and Chu (Am Econ Rev 58:826–839, 1968) linear programming estimator was the exponential MLE, but that this was a non-regular problem in which the statistical properties of the MLE were uncertain. Richmond (Int Econ Rev 15:515–521, 1974) and Greene (J Econom 13:27–56, 1980) showed how the model could be estimated by two different versions of corrected OLS, but this did not lead to methods of inference for the inefficiencies. Greene (J Econom 13:27–56, 1980) considered conditions on the distribution of inefficiency that make this a regular estimation problem, but many distributions that would be assumed do not satisfy these conditions. In this paper we show that exact (finite sample) inference is possible when the frontier and the distribution of the one-sided error are known up to the values of some parameters. We give a number of analytical results for the case of an intercept only with exponential errors. In other cases, including models with regressors or error distributions other than the exponential, exact inference is still possible, but simulation is needed to calculate the critical values. We also discuss the case in which the distribution of the error is unknown. In this case asymptotically valid inference is possible using subsampling methods.
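For concreteness, here is a corrected-OLS (COLS) sketch for a deterministic frontier with exponential one-sided errors, in the spirit of the Richmond/Greene estimators mentioned above: the OLS slopes are kept and the intercept is shifted up by a moment-based estimate of E[u]. The design is simulated and illustrative; the paper's exact finite-sample inference procedures are not reproduced.

```python
# Minimal corrected-OLS (COLS) sketch for a deterministic frontier with
# exponential one-sided errors, y_i = a + b*x_i - u_i, u_i ~ Exp(theta).
# OLS slopes are consistent; the intercept is shifted down by E[u] = theta,
# which can be recovered from the residual moments (for the exponential,
# sd(u) = mean(u) = theta). Illustrative only, not the paper's exact tests.
import numpy as np

rng = np.random.default_rng(3)
n, a_true, b_true, theta = 500, 5.0, 0.7, 0.5
x = rng.uniform(0, 10, size=n)
u = rng.exponential(scale=theta, size=n)    # one-sided inefficiency
y = a_true + b_true * x - u

X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef

theta_hat = resid.std(ddof=2)               # moment estimate of theta
a_cols = coef[0] + theta_hat                # shift the intercept up by E[u]
u_hat = a_cols + coef[1] * x - y            # implied firm-level inefficiency

print(f"OLS intercept {coef[0]:.3f}, COLS intercept {a_cols:.3f} "
      f"(true {a_true}), theta_hat {theta_hat:.3f} (true {theta})")
```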

7.
Parametric mixture models are commonly used in applied work, especially in empirical economics, where they are often employed to learn, for example, about the proportions of various types in a given population. This paper examines inference on the proportions (mixing probabilities) in a simple mixture model in the presence of nuisance parameters when the sample size is large. It is well known that likelihood inference in mixture models is complicated by (1) lack of point identification and (2) parameters (for example, mixing probabilities) whose true value may lie on the boundary of the parameter space. These issues cause the profiled likelihood ratio (PLR) statistic to admit asymptotic limits that differ discontinuously depending on how the true density of the data approaches the regions of singularities where point identification fails. This lack of uniformity in the asymptotic distribution suggests that confidence intervals based on pointwise asymptotic approximations might lead to faulty inferences. This paper examines this problem in detail in a finite mixture model and provides possible fixes based on the parametric bootstrap. We examine the performance of this parametric bootstrap in Monte Carlo experiments and apply it to data from Beauty Contest experiments. We also examine small-sample inference and projection methods.
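A parametric-bootstrap sketch in the simplest boundary case: a two-component mixture with known component densities and an unknown mixing probability p, with the null p = 0 sitting on the boundary of the parameter space. The component densities, sample sizes, and number of bootstrap replications are illustrative assumptions, not taken from the paper.

```python
# Parametric-bootstrap sketch for a likelihood-ratio test of the mixing
# probability in a two-component mixture with known components,
# (1 - p) * N(0,1) + p * N(2,1). The true p = 0 lies on the boundary, which
# is exactly where the usual chi-square calibration is suspect.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 300
y = rng.normal(size=n)                      # data generated under H0: p = 0

def neg_loglik(p, data):
    dens = (1 - p) * norm.pdf(data, 0, 1) + p * norm.pdf(data, 2, 1)
    return -np.log(dens).sum()

def lr_stat(data):
    fit = minimize_scalar(neg_loglik, bounds=(0.0, 1.0), args=(data,),
                          method="bounded")
    return 2 * (neg_loglik(0.0, data) - fit.fun)

lr_obs = lr_stat(y)

# Parametric bootstrap: resample from the null model and recompute the LR.
B = 199
lr_boot = np.array([lr_stat(np.random.default_rng(1000 + b).normal(size=n))
                    for b in range(B)])
pval = (1 + (lr_boot >= lr_obs).sum()) / (B + 1)
print(f"LR = {lr_obs:.3f}, bootstrap p-value = {pval:.3f}")
```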

8.
We describe procedures for Bayesian estimation and testing in cross-sectional, panel data and nonlinear smooth coefficient models. The smooth coefficient model is a generalization of the partially linear or additive model wherein coefficients on linear explanatory variables are treated as unknown functions of an observable covariate. In the approach we describe, points on the regression lines are regarded as unknown parameters and priors are placed on differences between adjacent points to introduce the potential for smoothing the curves. The algorithms we describe are quite simple to implement—for example, estimation, testing and smoothing parameter selection can be carried out analytically in the cross-sectional smooth coefficient model.
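The smoothing device described above can be sketched as follows: each observation's point on the coefficient curve is a parameter, a prior is placed on the differences between adjacent points, and the conditional posterior mean is available in closed form. The error and smoothing variances are fixed here rather than given priors, so this is only a stylized fragment of a full Bayesian treatment, not the authors' algorithm.

```python
# Sketch of the "prior on differences between adjacent points" idea: points
# on the coefficient curve are unknown parameters, and the conditional
# posterior mean has a closed form. Variances sigma2 and tau2 are fixed here
# for illustration rather than estimated.
import numpy as np

rng = np.random.default_rng(5)
n = 300
z = np.sort(rng.uniform(0, 1, size=n))       # smoothing covariate, sorted
x = rng.normal(size=n)                       # linear explanatory variable
beta_true = np.sin(2 * np.pi * z)            # smooth coefficient curve
y = beta_true * x + 0.3 * rng.normal(size=n)

sigma2, tau2 = 0.3 ** 2, 0.05 ** 2           # error and smoothing variances
X = np.diag(x)                               # each point gets its own beta_i
D = np.diff(np.eye(n), axis=0)               # first-difference matrix

# Conditional posterior mean: (X'X/sigma2 + D'D/tau2)^{-1} X'y / sigma2
A = X.T @ X / sigma2 + D.T @ D / tau2
beta_post = np.linalg.solve(A, X.T @ y / sigma2)
print("mean abs error of posterior-mean curve:",
      np.round(np.abs(beta_post - beta_true).mean(), 2))
```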

9.
10.
There is compelling evidence that many macroeconomic and financial variables are not generated by linear models. This evidence is based on testing linearity against either smooth nonlinearity or piece-wise linearity, but there is no framework that encompasses both. This paper provides an econometric framework that allows for both breaks and smooth nonlinearity between breaks. We estimate the unknown break dates simultaneously with the other parameters via nonlinear least squares. Using new central limit results for nonlinear processes, we provide inference methods for the break dates and parameter estimates, as well as several instability tests. We illustrate our methods via simulated and empirical smooth transition models with breaks.
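A minimal sketch of estimating an unknown break date by profiled nonlinear least squares: for each candidate date the remaining parameters are OLS, and the date minimizing the residual sum of squares is kept. Only a single mean break is used here; the smooth transitions between breaks and the inference methods of the paper are not reproduced.

```python
# Minimal sketch of break-date estimation by (profiled) nonlinear least
# squares: for each candidate break date the remaining parameters are OLS,
# and the break date minimizing the residual sum of squares is kept.
import numpy as np

rng = np.random.default_rng(6)
T, k_true = 200, 120
y = 1.0 + 0.8 * (np.arange(T) >= k_true) + 0.5 * rng.normal(size=T)

def ssr_at(k):
    X = np.column_stack([np.ones(T), (np.arange(T) >= k).astype(float)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return resid @ resid

trim = int(0.15 * T)                           # trim the sample ends
candidates = range(trim, T - trim)
k_hat = min(candidates, key=ssr_at)
print(f"estimated break date {k_hat} (true {k_true})")
```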

11.
Maximum likelihood (ML) estimation of the autoregressive parameter of a dynamic panel data model with fixed effects is inconsistent under fixed time series sample size and large cross section sample size asymptotics. This paper proposes a general, computationally inexpensive method of bias reduction that is based on indirect inference, shows unbiasedness and analyzes efficiency. Monte Carlo studies show that our procedure achieves substantial bias reductions with only mild increases in variance, thereby substantially reducing root mean square errors. The method is compared with certain consistent estimators and is shown to have finite sample properties superior to those of the generalized method of moments (GMM) and bias-corrected ML estimators.
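The bias-correction idea can be sketched for the fixed-effects AR(1) panel: the within (LSDV) estimate is biased when T is small, but the binding function mapping the true autoregressive parameter to the expected LSDV estimate can be traced out by simulation and inverted at the observed estimate. The grid, panel dimensions, and number of simulations below are illustrative assumptions, not the paper's implementation.

```python
# Indirect-inference bias-correction sketch for the AR(1) panel with fixed
# effects: trace out rho -> E[rho_LSDV] by simulation, then invert it at the
# observed within estimate. A toy version of the idea.
import numpy as np

def lsdv_rho(y):
    # Within estimator of rho from a balanced N x (T+1) panel.
    y0, y1 = y[:, :-1], y[:, 1:]
    y0d = y0 - y0.mean(axis=1, keepdims=True)
    y1d = y1 - y1.mean(axis=1, keepdims=True)
    return (y0d * y1d).sum() / (y0d ** 2).sum()

def simulate_panel(rho, N, T, rng):
    y = np.zeros((N, T + 1))
    alpha = rng.normal(size=N)
    y[:, 0] = alpha + rng.normal(size=N)
    for t in range(T):
        y[:, t + 1] = alpha * (1 - rho) + rho * y[:, t] + rng.normal(size=N)
    return y

N, T, rho_true = 200, 6, 0.6
rng = np.random.default_rng(7)
rho_obs = lsdv_rho(simulate_panel(rho_true, N, T, rng))   # biased downward

# Binding function by simulation, then inversion at the observed estimate.
grid = np.linspace(0.0, 0.95, 40)
binding = np.array([
    np.mean([lsdv_rho(simulate_panel(r, N, T, np.random.default_rng(50 * i + s)))
             for s in range(20)])
    for i, r in enumerate(grid)
])
rho_ii = grid[np.argmin(np.abs(binding - rho_obs))]
print(f"LSDV {rho_obs:.3f}  indirect inference {rho_ii:.3f}  true {rho_true}")
```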

12.
We propose two new tests for the specification of both the drift and the diffusion functions in a discretized version of a semiparametric continuous-time financial econometric model. Theoretically, we establish asymptotic consistency results for the proposed tests. Practically, a simple selection procedure for the bandwidth parameter involved in each of the proposed tests is established, based on an assessment of the power function of the test under study. To the best of our knowledge, this is the first approach of this kind to specification testing in continuous-time financial econometrics. The proposed theory is supported by small- and medium-sample studies.
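As background for what such tests compare, the sketch below forms kernel (Nadaraya-Watson) estimates of the drift and squared diffusion from conditional means of the discretized increments and contrasts them with a parametric Ornstein-Uhlenbeck candidate. The bandwidth, grid, and simulated model are assumptions, and the paper's test statistics and bandwidth selection procedure are not implemented.

```python
# Kernel estimates of drift and squared diffusion from a discretized sample
# path, compared against a parametric (Ornstein-Uhlenbeck) candidate.
# Illustrative only; not the paper's test statistics.
import numpy as np

rng = np.random.default_rng(12)
T, delta = 20000, 0.01
kappa, theta, sigma = 2.0, 0.0, 0.5          # OU parameters for simulation

x = np.empty(T)
x[0] = 0.0
for t in range(T - 1):
    # Euler scheme for dX = kappa*(theta - X) dt + sigma dW
    x[t + 1] = x[t] + kappa * (theta - x[t]) * delta \
        + sigma * np.sqrt(delta) * rng.normal()

dx = np.diff(x)
xs = x[:-1]
grid = np.linspace(-0.5, 0.5, 21)
h = 0.1

w = np.exp(-0.5 * ((grid[:, None] - xs[None, :]) / h) ** 2)
drift_np = (w @ dx) / (delta * w.sum(axis=1))          # E[dX | X=x] / delta
diff2_np = (w @ dx ** 2) / (delta * w.sum(axis=1))     # E[dX^2 | X=x] / delta

drift_par = kappa * (theta - grid)                     # parametric candidates
diff2_par = sigma ** 2 * np.ones_like(grid)
print("max drift gap:", np.round(np.abs(drift_np - drift_par).max(), 3))
print("max diffusion gap:", np.round(np.abs(diff2_np - diff2_par).max(), 3))
```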

13.
Inference for multiple-equation Markov-chain models raises a number of difficulties that are unlikely to appear in smaller models. Our framework allows for many regimes in the transition matrix, without letting the number of free parameters grow with the square of the number of regimes, and without losing a convenient form for the posterior distribution. Calculation of marginal data densities is difficult in these high-dimensional models. This paper gives methods to overcome these difficulties and explains why existing methods are unreliable. It makes suggestions for maximizing the posterior density and for initiating MCMC simulations in ways that provide robustness against the complex likelihood shape.

14.
15.
Arnold, Bernhard F. and Gerke, Oke, Metrika (2003) 57(1): 81–95
In this paper statistical tests with fuzzily formulated hypotheses are discussed, i.e., the hypotheses H0 and H1 are fuzzy sets. The classical criteria of the errors of type I and type II are generalized, and this approach is applied to the linear hypothesis in the linear regression model. A sufficient condition for controlling both generalized criteria simultaneously is presented, even in the case of testing H0 against the omnibus alternative in which H1 is the complement of H0. This is completely different from the classical case of testing crisp complementary hypotheses.

16.
Journal of Econometrics (1986) 33(3): 341–365
This paper explores the specification and testing of some modified count data models. These alternatives permit more flexible specification of the data-generating process (dgp) than do familiar count data models (e.g., the Poisson), and provide a natural means for modeling data that are over- or underdispersed by the standards of the basic models. In the cases considered, the familiar forms of the distributions result as parameter-restricted versions of the proposed modified distributions. Accordingly, score tests of the restrictions that use only the easily-computed ML estimates of the standard models are proposed. The tests proposed by Hausman (1978) and White (1982) are also considered. The tests are then applied to count data models estimated using survey microdata on beverage consumption.
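In the same spirit (a score-type test of the restriction that reduces a flexible count model to the basic Poisson), here is a standard regression-based overdispersion check: fit the Poisson model, then regress ((y - mu)^2 - y)/mu on mu without a constant and inspect the t statistic. The simulated negative-binomial data and sample size are illustrative, and this is a textbook test rather than necessarily the exact statistic proposed in the paper.

```python
# Regression-based score-type test for overdispersion relative to the
# Poisson model (Cameron-Trivedi auxiliary regression). The data are
# simulated as a gamma-mixed Poisson, so they are overdispersed by design.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 1000
x = rng.normal(size=n)
mu = np.exp(0.5 + 0.8 * x)
# Gamma-mixed Poisson => negative-binomial-type overdispersed counts.
y = rng.poisson(mu * rng.gamma(shape=2.0, scale=0.5, size=n))

X = sm.add_constant(x)
pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
mu_hat = pois.fittedvalues

# Auxiliary regression: ((y - mu)^2 - y) / mu on mu, no constant.
lhs = ((y - mu_hat) ** 2 - y) / mu_hat
aux = sm.OLS(lhs, mu_hat).fit()
print(f"overdispersion t-statistic: {aux.tvalues[0]:.2f}")
```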

17.
The problem considered here is the ‘first-order’ identification of linear models, i.e., the identification of the coefficients of the variables appearing in the model. A general approach is proposed and very simple results are stated in this general context. A geometric interpretation of this approach is given. Classical results in this area are shown to be special cases of these general results.

18.
Under minimal assumptions, finite-sample confidence bands for quantile regression models can be constructed. These confidence bands are based on the “conditional pivotal property” of the estimating equations that quantile regression methods solve, and they provide valid finite-sample inference for linear and nonlinear quantile models with endogenous or exogenous covariates. The confidence regions can be computed using Markov chain Monte Carlo (MCMC) methods. We illustrate the finite-sample procedure through two empirical examples: estimating a heterogeneous demand elasticity and estimating heterogeneous returns to schooling. We find pronounced differences between asymptotic and finite-sample confidence regions in cases where the usual asymptotics are suspect.
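The pivotal idea is easiest to see in the intercept-only case sketched below: evaluated at the true tau-quantile, the number of observations falling below it is exactly Binomial(n, tau) regardless of the data distribution, so inverting binomial quantiles over a grid yields a distribution-free confidence set without asymptotics or MCMC. The general regression case in the paper uses the same logic with richer estimating equations; the lognormal data here are an illustrative assumption.

```python
# Finite-sample, distribution-free confidence set for a single quantile via
# the pivotal property of the quantile estimating equation (intercept-only
# case). Illustrative sketch of the idea, not the paper's general procedure.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(9)
n, tau, alpha = 200, 0.5, 0.05
y = rng.lognormal(size=n)                      # skewed data, true median = 1

lo = binom.ppf(alpha / 2, n, tau)
hi = binom.ppf(1 - alpha / 2, n, tau)

grid = np.linspace(y.min(), y.max(), 2000)
counts = np.array([(y <= b).sum() for b in grid])
conf_set = grid[(counts >= lo) & (counts <= hi)]
print(f"95% finite-sample CI for the median: "
      f"[{conf_set.min():.3f}, {conf_set.max():.3f}]")
```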

19.
Inference in the inequality constrained normal linear regression model is approached as a problem in Bayesian inference, using a prior that is the product of a conventional uninformative distribution and an indicator function representing the inequality constraints. The posterior distribution is calculated using Monte Carlo numerical integration, which leads directly to the evaluation of expected values of functions of interest. This approach is compared with others that have been proposed. Three empirical examples illustrate the utility of the proposed methods using an inexpensive 32-bit microcomputer.
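A sketch of the Monte Carlo approach described above: draw from the unconstrained posterior of the coefficients, retain only the draws satisfying the inequality constraints (i.e., weight by the indicator), and average any function of interest over the retained draws. A known error variance and a single sign constraint are assumed purely to keep the example short.

```python
# Monte Carlo integration for an inequality-constrained regression posterior:
# sample the unconstrained posterior, keep draws that satisfy the constraint.
import numpy as np

rng = np.random.default_rng(10)
n, sigma = 100, 1.0
x = rng.normal(size=n)
y = 0.1 * x + sigma * rng.normal(size=n)        # true slope near the boundary

X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
b_ols = XtX_inv @ X.T @ y

# Unconstrained posterior under a flat prior and known sigma:
# beta | y ~ N(b_ols, sigma^2 (X'X)^{-1}).
draws = rng.multivariate_normal(b_ols, sigma ** 2 * XtX_inv, size=20000)

keep = draws[:, 1] >= 0                         # constraint: slope >= 0
post = draws[keep]
print(f"acceptance rate {keep.mean():.2f}, "
      f"constrained posterior mean of slope {post[:, 1].mean():.3f}")
```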

20.
We examine a consistent test for the correct specification of a regression function with dependent data. The test is based on the supremum of the difference between the parametric and nonparametric estimates of the regression model. Rather surprisingly, the behaviour of the test depends on whether the regressors are deterministic or stochastic. In the former situation, the normalization constants necessary to obtain the limiting Gumbel distribution are data dependent and difficult to estimate, so it may be difficult to obtain valid critical values, whereas, in the latter, the asymptotic distribution may not even be known. Because of this, we describe, under very mild regularity conditions, a bootstrap analogue of the test, showing its asymptotic validity and its finite sample behaviour in a small Monte Carlo experiment.
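The mechanics of a bootstrap version of such a sup-distance test can be sketched as follows: the statistic is the largest gap between a linear OLS fit and a Nadaraya-Watson fit over a grid, and its null distribution is approximated by resampling residuals under the fitted parametric model. The paper handles dependent data with an appropriately adapted bootstrap; the i.i.d. wild-bootstrap toy below, with an arbitrary bandwidth, only illustrates the idea.

```python
# Bootstrap sup-distance specification test: compare a linear OLS fit with a
# Nadaraya-Watson kernel fit and calibrate the supremum gap by a wild
# bootstrap under the fitted null model. Illustrative i.i.d. toy.
import numpy as np

rng = np.random.default_rng(11)
n, h = 200, 0.25
x = rng.uniform(-2, 2, size=n)
y = x + 0.4 * x ** 2 + 0.5 * rng.normal(size=n)   # mild nonlinearity

grid = np.linspace(-1.5, 1.5, 60)

def nw_fit(xv, yv, pts, bw):
    # Nadaraya-Watson regression with a Gaussian kernel.
    w = np.exp(-0.5 * ((pts[:, None] - xv[None, :]) / bw) ** 2)
    return (w @ yv) / w.sum(axis=1)

def sup_stat(xv, yv):
    X = np.column_stack([np.ones_like(xv), xv])
    coef, *_ = np.linalg.lstsq(X, yv, rcond=None)
    lin = coef[0] + coef[1] * grid
    return np.max(np.abs(nw_fit(xv, yv, grid, h) - lin)), X @ coef, yv - X @ coef

stat_obs, fit_lin, resid = sup_stat(x, y)

B, count = 199, 0
for b in range(B):
    rb = np.random.default_rng(2000 + b)
    ystar = fit_lin + resid * rb.choice([-1.0, 1.0], size=n)   # wild bootstrap
    if sup_stat(x, ystar)[0] >= stat_obs:
        count += 1
print(f"sup statistic {stat_obs:.3f}, bootstrap p-value {(count + 1) / (B + 1):.3f}")
```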
