Similar Articles
20 similar records found
1.
ML–estimation of regression parameters with incomplete covariate information usually requires a distributional assumption about the covariates concerned, which introduces a potential source of misspecification. Semiparametric procedures avoid such assumptions at the expense of efficiency. In this paper a small-sample simulation study is carried out to assess the performance of the ML–estimator under misspecification and to compare it with the semiparametric procedures when the former is based on a correct assumption. The results show that correct parametric assumptions yield only a small gain, which does not justify the possibly large bias when the assumptions are not met. Additionally, a simple modification of the complete-case estimator appears to be nearly semiparametrically efficient.
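The complete-case baseline behind this abstract is easy to illustrate. The sketch below (all parameter values hypothetical) only shows that plain complete-case OLS remains consistent when the covariate is missing completely at random; it does not implement the paper's modified estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Linear model y = 1 + 2x + eps, with the covariate x missing completely at random.
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
observed = rng.random(n) < 0.6          # x observed for ~60% of units

# Complete-case OLS: discard units with missing covariate information.
Xc = np.column_stack([np.ones(observed.sum()), x[observed]])
beta_cc = np.linalg.lstsq(Xc, y[observed], rcond=None)[0]
print(beta_cc)   # close to (1, 2) despite discarding ~40% of the sample
```

Under informative missingness the complete-case estimator loses this property, which is where the modifications studied in the paper come in.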

2.
Although each statistical unit on which measurements are taken is unique, typically there is not enough information available to account totally for its uniqueness. Therefore, heterogeneity among units has to be limited by structural assumptions. One classical approach is to use random effects models, which assume that heterogeneity can be described by distributional assumptions. However, inference may depend on the assumed mixing distribution, and it is assumed that the random effects and the observed covariates are independent. An alternative considered here is fixed effects models, which let each unit have its own parameter. They are quite flexible but suffer from the large number of parameters. The structural assumption made here is that there are clusters of units that share the same effects. It is shown how clusters can be identified by tailored regularised estimators. Moreover, it is shown that the regularised estimates compete well with estimates for the random effects model, even if the latter is the data-generating model. They dominate if clusters are present.

3.
This paper applies a large number of models to three previously analyzed data sets, and compares the point estimates and confidence intervals for technical efficiency levels. Classical procedures include multiple comparisons with the best, based on the fixed effects estimates; a univariate version, marginal comparisons with the best; bootstrapping of the fixed effects estimates; and maximum likelihood given a distributional assumption. Bayesian procedures include a Bayesian version of the fixed effects model, and various Bayesian models with informative priors for efficiencies. We find that fixed effects models generally perform poorly; there is a large payoff to distributional assumptions for efficiencies. We do not find much difference between Bayesian and classical procedures, in the sense that the classical MLE based on a distributional assumption for efficiencies gives results that are rather similar to a Bayesian analysis with the corresponding prior.

4.
Covariate Measurement Error in Quadratic Regression   (cited 3 times: 0 self-citations, 3 by others)
We consider quadratic regression models where the explanatory variable is measured with error. The effect of classical measurement error is to flatten the curvature of the estimated function. The effect on the observed turning point depends on the location of the true turning point relative to the population mean of the true predictor. Two methods for adjusting parameter estimates for the measurement error are compared. First, two versions of regression calibration estimation are considered. This approximates the model between the observed variables using the moments of the true explanatory variable given its surrogate measurement. For certain models an expanded regression calibration approximation is exact. The second approach uses moment-based methods which require no assumptions about the distribution of the covariates measured with error. The estimates are compared in a simulation study, and used to examine the sensitivity to measurement error in models relating income inequality to the level of economic development. The simulations indicate that the expanded regression calibration estimator dominates the other estimators when its distributional assumptions are satisfied. When they fail, a small-sample modification of the method-of-moments estimator performs best. Both estimators are sensitive to misspecification of the measurement error model.
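The flattening effect described in this abstract is easy to reproduce. The following sketch (all parameter values hypothetical) contrasts an infeasible quadratic fit on the true covariate with the naive fit on an error-prone surrogate:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# True model: y = b0 + b1*x + b2*x^2 + eps, with x observed only as w = x + u.
b0, b1, b2 = 1.0, 0.5, -0.8
x = rng.normal(0.0, 1.0, n)
u = rng.normal(0.0, 0.7, n)          # classical measurement error
w = x + u                            # observed surrogate
y = b0 + b1 * x + b2 * x**2 + rng.normal(0.0, 0.5, n)

def quad_ols(z, y):
    """OLS fit of y on [1, z, z^2]; returns the coefficient vector."""
    Z = np.column_stack([np.ones_like(z), z, z**2])
    return np.linalg.lstsq(Z, y, rcond=None)[0]

beta_true_x = quad_ols(x, y)   # infeasible fit on the true covariate
beta_naive = quad_ols(w, y)    # naive fit on the error-prone covariate

# Classical error attenuates the curvature: the naive quadratic
# coefficient shrinks toward zero relative to the true b2.
print(beta_true_x[2], beta_naive[2])
```

The corrected estimators discussed in the abstract (regression calibration, method of moments) aim to undo exactly this attenuation.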

5.
A framework for the detection of change points in the expectation in sequences of random variables is presented. Specifically, we investigate time series with general distributional assumptions that may show an unknown number of change points in the expectation occurring on multiple time scales and that may also contain change points in other parameters. To that end we propose a multiple filter test (MFT) that tests the null hypothesis of constant expectation and, in case of rejection of the null hypothesis, an algorithm that estimates the change points. The MFT has three important benefits. First, it allows for general distributional assumptions in the underlying model, assuming piecewise sequences of i.i.d. random variables, where relaxations with regard to identical distribution or independence are also possible. Second, it uses a MOSUM-type statistic and an asymptotic setting in which the MOSUM process converges weakly to a functional of a Brownian motion, which is then used to simulate the rejection threshold of the statistical test. This approach enables a simultaneous application of multiple MOSUM processes, which improves the detection of change points that occur on different time scales. Third, we also show that the method is practically robust against changes in other distributional parameters, such as the variance or higher-order moments, which might occur with or even without a change in expectation. A function implementing the described test and change point estimation is available in the R package MFT.
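A MOSUM statistic of the kind this abstract refers to can be sketched for a single window width (the multi-scale MFT combines several such processes; window width, change location and effect size below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Piecewise i.i.d. sequence with one change in expectation at t = 300.
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(1.5, 1.0, 300)])

def mosum(x, h):
    """MOSUM process for one window width h: scaled difference of the means
    of the adjacent windows x[t-h:t] and x[t:t+h]."""
    # Difference-based variance estimate, insensitive to a single mean shift.
    sigma = np.std(np.diff(x)) / np.sqrt(2.0)
    stats = np.empty(len(x) - 2 * h + 1)
    for t in range(h, len(x) - h + 1):
        stats[t - h] = np.sqrt(h / 2.0) * (x[t:t + h].mean() - x[t - h:t].mean()) / sigma
    return stats

h = 100
g = mosum(x, h)
cp_hat = np.argmax(np.abs(g)) + h    # boundary with the largest statistic
print(cp_hat)                        # near the true change point at 300
```

In the MFT, the maximum of such processes over several window widths h is compared against a rejection threshold simulated from the limiting Brownian-motion functional, which this sketch does not attempt.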

6.
The purpose of this paper is to provide guidelines for empirical researchers who use a class of bivariate threshold crossing models with dummy endogenous variables. A common practice employed by the researchers is the specification of the joint distribution of unobservables as a bivariate normal distribution, which results in a bivariate probit model. To address the problem of misspecification in this practice, we propose an easy‐to‐implement semiparametric estimation framework with parametric copula and nonparametric marginal distributions. We establish asymptotic theory, including root‐n normality, for the sieve maximum likelihood estimators that can be used to conduct inference on the individual structural parameters and the average treatment effect (ATE). In order to show the practical relevance of the proposed framework, we conduct a sensitivity analysis via extensive Monte Carlo simulation exercises. The results suggest that estimates of the parameters, especially the ATE, are sensitive to parametric specification, while semiparametric estimation exhibits robustness to underlying data‐generating processes. We then provide an empirical illustration where we estimate the effect of health insurance on doctor visits. In this paper, we also show that the absence of excluded instruments may result in identification failure, in contrast to what some practitioners believe.

7.
Several optimum non-parametric tests for heteroscedasticity are proposed and studied, along with the tests introduced in the literature, in terms of power and robustness properties. It is found that all tests are reasonably robust to the Ordinary Least Squares (OLS) residual estimates and to the number and character of the regressors. Only a few are robust to both the distributional and independence assumptions about the errors. The power of the tests can be improved with the OLS residual estimates, an increased sample size and greater variability of the regressors. It can be substantially reduced if the observations are not normally distributed, and may increase or decrease if the errors are dependent. Each test is optimum for detecting a specific form of heteroscedasticity, and a serious power loss may occur if the underlying heteroscedasticity in the data generation deviates from it.

8.
This paper introduces a class of robust estimators of the parameters of a stochastic utility function. Existing maximum likelihood and regression estimation methods require the assumption of a particular distributional family for the random component of utility. In contrast, estimators of the ‘maximum score’ class require only weak distributional assumptions for consistency. Following presentation and proof of the basic consistency theorem, additional results are given. An algorithm for achieving maximum score estimates and some small sample Monte Carlo tests are also described.

9.
Labour Economics (2007), 14(1), 73–98
Regression models of wage determination are typically estimated by ordinary least squares using the logarithm of the wage as the dependent variable. These models provide consistent estimates of the proportional impact of wage determinants only under the assumption that the distribution of the error term is independent of the regressors — an assumption that can be violated by the presence of heteroskedasticity, for example. Failure of this assumption is particularly relevant in the estimation of the impact of union status on wages. Alternative wage-equation estimators based on the use of quasi-maximum-likelihood methods are consistent under weaker assumptions about the dependence between the error term and the regressors. They also provide the ability to check the specification of the underlying wage model. Applying this approach to a standard data set, I find that the impact of unions on wages is overstated by 20–30 percent when estimates from log-wage regressions are used for inference.
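The direction of this bias can be reproduced in a stylized simulation (all numbers hypothetical; note that with a single binary regressor the exponential-mean quasi-ML estimate reduces to a log ratio of group means, which keeps the sketch short):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

d = rng.integers(0, 2, n)            # union status dummy
g = 0.15                             # true proportional union effect: E[w|d] = exp(2.5 + g*d)
sigma = np.where(d == 1, 0.3, 0.6)   # log-error variance differs by union status
eps = rng.normal(0.0, sigma)
# Normalize so the multiplicative error has conditional mean one.
w = np.exp(2.5 + g * d + eps - sigma**2 / 2)

# Log-wage OLS slope: difference of group means of log(w). Biased here,
# because E[log w | d] picks up the group-specific variance term.
g_ols = np.log(w[d == 1]).mean() - np.log(w[d == 0]).mean()

# Exponential-mean QML slope: with one dummy, the log ratio of group means of w.
g_qml = np.log(w[d == 1].mean() / w[d == 0].mean())

print(g_ols, g_qml)   # g_ols overstates the true effect 0.15; g_qml is close to it
```

The gap between the two estimates is exactly the heteroskedasticity-driven distortion the abstract describes: log-OLS estimates the conditional mean of log wages, not the proportional effect on the wage level.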

10.
This paper develops a unified framework for fixed effects (FE) and random effects (RE) estimation of higher-order spatial autoregressive panel data models with spatial autoregressive disturbances and heteroscedasticity of unknown form in the idiosyncratic error component. We derive the moment conditions and optimal weighting matrix without distributional assumptions for a generalized moments (GM) estimation procedure of the spatial autoregressive parameters of the disturbance process and define both an RE and an FE spatial generalized two-stage least squares estimator for the regression parameters of the model. We prove consistency of the proposed estimators and derive their joint asymptotic distribution, which is robust to heteroscedasticity of unknown form in the idiosyncratic error component. Finally, we derive a robust Hausman test of the spatial random against the spatial FE model.

11.
Journal of Econometrics (2003), 113(2), 289–335
This paper empirically implements a dynamic, stochastic model of life-cycle labor supply and human capital investment. The model allows agents to be forward looking. But, in contrast to prior literature in this area, it does not require that expectations be formed “rationally”. By avoiding strong assumptions about expectations, I avoid sources of bias stemming from misspecification of the expectation process. A Bayesian econometric method based on Geweke and Keane (in: R.S. Mariano, T. Schuermann, M. Weeks (Eds.), Simulation Based Inference and Econometrics: Methods and Applications, Cambridge University Press, Cambridge, 1999) is used to relax assumptions over expectations. The results of this study are consistent with findings from previous research in the labor supply literature that makes the rational expectations assumption.

12.
Helpman, Melitz and Rubinstein [Quarterly Journal of Economics (2008) Vol. 123, pp. 441–487] (HMR) present a rich theoretical model to study the determinants of bilateral trade flows across countries. The model is then empirically implemented through a two‐stage estimation procedure. We argue that this estimation procedure is only valid under the strong distributional assumptions maintained in the article. Statistical tests using the HMR sample, however, clearly reject such assumptions. Moreover, we perform numerical experiments which show that the HMR two‐stage estimator is very sensitive to departures from the assumption of homoskedasticity. These findings cast doubts on any inference drawn from the empirical implementation of the HMR model.

13.
This paper reports empirical evidence on the sensitivity of unemployment duration regression estimates to distributional assumptions and to time aggregation. The results indicate that parameter estimates are robust to distributional assumptions, while estimates of duration dependence are not. Time aggregation does not seem to have drastic effects on the estimates in a simple parametric model like the Weibull, but can produce dramatic changes in the more complicated extended generalized gamma model. Semiparametric models for grouped data produce stable estimates, and perform much better than continuous-time models in terms of significance at high levels of time aggregation.

14.
A quantitative survey of 24 studies containing 99 national estimates of unemployment persistence reinstates unemployment hysteresis as a viable falsifying hypothesis to the natural rate hypothesis. Empirical evidence to the contrary may be attributed to small-sample, misspecification and publication biases. Larger estimates of unemployment persistence are produced by models that use more information (t = 9.03; P < 0.0001) and are better specified. A theme of bias and misspecification among studies that are more supportive of the natural rate hypothesis emerges in several independent ways. The nonstationarity of the unemployment rate is confirmed both by the observed rate of convergence of persistence estimates across the empirical literature and by the point towards which they converge. The natural rate hypothesis may be regarded as 'falsified' should we choose to do so.

15.
We propose the indirect inference estimator as a consistent method to estimate the parameters of a structural model when the observed series are contaminated by measurement error, by considering the noise as a structural feature. We show that the indirect inference estimates are asymptotically biased if the error is neglected. When the condition for identification is satisfied, the structural and measurement error parameters can be consistently estimated. The issues of identification and misspecification of measurement error are discussed in detail. We illustrate the reliability of this procedure in the estimation of stochastic volatility models based on realized volatility measures contaminated by microstructure noise.

16.
This paper examines the widespread practice where data envelopment analysis (DEA) efficiency estimates are regressed on some environmental variables in a second-stage analysis. In the literature, only two statistical models have been proposed in which second-stage regressions are well-defined and meaningful. In the model considered by Simar and Wilson (J Prod Anal 13:49–78, 2007), truncated regression provides consistent estimation in the second stage, whereas in the model proposed by Banker and Natarajan (Oper Res 56:48–58, 2008a), ordinary least squares (OLS) provides consistent estimation. This paper examines, compares, and contrasts the very different assumptions underlying these two models, and makes clear that second-stage OLS estimation is consistent only under very peculiar and unusual assumptions on the data-generating process that limit its applicability. In addition, we show that in either case, bootstrap methods provide the only feasible means for inference in the second stage. We also comment on ad hoc specifications of second-stage regression equations that ignore the part of the data-generating process that yields data used to obtain the initial DEA estimates.

17.
The power of standard panel cointegration statistics may be affected by misspecification errors if structural breaks in the parameters generating the process are not considered. In addition, the presence of cross‐section dependence among the panel units can distort the empirical size of the statistics. We therefore design a testing procedure that allows for both structural breaks and cross‐section dependence when testing the null hypothesis of no cointegration. The paper proposes test statistics that can be used when one or both features are present. We illustrate our proposal by analysing the pass‐through of import prices on a sample of European countries. Copyright © 2013 John Wiley & Sons, Ltd.

18.
This paper introduces tests for residual serial correlation in cointegrating regressions. The tests are devised in the frequency domain by using the spectral measure estimates. The asymptotic distributions of the tests are derived and test consistency is established. The asymptotic distributions are obtained by using assumptions and methods that are different from those used in Grenander and Rosenblatt (1957) and Durlauf (1991). Small-scale simulation results are reported to illustrate the finite sample performance of the tests under various distributional assumptions on the data-generating process. The distributions considered are normal and t-distributions. The tests are shown to have stable size at sample sizes as large as 50 or 100. Additionally, it is shown that the tests are reasonably powerful against ARMA residuals. An empirical application of the tests to investigate the ‘weak-form’ efficiency in the foreign exchange market is also reported.

19.
We evaluate the performance of several volatility models in estimating one-day-ahead Value-at-Risk (VaR) of seven stock market indices using a number of distributional assumptions. Because all return series exhibit volatility clustering and long-range memory, we examine GARCH-type models, including fractionally integrated models, under normal, Student-t and skewed Student-t distributions. Consistent with the idea that the accuracy of VaR estimates is sensitive to the adequacy of the volatility model used, we find that the AR(1)-FIAPARCH(1,d,1) model, under a skewed Student-t distribution, outperforms all the models that we have considered, including widely used ones such as GARCH(1,1) or HYGARCH(1,d,1). The superior performance of the skewed Student-t FIAPARCH model holds for all stock market indices, and for both long and short trading positions. Our findings can be explained by the fact that the skewed Student-t FIAPARCH model jointly accounts for the salient features of financial time series: fat tails, asymmetry, volatility clustering and long memory. In the same vein, because it fails to account for most of these stylized facts, the RiskMetrics model provides the least accurate VaR estimation. Our results corroborate the calls for the use of more realistic assumptions in financial modeling.
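The role of the distributional assumption in one-day-ahead VaR can be made concrete with a much simpler model than those studied in this abstract. The sketch below (GARCH parameters are hypothetical, and the Student-t quantile is a standard table value) simulates a plain GARCH(1,1), forecasts the next day's conditional variance, and converts it into a 1% VaR under normal versus fat-tailed innovations:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)

# GARCH(1,1) recursion: sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}
omega, alpha, beta = 0.05, 0.08, 0.90     # hypothetical parameters
n = 1_000
r = np.empty(n)
sigma2 = np.empty(n)
sigma2[0] = omega / (1.0 - alpha - beta)  # unconditional variance
r[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, n):
    sigma2[t] = omega + alpha * r[t - 1]**2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# One-day-ahead conditional variance forecast.
sigma2_next = omega + alpha * r[-1]**2 + beta * sigma2[-1]
sigma_next = np.sqrt(sigma2_next)

# 1% quantiles of the standardized innovation: normal vs Student-t(5)
# rescaled to unit variance (table value t_{0.01,5} ~ 3.365).
q_norm = NormalDist().inv_cdf(0.01)
q_t5 = -3.365 * np.sqrt(3.0 / 5.0)

var_norm = -sigma_next * q_norm   # 1% VaR, long position, normal innovations
var_t5 = -sigma_next * q_t5       # fat tails imply a larger 1% VaR
print(var_norm, var_t5)
```

Asymmetry and long memory, which drive the FIAPARCH results above, are deliberately omitted here; the sketch only isolates the tail-thickness channel.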

20.
It is known that the discretisation of continuous-time models can introduce chaotic behaviour, even when this is not consistent with observations or even the model's assumptions. We propose generic dynamics describing discrete-time core-periphery models that comply with the established assumptions in the literature and are consistent with observed behaviour. The desired properties of the dynamics are proved analytically in the general case. We also give particular forms for the dynamics for those interested in applying our model.
