Similar Articles
20 similar articles found.
1.
Suppose that the econometrician is interested in comparing two misspecified moment restriction models, where the comparison is performed in terms of some chosen measure of fit. This paper is concerned with describing an optimal test of the Vuong (1989) and Rivers and Vuong (2002) type null hypothesis that the two models are equivalent under the given measure of fit (the ranking may vary across measures). We adopt the generalized Neyman–Pearson optimality criterion, which focuses on the decay rates of the type I and II error probabilities under fixed non-local alternatives, and derive an optimal but practically infeasible test. Then, as an illustration, by considering the model comparison hypothesis defined by the weighted Euclidean norm of moment restrictions, we propose a feasible test statistic that approximates the optimal one and study its asymptotic properties. Local power properties, a one-sided test, and comparison under the generalized empirical likelihood-based measure of fit are also investigated. A simulation study illustrates that our approximate test is more powerful than the Rivers–Vuong test.
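As a concrete illustration of a Rivers–Vuong style comparison under the weighted Euclidean norm of moment restrictions, here is a minimal Python sketch. The function name is made up, and the naive bootstrap studentizer is a stand-in; this is not the paper's optimal construction:

```python
import numpy as np

def rv_statistic(g1, g2, W1, W2, n_boot=500, seed=0):
    """Naive Rivers-Vuong style statistic comparing two misspecified
    moment-restriction models by the weighted Euclidean norm of their
    sample moments.  g1, g2 are (n, k_j) arrays of per-observation moment
    contributions evaluated at the estimated parameters."""
    rng = np.random.default_rng(seed)
    n = g1.shape[0]

    def q(g, W):                        # fit measure: ||E_n g||_W^2
        gbar = g.mean(axis=0)
        return gbar @ W @ gbar

    d_hat = q(g1, W1) - q(g2, W2)       # difference of the two fit criteria
    boot = np.empty(n_boot)
    for b in range(n_boot):             # bootstrap the criterion difference
        idx = rng.integers(0, n, n)
        boot[b] = q(g1[idx], W1) - q(g2[idx], W2)
    return d_hat / boot.std(ddof=1)     # roughly N(0, 1) under equivalent fit
```

Under the null of equivalent fit the statistic is compared with standard normal quantiles; the paper's approximate test refines this in ways the sketch does not capture.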

2.
This paper addresses the problem of fitting a known density to the marginal error density of a stationary long memory moving average process when its mean is either known or unknown. In the unknown-mean case, when the mean is estimated by the sample mean, the first order difference between the residual empirical and null distribution functions is known to be asymptotically degenerate at zero, and hence cannot be used to fit a distribution up to an unknown mean. In this paper we show that by using a suitable class of estimators of the mean, this first order degeneracy does not occur. We also investigate the large sample behavior of tests based on an integrated square difference between kernel type error density estimators and the expected value of the error density estimator based on errors. The asymptotic null distributions of suitably standardized test statistics are shown to be chi-square with one degree of freedom in both the known- and unknown-mean cases. In addition, we discuss the consistency and asymptotic power against local alternatives of the density estimator based test in the case of known mean. A finite sample simulation study of the test based on the residual empirical process is also included.
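A sketch of the integrated-squared-difference idea: compare a kernel estimate of the error density with the kernel-smoothed null density on a grid. The scaling and all names here are illustrative; in practice one would use the paper's chi-square(1) standardization or simulate critical values under the null:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

def ise_statistic(resid, null_pdf, grid_size=512):
    """Integrated squared difference between a kernel density estimate of
    the residuals and the kernel-smoothed null density f_0 (null_pdf is a
    vectorized callable)."""
    resid = np.asarray(resid)
    n = resid.size
    h = 1.06 * resid.std(ddof=1) * n ** (-1 / 5)   # Silverman bandwidth
    x = np.linspace(resid.min() - 3 * h, resid.max() + 3 * h, grid_size)
    # Gaussian-kernel density estimate of the residuals on the grid
    f_hat = norm.pdf((x[:, None] - resid[None, :]) / h).mean(axis=1) / h
    # kernel-smoothed null: (K_h * f_0)(x) = int K(u) f_0(x - h u) du
    u = np.linspace(-5, 5, 201)
    f0_smooth = trapezoid(norm.pdf(u) * null_pdf(x[:, None] - h * u), u, axis=1)
    return n * np.sqrt(h) * trapezoid((f_hat - f0_smooth) ** 2, x)
```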

3.
The Perron test is a unit root test that allows for structural breaks; the distribution of its test statistic depends on the deterministic trend included in the data generating process and on the particular test regression adopted. In empirical work, however, the true data generating process is unknown, which leaves the unit root test without a firm footing, so the search for a scientifically sound unit root testing procedure has attracted wide attention. Against this background, and working within the innovational outlier ("IO model") framework, this paper builds on the Perron test to propose a unit root testing procedure that accounts for structural breaks, and uses Monte Carlo simulation to examine the procedure's finite-sample performance. The study refines the theory of unit root testing under structural breaks and offers useful guidance and a point of reference for empirical analysis.
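A sketch of an innovational outlier (IO) Perron regression with a one-time break in level and trend; the dummy conventions and function name are illustrative, and the resulting t-statistic must be compared with Perron's (not Dickey–Fuller) critical values, or with a simulated null distribution as in the paper:

```python
import numpy as np
import statsmodels.api as sm

def perron_io_tstat(y, t_break, n_lags=4):
    """t-statistic for alpha = 1 in an IO-model Perron regression:
    y_t = mu + b*t + theta*DU_t + g*DT_t + d*D(Tb)_t
          + alpha*y_{t-1} + sum_i c_i * dy_{t-i} + e_t."""
    y = np.asarray(y, dtype=float)
    T = y.size
    t = np.arange(T, dtype=float)
    DU = (t > t_break).astype(float)                  # level shift after break
    DT = np.where(t > t_break, t - t_break, 0.0)      # trend shift after break
    D_tb = (t == t_break + 1).astype(float)           # one-time break dummy
    dy_pad = np.r_[0.0, np.diff(y)]                   # dy_pad[s] = y_s - y_{s-1}
    lags = [np.roll(dy_pad, i) for i in range(1, n_lags + 1)]
    X = np.column_stack([np.ones(T), t, DU, DT, D_tb, np.roll(y, 1)] + lags)
    rows = slice(n_lags + 1, T)                       # drop rows touched by roll
    res = sm.OLS(y[rows], X[rows]).fit()
    return (res.params[5] - 1.0) / res.bse[5]         # column 5 is y_{t-1}
```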

4.
This paper deals with models for the duration of an event that are misspecified by the neglect of random multiplicative heterogeneity in the hazard function. This type of misspecification has been widely discussed in the literature [e.g., Heckman and Singer (1982), Lancaster and Nickell (1980)], but no study of its effect on maximum likelihood estimators has been given. This paper aims to provide such a study, with particular reference to the Weibull regression model, which is by far the most frequently used parametric model [e.g., Heckman and Borjas (1980), Lancaster (1979)]. In this paper we define generalised errors and residuals in the sense of Cox and Snell (1968, 1971) and show how their use materially simplifies the analysis of both true and misspecified duration models. We show that multiplicative heterogeneity in the hazard of the Weibull model has two errors-in-variables interpretations. We give the exact asymptotic inconsistency of ML estimation in the Weibull model and give a general expression for the inconsistency of ML estimators due to neglected heterogeneity for any duration model to O(σ²), where σ² is the variance of the error term. We also discuss the information matrix test for neglected heterogeneity in duration models and consider its behaviour when σ² > 0.
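The inconsistency from neglected multiplicative heterogeneity is easy to reproduce by simulation. In this hedged sketch (not the paper's analytics), durations follow a Weibull hazard scaled by gamma heterogeneity with variance σ², and a Weibull regression that ignores the heterogeneity is fitted by ML; both the regression coefficient and the shape parameter come out attenuated:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, beta, alpha, sigma2 = 20_000, 1.0, 1.5, 0.5

x = rng.normal(size=n)
v = rng.gamma(1 / sigma2, sigma2, size=n)          # E[v] = 1, Var[v] = sigma2
u = rng.uniform(size=n)
t = (-np.log(u) / (v * np.exp(beta * x))) ** (1 / alpha)  # inverse survival draw

def negloglik(theta):            # misspecified model: heterogeneity ignored
    b, a = theta[0], np.exp(theta[1])              # a > 0 via log parametrization
    return -np.sum(np.log(a) + (a - 1) * np.log(t) + b * x
                   - np.exp(b * x) * t ** a)

fit = minimize(negloglik, x0=[0.5, 0.0], method="Nelder-Mead")
print(fit.x[0], np.exp(fit.x[1]))  # both attenuated relative to (1.0, 1.5)
```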

5.
The argument that is put forward in this paper is that failure to represent stochastic trend and stochastic seasonality in an AIDS model leads to a misspecified and possibly structurally unstable model. This proposition is verified by estimating an AIDS model of the demand for alcoholic beverages in the United Kingdom. Three versions of the model are estimated, and it is demonstrated that the version allowing for stochastic trend and stochastic seasonality performs better than the other two versions in terms of diagnostic tests and goodness-of-fit measures. The best estimated model turns out to possess the properties of having common components and being homogeneous. Further empirical testing reveals the presence of stochastic trends and cointegration between the budget shares of beer and wine. The results clearly indicate that there has been a shift away from the consumption of beer towards wine.
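One standard way to let a single share equation carry a stochastic trend and stochastic seasonality is a structural time series specification. A minimal sketch with statsmodels, where y (a quarterly budget share) and X (log prices and log real expenditure) are assumed inputs rather than the paper's actual data or code:

```python
import statsmodels.api as sm

# y: quarterly budget share series; X: matrix of log prices and log expenditure
model = sm.tsa.UnobservedComponents(
    y,
    level="local linear trend",   # stochastic trend component
    seasonal=4,                   # stochastic quarterly seasonal component
    exog=X,
)
res = model.fit()
print(res.summary())   # near-zero seasonal variance suggests a deterministic pattern
```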

6.
In this paper, we present a practical methodology for variance estimation for multi-dimensional measures of poverty and deprivation of households and individuals, derived from sample surveys with complex designs and fairly large sample sizes. The measures considered are based on a fuzzy representation of individuals' propensity to deprivation in monetary and diverse non-monetary dimensions. We believe this to be the first original contribution on estimating standard errors for such fuzzy poverty measures. The second objective is to describe and numerically illustrate computational procedures and difficulties in producing reliable and robust estimates of sampling error for such complex statistics. We attempt to identify some of these problems and provide solutions in the context of actual situations. A detailed application based on European Union Statistics on Income and Living Conditions data for 19 NUTS2 regions in Spain is provided.

7.
We propose a simple estimator for nonlinear method of moments models with measurement error of the classical type when no additional data, such as validation data or double measurements, are available. We assume that the marginal distributions of the measurement errors are Laplace (double exponential) with zero means and unknown variances, and that the measurement errors are independent of the latent variables and of each other. Under these assumptions, we derive simple revised moment conditions in terms of the observed variables. They are used to make inference about the model parameters and the variance of the measurement error. The results of this paper show that the distributional assumption on the measurement errors can be used to point identify the parameters of interest. Our estimator is a parametric method of moments estimator that uses the revised moment conditions and hence is simple to compute. Our estimation method is particularly useful in situations where no additional data are available, which is the case for many economic data sets. A simulation study demonstrates good finite sample properties of our proposed estimator. We also examine the performance of the estimator in the case where the error distribution is misspecified.
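To see how a distributional assumption on the measurement error can point identify the parameters, here is a simplified method-of-moments sketch for a linear model. It adds an assumption the paper does not make, namely a normal latent regressor, so that all excess kurtosis in the observed regressor is attributed to the Laplace error (for which E[u⁴] = 6σ⁴ when Var(u) = σ²); the paper's revised moment conditions are more general:

```python
import numpy as np

def laplace_me_estimates(x, y):
    """Method-of-moments estimates for y = a + b*x_star + e, observing
    x = x_star + u with u ~ Laplace, u independent of x_star and e.
    Extra assumption for this sketch: x_star is normal, so
    E[(x - Ex)^4] = 3*Vstar**2 + 6*Vstar*sigma2 + 6*sigma2**2."""
    xc = x - x.mean()
    m2, m4 = np.mean(xc ** 2), np.mean(xc ** 4)
    sigma2 = np.sqrt(max((m4 - 3 * m2 ** 2) / 3, 0.0))   # Var(u)
    v_star = m2 - sigma2                                  # Var(x_star)
    b = np.cov(x, y)[0, 1] / v_star                       # corrects attenuation
    return y.mean() - b * x.mean(), b, sigma2
```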

8.
This paper proposes several testing procedures for the comparison of misspecified calibrated models. The proposed tests are of the Vuong type (Vuong, 1989; Rivers and Vuong, 2002). In our framework, the econometrician selects values for the model's parameters in order to match some characteristics of the data with those implied by the theoretical model. We assume that all competing models are misspecified, and suggest a test for the null hypothesis that they provide equivalent fit to the data characteristics, against the alternative that one of the models is a better approximation. We consider both nested and non-nested cases. We also relax the dependence of the models' ranking on the choice of a weight matrix by suggesting averaged and sup-norm procedures. The methods are illustrated by comparing the cash-in-advance and portfolio adjustment cost models in their ability to match the impulse responses of output and inflation to money growth shocks.

9.
The spatial Durbin model occupies an interesting position in the field of spatial econometrics. It is the reduced form of a model with cross-sectional dependence in the errors, and it may be used as the nesting equation in a more general approach to model selection. Specifically, from this equation we obtain the common factor tests (of which the likelihood ratio is the best known), whose objective is to discriminate between substantive and residual dependence in an apparently misspecified equation. Our paper delves deeper into the role of the spatial Durbin model in the problem of specifying a spatial econometric model. We include a Monte Carlo study of the performance of the common factor tests presented in the paper in small samples.

10.
Ruggiero (European Journal of Operational Research, 115, 555–563, 1999) compared the two popular parametric frontier methods for cross-sectional data, the stochastic frontier and corrected OLS, in a simulation study. He demonstrated that the inefficiency-ranking accuracy of the established stochastic frontier is uniformly inferior to that of the misspecified corrected OLS (COLS), which lacks a noise term. The reason for his result remains unclear, however. In this paper, a more extensive simulation study is therefore conducted to find out whether the superiority of COLS is simply due to small sample sizes or to poor performance of the inefficiency level estimator. JEL Classification: C1, C2, C5
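For reference, COLS is only a few lines: fit OLS to the log production function, shift the frontier up by the largest residual, and read inefficiency off the gap. A hedged sketch (variable names assumed, not Ruggiero's code):

```python
import statsmodels.api as sm

def cols_inefficiency(log_y, X):
    """Corrected OLS: shift the OLS frontier up by the maximum residual,
    so estimated inefficiency u_i >= 0 and the best unit has u_i = 0.
    Note COLS has no two-sided noise term, hence 'misspecified' above."""
    res = sm.OLS(log_y, sm.add_constant(X)).fit()
    return res.resid.max() - res.resid
```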

11.
This paper studies subsampling VAR tests of linear constraints as a way of finding approximations of their finite sample distributions that are valid regardless of the stochastic nature of the data generating processes for the tests. In computing the VAR tests with subsamples (i.e., blocks of consecutive observations), both the tests of the original form and the tests with the subsample OLS coefficient estimates centered at the full-sample estimates are used. Subsampling using the latter is called centered subsampling in this paper. It is shown that both subsamplings provide asymptotic distributions that are equivalent to the asymptotic distributions of the VAR tests. In addition, the tests using critical values from the subsamplings are shown to be consistent. The subsampling methods are applied to testing for causality. To choose the block sizes for subsample causality tests, the minimum volatility method, a new simulation-based calibration rule, and a bootstrap-based calibration rule are used. Simulation results in this paper indicate that the centered subsampling using the simulation-based calibration rule for the block size is quite promising: it delivers stable empirical size and reasonably high-powered causality tests. Moreover, when the causality test has a chi-square distribution in the limit, the test using critical values from the centered subsampling has better size properties than the one using chi-square critical values. The centered subsampling using the bootstrap-based calibration rule for the block size also works well, but it is slightly inferior to that using the simulation-based calibration rule.
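A sketch of the centered subsampling idea for a bivariate VAR(1) causality test: compute the Wald statistic on every block of consecutive observations with the tested coefficient centered at the full-sample estimate, and use an upper quantile of those values as the critical value. The function names and the simple homoskedastic Wald form are assumptions of this sketch:

```python
import numpy as np

def wald_causality(Y, center=0.0):
    """Wald statistic for 'x does not Granger-cause y' in a bivariate
    VAR(1); Y is (T, 2) with columns (y, x).  `center` recenters the
    tested coefficient (used by the centered subsampling)."""
    Z = np.column_stack([np.ones(len(Y) - 1), Y[:-1]])  # const, y_{t-1}, x_{t-1}
    yv = Y[1:, 0]
    b, *_ = np.linalg.lstsq(Z, yv, rcond=None)
    e = yv - Z @ b
    V = np.linalg.inv(Z.T @ Z) * (e @ e) / (len(yv) - Z.shape[1])
    return (b[2] - center) ** 2 / V[2, 2], b[2]

def centered_subsample_cv(Y, block, level=0.95):
    """Critical value from the centered subsampling distribution."""
    _, b_full = wald_causality(Y)                       # full-sample estimate
    stats = [wald_causality(Y[s:s + block], center=b_full)[0]
             for s in range(len(Y) - block + 1)]
    return np.quantile(stats, level)
```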

12.
This paper considers the problem of testing statistical hypotheses in nonlinear regression models with inequality constraints on the parameters. First, the Kuhn–Tucker test procedure is defined. Next, it is shown that the distributions of the Kuhn–Tucker, likelihood ratio, and Wald test statistics converge to the same mixture of chi-square distributions under the null hypothesis. To illustrate these results two examples are considered: (1) the problem of testing that individual effects are absent in an error component model, and (2) the problem of testing equilibrium in a model of markets in disequilibrium.
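For a single inequality constraint the mixture is 0.5·χ²(0) + 0.5·χ²(1), so critical values are available in closed form; a one-line check (a generic fact about chi-bar-square mixtures, not code from the paper):

```python
from scipy.stats import chi2

# P(chibar2 > c) = 0.5 * P(chi2_1 > c) = 0.05  =>  P(chi2_1 > c) = 0.10
c = chi2.ppf(1 - 2 * 0.05, df=1)
print(c)   # about 2.71, versus 3.84 for an unmixed chi2(1) at the 5% level
```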

13.
The statistical power and Type I error rates of several homogeneity tests usually applied in meta-analysis are compared using Monte Carlo simulation: (1) the chi-square test applied to standardized mean differences, correlation coefficients, and Fisher's r-to-Z transformations, and (2) the S&H-75 (and 90 percent) procedure applied to standardized mean differences and correlation coefficients. The chi-square tests correctly adjusted Type I error rates to the nominal significance level, while the S&H procedures showed higher rates; consequently, the S&H procedures presented greater statistical power. In all conditions the statistical power was very low, particularly when the meta-analysis had few studies, small sample sizes, and only small differences between the parametric effect sizes. Finally, the criteria for selecting homogeneity tests are discussed.
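The chi-square homogeneity test in (1) is the familiar Q statistic; a minimal sketch (names assumed):

```python
import numpy as np
from scipy.stats import chi2

def q_homogeneity(d, v):
    """Q = sum w_i (d_i - dbar_w)^2 with w_i = 1/v_i, compared with
    chi2(k - 1) under the homogeneity hypothesis; d holds k effect sizes
    and v their estimated conditional variances."""
    d, w = np.asarray(d), 1.0 / np.asarray(v)
    d_bar = np.sum(w * d) / np.sum(w)
    Q = np.sum(w * (d - d_bar) ** 2)
    return Q, chi2.sf(Q, df=d.size - 1)
```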

14.
We consider the implications for forecast accuracy of imposing unit roots and cointegrating restrictions in linear systems of I(1) variables in levels, differences, and cointegrated combinations. Asymptotic formulae are obtained for multi-step forecast error variances for each representation. Alternative measures of forecast accuracy are discussed. Finite sample behaviour in a bivariate model is studied by Monte Carlo using control variates. We also analyse the interaction between unit roots, cointegrating restrictions, and intercepts in the DGP. Some of the issues are illustrated with an empirical example of forecasting the demand for M1 in the UK.

15.
Journal of Econometrics, 1987, 35(1): 143–159
This paper examines the behavior of forecasts made from a co-integrated system as introduced by Granger (1981), Granger and Weiss (1983) and Engle and Granger (1987). It is established that a multi-step forecast will satisfy the co-integrating relation exactly and that this particular linear combination of forecasts will have a finite limiting forecast error variance. A simulation study compares the multi-step forecast accuracy of an unrestricted vector autoregression with the two-step estimation of the vector autoregression imposing the co-integration restriction. To test whether a system exhibits co-integration, the procedures introduced in Engle and Granger (1987) are extended to allow for different sample sizes and numbers of variables.
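The two-step Engle–Granger procedure referenced here is short to sketch; note the residual-based ADF statistic must be compared with Engle–Granger critical values (which depend on the number of variables), not ordinary Dickey–Fuller ones:

```python
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

def engle_granger_two_step(y, x):
    """Step 1: OLS cointegrating regression of y on x.
    Step 2: ADF test on the residuals (Engle-Granger critical values)."""
    step1 = sm.OLS(y, sm.add_constant(x)).fit()
    adf_stat = adfuller(step1.resid)[0]
    return step1.params, adf_stat
```

statsmodels also packages this directly as `statsmodels.tsa.stattools.coint`, which returns the statistic together with the appropriate critical values.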

16.
A desirable property of a forecast is that it encompasses competing predictions, in the sense that the accuracy of the preferred forecast cannot be improved through linear combination with a rival prediction. In this paper, we investigate the impact of the uncertainty associated with estimating model parameters in-sample on the encompassing properties of out-of-sample forecasts. Specifically, using examples of non-nested econometric models, we show that forecasts from the true (but estimated) data generating process (DGP) do not encompass forecasts from competing misspecified models in general, particularly when the number of in-sample observations is small. Following this result, we also examine the scope for achieving gains in accuracy by combining the forecasts from the DGP and misspecified models.
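The encompassing property can be examined with a simple regression: if f1 encompasses f2, then λ = 0 in y − f1 = λ(f2 − f1) + error. A hedged sketch (the HAC lag choice and names are assumptions):

```python
import statsmodels.api as sm

def encompassing_test(y, f1, f2):
    """Regress the preferred forecast's error on the forecast difference;
    a significant lambda means combining with f2 would improve accuracy,
    i.e., f1 fails to encompass f2."""
    res = sm.OLS(y - f1, sm.add_constant(f2 - f1)).fit(
        cov_type="HAC", cov_kwds={"maxlags": 4})
    return res.params[1], res.tvalues[1]
```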

17.
Propensity score matching has become a popular method for the estimation of average treatment effects. In empirical applications, researchers almost always impose a parametric model for the propensity score. This practice raises the possibility that the model for the propensity score is misspecified and therefore the propensity score matching estimator of the average treatment effect may be inconsistent. We show that the common practice of calculating estimates of the densities of the propensity score conditional on the participation decision provides a means for examining whether the propensity score is misspecified. In particular, we derive a restriction between the density of the propensity score among participants and the density among nonparticipants. We show that this restriction between the two conditional densities is equivalent to a particular orthogonality restriction and derive a formal test based upon it. The resulting test is shown via a simulation study to have dramatically greater power than competing tests for many alternatives. The principal disadvantage of this approach is loss of power against some alternatives.
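The restriction follows from Bayes' rule: with π = P(D=1), f(p | D=1)·π = p·f(p) and f(p | D=0)·(1−π) = (1−p)·f(p), so f(p | D=1)·(1−p)·π = f(p | D=0)·p·(1−π). A heuristic visual check of that equality (not the paper's formal orthogonality-based test; names assumed):

```python
import numpy as np
from scipy.stats import gaussian_kde

def ps_restriction_curves(p, d):
    """Return both sides of the Bayes-rule restriction on a grid; under a
    correctly specified propensity score the two curves should coincide."""
    p, d = np.asarray(p), np.asarray(d)
    pi = d.mean()
    grid = np.linspace(0.02, 0.98, 97)
    f1 = gaussian_kde(p[d == 1])(grid)      # density among participants
    f0 = gaussian_kde(p[d == 0])(grid)      # density among nonparticipants
    return grid, f1 * (1 - grid) * pi, f0 * grid * (1 - pi)
```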

18.
This paper aims to demonstrate a possible aggregation gain in predicting future aggregates under a practical assumption of model misspecification. Empirical analysis of a number of economic time series suggests that the use of the disaggregate model is not always preferred over the aggregate model in predicting future aggregates, in terms of an out-of-sample prediction root-mean-square error criterion. One possible justification of this interesting phenomenon is model misspecification. In particular, if the model fitted to the disaggregate series is misspecified (i.e., not the true data generating mechanism), then the forecast made by the misspecified model is not always the most efficient. This opens up an opportunity for the aggregate model to perform better. It is then of interest to find out when the aggregate model helps. In this paper, we study a framework where the underlying disaggregate series has a periodic structure. We derive and compare the efficiency loss in linear prediction of future aggregates using the adapted disaggregate model and the aggregate model. Some scenarios for aggregation gain to occur are identified. Numerical results show that the aggregate model helps over a fairly large region in the parameter space of the periodic model that we studied.

19.
Errors of measurement have long been recognized as a chronic problem in statistical analysis. Although there is a vast statistical literature of multiple regression models estimating the air pollution–mortality relationship, this problem has been largely ignored. It is well known that pollution measures contain error, but the consequences of this error for regression estimates are not known. We use Lave and Seskin's air pollution model to demonstrate the consequences of random measurement error. We assume that a range of 0% to 50% of the variance of the pollution measures is due to error. We find large differences in the estimated effects on mortality of the pollution variables, as well as of the other explanatory variables, once this measurement error is taken into account. These results cast doubt on the usual regression estimates of the mortality effects of air pollution. More generally, our results demonstrate the consequences of random measurement error in the explanatory variables of a multiple regression analysis and the misleading conclusions that may result in policy research if this error is ignored.
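The attenuation mechanism is simple to reproduce: if a share s of the observed regressor's variance is error, the OLS slope converges to β(1 − s). A short simulation (illustrative numbers, not Lave and Seskin's model):

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, var_x, err_share = 100_000, 2.0, 1.0, 0.30   # 30% of variance is error

x_true = rng.normal(0.0, np.sqrt(var_x), n)
sigma2 = err_share / (1 - err_share) * var_x  # so error is 30% of Var(x_obs)
x_obs = x_true + rng.normal(0.0, np.sqrt(sigma2), n)
y = beta * x_true + rng.normal(0.0, 1.0, n)

b_ols = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)
print(b_ols, beta * (1 - err_share))   # OLS slope ~ beta times the reliability
```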

20.
In this paper we propose a new goodness-of-fit test for discrete random variables. The main advantage of the methodology proposed in this paper is that, given the sample, we can control the probability of the Type I error, α, and in some cases then find the exact probability of the Type II error, β. The results are not asymptotic, but exact. A conditional test for two alternatives is also obtained. We include some simulations in order to check the power of the procedures. Mathematics Subject Classification (2000): Primary 62G10, 62B05; Secondary 62E10
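A sketch of how such an exact construction can work for small n and k: enumerate all count vectors, order them by their probability under the null, and accumulate the least likely ones into a rejection region of exact size at most α; the exact β under a specified alternative then follows directly. The ordering rule here is one simple choice, not necessarily the paper's:

```python
import numpy as np
from itertools import combinations
from scipy.stats import multinomial

def exact_gof(n, p0, p1, alpha=0.05):
    """Exact goodness-of-fit test for a discrete variable with k = len(p0)
    categories and n observations; returns (exact alpha, exact beta)."""
    k = len(p0)
    # all count vectors summing to n, via stars and bars
    outcomes = [np.diff((-1,) + c + (n + k - 1,)) - 1
                for c in combinations(range(n + k - 1), k - 1)]
    pr0 = np.array([multinomial.pmf(o, n, p0) for o in outcomes])
    pr1 = np.array([multinomial.pmf(o, n, p1) for o in outcomes])
    order = np.argsort(pr0)                 # least likely under H0 first
    cum = np.cumsum(pr0[order])
    reject = order[cum <= alpha]            # exact size <= alpha
    return pr0[reject].sum(), 1.0 - pr1[reject].sum()
```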

