Similar Documents
20 similar documents found (search time: 57 ms)
1.
This paper proposes several testing procedures for the comparison of misspecified calibrated models. The proposed tests are of the Vuong type (Vuong, 1989; Rivers and Vuong, 2002). In our framework, the econometrician selects values for the model's parameters in order to match some characteristics of the data with those implied by the theoretical model. We assume that all competing models are misspecified, and suggest a test of the null hypothesis that they provide equivalent fit to the data characteristics, against the alternative that one of the models is a better approximation. We consider both nested and non-nested cases. We also relax the dependence of the models' ranking on the choice of a weight matrix by suggesting averaged and sup-norm procedures. The methods are illustrated by comparing the cash-in-advance and portfolio adjustment cost models in their ability to match the impulse responses of output and inflation to money growth shocks.
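As background for the Vuong-type framework the abstract refers to, the classical Vuong (1989) statistic for fully parametric models can be sketched as below. This is a minimal illustration on simulated log-likelihood contributions, not the paper's calibrated-model procedure; all variable names are hypothetical.

```python
import numpy as np

def vuong_statistic(loglik1, loglik2):
    """Classical Vuong (1989) z-statistic for non-nested model comparison,
    computed from pointwise log-likelihood contributions of two models.
    Large positive values favour model 1; large negative values model 2."""
    d = np.asarray(loglik1) - np.asarray(loglik2)  # pointwise LR contributions
    n = d.size
    return np.sqrt(n) * d.mean() / d.std(ddof=0)

# Toy data: model 1 fits better by about 0.2 log-likelihood units per observation.
rng = np.random.default_rng(0)
ll1 = rng.normal(-1.0, 0.5, size=400)
ll2 = ll1 - 0.2 + rng.normal(0.0, 0.1, size=400)
z = vuong_statistic(ll1, ll2)
```

Under the null of equivalent fit, z is asymptotically standard normal, so |z| is compared with the usual normal critical values.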

2.
Social scientists often consider multiple empirical models of the same process. When these models are parametric and non-nested, the null hypothesis that two models fit the data equally well is commonly tested using methods introduced by Vuong (Econometrica 57(2):307–333, 1989) and Clarke (Am J Political Sci 45(3):724–744, 2001; J Confl Resolut 47(1):72–93, 2003; Political Anal 15(3):347–363, 2007). The objective of each is to compare the Kullback–Leibler Divergence (KLD) of the two models from the true model that generated the data. Here we show that both of these tests are based upon a biased estimator of the KLD, the individual log-likelihood contributions, and that the Clarke test is not proven to be consistent for the difference in KLDs. As a solution, we derive a test based upon cross-validated log-likelihood contributions, which represent an unbiased KLD estimate. We demonstrate the CVDM test's superior performance via simulation, then apply it to two empirical examples from political science. We find that the test's selection can diverge from those of the Vuong and Clarke tests and that this can ultimately lead to differences in substantive conclusions.
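The bias the abstract describes can be illustrated with a toy Gaussian model: in-sample log-likelihood contributions evaluated at the MLE over-state fit, while leave-one-out cross-validated contributions remove that optimism. This is only a sketch of the idea behind cross-validated contributions, not the CVDM test itself; the Gaussian model and sample size are assumptions.

```python
import numpy as np

def normal_logpdf(x, mu, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def loo_contributions(x):
    """Leave-one-out log-likelihood contributions for a Gaussian model:
    refit (mean, variance) without observation i, then score x_i."""
    out = np.empty(x.size)
    for i in range(x.size):
        rest = np.delete(x, i)
        out[i] = normal_logpdf(x[i], rest.mean(), rest.var())
    return out

rng = np.random.default_rng(1)
x = rng.normal(size=200)
in_sample = normal_logpdf(x, x.mean(), x.var())  # contributions at the MLE
cv = loo_contributions(x)
# The in-sample contributions over-state fit on average; cross-validation
# removes this optimism, giving a less biased divergence estimate.
```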

3.
This paper analyzes the higher-order asymptotic properties of generalized method of moments (GMM) estimators for linear time series models using many lags as instruments. A data-dependent moment selection method based on minimizing the approximate mean squared error is developed. In addition, a new version of the GMM estimator based on kernel-weighted moment conditions is proposed. It is shown that kernel-weighted GMM estimators can reduce the asymptotic bias compared to standard GMM estimators. Kernel weighting also helps to simplify the problem of selecting the optimal number of instruments. A feasible procedure similar to optimal bandwidth selection is proposed for the kernel-weighted GMM estimator.

4.
We focus on the importance of the assumptions regarding how inefficiency should be incorporated into the specification of the data generating process in an examination of a sector's production or efficiency. Drawing on the literature on non-nested hypothesis testing, we find that the model selection approach of Vuong (1989) is a potentially useful tool for identifying the best specification before carrying out such studies. We include an empirical application using panel data on Spanish dairy farms, where we estimate cost frontiers under different specifications of how inefficiency enters the data generating process (in particular, efficiency is introduced as an input-oriented, output-oriented and hyperbolic parameter). Our results show that the different models yield very different pictures of the technology and the efficiency levels of the sector, illustrating the importance of choosing the most appropriate model before carrying out production and efficiency analyses. The Vuong test shows that the input-oriented model is the best among the models we use, whereas the output-oriented model is the worst. This is consistent with the fact that the input- and output-oriented models provide the most and least credible estimates of scale economies given the structure of the sector.

5.
We adapt the Bierens (1990) test to the I-regular models of Park and Phillips (2001). Bierens (1990) defines the test hypothesis in terms of a conditional moment condition. Under the null hypothesis, the moment condition holds with probability one. The probability measure used is that induced by the variables in the model, which are assumed to be strictly stationary. Our framework is nonstationary, and this approach is not always applicable. We show that the Lebesgue measure can be used instead in a meaningful way. The resultant test is consistent against all I-regular alternatives.
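The stationary starting point of the Bierens approach can be sketched with a conditional-moment statistic built from an exponential weight function. This is a deliberately simplified illustration under a correctly specified linear null: the weight parameter is held fixed rather than varied, and the effect of estimating the regression coefficients on the variance is ignored, so it is not the Bierens test proper, nor the paper's nonstationary extension.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)  # the null (linear) model is true

# Fit the null model by OLS and form residuals.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
u = y - X @ beta

def moment_stat(u, x, gamma):
    """Conditional-moment statistic with weight exp(gamma * x): under
    correct specification E[u * exp(gamma * x)] = 0 for almost all gamma.
    (Simplified: the parameter-estimation effect on the variance is ignored.)"""
    w = np.exp(gamma * x)
    m = np.mean(u * w)
    s2 = np.mean((u * w - m) ** 2)
    return u.size * m ** 2 / s2

s = moment_stat(u, x, 1.0)
```

Under misspecification the sample moment fails to vanish and the statistic diverges with n, which is the source of the test's consistency.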

6.
This paper examines the asymptotic and finite‐sample properties of tests of equal forecast accuracy when the models being compared are overlapping in the sense of Vuong (Econometrica 1989; 57: 307–333). Two models are overlapping when the true model contains just a subset of variables common to the larger sets of variables included in the competing forecasting models. We consider an out‐of‐sample version of the two‐step testing procedure recommended by Vuong but also show that an exact one‐step procedure is sometimes applicable. When the models are overlapping, we provide a simple‐to‐use fixed‐regressor wild bootstrap that can be used to conduct valid inference. Monte Carlo simulations generally support the theoretical results: the two‐step procedure is conservative, while the one‐step procedure can be accurately sized when appropriate. We conclude with an empirical application comparing the predictive content of credit spreads to growth in real stock prices for forecasting US real gross domestic product growth. Copyright © 2013 John Wiley & Sons, Ltd.

7.
We discuss how to test the specification of an ordered discrete choice model against a general alternative. Two main approaches can be followed: tests based on moment conditions and tests based on comparisons between parametric and nonparametric estimations. Following these approaches, various statistics are proposed and their asymptotic properties are discussed. The performance of the statistics is compared by means of simulations. An easy-to-compute variant of the standard moment-based statistic yields the best results in models with a single explanatory variable. In models with various explanatory variables the results are less conclusive, since the relative performance of the statistics depends on both the fit of the model and the type of misspecification that is considered.

8.
Using a four-month panel of revised Current Population Survey data from September–December 1993, we extend the class of semiparametric hazard models of the type first studied by Prentice and Gloeckler (1978), and brought to the attention of economists by Meyer (1988, 1990), to incorporate inequality restrictions on the shape of the hazard. This extension enables us to test hypotheses regarding the shape of the hazard implied by search theory using duration data alone. These tests provide another link between the empirical and theoretical literatures on unemployment duration and job search. The GHK probability simulator makes it straightforward to generate approximate hypothesis test results, as simulation estimates of the probability under the null hypothesis are generated using the asymptotic normal approximation to the distribution of the hazard parameters obtained from maximum likelihood estimation. Importance sampling is used to conduct inference under the null and obtain exact finite sample estimates of the probability the null is satisfied. A new algorithm for maintaining stability of the importance weights is also developed. Copyright © 1999 John Wiley & Sons, Ltd.
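The GHK probability simulator mentioned in the abstract estimates rectangle probabilities of a multivariate normal by sequentially sampling truncated normals along a Cholesky factorization. A minimal sketch follows; the bivariate example, bounds, and draw count are illustrative assumptions, and the paper's importance-sampling refinements are not reproduced here.

```python
import numpy as np
from statistics import NormalDist

_nd = NormalDist()

def ghk(a, b, cov, n_draws=4000, seed=6):
    """GHK simulator for P(a <= X <= b) with X ~ N(0, cov).
    For each draw, sample each coordinate from its conditional truncated
    normal and accumulate the product of the truncation probabilities."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(cov)
    d = len(a)
    total = 0.0
    for _ in range(n_draws):
        e = np.zeros(d)
        w = 1.0
        for j in range(d):
            shift = L[j, :j] @ e[:j]
            lo = _nd.cdf((a[j] - shift) / L[j, j])
            hi = _nd.cdf((b[j] - shift) / L[j, j])
            w *= hi - lo                      # probability of staying in bounds
            e[j] = _nd.inv_cdf(rng.uniform(lo, hi))  # truncated-normal draw
        total += w
    return total / n_draws

# Bivariate check: P(X1 <= 0, X2 <= 0) with correlation 0.5 is
# 1/4 + arcsin(0.5)/(2*pi) = 1/3 (a known closed form).
est = ghk([-10.0, -10.0], [0.0, 0.0], np.array([[1.0, 0.5], [0.5, 1.0]]))
```

Because each draw's weight is a product of probabilities in (0, 1), the estimator has much lower variance than crude acceptance sampling of the rectangle.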

9.
10.
In this paper, we introduce the one-step generalized method of moments (GMM) estimation methods considered in Lee (2007a) and Liu, Lee, and Bollinger (2010) to spatial models that impose a spatial moving average process for the disturbance term. First, we determine the set of best linear and quadratic moment functions for GMM estimation. Second, we show that the optimal GMM estimator (GMME) formulated from this set is the most efficient estimator within the class of GMMEs formulated from the set of linear and quadratic moment functions. Our analytical results show that the one-step GMME can be more efficient than the quasi-maximum likelihood estimator (QMLE) when the disturbance term is simply i.i.d. With an extensive Monte Carlo study, we compare its finite sample properties against the MLE, the QMLE, and the estimators suggested in Fingleton (2008a).

11.
In this paper we obtain a small-disturbance approximation to the moment matrix of the limiting distribution of an operational generalized least squares (OGLS) estimator of the mean response vector in a random coefficient model. It is shown that for small samples the moment matrix of the limiting distribution underestimates the small-disturbance approximate moment matrix of the limiting distribution of the OGLS estimator. This suggests that for small samples the 'standard errors' of the OGLS estimates should be obtained from the small-disturbance approximate moment matrix of the limiting distribution rather than from the conventional asymptotic moment matrix.

12.
We propose non-nested hypothesis tests for conditional moment restriction models based on the method of generalized empirical likelihood (GEL). By utilizing the implied GEL probabilities from a sequence of unconditional moment restrictions that contains equivalent information of the conditional moment restrictions, we construct Kolmogorov–Smirnov and Cramér–von Mises type moment encompassing tests. Advantages of our tests over Otsu and Whang's (2011) tests are: (i) they are free from smoothing parameters, (ii) they can be applied to weakly dependent data, and (iii) they allow non-smooth moment functions. We derive the null distributions, validity of a bootstrap procedure, and local and global power properties of our tests. The simulation results show that our tests have reasonable size and power performance in finite samples.

13.
This paper presents a Bayes factor for the comparison of an inequality-constrained hypothesis with its complement or an unconstrained hypothesis. Equivalent sets of hypotheses form the basis for quantifying the complexity of an inequality-constrained hypothesis. It is shown that the prior distribution can be chosen such that one term in the Bayes factor quantifies the complexity of the hypothesis of interest, while the other term measures its fit. Using a vague prior distribution, this fit value is essentially determined by the data. The result is an objective Bayes factor.
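The fit/complexity decomposition described above can be sketched by Monte Carlo: complexity is the prior probability of the inequality constraint, fit is its posterior probability, and the Bayes factor against the unconstrained model is their ratio. The two-group example, the symmetric vague prior, and the normal posterior approximation below are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-group example: H1: mu1 < mu2 versus the unconstrained model.
y1 = rng.normal(0.0, 1.0, 50)
y2 = rng.normal(1.0, 1.0, 50)

# With a vague symmetric prior, the complexity of H1 is its prior
# probability (1/2 here); fit is its posterior probability, approximated
# by sampling roughly normal posteriors centred at the sample means.
post1 = rng.normal(y1.mean(), y1.std() / np.sqrt(50), 100_000)
post2 = rng.normal(y2.mean(), y2.std() / np.sqrt(50), 100_000)

fit = np.mean(post1 < post2)  # posterior mass satisfying the constraint
complexity = 0.5              # prior mass of {mu1 < mu2} under symmetry
bf = fit / complexity         # Bayes factor of H1 against the unconstrained model
```

A Bayes factor above 1 indicates that the data support the constrained hypothesis beyond what its prior plausibility alone would give.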

14.
A nonparametric method for comparing multiple forecast models is developed and implemented. The hypothesis of Optimal Predictive Ability generalizes the Superior Predictive Ability hypothesis from a single given loss function to an entire class of loss functions. Distinction is drawn between General Loss functions, Convex Loss functions, and Symmetric Convex Loss functions. The research hypothesis is formulated in terms of moment inequality conditions. The empirical moment conditions are reduced to an exact and finite system of linear inequalities based on piecewise-linear loss functions. The hypothesis can be tested in a statistically consistent way using a blockwise Empirical Likelihood Ratio test statistic. A computationally feasible test procedure computes the test statistic using Convex Optimization methods, and estimates conservative, data-dependent critical values using a majorizing chi-square limit distribution and a moment selection method. An empirical application to inflation forecasting reveals that a very large majority of thousands of forecast models are redundant, leaving predominantly Phillips Curve-type models, when convexity and symmetry are assumed.
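The role of piecewise-linear loss functions can be illustrated by evaluating two forecast-error series over a family of asymmetric "lin-lin" losses: one model dominates for the whole family only if its average loss is lower at every asymmetry parameter. This is a toy sketch with simulated errors, not the paper's moment-inequality test; the grid of asymmetry parameters is an assumption.

```python
import numpy as np

def linlin_loss(err, tau):
    """Piecewise-linear ('lin-lin') loss with asymmetry tau in (0, 1):
    penalises positive errors by tau and negative errors by 1 - tau."""
    return np.where(err > 0, tau * err, (tau - 1.0) * err)

rng = np.random.default_rng(3)
e1 = rng.normal(0.0, 0.2, 500)  # forecast errors of an accurate model
e2 = rng.normal(0.0, 1.0, 500)  # forecast errors of a noisy model

# Model 1 dominates model 2 over the lin-lin family if its average loss
# is lower for every asymmetry parameter tau on the grid.
taus = np.linspace(0.05, 0.95, 19)
gaps = np.array([linlin_loss(e2, t).mean() - linlin_loss(e1, t).mean()
                 for t in taus])
```

Each value of tau yields one moment inequality (gap ≥ 0), which is how a loss-function class translates into a finite system of inequality conditions.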

15.

This paper develops a unified framework for fixed effects (FE) and random effects (RE) estimation of higher-order spatial autoregressive panel data models with spatial autoregressive disturbances and heteroscedasticity of unknown form in the idiosyncratic error component. We derive the moment conditions and optimal weighting matrix without distributional assumptions for a generalized moments (GM) estimation procedure of the spatial autoregressive parameters of the disturbance process and define both an RE and an FE spatial generalized two-stage least squares estimator for the regression parameters of the model. We prove consistency of the proposed estimators and derive their joint asymptotic distribution, which is robust to heteroscedasticity of unknown form in the idiosyncratic error component. Finally, we derive a robust Hausman test of the spatial random against the spatial FE model.

16.
This paper studies the Hodges and Lehmann (1956) optimality of tests in a general setup. The tests are compared by the exponential rates at which their power functions, evaluated at a fixed alternative, approach one, while the asymptotic sizes are kept bounded by some constant. We present two sets of sufficient conditions for a test to be Hodges–Lehmann optimal. These new conditions extend the scope of the Hodges–Lehmann optimality analysis to setups that cannot be covered by other conditions in the literature. The general result is illustrated by our applications of interest: testing for moment conditions and overidentifying restrictions. In particular, we show that (i) the empirical likelihood test does not necessarily satisfy existing conditions for optimality but does satisfy our new conditions; and (ii) the generalized method of moments (GMM) test and the generalized empirical likelihood (GEL) tests are Hodges–Lehmann optimal under mild primitive conditions. These results support the belief that the Hodges–Lehmann optimality is a weak asymptotic requirement.

17.
We study the problem of testing hypotheses on the parameters of one- and two-factor stochastic volatility (SV) models, allowing for the possible presence of non-regularities such as singular moment conditions and unidentified parameters, which can lead to non-standard asymptotic distributions. We focus on the development of simulation-based exact procedures, whose level can be controlled in finite samples, as well as on large-sample procedures which remain valid under non-regular conditions. We consider Wald-type, score-type and likelihood-ratio-type tests based on a simple moment estimator, which can be easily simulated. We also propose a C(α)-type test which is very easy to implement and exhibits relatively good size and power properties. Besides usual linear restrictions on the SV model coefficients, the problems studied include testing homoskedasticity against an SV alternative (which involves singular moment conditions under the null hypothesis) and testing the null hypothesis of one factor driving the dynamics of the volatility process against two factors (which raises identification difficulties). Three ways of implementing the tests based on alternative statistics are compared: asymptotic critical values (when available), a local Monte Carlo (or parametric bootstrap) test procedure, and a maximized Monte Carlo (MMC) procedure. The size and power properties of the proposed tests are examined in a simulation experiment. The results indicate that the C(α)-based tests (built upon the simple moment estimator available in closed form) have good size and power properties for regular hypotheses, while Monte Carlo tests are much more reliable than those based on asymptotic critical values. Further, in cases where the parametric bootstrap appears to fail (for example, in the presence of identification problems), the MMC procedure easily controls the level of the tests. Moreover, MMC-based tests exhibit relatively good power performance despite the conservative feature of the procedure. Finally, we present an application to a time series of returns on the Standard and Poor's Composite Price Index.
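The local Monte Carlo (parametric bootstrap) procedure mentioned above rests on a simple device: simulate the test statistic under the null and form the p-value from the rank of the observed statistic among the simulated ones. A minimal sketch follows, using a t-type statistic for a normal mean as a stand-in null model; the data-generating setup and draw count are illustrative assumptions.

```python
import numpy as np

def monte_carlo_pvalue(stat_obs, simulate_stat, n_rep, rng):
    """Monte Carlo p-value: simulate the statistic under the null and count
    how often it reaches the observed value.  For a continuous statistic the
    test has exact level alpha whenever alpha * (n_rep + 1) is an integer."""
    sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)

def t_stat(x):
    return abs(x.mean()) / (x.std(ddof=1) / np.sqrt(x.size))

rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, 30)  # data drawn under H0: mean = 0
p = monte_carlo_pvalue(t_stat(x),
                       lambda r: t_stat(r.normal(0.0, 1.0, 30)),
                       n_rep=199, rng=rng)
```

The maximized Monte Carlo (MMC) variant extends this by maximizing the simulated p-value over nuisance parameters, which is what preserves the level when parameters are unidentified.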

18.
Person–organization fit (P–O fit) is an important and often-researched variable, which sheds light on the way employees perceive their relationship with the organization they work for. In this study, two different assessments of P–O fit are compared, i.e. actual fit (an indirect measurement based on the comparison of organizational and personal values or characteristics) and perceived fit (a direct measurement involving employees' own estimations of their P–O fit). The four quadrants of the Competing Values Framework (CVF) are used to investigate which values have the strongest influence on employees' fit perceptions. In a polynomial regression analysis, the predictive power of the indirect fit measure on the direct fit measure is tested in a sample of two organizations (hospital n1 = 222; chemical plant n2 = 550). The results show that of the four CVF quadrants human relations values have the strongest predictive power for employees' fit perceptions and rational goal values contribute least. In the discussion section, special attention will be paid to the measurement of individual values as the results raise important methodological questions.
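Polynomial regression in fit research typically regresses the direct (perceived) fit measure on person scores, organization scores, and their quadratic terms. A sketch on simulated data follows; the variable names, sample size, and the congruence-shaped data-generating process are hypothetical, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
P = rng.normal(size=n)  # person values score (hypothetical scale)
O = rng.normal(size=n)  # organization values score (hypothetical scale)
# Simulated perceived fit: highest where person and organization agree.
perceived = 3.0 - (P - O) ** 2 + rng.normal(0.0, 0.3, n)

# Quadratic response surface: perceived ~ P + O + P^2 + P*O + O^2.
X = np.column_stack([np.ones(n), P, O, P ** 2, P * O, O ** 2])
beta, *_ = np.linalg.lstsq(X, perceived, rcond=None)
# A congruence effect shows up as beta[3] < 0, beta[4] > 0, beta[5] < 0,
# i.e. a surface that peaks along the line P = O.
```

Fitting the full quadratic surface, rather than a difference score, lets the data reveal whether perceived fit really depends on P and O only through their congruence.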

19.
M. C. Pardo, Metrika, 2011, 73(2): 231–253
Based on φ-divergences, an estimator of generalized linear models for multinomial data under linear restrictions on the parameters is considered. New test statistics, also based on φ-divergences, are considered as alternatives to the classical ones for testing a hypothesis about linear restrictions on the parameters. Their asymptotic distribution is obtained under the null hypothesis as well as under contiguous local hypotheses. An application of the estimators and the tests is illustrated in a numerical example and in simulation studies.
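A familiar concrete instance of φ-divergence statistics for multinomial data is the Cressie–Read power-divergence family, which nests Pearson's chi-square (λ = 1) and the likelihood-ratio statistic (λ → 0). The sketch below is only this standard family, not the paper's restricted-GLM estimators; the counts are made up.

```python
import numpy as np

def power_divergence(obs, expected, lam):
    """Cressie-Read power-divergence statistic, a phi-divergence family:
    2 / (lam*(lam+1)) * sum obs * ((obs/expected)**lam - 1).
    lam = 1 recovers Pearson's chi-square; lam -> 0 the likelihood ratio."""
    obs = np.asarray(obs, dtype=float)
    expected = np.asarray(expected, dtype=float)
    return 2.0 / (lam * (lam + 1.0)) * np.sum(obs * ((obs / expected) ** lam - 1.0))

obs = np.array([30.0, 50.0, 20.0])       # observed multinomial counts
expected = np.array([33.3, 44.4, 22.3])  # fitted counts (same total)
pearson = np.sum((obs - expected) ** 2 / expected)
```

Because the whole family shares one asymptotic chi-square distribution under the null, λ can be chosen for robustness or power without changing the critical values.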

20.
This paper studies the Bahadur efficiency of empirical likelihood for testing moment condition models. It is shown that under mild regularity conditions, the empirical likelihood overidentifying restriction test is Bahadur efficient, i.e., its p-value attains the fastest convergence rate under each fixed alternative hypothesis. Analogous results are derived for parameter hypothesis testing and set inference problems.
