Similar Literature
1.
Choosing instrumental variables in conditional moment restriction models
Properties of GMM estimators are sensitive to the choice of instrument. Using many instruments leads to high asymptotic efficiency but can cause high bias and/or variance in small samples. In this paper we develop and implement asymptotic mean square error (MSE) based criteria for instrument selection in estimation of conditional moment restriction models. The models we consider include various nonlinear simultaneous equations models with unknown heteroskedasticity. We develop moment selection criteria for the familiar two-step optimal GMM estimator (GMM), a bias-corrected version, and generalized empirical likelihood (GEL) estimators, which include the continuous updating estimator (CUE) as a special case. We also find that the CUE has lower higher-order variance than the bias-corrected GMM estimator, and that the higher-order efficiency of other GEL estimators depends on the conditional kurtosis of the moments.
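As a hedged illustration (not the paper's MSE selection criterion itself), the two-step optimal GMM estimator around which the criteria are built can be sketched for a linear model with one endogenous regressor; the data-generating process and instrument set below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)   # endogenous regressor
y = 1.0 * x + u                              # true coefficient beta = 1.0

# Instruments built from z; enlarging this set raises asymptotic efficiency
# but worsens finite-sample bias/variance -- the trade-off the MSE criteria target.
Z = np.column_stack([z, z**2, z**3])

def gmm_beta(W):
    # Closed-form GMM for the linear moment Z'(y - beta*x) with weight W:
    # beta = (x'Z W Z'y) / (x'Z W Z'x)
    ZX, Zy = Z.T @ x, Z.T @ y
    return (ZX @ W @ Zy) / (ZX @ W @ ZX)

b1 = gmm_beta(np.linalg.inv(Z.T @ Z / n))     # first step (2SLS weight)
m = Z * (y - b1 * x)[:, None]                 # first-step moment residuals
b2 = gmm_beta(np.linalg.inv(m.T @ m / n))     # efficient two-step GMM
```

With homoskedastic errors, as here, the two steps give very similar point estimates; the criteria in the paper matter precisely when the instrument list is long relative to the sample.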

2.
This paper studies the Hodges and Lehmann (1956) optimality of tests in a general setup. Tests are compared by the exponential rate at which their power functions, evaluated at a fixed alternative, approach one, while keeping the asymptotic sizes bounded by some constant. We present two sets of sufficient conditions for a test to be Hodges–Lehmann optimal. These new conditions extend the scope of the Hodges–Lehmann optimality analysis to setups that cannot be covered by other conditions in the literature. The general result is illustrated by our applications of interest: testing for moment conditions and overidentifying restrictions. In particular, we show that (i) the empirical likelihood test does not necessarily satisfy existing conditions for optimality but does satisfy our new conditions; and (ii) the generalized method of moments (GMM) test and the generalized empirical likelihood (GEL) tests are Hodges–Lehmann optimal under mild primitive conditions. These results support the belief that Hodges–Lehmann optimality is a weak asymptotic requirement.

3.
Codependent cycles
This paper extends the work of Engle and Kozicki (1993) to test for co-movement in multiple time series whose cycles are not exactly synchronized. We call these codependent cycles and show that testing and estimation in this case can be carried out with a generalized method of moments (GMM) procedure. We also show that the test proposed by Tiao and Tsay (1985) for scalar components models of order (0, q) can be seen as a test for codependent cycles based on a consistent, but sub-optimal, estimate of the cofeature vector. We assess the small-sample performance of the proposed tests through a series of simulations. Finally, we apply the test to investigate co-movement between durable and non-durable consumption expenditures.

4.
Many applied researchers have to deal with spatially autocorrelated residuals (SAR). Available tests that identify spatial spillovers, as captured by a significant SAR parameter, are based either on maximum likelihood (MLE) or on generalized method of moments (GMM) estimates. This paper illustrates the properties of various tests for the null hypothesis of a zero SAR parameter in a comprehensive Monte Carlo study. The main finding is that Wald tests generally perform well with regard to both size and power, even in small samples. The GMM-based Wald test is correctly sized even for non-normally distributed disturbances and small samples, and it exhibits power similar to that of its MLE-based counterpart. Hence, the GMM Wald test can be recommended to the applied researcher, because it is easy to implement.

5.
Nonlinear regression models have been widely used in practice for a variety of time series and cross-section datasets. For analyzing univariate and multivariate time series data in particular, smooth transition regression (STR) models have been shown to be very useful for representing and capturing asymmetric behavior. Most STR models have been applied to univariate processes under a variety of assumptions, including stationary or cointegrated processes; uncorrelated, homoskedastic or conditionally heteroskedastic errors; and weakly exogenous regressors. Under the assumption of exogeneity, the standard method of estimation is nonlinear least squares. The primary purpose of this paper is to relax the assumption of weakly exogenous regressors and to discuss moment-based methods for estimating STR models. The paper analyzes the properties of the STR model with endogenous variables by providing a diagnostic test of linearity of the underlying process under endogeneity, developing an estimation procedure and a misspecification test for the STR model, presenting the results of Monte Carlo simulations to show the usefulness of the model and estimation method, and providing an empirical application to inflation-rate targeting in Brazil. We show that STR models with endogenous variables can be specified and estimated by a straightforward application of existing results in the literature.
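The core ingredient of an STR specification is the smooth transition function; the logistic form below is the standard textbook choice (the paper's exact specification may differ):

```python
import numpy as np

def logistic_transition(s, gamma, c):
    """Logistic transition G(s; gamma, c) used in an STR model
    y_t = x_t'b1 + G(s_t; gamma, c) * x_t'b2 + e_t.
    G rises smoothly from 0 to 1 around the location c, with gamma
    controlling how fast the regime switch occurs."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))
```

At s = c the two regimes are weighted equally (G = 1/2), and as gamma grows the model approaches a sharp threshold regression.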

6.
We address the issue of modelling and forecasting macroeconomic variables using rich datasets by adopting the class of Vector Autoregressive Moving Average (VARMA) models. We overcome the estimation issue that arises with this class of models by implementing an iterative ordinary least squares (IOLS) estimator. We establish the consistency and asymptotic distribution of the estimator for weak and strong VARMA(p,q) models. Monte Carlo results show that IOLS is consistent and feasible for large systems, outperforming the MLE and other linear regression-based efficient estimators under alternative scenarios. Our empirical application shows that VARMA models are feasible alternatives when forecasting with many predictors. We show that VARMA models outperform the AR(1), ARMA(1,1), Bayesian VAR, and factor models, considering different model dimensions.
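A minimal univariate sketch of the IOLS idea (the paper treats full VARMA systems; everything here is simplified for illustration): the lagged innovations in an ARMA model are unobserved, so start from AR residuals and iterate OLS until the residuals are self-consistent.

```python
import numpy as np

rng = np.random.default_rng(1)
T, phi, theta = 5000, 0.5, 0.3
e = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):                     # simulate an ARMA(1,1)
    y[t] = phi * y[t-1] + e[t] + theta * e[t-1]

# Initial residuals from an AR(1) fit, then iterate:
# OLS of y_t on (y_{t-1}, ehat_{t-1}), refresh ehat, repeat.
b0 = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
ehat = np.concatenate([[0.0], y[1:] - b0 * y[:-1]])
for _ in range(50):
    X = np.column_stack([y[:-1], ehat[:-1]])
    coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    ehat = np.concatenate([[0.0], y[1:] - X @ coef])
phi_hat, theta_hat = coef
```

The iteration converges for invertible MA parts (|theta| < 1); with T = 5000 the estimates land close to the true (0.5, 0.3).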

7.
We discuss a regression model in which the regressors are dummy variables. The basic idea is that the observation units can be assigned to some well-defined combination of treatments, corresponding to the dummy variables. This assignment cannot be made without some error, i.e. misclassification can play a role. The situation is analogous to regression with errors in variables, where it is well known that identification of the parameters is a prominent problem. We first show that, in our case, the parameters are not identified by the first two moments but can be identified by the likelihood. We then analyze two estimators: a moment estimator involving moments up to the third order, and a maximum likelihood estimator calculated with the help of the EM algorithm. Both estimators are evaluated on the basis of a small Monte Carlo experiment.

8.
Observations recorded at 'locations' usually exhibit spatial dependence. To take into account both the spatial dependence and a possible underlying non-linear relationship, a partially linear single-index spatial regression model is proposed. This paper develops estimators of the unknown components and builds a generalized F-test to determine whether the data provide evidence for using linear settings in empirical studies; the asymptotic properties of both are derived. Monte Carlo simulations indicate that the estimators and the test statistic perform well. An analysis of Chinese house price data shows the existence of both spatial dependence and a non-linear relationship.

9.
Heteroskedasticity-robust semi-parametric GMM estimation of a spatial model with space-varying coefficients (Spatial Economic Analysis). The spatial model with space-varying coefficients proposed by Sun et al. in 2014 has proved useful for detecting location effects in the impacts of covariates, as well as spatial interaction, in empirical analysis. However, Sun et al.'s estimator is inconsistent when heteroskedasticity is present, a circumstance that is realistic in many applications. In this study, we propose a semi-parametric generalized method of moments (GMM) estimator that is not only robust to heteroskedasticity but also has a closed form written explicitly in terms of the observed data. We derive the asymptotic distributions of our estimators. Moreover, the results of Monte Carlo experiments show that the proposed estimators perform well in finite samples.

10.
The paper considers some properties of measures of asymmetry and peakedness of one-dimensional distributions. It points out some misconceptions about the first and second Pearson coefficients, measures of asymmetry and shape, that frequently occur in introductory textbooks. It also presents different ways of obtaining estimated values for the coefficients of skewness and kurtosis, along with statistical tests that involve them.
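The coefficients in question are easy to compute; the formulas below are the standard textbook ones, not anything specific to this paper:

```python
import numpy as np

def shape_coefficients(x):
    """Second Pearson skewness plus the moment coefficients of
    skewness and excess kurtosis for a one-dimensional sample."""
    x = np.asarray(x, float)
    m, s = x.mean(), x.std()
    sk2 = 3.0 * (m - np.median(x)) / s    # second Pearson coefficient
    z = (x - m) / s
    g1 = np.mean(z**3)                    # moment coefficient of skewness
    g2 = np.mean(z**4) - 3.0              # excess kurtosis (0 for the normal)
    return sk2, g1, g2

rng = np.random.default_rng(2)
sym = shape_coefficients(rng.normal(size=100_000))        # all near 0
asym = shape_coefficients(rng.exponential(size=100_000))  # g1 near 2, g2 near 6
```

For the exponential distribution the population values are g1 = 2 and g2 = 6, while the second Pearson coefficient is 3(1 − ln 2) ≈ 0.92, illustrating that the different skewness measures need not agree in magnitude.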

11.
Rubio (2020) points out an identification problem for the four-parameter family of two-piece asymmetric densities introduced by Nassiri & Loris (2013), which makes statistical inference for that family problematic. Establishing probabilistic properties for this four-parameter family, however, still makes sense. For the three-parameter family there is no identification problem. The main contribution of Gijbels et al. (2019a) is to provide asymptotic results for maximum likelihood and method-of-moments estimators for all members of the three-parameter quantile-based asymmetric family of distributions.

12.
The expected return to equity, typically measured as a historical average, is a key variable in the decision making of investors. A recent literature uses analysts' forecasts, investor surveys or present-value relationships and finds estimates of expected returns that are sometimes much lower than historical averages. This study extends the present-value approach to a dynamic optimizing framework. Given a model that captures this relationship, one can use data on dividends, earnings and valuations to infer the model-implied expected return. Using this method, the estimated expected real return to equity ranges from 4.9% to 5.6%. Furthermore, the analysis indicates that expected returns have declined by about 3 percentage points over the past 40 years. These results indicate that future returns to equity may be lower than past realized returns.
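The dynamic model nests the familiar static present-value identity. As a back-of-the-envelope illustration with invented numbers (not the study's estimates):

```python
# Static Gordon-growth identity: expected real return = dividend yield + growth.
# The paper's dynamic optimizing framework generalises this relationship;
# the inputs below are purely illustrative.
dividend_yield = 0.025   # D/P
real_growth = 0.028      # long-run real dividend/earnings growth g
expected_real_return = dividend_yield + real_growth   # 5.3% real
```

A number in this vicinity sits inside the 4.9%–5.6% range the study reports, and a fall in expected growth or a rise in valuations mechanically lowers the implied expected return.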

13.
Consider a string of n positions, i.e. a discrete string of length n. Units of length k are placed at random on this string in such a way that they do not overlap, and as often as possible, i.e. until all spacings between neighboring units have length less than k. When centered and scaled by n^{-1/2}, the resulting numbers of spacings of length 1, 2, …, k−1 simultaneously have a limiting normal distribution as n → ∞. This is proved by the classical method of moments.
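The placement scheme can be simulated directly (a simplified sketch; the paper's result is proved analytically by the method of moments):

```python
import numpy as np

def discrete_rsa(n, k, rng):
    """Place non-overlapping units of length k uniformly at random on a
    discrete string of n positions until no k consecutive sites remain free
    (jamming); return the final gap lengths (all < k) and the unit count."""
    occupied = np.zeros(n, dtype=bool)
    units = 0
    while True:
        feasible = [i for i in range(n - k + 1) if not occupied[i:i+k].any()]
        if not feasible:
            break
        i = rng.choice(feasible)        # uniform over remaining placements
        occupied[i:i+k] = True
        units += 1
    gaps, run = [], 0                   # lengths of maximal free runs
    for occ in occupied:
        if occ:
            if run:
                gaps.append(run)
            run = 0
        else:
            run += 1
    if run:
        gaps.append(run)
    return gaps, units

rng = np.random.default_rng(3)
gaps, units = discrete_rsa(500, 3, rng)
```

At jamming every spacing has length between 1 and k−1, and the gap lengths plus k times the unit count account for the whole string; repeating the simulation over many strings lets one inspect the joint normality of the gap counts empirically.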

14.
According to the law of likelihood, statistical evidence is represented by likelihood functions and its strength measured by likelihood ratios. This point of view has led to a likelihood paradigm for interpreting statistical evidence, which carefully distinguishes evidence about a parameter from error probabilities and personal belief. Like other paradigms of statistics, the likelihood paradigm faces challenges when data are observed incompletely, due to non-response or censoring, for instance. Standard methods to generate likelihood functions in such circumstances generally require assumptions about the mechanism that governs the incomplete observation of data, assumptions that usually rely on external information and cannot be validated with the observed data. Without reliable external information, the use of untestable assumptions driven by convenience could potentially compromise the interpretability of the resulting likelihood as an objective representation of the observed evidence. This paper proposes a profile likelihood approach for representing and interpreting statistical evidence with incomplete data without imposing untestable assumptions. The proposed approach is based on partial identification and is illustrated with several statistical problems involving missing data or censored data. Numerical examples based on real data are presented to demonstrate the feasibility of the approach.
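A toy instance of the partial-identification idea (illustrative only, far simpler than the paper's profile-likelihood machinery): with missing binary outcomes and no assumption on the missingness mechanism, a success probability is bounded rather than point-identified.

```python
def identification_region(successes, n_observed, n_missing):
    """Worst-case (Manski-type) bounds on a Bernoulli success probability
    when n_missing outcomes are unobserved for unknown reasons: the two
    endpoints correspond to all missing outcomes being failures or all
    being successes."""
    n = n_observed + n_missing
    return successes / n, (successes + n_missing) / n

lo, hi = identification_region(successes=30, n_observed=80, n_missing=20)
# a profile likelihood built without missingness assumptions is flat
# (maximal) over the whole interval [lo, hi]
```

The width of the interval, n_missing / n, quantifies exactly how much evidence the incomplete observation destroys, without appealing to any untestable assumption.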

15.
Typically, a Poisson model is assumed for count data. In many cases, however, the dependent variable contains many zeros, so its mean does not equal its variance and the Poisson model is no longer suitable. We therefore suggest a hurdle generalized Poisson regression model. Furthermore, the response variable in such cases may be right-censored because of some very large values. This paper introduces a censored hurdle generalized Poisson regression model for count data with many zeros. Estimation of the regression parameters by maximum likelihood is discussed and the goodness-of-fit of the regression model is examined. An example and a simulation illustrate the effects of right censoring on the parameter estimates and their standard errors.
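A constant-only, uncensored hurdle Poisson model makes the structure concrete; this simplified sketch is not the paper's full censored, covariate-dependent specification:

```python
import numpy as np

def fit_hurdle_poisson(y, iters=200):
    """MLE of a constant-only hurdle Poisson: zeros occur with probability
    pi0, positives follow a zero-truncated Poisson(lam).  The MLE of lam
    solves the fixed point lam = mbar * (1 - exp(-lam)), where mbar is the
    mean of the positive counts."""
    y = np.asarray(y)
    pi0 = float(np.mean(y == 0))
    mbar = float(y[y > 0].mean())
    lam = mbar
    for _ in range(iters):
        lam = mbar * (1.0 - np.exp(-lam))
    return pi0, lam

# simulate: 60% zeros from the hurdle, positives zero-truncated Poisson(1.5)
rng = np.random.default_rng(4)
n = 20_000
draws = rng.poisson(1.5, size=4 * n)
positives = draws[draws > 0][:n]
y = np.where(rng.random(n) < 0.6, 0, positives)
pi0, lam = fit_hurdle_poisson(y)
```

Because the hurdle separates the zero process from the positive counts, the two parts can be estimated independently; right censoring, the paper's focus, breaks this separability and distorts both estimates.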

16.
In this paper, we provide a detailed study of a general family of asymmetric densities. In the general framework, we establish expressions for important characteristics of the distributions and discuss estimation of the parameters via method-of-moments as well as maximum likelihood estimation. Asymptotic normality results for the estimators are provided. The results under the general framework are then applied to some specific examples of asymmetric densities. The use of the asymmetric densities is illustrated in a real-data analysis.

17.
D. S. Tu, Metrika, 1991, 38(1): 269-283
This paper considers the subject-years method for testing the hypothesis that the mortality of an observed group is the same as that of a reference group. We prove a Berry–Esseen type theorem for the test statistics studied in Berry (1983), which gives an upper bound on the rate at which the test statistics converge to their limiting distributions.
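For orientation, the classical i.i.d. Berry–Esseen bound, of which the paper proves an analogue for the subject-years test statistics, can be evaluated directly; C = 0.56 is a known admissible constant for the i.i.d. case (smaller constants have since been established):

```python
import math

def berry_esseen_bound(rho3, sigma, n, C=0.56):
    """Upper bound on sup_x |P((S_n - n*mu)/(sigma*sqrt(n)) <= x) - Phi(x)|
    for i.i.d. summands with variance sigma^2 and third absolute central
    moment rho3 = E|X - mu|^3."""
    return C * rho3 / (sigma**3 * math.sqrt(n))

# Bernoulli(1/2): sigma = 1/2 and rho3 = E|X - 1/2|^3 = 1/8
bound = berry_esseen_bound(rho3=1/8, sigma=1/2, n=100)
```

The bound shrinks at the rate n^{-1/2}, which is exactly the kind of convergence-rate statement a Berry–Esseen type theorem supplies.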

18.
This paper examines criminal choice using a variant of the human capital model. The innovation of our approach is that it attempts to disaggregate individual capital, not unlike production-based studies which disaggregate physical capital into equipment and structures. We disaggregate an individual’s capital stock into the standard human capital component as well as a utility generating component that we call social capital. In our set-up, social capital is used to account for the influence of social norms on the decision to participate in crime. This is done by modeling the stigma of arrest as a reduction in the individual’s social capital stock. We also allow individuals to account for the impact of their criminal actions on their probability of arrest. In order to estimate the structural parameters underlying the model, we make use of computationally intensive methods involving simulated generalized method of moments and value function approximation. The empirical results, based on panel data from the Delinquency in a Birth Cohort II Study, support the social capital model of crime and reveal significant state dependence in the decision to participate in crime.

19.
We evaluate the Fisher information (FI) contained in a collection of order statistics and their concomitants from a bivariate random sample. Special attention is given to Type II censored samples. We present a general decomposition result and recurrence relations that are useful in finding the FI in all types of censored samples. We also obtain some asymptotic results for the FI. For the bivariate normal parent, we obtain explicit and asymptotic expressions for the elements of the FI matrix for Type II censored samples. We discuss implications of our findings on inference on the bivariate normal parameters, especially on the correlation. The first author’s research was supported in part by National Institutes of Health, USA, Grant # M01 RR00034, and the second author’s research was supported by a training grant from the Egyptian government.
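For the complete-sample baseline (the paper's contribution concerns the censored case), the per-observation FI for the correlation of a standardised bivariate normal has a well-known closed form, which a Monte Carlo score-variance check confirms; the derivation here is standard, not taken from the paper:

```python
import numpy as np

def fisher_info_rho(rho):
    """Per-observation FI for the correlation rho of a standardised
    bivariate normal (means 0, variances 1, complete sample):
    I(rho) = (1 + rho^2) / (1 - rho^2)^2."""
    return (1 + rho**2) / (1 - rho**2)**2

# Monte Carlo check: the FI equals the variance of the score d/d(rho) log f.
rng = np.random.default_rng(5)
rho, n = 0.5, 200_000
x = rng.normal(size=n)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)
q = x**2 - 2 * rho * x * y + y**2
score = rho / (1 - rho**2) + (x * y * (1 - rho**2) - rho * q) / (1 - rho**2)**2
```

The paper's decomposition results describe how this information splits across the order statistics and their concomitants, and how much of it survives Type II censoring.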

20.
This paper proposes a method for resolving inconsistencies between user queries and the rules in the knowledge base of an expert system. The method combines two data-mining techniques, associative classification and the 1-nearest-neighbour method, and its practicality is demonstrated with a worked example.
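A minimal sketch of the nearest-neighbour half of such a scheme (the rule format and example data are hypothetical, and the associative-classification step is omitted): when no rule antecedent matches the query exactly, fall back to the rule whose conditions differ from the query in the fewest places.

```python
def one_nn_rule(query, rules):
    """Return the rule whose antecedent is closest to the query under a
    symmetric-difference (overlap/Hamming-style) distance on the
    attribute-value pairs; hypothetical rule representation."""
    def dist(rule):
        return len(set(query.items()) ^ set(rule["if"].items()))
    return min(rules, key=dist)

rules = [
    {"if": {"fever": "high", "cough": "yes"}, "then": "flu"},
    {"if": {"fever": "no", "rash": "yes"}, "then": "allergy"},
]
best = one_nn_rule({"fever": "high", "cough": "no"}, rules)
```

Here the query matches neither rule exactly, but it shares one condition with the first rule and none with the second, so the 1-NN step fires the first rule's conclusion.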
