Similar Articles: 20 results found (search time: 31 ms)
1.
A growing literature has been advocating consistent kernel estimation of integrated variance in the presence of financial market microstructure noise. We find that, for realistic sample sizes encountered in practice, the asymptotic results derived for the proposed estimators may provide unsatisfactory representations of their finite sample properties. In addition, the existing asymptotic results might not offer sufficient guidance for practical implementations. We show how to optimize the finite sample properties of kernel-based integrated variance estimators. Empirically, we find that their suboptimal implementation can, in some cases, lead to little or no finite sample gains when compared to the classical realized variance estimator. Significant statistical and economic gains can, however, be recovered by using our proposed finite sample methods.
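As a rough, self-contained illustration of the noise problem this abstract describes (not the paper's optimized estimator, and with purely illustrative parameter values), the sketch below simulates a noisy log-price series and compares the classical realized variance with a simple Bartlett-kernel estimator that adds weighted realized autocovariances:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated log-prices: an efficient random-walk price plus i.i.d. microstructure noise.
n = 390
true_iv = 1.0                                    # integrated variance of the efficient price
efficient = np.cumsum(rng.normal(0, np.sqrt(true_iv / n), n))
noisy = efficient + rng.normal(0, 0.1, n)        # observed prices are contaminated
r = np.diff(noisy)                               # intraday returns

# Classical realized variance: sum of squared returns (noise inflates it badly).
rv = np.sum(r**2)

# A Bartlett-kernel estimator: realized variance plus kernel-weighted realized
# autocovariances, which offsets the noise-induced bias.
def kernel_rv(returns, H):
    est = np.sum(returns**2)
    for h in range(1, H + 1):
        w = 1 - h / (H + 1)                      # Bartlett weights
        gamma_h = np.sum(returns[h:] * returns[:-h])
        est += 2 * w * gamma_h
    return est

krv = kernel_rv(r, H=10)
print(rv, krv)   # the kernel estimate should be far less inflated than rv
```

With this much noise the realized variance is several times the true integrated variance, while the kernel estimator removes most of that bias; the paper's point is that the bandwidth (here H) must be chosen carefully in finite samples.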

2.
We present finite sample evidence on different IV estimators available for linear models under weak instruments; explore the application of the bootstrap as a bias reduction technique to attenuate their finite sample bias; and employ three empirical applications to illustrate and provide insights into the relative performance of the estimators in practice. Our evidence indicates that the random‐effects quasi‐maximum likelihood estimator outperforms alternative estimators in terms of median point estimates and coverage rates, followed by the bootstrap bias‐corrected version of LIML and LIML. However, our results also confirm the difficulty of obtaining reliable point estimates in models with weak identification and moderate‐size samples. Copyright © 2007 John Wiley & Sons, Ltd.
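The bootstrap bias-correction idea mentioned above can be sketched in its generic form. The example uses a deliberately biased variance estimator rather than LIML, purely to keep the illustration self-contained; the sample and bootstrap settings are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def biased_var(x):
    # Maximum likelihood variance estimator: biased downward by a factor (n-1)/n.
    return np.mean((x - np.mean(x))**2)

x = rng.normal(0, 2, 50)          # true variance is 4
theta_hat = biased_var(x)

# Nonparametric bootstrap bias correction:
#   bias_hat = mean_b(theta*_b) - theta_hat,  corrected = theta_hat - bias_hat
B = 2000
boot = np.array([biased_var(rng.choice(x, size=x.size, replace=True))
                 for _ in range(B)])
theta_bc = 2 * theta_hat - boot.mean()

print(theta_hat, theta_bc)        # the corrected estimate is pushed back up
```

The same recipe applies to any estimator that can be recomputed on resampled data, which is why it is attractive for IV estimators whose finite sample bias has no convenient closed form.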

3.
We consider estimation and testing of linkage disequilibrium from genotypic data on a random sample of sibs, such as monozygotic and dizygotic twins. We compute the maximum likelihood estimator with an EM‐algorithm and a likelihood ratio statistic that takes the family structure into account. As we are interested in applying this to twin data, we also allow observations on single children, so that monozygotic twins can be included. We allow a non‐zero recombination fraction between the loci of interest, so that linkage disequilibrium between both linked and unlinked loci can be tested. The EM‐algorithm for computing the maximum likelihood estimator of the haplotype frequencies and the likelihood ratio test‐statistic are described in detail. It is shown that the usual estimators of haplotype frequencies, which ignore that the sibs are related, are inefficient, as is the corresponding likelihood ratio test for linkage disequilibrium.

4.
The classical stochastic frontier panel data models provide no mechanism to disentangle individual time invariant unobserved heterogeneity from inefficiency. Greene (2005a, b) proposed the so-called “true” fixed-effects specification that distinguishes these two latent components. However, due to the incidental parameters problem, his maximum likelihood estimator may lead to biased variance estimates. We propose two alternative estimators that achieve consistency as n → ∞ with T fixed. Furthermore, we extend the Chen et al. (2014) results, providing a feasible estimator when the inefficiency is heteroskedastic and follows a first-order autoregressive process. We investigate the behavior of the proposed estimators through Monte Carlo simulations, showing good finite sample properties, especially in small samples. An application to hospitals’ technical efficiency illustrates the usefulness of the new approach.

5.
We consider moment based estimation methods for estimating parameters of the negative binomial distribution that are almost as efficient as maximum likelihood estimation and far superior to the celebrated zero term method and the standard method of moments estimator. Maximum likelihood estimators are difficult to compute for dependent samples such as samples generated from the negative binomial first-order autoregressive integer-valued processes. The power method of estimation is suggested as an alternative to maximum likelihood estimation for such samples and a comparison is made of the asymptotic normalized variance between the power method, method of moments and zero term method estimators.
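A minimal sketch of the plain method-of-moments estimator for the negative binomial, assuming an i.i.d. sample (the paper's power method and the zero term method are variations on the same moment-matching idea; the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a negative binomial sample with size parameter r and success probability p.
r_true, p_true = 3.0, 0.4
x = rng.negative_binomial(r_true, p_true, size=5000)

# Method of moments: mean = r(1-p)/p and variance = r(1-p)/p^2,
# so p = mean/variance and r = mean^2/(variance - mean).
m, v = x.mean(), x.var()
p_mom = m / v
r_mom = m**2 / (v - m)

print(r_mom, p_mom)
```

Note the estimator only exists when the sample is overdispersed (variance exceeding the mean), which the negative binomial guarantees in population.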

6.
Two‐step nonparametric estimators have become standard in empirical auctions. A drawback concerns boundary effects which cause inconsistencies near the endpoints of the support and bias in finite samples. To cope, sample trimming is typically used, which leads to non‐random data loss. Monte Carlo experiments show this leads to poor performance near the support boundaries and on the interior due to bandwidth selection issues. We propose a modification that employs boundary correction techniques, and we demonstrate substantial improvement in finite‐sample performance. We implement the new estimator using oil lease auctions data and find that trimming masks a substantial degree of bidder asymmetry and inefficiency in allocations. Copyright © 2014 John Wiley & Sons, Ltd.
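The boundary problem and one standard fix, reflection about the support endpoint, can be illustrated directly (this is a generic boundary correction, not necessarily the modification the paper proposes; the bandwidth and sample are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Data supported on [0, inf): a naive Gaussian KDE leaks mass below zero,
# so it is biased downward near the boundary (roughly halved at x = 0).
x = rng.exponential(1.0, 1000)
h = 0.2                                     # fixed illustrative bandwidth

def kde(data, grid, h):
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (data.size * h * np.sqrt(2 * np.pi))

grid = np.linspace(0, 3, 61)
naive = kde(x, grid, h)
# Reflection: augment the sample with its mirror image about 0, then rescale
# back to the original sample size (hence the factor of 2).
reflected = 2 * kde(np.concatenate([x, -x]), grid, h)

print(naive[0], reflected[0])   # the true Exp(1) density at 0 is 1.0
```

Away from the boundary the mirrored points contribute essentially nothing, so the two estimates agree on the interior; only the near-boundary behavior changes.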

7.
Suppose independent random samples are drawn from k (≥ 2) populations with a common location parameter and unequal scale parameters. We consider the problem of estimating simultaneously the hazard rates of these populations. The analogues of the maximum likelihood (ML), uniformly minimum variance unbiased (UMVU) and the best scale equivariant (BSE) estimators for the one population case are improved using Rao‐Blackwellization. The improved version of the BSE estimator is shown to be the best among these estimators. Finally, a class of estimators that dominates this improved estimator is obtained using the differential inequality approach.

8.
We consider the problem of estimating parametric multivariate density models when unequal amounts of data are available on each variable. We focus in particular on the case that the unknown parameter vector may be partitioned into elements relating only to a marginal distribution and elements relating to the copula. In such a case we propose using a multi‐stage maximum likelihood estimator (MSMLE) based on all available data rather than the usual one‐stage maximum likelihood estimator (1SMLE) based only on the overlapping data. We provide conditions under which the MSMLE is not less asymptotically efficient than the 1SMLE, and we examine the small sample efficiency of the estimators via simulations. The analysis in this paper is motivated by a model of the joint distribution of daily Japanese yen–US dollar and euro–US dollar exchange rates. We find significant evidence of time variation in the conditional copula of these exchange rates, and evidence of greater dependence during extreme events than under the normal distribution. Copyright © 2006 John Wiley & Sons, Ltd.

9.
This paper studies likelihood-based estimation and inference in parametric discontinuous threshold regression models with i.i.d. data. The setup allows heteroskedasticity and threshold effects in both mean and variance. By interpreting the threshold point as a “middle” boundary of the threshold variable, we find that the Bayes estimator is asymptotically efficient among all estimators in the locally asymptotically minimax sense. In particular, the Bayes estimator of the threshold point is asymptotically strictly more efficient than the left-endpoint maximum likelihood estimator and the newly proposed middle-point maximum likelihood estimator. Algorithms are developed to calculate asymptotic distributions and risk for the estimators of the threshold point. The posterior interval is proved to be an asymptotically valid confidence interval and is attractive in both length and coverage in finite samples.

10.
Choosing instrumental variables in conditional moment restriction models
Properties of GMM estimators are sensitive to the choice of instrument. Using many instruments leads to high asymptotic efficiency but can cause high bias and/or variance in small samples. In this paper we develop and implement asymptotic mean square error (MSE) based criteria for instrument selection in estimation of conditional moment restriction models. The models we consider include various nonlinear simultaneous equations models with unknown heteroskedasticity. We develop moment selection criteria for the familiar two-step optimal GMM estimator (GMM), a bias corrected version, and generalized empirical likelihood estimators (GEL), which include the continuous updating estimator (CUE) as a special case. We also find that the CUE has lower higher-order variance than the bias-corrected GMM estimator, and that the higher-order efficiency of other GEL estimators depends on conditional kurtosis of the moments.

11.
In this article, we consider estimating the timing of a break in level and/or trend when the order of integration and autocorrelation properties of the data are unknown. For stationary innovations, break point estimation is commonly performed by minimizing the sum of squared residuals across all candidate break points, using a regression of the levels of the series on the assumed deterministic components. For unit root processes, the obvious modification is to use a first differenced version of the regression, while a further alternative in a stationary autoregressive setting is to consider a GLS‐type quasi‐differenced regression. Given uncertainty over which of these approaches to adopt in practice, we develop a hybrid break fraction estimator that selects from the levels‐based estimator, the first‐difference‐based estimator, and a range of quasi‐difference‐based estimators, according to which achieves the global minimum sum of squared residuals. We establish the asymptotic properties of the estimators considered, and compare their performance in practically relevant sample sizes using simulation. We find that the new hybrid estimator has desirable asymptotic properties and performs very well in finite samples, providing a reliable approach to break date estimation without requiring decisions to be made regarding the autocorrelation properties of the data.
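The levels-based estimator described above (choose the candidate date minimizing the sum of squared residuals) can be sketched for a pure level-shift case with stationary noise; the data-generating values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# A series with a level shift of size 2 at observation t0, plus stationary noise.
T, t0 = 200, 120
y = 1.0 + 2.0 * (np.arange(T) >= t0) + rng.normal(0, 0.5, T)

def break_ssr(y, tb):
    # Regress y on a constant and a post-break step dummy; return the SSR.
    X = np.column_stack([np.ones(y.size), (np.arange(y.size) >= tb).astype(float)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

# Search over a trimmed range of candidate break dates and keep the minimizer.
candidates = range(10, T - 10)
tb_hat = min(candidates, key=lambda tb: break_ssr(y, tb))
print(tb_hat)   # should land at or very near t0 = 120
```

The first-difference and quasi-difference variants in the abstract replace the levels regression inside `break_ssr` with differenced counterparts; the hybrid estimator then picks whichever variant attains the smallest minimized SSR.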

12.
We consider exact procedures for testing the equality of means (location parameters) of two Laplace populations with equal scale parameters based on corresponding independent random samples. The test statistics are based on either the maximum likelihood estimators or the best linear unbiased estimators of the Laplace parameters. By conditioning on certain quantities we manage to express their exact distributions as mixtures of ratios of linear combinations of standard exponential random variables. This allows us to find their exact quantiles and tabulate them for several sample sizes. The powers of the tests are compared either numerically or by simulation. Exact confidence intervals for the difference of the means corresponding to those tests are also constructed. The exact procedures are illustrated via a real data example.
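For reference, the Laplace maximum likelihood estimators that such test statistics are built from have closed forms: the location MLE is the sample median and the scale MLE is the mean absolute deviation about it. A quick numerical check, with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(5)

# Laplace(mu, b) sample: the MLE of mu is the sample median; the MLE of b is
# the mean absolute deviation of the data from that median.
x = rng.laplace(loc=3.0, scale=1.5, size=4000)

mu_hat = np.median(x)
b_hat = np.mean(np.abs(x - mu_hat))

print(mu_hat, b_hat)   # should be close to the true values (3.0, 1.5)
```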

13.
Generalized least squares estimators, with estimated variance-covariance matrices, and maximum likelihood estimators have been proposed in the literature to deal with the problem of estimating autoregressive models with autocorrelated disturbances. In this paper we compare the small sample efficiencies of these estimators with those of some approximate Bayes estimators. The comparison is done with the help of a sampling experiment applied to a model specification. Though these Bayes estimators utilize very weak prior information, they out-perform the sampling theory estimators in every case we consider.

14.
There has been considerable and controversial research over the past two decades into how successfully random effects misspecification in mixed models (i.e. assuming normality for the random effects when the true distribution is non‐normal) can be diagnosed and what its impacts are on estimation and inference. However, much of this research has focused on fixed effects inference in generalised linear mixed models. In this article, motivated by the increasing number of applications of mixed models where interest is on the variance components, we study the effects of random effects misspecification on random effects inference in linear mixed models, for which there is considerably less literature. Our findings are surprising and contrary to general belief: for point estimation, maximum likelihood estimation of the variance components under misspecification is consistent, although in finite samples both the bias and mean squared error can be substantial. For inference, we show through theory and simulation that under misspecification, standard likelihood ratio tests of truly non‐zero variance components can suffer from severely inflated type I errors, and confidence intervals for the variance components can exhibit considerable undercoverage. Furthermore, neither of these problems vanishes asymptotically as the number of clusters or the cluster size increases. These results have major implications for random effects inference, especially if the true random effects distribution is heavier tailed than the normal. Fortunately, simple graphical and goodness‐of‐fit measures of the random effects predictions appear to have reasonable power at detecting misspecification. We apply linear mixed models to a survey of more than 4,000 high school students within 100 schools and analyse how mathematics achievement scores vary with student attributes and across different schools. The application demonstrates the sensitivity of mixed model inference to the true but unknown random effects distribution.

15.
We consider efficient estimation in moment conditions models with non‐monotonically missing‐at‐random (MAR) variables. A version of MAR point‐identifies the parameters of interest and gives a closed‐form efficient influence function that can be used directly to obtain efficient semi‐parametric generalized method of moments (GMM) estimators under standard regularity conditions. A small‐scale Monte Carlo experiment with MAR instrumental variables demonstrates that the asymptotic superiority of these estimators over the standard methods carries over to finite samples. An illustrative empirical study of the relationship between a child's years of schooling and number of siblings indicates that these GMM estimators can generate results with substantive differences from standard methods. Copyright © 2015 John Wiley & Sons, Ltd.

16.
Panel data models with interactive effects are highly applicable in empirical analyses of social and economic problems, but existing research has focused mainly on linear panel models. This paper introduces interactive effects into a nonlinear censored panel data model and, based on the ECM algorithm, develops an efficient estimator and an identification procedure. Simulation results across different factor types show that the ECM algorithm identifies the unobserved factors in censored panel samples well. The ECM estimator has good finite sample properties, with smaller bias and faster convergence than alternative estimators, and performs best when the common factors are low-frequency smooth factors.

17.
In this paper we develop estimation techniques and a specification test for the validity of instrumental variables allowing for conditionally heteroskedastic disturbances. We propose modified two‐stage least squares (2SLS) and modified 3SLS procedures where the conditional heteroskedasticity is taken into account, which are natural extensions of the traditional 2SLS and 3SLS estimators and which achieve a lower variance. We recommend the use of these modified 2SLS and 3SLS procedures in practice instead of alternative estimators like limited‐information maximum likelihood/full‐information maximum likelihood, where the non‐existence of moments leads to extreme values, and also for ease of computation. It is shown theoretically and with simulation that in some cases 2SLS, 3SLS and our modified 2SLS and 3SLS procedures can have very severe biases (including the weak instruments case), and we present bias correction procedures to apply in practice along the lines of Flores‐Lagunes (2007). Our new estimation procedures can also be used to extend the test for weak instruments of Stock and Yogo (2005) and to allow for conditional heteroskedasticity. Finally, we show the usefulness of our estimation procedures with an application to the demand and supply of fish. Copyright © 2010 John Wiley & Sons, Ltd.

18.
We consider nonlinear heteroscedastic single‐index models where the mean function is a parametric nonlinear model and the variance function depends on a single‐index structure. We develop an efficient estimation method for the parameters in the mean function by using the weighted least squares estimation, and we propose a “delete‐one‐component” estimator for the single‐index in the variance function based on absolute residuals. Asymptotic results of estimators are also investigated. The estimation methods for the error distribution based on the classical empirical distribution function and an empirical likelihood method are discussed. The empirical likelihood method allows for incorporation of the assumptions on the error distribution into the estimation. Simulations illustrate the results, and a real chemical data set is analyzed to demonstrate the performance of the proposed estimators.

19.
In this paper, we introduce a new stationary first-order integer-valued autoregressive process with a zero-truncated Poisson marginal distribution. We consider some properties of this process, such as the autocorrelations, spectral density, and multi-step-ahead conditional expectation, variance and probability generating function. The stationary solution and its uniqueness are obtained, with a discussion of the strict stationarity and ergodicity of the process. We estimate the unknown parameters by conditional least squares estimation, nonparametric estimation and maximum likelihood estimation. The asymptotic properties and asymptotic distributions of the conditional least squares estimators are investigated. Some numerical results for the estimators are presented and some sample paths of the process are illustrated. Some possible applications of the introduced model are discussed.
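A minimal INAR(1) sketch using binomial thinning, with conditional least squares estimation of the parameters. For simplicity it uses plain Poisson innovations rather than constructing the zero-truncated Poisson marginal of the paper; all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

# INAR(1) via binomial thinning: X_t = alpha ∘ X_{t-1} + eps_t, where
# alpha ∘ X ~ Binomial(X, alpha) and eps_t are i.i.d. count innovations.
alpha, lam, T = 0.5, 2.0, 5000
x = np.empty(T, dtype=int)
x[0] = rng.poisson(lam / (1 - alpha))            # start near the stationary mean
for t in range(1, T):
    survivors = rng.binomial(x[t - 1], alpha)    # binomial thinning step
    x[t] = survivors + rng.poisson(lam)

# Conditional least squares: E[X_t | X_{t-1}] = alpha * X_{t-1} + lambda,
# so regress x_t on x_{t-1} and a constant.
X = np.column_stack([x[:-1], np.ones(T - 1)])
(alpha_cls, lam_cls), *_ = np.linalg.lstsq(X, x[1:], rcond=None)
print(alpha_cls, lam_cls)
```

Binomial thinning keeps the process integer-valued while mimicking the role of multiplication in a Gaussian AR(1), which is why the lag-1 regression recovers alpha.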

20.
Using Monte Carlo simulations we study the small sample performance of the traditional TSLS, the LIML and four new jackknife IV estimators when the instruments are weak. We find that the new estimators and LIML have a smaller bias but a larger variance than the TSLS. In terms of root mean square error, neither LIML nor the new estimators perform uniformly better than the TSLS. The main conclusion from the simulations and an empirical application on labour supply functions is that in a situation with many weak instruments, there still does not exist an easy way to obtain reliable estimates in small samples. Better instruments and/or larger samples are the only way to increase precision in the estimates. Since the properties of the estimators are specific to each data-generating process and sample size, it would be wise in empirical work to complement the estimates with a Monte Carlo study of the estimators' properties for the relevant sample size and data-generating process believed to be applicable. Copyright © 1999 John Wiley & Sons, Ltd.
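The core weak-instrument phenomenon in this abstract can be reproduced in a few lines: with a weak first stage, the just-identified TSLS sampling distribution becomes far more dispersed. The data-generating process below is an illustrative assumption, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(7)

def tsls(y, x, z):
    # Just-identified two-stage least squares with a single instrument.
    x_hat = z * (z @ x) / (z @ z)                # first-stage fitted values
    return (x_hat @ y) / (x_hat @ x)

def simulate(pi, n=100, reps=2000, beta=1.0):
    est = np.empty(reps)
    for r in range(reps):
        z = rng.normal(size=n)
        u = rng.normal(size=n)
        v = 0.8 * u + 0.6 * rng.normal(size=n)   # endogeneity: corr(u, v) > 0
        x = pi * z + v                           # pi controls instrument strength
        y = beta * x + u
        est[r] = tsls(y, x, z)
    return est

weak = simulate(pi=0.15)                         # weak first stage
strong = simulate(pi=1.0)                        # strong first stage

iqr = lambda e: np.percentile(e, 75) - np.percentile(e, 25)
print(iqr(weak), iqr(strong))                    # weak-instrument spread is much larger
```

Running one's own Monte Carlo tailored to the relevant sample size and design, as the abstract recommends, is exactly this kind of exercise with the illustrative DGP replaced by the one believed to apply.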

