Similar Documents
20 similar documents found (search time: 31 ms)
1.
We consider nonlinear heteroscedastic single‐index models where the mean function is a parametric nonlinear model and the variance function depends on a single‐index structure. We develop an efficient estimation method for the parameters in the mean function using weighted least squares, and we propose a “delete‐one‐component” estimator for the single index in the variance function based on absolute residuals. Asymptotic properties of the estimators are also investigated. Estimation methods for the error distribution, based on the classical empirical distribution function and on an empirical likelihood method, are discussed; the empirical likelihood method allows assumptions on the error distribution to be incorporated into the estimation. Simulations illustrate the results, and a real chemical data set is analyzed to demonstrate the performance of the proposed estimators.
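As a rough sketch of the weighted least squares step described above — not the authors' implementation — the following Python snippet fits a nonlinear mean by unweighted least squares, estimates a variance index from absolute residuals (a crude stand-in for the "delete‐one‐component" estimator), and then re-fits the mean with the estimated weights. The mean function, variance-index form and data are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Simulated data (purely illustrative): nonlinear mean exp(x'beta), with the error
# standard deviation driven by a single index x'theta.
n = 500
X = rng.normal(size=(n, 2))
beta_true = np.array([1.0, 0.5])
theta_true = np.array([0.6, 0.8])                     # normalised so that ||theta|| = 1
y = np.exp(X @ beta_true) + 0.5 * (1.0 + np.abs(X @ theta_true)) * rng.normal(size=n)

def fit_mean(weights):
    """(Weighted) nonlinear least squares for the mean parameters."""
    res = least_squares(lambda b: np.sqrt(weights) * (y - np.exp(X @ b)), x0=np.zeros(2))
    return res.x

# Step 1: unweighted fit, then a crude estimate of the variance index from |residuals|
# (only a stand-in for the paper's "delete-one-component" estimator).
beta_init = fit_mean(np.ones(n))
abs_res = np.abs(y - np.exp(X @ beta_init))
theta_hat = np.linalg.lstsq(X, abs_res, rcond=None)[0]
theta_hat /= np.linalg.norm(theta_hat)

# Step 2: model |residuals| on the estimated index to get a variance function,
# then re-fit the mean parameters by weighted least squares.
A = np.column_stack([np.ones(n), np.abs(X @ theta_hat)])
gamma = np.linalg.lstsq(A, abs_res, rcond=None)[0]
sd_hat = np.maximum(A @ gamma, 1e-3)
beta_wls = fit_mean(1.0 / sd_hat ** 2)

print("unweighted estimate:", np.round(beta_init, 3))
print("weighted estimate:  ", np.round(beta_wls, 3))
```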

2.
This paper presents a careful investigation of three popular calibration weighting methods: (i) generalised regression; (ii) generalised exponential tilting and (iii) generalised pseudo empirical likelihood, with a major focus on computational aspects of the methods and some empirical evidence on calibrated weights. We also propose a simple weight trimming method for range‐restricted calibration. The finite sample behaviour of the weights obtained by the three calibration weighting methods and the effectiveness of the proposed weight trimming method are examined through limited simulation studies.
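To make the generalised regression (GREG) flavour of calibration concrete, here is a minimal sketch under a chi-squared distance: design weights are adjusted so that weighted auxiliary totals hit assumed known benchmarks, followed by a very simple one-pass clipping rule. The sample, the auxiliary totals and the clipping bounds are all made up, and the clipping is only a placeholder for the trimming method proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sample: design weights d and two auxiliary variables with assumed known totals.
n = 200
d = rng.uniform(5.0, 15.0, size=n)                  # design (inverse-inclusion) weights
X = np.column_stack([np.ones(n), rng.gamma(2.0, 3.0, size=n)])
t_x = np.array([2000.0, 12500.0])                   # hypothetical known population totals of X

# GREG / chi-squared-distance calibration: w_i = d_i * (1 + x_i' lam),
# with lam chosen so that sum_i w_i x_i = t_x exactly.
lam = np.linalg.solve(X.T @ (d[:, None] * X), t_x - X.T @ d)
w = d * (1.0 + X @ lam)
print("calibrated totals:", np.round(X.T @ w, 1), " target:", t_x)

# A naive range restriction: clip the adjustment ratio w/d to [L, U]. (After clipping the
# calibration constraints no longer hold exactly; the paper's trimming method handles this.)
L, U = 0.5, 2.0
ratio = np.clip(w / d, L, U)
w_trimmed = d * ratio
print("min/max weight ratio after clipping:", round(ratio.min(), 3), round(ratio.max(), 3))
```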

3.
Sample surveys are widely used to obtain information about totals, means, medians and other parameters of finite populations. In many applications, similar information is desired for subpopulations such as individuals in specific geographic areas and socio‐demographic groups. When the surveys are conducted at national or similarly high levels, a probability sample can yield just a few sampling units from the many subpopulations that were not planned for at the design stage. Cost considerations may also lead to low sample sizes from individual small areas. Estimating the parameters of these subpopulations with satisfactory precision and evaluating their accuracy are serious challenges for statisticians. To overcome the difficulties, statisticians resort to pooling information across the small areas via suitable model assumptions, administrative archives and census data. In this paper, we develop an array of small area quantile estimators. The novelty is the introduction of a semiparametric density ratio model for the error distribution in the unit‐level nested error regression model. In contrast, the existing methods are usually most effective when the response values are jointly normal. We also propose a resampling procedure for estimating the mean square errors of these estimators. Simulation results indicate that the new methods have superior performance when the population distributions are skewed and remain competitive otherwise.

4.
This paper considers two empirical likelihood-based estimation, inference, and specification testing methods for quantile regression models. First, we apply the method of conditional empirical likelihood (CEL) by Kitamura et al. [2004. Empirical likelihood-based inference in conditional moment restriction models. Econometrica 72, 1667–1714] and Zhang and Gijbels [2003. Sieve empirical likelihood and extensions of the generalized least squares. Scandinavian Journal of Statistics 30, 1–24] to quantile regression models. Second, to avoid the practical problems of the CEL method induced by the discontinuity of the CEL criterion in the parameters, we propose a smoothed counterpart of CEL, called smoothed conditional empirical likelihood (SCEL). We derive asymptotic properties of the CEL and SCEL estimators, parameter hypothesis tests, and model specification tests. Important features are (i) the CEL and SCEL estimators are asymptotically efficient and do not require preliminary weight estimation; (ii) by inverting the CEL and SCEL ratio parameter hypothesis tests, asymptotically valid confidence intervals can be obtained without estimating the asymptotic variances of the estimators; and (iii) in contrast to CEL, the SCEL method can be implemented by standard Newton-type optimization. Simulation results demonstrate that the SCEL method in particular compares favorably with existing alternatives.
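The smoothing device that makes Newton-type optimisation possible can be illustrated in isolation: the sketch below replaces the kink of the ordinary quantile check loss with a kernel-smoothed surrogate and minimises it with BFGS. This is not the SCEL estimator itself — no conditional empirical likelihood weights are involved — and the data, bandwidth and quantile level are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)

# Toy data for a linear tau-th quantile regression.
n, tau, h = 400, 0.25, 0.1                 # h is the smoothing bandwidth
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=4, size=n)

def smoothed_check_loss(beta):
    # Smoothed check function: the indicator 1{u < 0} in rho_tau(u) = u * (tau - 1{u < 0})
    # is replaced by 1 - Phi(u / h), giving a differentiable surrogate that converges to
    # the usual check loss as h -> 0 (so Newton/quasi-Newton optimisers can be used).
    u = y - X @ beta
    G = norm.cdf(u / h)
    return np.sum(u * (tau - (1.0 - G)))

fit = minimize(smoothed_check_loss, x0=np.zeros(2), method="BFGS")
print("smoothed quantile regression estimate:", np.round(fit.x, 3))
```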

5.
This paper reviews methods for handling complex sampling schemes when analysing categorical survey data. It is generally assumed that the complex sampling scheme does not affect the specification of the parameters of interest, only the methodology for making inference about these parameters. The organisation of the paper is loosely chronological. Contingency table data are emphasised first before moving on to the analysis of unit‐level data. Weighted least squares methods, introduced in the mid 1970s along with methods for two‐way tables, receive early attention. They are followed by more general methods based on maximum likelihood, particularly pseudo maximum likelihood estimation. Point estimation methods typically involve the use of survey weights in some way. Variance estimation methods are described in broad terms. There is a particular emphasis on methods of testing. The main modelling methods considered are log‐linear models, logit models, generalised linear models and latent variable models. There is no coverage of multilevel models.

6.
贺飞燕 《价值工程》2006,25(8):167-168
Under both ordinary likelihood and empirical likelihood, this paper carries out the calculations for data with and without ties (结点) and concludes that, in both the ordinary-likelihood and empirical-likelihood settings, the results with ties are exactly the same as those without ties.

7.
In this paper, we propose an estimator for the population mean when some observations on the study and auxiliary variables are missing from the sample. The proposed estimator is valid for any unequal probability sampling design, and is based upon the pseudo empirical likelihood method. The proposed estimator is compared with other estimators in a simulation study.

8.
We analyse the finite sample properties of maximum likelihood estimators for dynamic panel data models. In particular, we consider transformed maximum likelihood (TML) and random effects maximum likelihood (RML) estimation. We show that TML and RML estimators are solutions to a cubic first‐order condition in the autoregressive parameter. Furthermore, in finite samples both likelihood estimators might lead to a negative estimate of the variance of the individual‐specific effects. We consider different approaches taking into account the non‐negativity restriction for the variance. We show that these approaches may lead to a solution different from the unique global unconstrained maximum. In an extensive Monte Carlo study we find that this issue is non‐negligible for small values of T and that different approaches might lead to different finite sample properties. Furthermore, we find that the Likelihood Ratio statistic provides size control in small samples, albeit with low power due to the flatness of the log‐likelihood function. We illustrate these issues modelling US state level unemployment dynamics.
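Because the first-order condition is cubic in the autoregressive parameter, one computational recipe is to solve the cubic directly, keep the real roots in the admissible region and pick the one with the highest likelihood. The sketch below illustrates that recipe with made-up cubic coefficients and a placeholder objective; it does not reproduce the paper's actual TML or RML score equations.

```python
import numpy as np

# Hypothetical coefficients of a cubic first-order condition
#   a3*rho^3 + a2*rho^2 + a1*rho + a0 = 0
# (in practice these are functions of the data; the values here are for illustration only).
a3, a2, a1, a0 = 1.0, -1.2, -0.4, 0.3

roots = np.roots([a3, a2, a1, a0])
real_roots = roots[np.abs(roots.imag) < 1e-8].real

def objective(rho):
    # Stand-in objective; in an application this would be the TML or RML log-likelihood
    # profiled in the autoregressive parameter.
    return -(rho - 0.5) ** 2

# Keep candidates in the stationary region and select the one with the highest objective.
candidates = [r for r in real_roots if -1.0 < r < 1.0]
rho_hat = max(candidates, key=objective) if candidates else None
print("real roots:", np.round(real_roots, 3), " selected:", rho_hat)
```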

9.
Parametric mixture models are commonly used in applied work, especially empirical economics, where these models are often employed to learn, for example, about the proportions of various types in a given population. This paper examines the inference question on the proportions (mixing probabilities) in a simple mixture model in the presence of nuisance parameters when the sample size is large. It is well known that likelihood inference in mixture models is complicated by (1) lack of point identification, and (2) parameters (for example, mixing probabilities) whose true value may lie on the boundary of the parameter space. These issues cause the profiled likelihood ratio (PLR) statistic to admit asymptotic limits that differ discontinuously depending on how the true density of the data approaches the regions of singularity where point identification fails. This lack of uniformity in the asymptotic distribution suggests that confidence intervals based on pointwise asymptotic approximations might lead to faulty inferences. This paper examines this problem in detail in a finite mixture model and provides possible fixes based on the parametric bootstrap. We examine the performance of this parametric bootstrap in Monte Carlo experiments and apply it to data from Beauty Contest experiments. We also examine small sample inferences and projection methods.
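A heavily simplified illustration of the parametric bootstrap for a likelihood ratio statistic in a mixture setting is sketched below: the two component densities are treated as known so that the mixing probability is the only parameter, the null value is placed near the boundary, and the null distribution of the LR statistic is approximated by simulating from the fitted null model. All numbers are invented and the setting is far simpler than the models considered in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(3)

# Mixture p*N(0,1) + (1-p)*N(2,1) with KNOWN components, so the mixing probability p is
# the only parameter; this is only meant to show the parametric-bootstrap mechanics.
def negloglik(p, x):
    dens = p * norm.pdf(x, 0, 1) + (1 - p) * norm.pdf(x, 2, 1)
    return -np.sum(np.log(dens))

def lr_stat(x, p0):
    fit = minimize_scalar(negloglik, bounds=(0.0, 1.0), args=(x,), method="bounded")
    return 2.0 * (negloglik(p0, x) - fit.fun)

def simulate(p, n):
    z = rng.random(n) < p
    return np.where(z, rng.normal(0, 1, n), rng.normal(2, 1, n))

n, p0 = 300, 0.9           # null value near the boundary, where chi-squared asymptotics may fail
x = simulate(0.95, n)      # data generated under a nearby alternative
obs = lr_stat(x, p0)

# Parametric bootstrap: simulate under the null and recompute the LR statistic each time.
boot = np.array([lr_stat(simulate(p0, n), p0) for _ in range(500)])
print("LR =", round(obs, 2), " bootstrap p-value =", round(float(np.mean(boot >= obs)), 3))
```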

10.
In this paper, empirical likelihood inferences for the varying-coefficient single-index model with right-censored data are investigated. By a synthetic data approach, we propose an empirical log-likelihood ratio function for the index parameters, which are of primary interest, and show that its limiting distribution is a mixture of central chi-squared distributions. So that the Wilks phenomenon holds, we propose an adjusted empirical log-likelihood ratio for the index parameters. The adjusted empirical log-likelihood is shown to have a standard chi-squared limiting distribution. Simulation studies are undertaken to assess the finite sample performance of the proposed confidence intervals. A real example is presented for illustration.

11.
Since the seminal contribution of N. Gregory Mankiw, David Romer and David N. Weil in 1992, the growth empirics literature has used increasingly sophisticated methods to select relevant growth determinants in estimating cross‐section growth regressions. The vast majority of empirical approaches, however, limit cross‐country heterogeneity in production technology to the specification of total factor productivity, the ‘measure of our ignorance’. In this survey, we present two general empirical frameworks for cross‐country growth and productivity analysis and demonstrate that they encompass the various approaches in the growth empirics literature of the past two decades. We then develop our central argument: that cross‐country heterogeneity in the impact of observables and unobservables on output, and the time‐series properties of the data, are important for reliable empirical analysis.

12.
Two alternative robust estimation methods often employed by National Statistical Institutes in business surveys are two‐sided M‐estimation and one‐sided Winsorisation, which can be regarded as an approximate implementation of one‐sided M‐estimation. We review these methods and evaluate their performance in a simulation of a repeated rotating business survey based on data from the Retail Sales Inquiry conducted by the UK Office for National Statistics. One‐sided and two‐sided M‐estimation are found to have very similar performance, with a slight edge for the former for positive variables. Both methods considerably improve both level and movement estimators. Approaches for setting tuning parameters are evaluated for both methods, and this is a more important issue than the difference between the two approaches. M‐estimation works best when tuning parameters are estimated using historical data but is serviceable even when only live data is available. Confidence interval coverage is much improved by the use of a bootstrap percentile confidence interval.
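A bare-bones sketch of one-sided Winsorisation of a skewed survey variable is given below, using the common form in which an observation above the cutoff keeps only a 1/w share of its excess. The data, weights and cutoff are invented, and choosing the cutoff (the tuning parameter discussed above) is exactly the hard part that the sketch glosses over.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy business-survey variable with a heavy right tail and design weights.
n = 300
y = rng.lognormal(mean=3.0, sigma=1.2, size=n)
w = rng.uniform(2.0, 20.0, size=n)

# One-sided Winsorisation: an observation above the cutoff K keeps only a 1/w share of its
# excess, damping the influence of large, heavily weighted units. The cutoff here is an
# arbitrary sample quantile; in practice the tuning parameter trades bias against variance
# and would be set from historical data, as discussed above.
K = np.quantile(y, 0.95)
y_wins = np.where(y > K, K + (y - K) / w, y)

total_raw = np.sum(w * y)
total_wins = np.sum(w * y_wins)
print(f"Winsorised weighted total is {100 * total_wins / total_raw:.1f}% of the raw weighted total")
```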

13.
We discuss structural equation models for non-normal variables. In this situation the maximum likelihood and the generalized least-squares estimates of the model parameters can give incorrect estimates of the standard errors and the associated goodness-of-fit chi-squared statistics. If the sample size is not large, for instance smaller than about 1000, asymptotic distribution-free estimation methods are also not applicable. This paper assumes that the observed variables can be transformed to normally distributed variables. The non-normally distributed variables are transformed with a Box–Cox function. Estimation of the model parameters and the transformation parameters is done by the maximum likelihood method. Furthermore, the standard errors and associated test statistics of these parameters are derived, which makes it possible to show the importance of the transformations. Finally, an empirical example is presented.
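As a small illustration of the Box–Cox device on its own — the paper estimates the transformation and structural-model parameters jointly, which this sketch does not attempt — scipy can estimate the transformation parameter by maximum likelihood for a single skewed variable; the data are simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Skewed (non-normal) positive variable; purely illustrative data.
x = rng.lognormal(mean=0.5, sigma=0.7, size=1000)

# Maximum likelihood estimate of the Box-Cox transformation parameter lambda.
x_transformed, lam_hat = stats.boxcox(x)
print("estimated lambda:", round(lam_hat, 3))

# A rough check that the transformation moved the variable towards normality.
print("skewness before:", round(stats.skew(x), 2),
      " after:", round(stats.skew(x_transformed), 2))
```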

14.
This study examined the performance of two alternative estimation approaches in structural equation modeling for ordinal data under different levels of model misspecification, score skewness, sample size, and model size. Both approaches involve analyzing a polychoric correlation matrix as well as adjusting standard error estimates and the model chi-squared, but one estimates model parameters with maximum likelihood and the other with robust weighted least squares. Relative bias in parameter estimates and standard error estimates, Type I error rate, and empirical power of the model test, where appropriate, were evaluated through Monte Carlo simulations. These alternative approaches generally provided unbiased parameter estimates when the model was correctly specified. They also provided unbiased standard error estimates and adequate Type I error control in general unless sample size was small and the measured variables were moderately skewed. Differences between the methods in convergence problems and the evaluation criteria, especially under small sample and skewed variable conditions, were discussed.

15.
We consider estimation and testing of linkage equilibrium from genotypic data on a random sample of sibs, such as monozygotic and dizygotic twins. We compute the maximum likelihood estimator with an EM‐algorithm and a likelihood ratio statistic that takes the family structure into account. As we are interested in applying this to twin data, we also allow observations on single children, so that monozygotic twins can be included. We allow a non‐zero recombination fraction between the loci of interest, so that linkage disequilibrium between both linked and unlinked loci can be tested. The EM‐algorithm for computing the maximum likelihood estimator of the haplotype frequencies, and the likelihood ratio test‐statistic, are described in detail. It is shown that the usual estimators of haplotype frequencies, which ignore that the sibs are related, are inefficient, as is the corresponding likelihood ratio test for linkage disequilibrium.
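For orientation, the sketch below runs the classical "gene counting" EM for two-locus haplotype frequencies on unphased genotypes of unrelated individuals and reports the disequilibrium coefficient D. It deliberately ignores the sib/family structure that the paper's likelihood accounts for, so it is only a reminder of what the E- and M-steps look like; the haplotype labels and frequencies are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Two biallelic loci; haplotypes AB, Ab, aB, ab with invented frequencies.
h_true = np.array([0.4, 0.2, 0.1, 0.3])                 # P(AB), P(Ab), P(aB), P(ab)
n = 1000
hap = rng.choice(4, size=(n, 2), p=h_true)              # two haplotypes per individual
carries = {0: (1, 1), 1: (1, 0), 2: (0, 1), 3: (0, 0)}  # haplotype -> (carries A, carries B)
geno = np.array([[carries[i][0] + carries[j][0],
                  carries[i][1] + carries[j][1]] for i, j in hap])  # allele counts (gA, gB)

def unambiguous_counts(gA, gB):
    """Haplotype counts (AB, Ab, aB, ab) when the genotype determines the phase."""
    if gA == 2:
        return np.array([gB, 2 - gB, 0, 0])
    if gA == 0:
        return np.array([0, 0, gB, 2 - gB])
    return np.array([1, 0, 1, 0]) if gB == 2 else np.array([0, 1, 0, 1])

h = np.full(4, 0.25)                                    # starting frequencies
for _ in range(200):
    counts = np.zeros(4)
    for gA, gB in geno:
        if gA == 1 and gB == 1:
            # E-step for the double heterozygote: split between phases AB/ab and Ab/aB.
            p_cis, p_trans = h[0] * h[3], h[1] * h[2]
            w = p_cis / (p_cis + p_trans)
            counts += w * np.array([1, 0, 0, 1]) + (1 - w) * np.array([0, 1, 1, 0])
        else:
            counts += unambiguous_counts(gA, gB)
    h_new = counts / counts.sum()                       # M-step: renormalise expected counts
    if np.max(np.abs(h_new - h)) < 1e-9:
        h = h_new
        break
    h = h_new

pA, pB = h[0] + h[1], h[0] + h[2]                       # marginal allele frequencies
print("estimated haplotype frequencies:", np.round(h, 3), " true:", h_true)
print("disequilibrium coefficient D = P(AB) - pA*pB =", round(h[0] - pA * pB, 4))
```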

16.
Wei Yu  Cuizhen Niu  Wangli Xu 《Metrika》2014,77(5):675-693
In this paper, we use the empirical likelihood method to make inferences for the coefficient difference of a two-sample linear regression model with missing response data. The commonly used empirical likelihood ratio is not concave for this problem, so we append a natural and well-explained condition to the likelihood function and propose three types of restricted empirical likelihood ratios for constructing the confidence region of the parameter in question. It can be demonstrated that all three empirical likelihood ratios have, asymptotically, chi-squared distributions. Simulation studies are carried out to show the effectiveness of the proposed approaches in terms of coverage probability and interval length. A real data set is analysed with our methods as an example.
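The paper builds on the standard empirical likelihood ratio, whose computation is worth seeing once in a simple case. The sketch below computes Owen's empirical likelihood ratio for a scalar mean and calibrates it against a chi-squared(1) distribution; it is only the building block, not the restricted two-sample regression version proposed in the paper, and the data and null value are arbitrary.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

rng = np.random.default_rng(7)

x = rng.exponential(scale=2.0, size=120)

def el_statistic(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu (scalar case)."""
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                                   # mu outside the convex hull of the data
    # Lagrange multiplier: solve sum_i z_i / (1 + lam*z_i) = 0 with 1 + lam*z_i > 0 for all i.
    score = lambda lam: np.sum(z / (1.0 + lam * z))
    eps = 1e-10
    lam = brentq(score, -1.0 / z.max() + eps, -1.0 / z.min() - eps)
    # EL weights are p_i = 1 / (n * (1 + lam*z_i)), so -2 log R = 2 * sum log(1 + lam*z_i).
    return 2.0 * np.sum(np.log(1.0 + lam * z))

stat = el_statistic(x, mu=2.0)
print("EL ratio statistic:", round(stat, 3),
      " chi-squared(1) p-value:", round(1.0 - chi2.cdf(stat, df=1), 3))
```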

17.
The paper develops a general Bayesian framework for robust linear static panel data models using ε-contamination. A two-step approach is employed to derive the conditional type-II maximum likelihood (ML-II) posterior distribution of the coefficients and individual effects. The ML-II posterior means are weighted averages of the Bayes estimator under a base prior and the data-dependent empirical Bayes estimator. Two-stage and three-stage hierarchy estimators are developed and their finite sample performance is investigated through a series of Monte Carlo experiments. These include standard random effects as well as Mundlak-type, Chamberlain-type and Hausman–Taylor-type models. The simulation results underscore the relatively good performance of the three-stage hierarchy estimator. Within a single theoretical framework, our Bayesian approach encompasses a variety of specifications, while conventional methods require separate estimators for each case.

18.
We give a new proof of the asymptotic normality of a class of linear functionals of the nonparametric maximum likelihood estimator (NPMLE) of a distribution function with "case 1" interval censored data. In particular, our proof simplifies the proof of asymptotic normality of the mean given in Groeneboom and Wellner (1992). The proof relies strongly on a rate of convergence result due to van de Geer (1993), and on methods from empirical process theory.

19.
Maximum likelihood estimation for missing data in microeconometric analysis
Missing data are frequently encountered in microeconometric analysis. The traditional treatments are to delete the cases with missing values on the variables of interest or to replace the missing values with the variable means, and these practices often produce biased samples. Maximum likelihood estimation can handle and estimate missing data effectively. This paper first introduces maximum likelihood estimation for missing data, then applies it to the missing data in an actual survey data set, and compares and evaluates the results against those of the traditional treatments.
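To illustrate the point about biased samples, a small simulation is sketched below: missingness in y depends on a fully observed x (missing at random), so complete-case deletion biases the mean of y, while a likelihood-based estimate from the factored normal likelihood (regress y on x in the complete cases, then average observed and predicted values) does not. Everything in it — data, model, missingness rule — is invented for the illustration and is far simpler than a real survey application.

```python
import numpy as np

rng = np.random.default_rng(8)

n = 5000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)                     # true E[y] = 1.0
p_miss = 1.0 / (1.0 + np.exp(-2.0 * x))                    # larger x -> y more likely missing (MAR)
y_obs = np.where(rng.random(n) < p_miss, np.nan, y)
complete = ~np.isnan(y_obs)

# Traditional treatment: complete-case (listwise) deletion. (Replacing missing values by the
# observed mean would give the same biased value for the mean, so it is not shown separately.)
print("complete-case mean: ", round(y_obs[complete].mean(), 3))

# Likelihood-based estimate under MAR via the factored likelihood: regress y on x in the
# complete cases, predict the missing y, and average observed and predicted values.
slope, intercept = np.polyfit(x[complete], y_obs[complete], 1)
y_filled = np.where(np.isnan(y_obs), intercept + slope * x, y_obs)
print("maximum likelihood: ", round(y_filled.mean(), 3), " (true value 1.0)")
```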

20.
The notion of cointegration has led to a renewed interest in the identification and estimation of structural relations among economic time series. This paper reviews the different approaches that have been put forward in the literature for identifying cointegrating relationships and imposing (possibly over-identifying) restrictions on them. Next, various algorithms to obtain (approximate) maximum likelihood estimates and likelihood ratio statistics are reviewed, with an emphasis on so-called switching algorithms. The implementation of these algorithms is discussed and illustrated using an empirical example.
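The idea behind a switching algorithm can be seen in a stripped-down rank-one error-correction model: given β, the loadings α follow from least squares, and given α, β follows from another least squares step, with the two updates alternated until convergence. The sketch below does exactly that on simulated data; it omits deterministic terms, lagged differences and the error covariance, so it is not the exact maximum likelihood procedure reviewed in the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

# Simulate a rank-one error-correction model  dY_t = alpha * (beta' Y_{t-1}) + e_t.
T, k = 500, 3
beta_true = np.array([1.0, -1.0, 0.5])
alpha_true = np.array([-0.3, 0.1, 0.0])
Y = np.zeros((T, k))
for t in range(1, T):
    Y[t] = Y[t - 1] + alpha_true * (Y[t - 1] @ beta_true) + rng.normal(scale=0.5, size=k)

dY, Ylag = np.diff(Y, axis=0), Y[:-1]

beta = np.array([1.0, 0.0, 0.0])                 # normalisation: first element fixed at 1
for _ in range(200):
    u = Ylag @ beta                              # error-correction term given beta
    alpha = (dY.T @ u) / (u @ u)                 # least squares of each equation of dY on u
    # Given alpha, minimise ||dY - (Ylag beta) alpha'||^2 over beta:
    #   (alpha'alpha) (Ylag'Ylag) beta = Ylag' dY alpha
    beta_new = np.linalg.solve((alpha @ alpha) * (Ylag.T @ Ylag), Ylag.T @ dY @ alpha)
    beta_new = beta_new / beta_new[0]            # re-impose the normalisation
    if np.max(np.abs(beta_new - beta)) < 1e-10:
        beta = beta_new
        break
    beta = beta_new

print("beta estimate (normalised):", np.round(beta, 3))
print("beta true     (normalised):", np.round(beta_true / beta_true[0], 3))
```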
