Similar Documents
18 similar documents found (search time: 78 ms)
1.
Maximum Likelihood Estimation of Missing Data in Microeconometric Analysis
Missing data are common in microeconometric analysis. The traditional remedies are to delete the observations with missing values on the variables being analysed, or to replace the missing values with the variable mean; both often yield biased samples. Maximum likelihood estimation can handle and estimate missing data effectively. This paper first introduces the maximum likelihood approach to missing data, then applies it to the missing data in an actual survey dataset, and compares and evaluates the results against those of the traditional methods.
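A minimal sketch of the likelihood idea in the simplest bivariate-normal case (Anderson's factorization): X is always observed, Y is missing for some cases (missing at random given X), and the MLE of E[Y] adjusts the complete-case mean of Y using the regression of Y on X and the full-sample mean of X. All names and the simulated data below are illustrative, not the paper's survey setup.

```python
import random

def mle_mean_y(x, y):
    # complete cases: pairs where Y is observed
    pairs = [(a, b) for a, b in zip(x, y) if b is not None]
    xc = [a for a, _ in pairs]
    yc = [b for _, b in pairs]
    mxc = sum(xc) / len(xc)
    myc = sum(yc) / len(yc)
    # slope of Y on X from complete cases
    beta = (sum((a - mxc) * (b - myc) for a, b in pairs)
            / sum((a - mxc) ** 2 for a in xc))
    # the MLE uses the mean of X over ALL cases, not just complete ones
    return myc + beta * (sum(x) / len(x) - mxc)

random.seed(1)
x = [random.gauss(0.0, 1.0) for _ in range(2000)]
y = [2.0 + 1.5 * xi + random.gauss(0.0, 1.0) for xi in x]
y_obs = [yi if xi < 0.5 else None for xi, yi in zip(x, y)]   # MAR given X
cc_mean = (sum(v for v in y_obs if v is not None)
           / sum(v is not None for v in y_obs))              # biased low
ml_mean = mle_mean_y(x, y_obs)                               # near the true 2.0
```

Here deletion (complete-case analysis) underestimates E[Y] because Y is missing precisely where it tends to be large, while the likelihood-based estimate corrects for this through X.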

2.
Li Feng, Value Engineering (价值工程), 2011, 30(25): 289-290
Based on progressively Type-II censored samples, this paper discusses parameter estimation for the two-parameter Weibull distribution and derives inverse moment estimators of both parameters. A simulation comparison with maximum likelihood estimation shows that the inverse moment estimators outperform the maximum likelihood estimators.
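For reference, a sketch of the maximum likelihood benchmark in the much simpler complete-sample case (NOT the paper's progressively censored setting): the Weibull shape k solves the profile equation sum(x^k ln x)/sum(x^k) - 1/k - mean(ln x) = 0, a monotone equation solved here by bisection, after which the scale is lam = (mean(x^k))^(1/k).

```python
import math
import random

def weibull_mle(xs, lo=1e-3, hi=100.0, iters=80):
    mlog = sum(math.log(v) for v in xs) / len(xs)
    def g(k):                            # profile score in the shape k
        s1 = sum(v ** k * math.log(v) for v in xs)
        s0 = sum(v ** k for v in xs)
        return s1 / s0 - 1.0 / k - mlog
    for _ in range(iters):               # bisection: g increases in k
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    lam = (sum(v ** k for v in xs) / len(xs)) ** (1.0 / k)
    return k, lam

random.seed(0)
sample = [random.weibullvariate(2.0, 1.5) for _ in range(5000)]  # scale 2, shape 1.5
k_hat, lam_hat = weibull_mle(sample)
```

The censored-sample likelihood adds survival terms for the withdrawn units, but the same profile-and-root-find structure carries over.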

3.
For Type-II censored data under the cone-order restriction α1λ1 ≤ λ2 ≤ α2λ1 (α1 > 0, α2 > 0, α1 ≤ α2), this paper derives the restricted maximum likelihood estimators λ̂i (i = 1, 2) of the means λi of two exponential populations. The λ̂i are shown to have smaller mean squared error than the usual estimators Si, and the asymptotic efficiency of λ̂i relative to Si is given, i = 1, 2.

4.
Improving the Asymptotic Efficiency of Quasi-Maximum Likelihood Estimation for Spatial Dynamic Panel Models
Lee and Yu (2008) studied the large-sample properties of the quasi-maximum likelihood (QML) estimator for a class of spatial dynamic panel models with both individual and time fixed effects. This paper shows that when the disturbances are non-normal, the asymptotic efficiency of the QML estimator can be further improved. To this end, we construct a set of moment conditions of general form containing undetermined matrices, chosen so as to nest the first-order conditions of the log-likelihood as a special case. Selecting the optimal undetermined matrices from the standpoint of non-redundant moment conditions yields the best generalized method of moments (GMM) estimator. We prove that the best GMM estimator is asymptotically equivalent to the QML estimator when the disturbances are normally distributed, and asymptotically more efficient when they are not. Monte Carlo results agree with the theoretical predictions.

5.
Jing Yuan, Value Engineering (价值工程), 2011, 30(26): 315
This paper discusses maximum likelihood and interval estimation of the environment factor of the exponential distribution under Type-II censored sampling. To study the precision of the estimates, the accuracy of the confidence interval for the environment factor is examined by stochastic simulation.
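A minimal sketch of the point estimate involved: under Type-II censoring (stop at the r-th failure out of n units), the MLE of an exponential mean is (sum of the r observed failure times + (n - r) times the last observed time) / r. The environment factor is taken here as the ratio of the two estimated means across environments; that definition and the names `lab`/`field` are assumptions for illustration, not the paper's exact setup.

```python
import random

def exp_mean_type2(sample, r):
    obs = sorted(sample)[:r]             # the r smallest = observed failures
    # total time on test divided by the number of observed failures
    return (sum(obs) + (len(sample) - r) * obs[-1]) / r

random.seed(7)
lab   = [random.expovariate(1.0 / 10.0) for _ in range(400)]   # mean 10
field = [random.expovariate(1.0 / 25.0) for _ in range(400)]   # mean 25
factor = exp_mean_type2(field, 300) / exp_mean_type2(lab, 300) # near 2.5
```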

6.
This paper proposes a new and effective column-pivoted QR preprocessing algorithm for the integer parameter estimation problem in linear models. Based on QR decomposition with column pivoting, the algorithm uses an iterative procedure to obtain the preprocessing integer matrix. The upper triangular factor R produced by this preprocessing effectively reduces the time complexity of solving the integer parameter estimation problem, especially for high-dimensional problems.
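A sketch of the building block the abstract relies on (not the paper's iterative preprocessing itself): QR decomposition with column pivoting via modified Gram-Schmidt. At each step the remaining column with the largest residual norm is moved to the front, which is what concentrates the "large" directions into the leading diagonal of R. The code assumes A has full column rank.

```python
import math

def qr_column_pivot(A):
    """Return Q (list of orthonormal columns), upper-triangular R, and perm
    such that A[:, perm] = Q R."""
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]  # column copies
    perm = list(range(n))
    Q = []
    R = [[0.0] * n for _ in range(n)]
    for k in range(n):
        # pivot: bring the remaining column of largest residual norm to k
        p = max(range(k, n), key=lambda j: sum(v * v for v in cols[j]))
        cols[k], cols[p] = cols[p], cols[k]
        perm[k], perm[p] = perm[p], perm[k]
        for i in range(k):                  # swap finished rows of R as well
            R[i][k], R[i][p] = R[i][p], R[i][k]
        nrm = math.sqrt(sum(v * v for v in cols[k]))
        R[k][k] = nrm
        q = [v / nrm for v in cols[k]]
        Q.append(q)
        for j in range(k + 1, n):           # orthogonalize trailing columns
            r = sum(qi * cj for qi, cj in zip(q, cols[j]))
            R[k][j] = r
            cols[j] = [cj - r * qi for cj, qi in zip(cols[j], q)]
    return Q, R, perm

A = [[2.0, 0.0, 1.0],
     [0.0, 1.0, 1.0],
     [1.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
Q, R, perm = qr_column_pivot(A)
```

The pivoting guarantees a non-increasing diagonal of R, which is the property integer least-squares solvers exploit to prune their search.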

7.
This paper constructs a robust statistic for testing the spatial error model. Built on the framework of Bera and Yoon (1993), it asymptotically possesses the good large-sample properties of the Cα test statistic. The test not only effectively reduces the influence of spatial lag effects on statistical inference, but also greatly simplifies computation. By comparison, we find that when a significant spatial lag effect is present in the true model, the test statistic proposed by Anselin et al. (1996) is noticeably biased, whereas ours remains valid. Monte Carlo simulation results agree with our expectations.

8.
Research objective: to provide a method for estimating and testing structural change in ordered-response models. Methods: a smooth transition function is introduced into the ordered model to describe structural change; on this basis a Lagrange multiplier (LM) statistic is constructed to test for structural change, and the model is estimated by maximum likelihood. Findings: the LM statistic has a standard asymptotic chi-squared distribution and is fairly robust to different error distributions; the maximum likelihood estimator is consistent and asymptotically normal. Applying the method to the relationship between income and happiness, we find significant nonlinear structural change in the effect of income on happiness: once income rises above the 80th percentile of the social income distribution, its effect on happiness weakens. Innovation: a new and relatively simple method for estimating and testing structural change in ordered models. Value: the method is widely applicable to structural change analysis in subjective evaluation problems.

9.
Banks face queueing problems. To address this, customer queueing data were collected at a bank branch by field observation. Treating the bank's service process as a stochastic system, an M/M/c/∞/∞/FCFS queueing model of the banking service system is built and solved. The empirical study of the queueing model provides decision support for banks to rationally optimize their service systems.
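The steady-state measures of the M/M/c/∞/∞/FCFS model named above can be sketched with the Erlang-C formula; the numeric rates below (1.8 arrivals per minute, mean service time 2 minutes, 5 tellers) are illustrative, not the paper's observed data.

```python
import math

def mmc_wait(lam, mu, c):
    """lam = arrival rate, mu = service rate per teller, c = number of
    tellers; requires rho = lam / (c * mu) < 1 for a steady state."""
    a = lam / mu                                 # offered load in Erlangs
    rho = a / c
    p0 = 1.0 / (sum(a ** k / math.factorial(k) for k in range(c))
                + a ** c / (math.factorial(c) * (1 - rho)))
    erlang_c = a ** c / (math.factorial(c) * (1 - rho)) * p0  # P(wait > 0)
    wq = erlang_c / (c * mu - lam)               # mean wait in queue
    return erlang_c, wq

pc, wq = mmc_wait(lam=1.8, mu=0.5, c=5)  # 1.8 customers/min, 2-min service
```

Sweeping `c` in such a model is how one trades teller cost against the mean queueing delay `wq`.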

10.
Using mathematical statistics, this paper studies point estimates of the reliability index when one or more parameters of the normally distributed performance function of a hydraulic structure are statistically unknown. The unbiasedness and consistency of the estimated reliability index are also discussed to some extent, and the application of the point estimate is studied in order to demonstrate the feasibility and effectiveness of the method for practical engineering problems.
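A minimal sketch of the quantity involved: if the performance function Z is normally distributed, the reliability index is beta = mu_Z / sigma_Z, and when both parameters are unknown a natural point estimate plugs in the sample mean and sample standard deviation. The simulated data are illustrative only.

```python
import math
import random

def reliability_index(z):
    n = len(z)
    m = sum(z) / n
    s = math.sqrt(sum((v - m) ** 2 for v in z) / (n - 1))  # sample std dev
    return m / s

random.seed(5)
z = [random.gauss(3.0, 1.0) for _ in range(5000)]  # true beta = 3.0 / 1.0
beta_hat = reliability_index(z)
```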

11.
Since the work of Little and Rubin (1987) no substantial advances in the analysis of explanatory regression models for incomplete data with missing not at random have been achieved, mainly due to the difficulty of verifying the randomness of the unknown data. In practice, the analysis of nonrandom missing data is done with techniques designed for datasets with random or completely random missing data, such as complete case analysis, mean imputation, regression imputation, maximum likelihood or multiple imputation. However, the data conditions required to minimize the bias derived from an incorrect analysis have not been fully determined. In the present work, several Monte Carlo simulations have been carried out to establish the best strategy of analysis for random missing data applicable in datasets with nonrandom missing data. The factors involved in the simulations are sample size, percentage of missing data, predictive power of the imputation model and existence of interaction between predictors. The results show that the smallest bias is obtained with maximum likelihood and multiple imputation techniques, although with low percentages of missing data, absence of interaction and high predictive power of the imputation model (frequent data structures in research on child and adolescent psychopathology) acceptable results are obtained with the simplest regression imputation.

12.
Felix Famoye, P. C. Consul, Metrika, 1995, 42(1): 127-138
The univariate generalized Poisson probability model has many applications in various areas such as engineering, manufacturing, survival analysis, genetics, shunting accidents, queueing, and branching processes. A correlated bivariate version of the univariate generalized Poisson distribution is defined and studied. Estimation of its parameters and some of its properties are also discussed.
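For concreteness, one common parameterization (Consul's) of the univariate pmf that the bivariate model builds on: P(X = x) = theta (theta + x lam)^(x-1) exp(-theta - x lam) / x!, with theta > 0 and 0 ≤ lam < 1; lam = 0 recovers the ordinary Poisson. The paper's exact notation may differ.

```python
import math

def gen_poisson_pmf(x, theta, lam):
    # Consul's generalized Poisson probability mass function
    return (theta * (theta + x * lam) ** (x - 1)
            * math.exp(-theta - x * lam) / math.factorial(x))

# for 0 <= lam < 1 the probabilities sum (numerically) to one
total = sum(gen_poisson_pmf(x, 2.0, 0.3) for x in range(120))
```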

13.
Abstract

It has been demonstrated recently that in small-to-medium samples the empirical significance levels of the asymptotic J-type tests for the SARAR model introduced by Kelejian (2008) can be controlled in many cases by the use of a bootstrap to construct a reference distribution. A feature of the popular GMM estimator in this context that deserves to receive more attention is that in small samples it will often deliver spatial parameter estimates that lie outside the invertibility region of the model. Using such illegitimate estimates to construct bootstrap samples is then problematic; the present paper finds that this practical obstacle may be removed by the use of quasi-maximum likelihood estimates that guarantee invertibility. The effects of different spatial weight patterns and sample size on the empirical significance levels and power of the tests are illustrated, and the paper demonstrates that estimation using QMLE, allied to a simple bootstrap, yields tests with reliable significance levels and reasonable power, in a majority of cases.




14.
We compare five methods for parameter estimation of a Poisson regression model for clustered data: (1) ordinary (naive) Poisson regression (OP), which ignores intracluster correlation; (2) Poisson regression with fixed cluster-specific intercepts (FI); (3) a generalized estimating equations (GEE) approach with an equi-correlation matrix; (4) an exact generalized estimating equations (EGEE) approach with an exact covariance matrix; and (5) maximum likelihood (ML). Special attention is given to the simplest case, Poisson regression with a cluster-specific random intercept, where the asymptotic covariance matrix is obtained in closed form. We prove that methods 1-5, except GEE, produce the same estimates of slope coefficients for balanced data (an equal number of observations in each cluster and the same vectors of covariates). All five methods lead to consistent estimates of slopes but have different efficiency for unbalanced data designs. It is shown that the FI approach can be derived as a limiting case of maximum likelihood when the cluster variance increases to infinity. Exact asymptotic covariance matrices are derived for each method. In terms of asymptotic efficiency, the methods split into two groups: OP & GEE and EGEE & FI & ML. Thus, contrary to the existing practice, there is no advantage in using GEE because it is substantially outperformed by EGEE and FI. In particular, EGEE does not require integration and is easy to compute, with the asymptotic variances of the slope estimates close to those of the ML.
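A hedged numerical illustration of the balanced-data claim: with the same covariate vector in every cluster, the slope MLEs from ordinary Poisson regression (OP, one common intercept) and from fixed cluster intercepts (FI) coincide. After concentrating out the intercept(s), each slope solves a monotone profile score equation in b, solved here by bisection; the simulated data and all names are illustrative.

```python
import math
import random

def wmean(xs, b):
    # mean of x weighted by exp(b * x)
    w = [math.exp(b * x) for x in xs]
    return sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)

def solve(score, lo=-10.0, hi=10.0):
    for _ in range(100):                 # bisection; score decreases in b
        mid = 0.5 * (lo + hi)
        if score(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(2)
xgrid = [0.0, 0.5, 1.0, 1.5]             # same covariates in every cluster
alphas = [0.2, 0.8, 1.5]                 # cluster-specific intercepts
y = [[float(sum(random.random() < math.exp(a + 0.7 * x) / 400
                for _ in range(400)))    # approx Poisson counts
      for x in xgrid] for a in alphas]
sxy = sum(yi * x for row in y for yi, x in zip(row, xgrid))

def op_score(b):                         # single intercept concentrated out
    allx = [x for _ in alphas for x in xgrid]
    return sxy - sum(map(sum, y)) * wmean(allx, b)

def fi_score(b):                         # one intercept per cluster concentrated out
    return sxy - sum(sum(row) * wmean(xgrid, b) for row in y)

b_op, b_fi = solve(op_score), solve(fi_score)
```

With balanced clusters the per-cluster weighted means in `fi_score` all coincide with the pooled one in `op_score`, so the two score equations share the same root; unbalanced designs break this equality.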

15.
For estimating p (≥ 2) independent Poisson means, the paper considers a compromise between maximum likelihood and empirical Bayes estimators. Such compromise estimators enjoy both good componentwise and ensemble properties. Research supported by NSF Grant Number MCS-8218091.
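A hedged illustration of the idea (not the paper's exact compromise): the Clevenson-Zidek empirical-Bayes estimator for p Poisson means shrinks every component MLE X_i toward zero by a common data-driven factor, which improves ensemble risk, while a simple "compromise" caps how far any single component may move from its MLE, protecting componentwise behaviour. The cap `max_move` is an assumed device for illustration.

```python
def clevenson_zidek(x):
    # delta_i = (1 - (p - 1)/(p - 1 + S)) * X_i, with S = sum of counts
    p, s = len(x), sum(x)
    shrink = 1.0 - (p - 1) / (p - 1 + s)
    return [shrink * xi for xi in x]

def compromise(x, max_move=1.0):
    # clamp each shrunken value to within max_move of its MLE
    return [min(max(d, xi - max_move), xi + max_move)
            for xi, d in zip(x, clevenson_zidek(x))]

counts = [0.0, 1.0, 3.0, 5.0, 20.0]
cz = clevenson_zidek(counts)      # all components shrunk toward zero
limited = compromise(counts)      # large moves capped at max_move
```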

16.
ML estimation of regression parameters with incomplete covariate information usually requires a distributional assumption regarding the covariates concerned, which introduces a source of misspecification. Semiparametric procedures avoid such assumptions at the expense of efficiency. In this paper a simulation study with small sample size is carried out to get an idea of the performance of the ML estimator under misspecification and to compare it with the semiparametric procedures when the former is based on a correct assumption. The results show that there is only a little gain from correct parametric assumptions, which does not justify the possibly large bias when the assumptions are not met. Additionally, a simple modification of the complete case estimator appears to be nearly semiparametric efficient.

17.
Maximum likelihood estimates are obtained for long data sets of bivariate financial returns using mixing representation of the bivariate (skew) Variance Gamma (VG) and two (skew) t distributions. By analysing simulated and real data, issues such as asymptotic lower tail dependence and competitiveness of the three models are illustrated. A brief review of the properties of the models is included. The present paper is a companion to papers in this journal by Demarta & McNeil and Finlay & Seneta.
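A sketch of the mixing (normal mean-variance mixture) representation that makes such ML fitting tractable: a skew variance gamma variate can be generated as X = mu + theta*G + sigma*sqrt(G)*Z with G ~ Gamma(shape=nu, scale=1/nu), so E[G] = 1, and Z ~ N(0, 1). The parameter names and this particular parameterization are illustrative; conventions differ across papers.

```python
import math
import random

def rvg(mu, theta, sigma, nu, rng):
    # gamma subordinator with unit mean, then a conditional normal draw
    g = rng.gammavariate(nu, 1.0 / nu)
    return mu + theta * g + sigma * math.sqrt(g) * rng.gauss(0.0, 1.0)

rng = random.Random(11)
xs = [rvg(0.0, 0.5, 1.0, 2.0, rng) for _ in range(20000)]
m = sum(xs) / len(xs)                        # near mu + theta = 0.5
v = sum((x - m) ** 2 for x in xs) / len(xs)  # near theta**2/nu + sigma**2 = 1.125
```

Conditioning on G reduces the likelihood to a normal one, which is why the mixing form is convenient for long return series.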

18.
This paper reviews methods for handling complex sampling schemes when analysing categorical survey data. It is generally assumed that the complex sampling scheme does not affect the specification of the parameters of interest, only the methodology for making inference about these parameters. The organisation of the paper is loosely chronological. Contingency table data are emphasised first before moving on to the analysis of unit‐level data. Weighted least squares methods, introduced in the mid 1970s along with methods for two‐way tables, receive early attention. They are followed by more general methods based on maximum likelihood, particularly pseudo maximum likelihood estimation. Point estimation methods typically involve the use of survey weights in some way. Variance estimation methods are described in broad terms. There is a particular emphasis on methods of testing. The main modelling methods considered are log‐linear models, logit models, generalised linear models and latent variable models. There is no coverage of multilevel models.

