Similar articles
19 similar articles found (search time: 140 ms)
1.
李凤 《价值工程》2011,30(25):289-290
Under successively type-II censored samples, parameter estimation for the two-parameter Weibull distribution is discussed, and inverse moment estimators of the two parameters are derived. A simulation comparison with maximum likelihood estimation shows that the inverse moment estimators outperform the maximum likelihood estimators.
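The maximum likelihood side of the comparison above can be sketched concretely. The following assumes simple type-II censoring (only the r smallest of n lifetimes observed) rather than the paper's successive scheme, and solves the one-dimensional profile score equation for the shape parameter, which is monotone increasing and can be bisected:

```python
import math
import random

def weibull_mle_type2(times, n):
    """MLE of (shape, scale) for Weibull data where only the r smallest
    of n lifetimes are observed (simple type-II censoring)."""
    r, t_r = len(times), max(times)
    mean_log = sum(math.log(t) for t in times) / r

    def score(m):
        # derivative of the profile log-likelihood in the shape parameter;
        # it is increasing in m, so bisection on a sign change finds the root
        s0 = sum(t ** m for t in times) + (n - r) * t_r ** m
        s1 = (sum(t ** m * math.log(t) for t in times)
              + (n - r) * t_r ** m * math.log(t_r))
        return s1 / s0 - 1.0 / m - mean_log

    lo, hi = 0.05, 50.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if score(mid) < 0 else (lo, mid)
    m = 0.5 * (lo + hi)
    scale = ((sum(t ** m for t in times) + (n - r) * t_r ** m) / r) ** (1.0 / m)
    return m, scale

# demo: true shape 2, scale 1; censor the largest 20% of n = 500 lifetimes
random.seed(1)
full = sorted(random.weibullvariate(1.0, 2.0) for _ in range(500))
shape_hat, scale_hat = weibull_mle_type2(full[:400], 500)
```

With a sample this size the estimates land close to the true values; the inverse moment estimator the paper favors is not reproduced here.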

2.
Improving the asymptotic efficiency of the quasi-maximum likelihood estimator for spatial dynamic panel models   Total citations: 2 (self: 0, other: 2)
Lee and Yu (2008) studied the large-sample properties of the quasi-maximum likelihood estimator for a class of spatial dynamic panel models with both individual and time fixed effects. This paper shows that when the disturbances are non-normal, the asymptotic efficiency of the quasi-maximum likelihood estimator can be improved further. To this end, we construct a general set of moment conditions, combining undetermined matrices, that nests the first-order conditions of the log-likelihood as a special case. From the standpoint of non-redundant moment conditions, choosing the optimal undetermined matrices yields the best generalized method of moments estimator. We prove that when the disturbances are normally distributed, the best GMM estimator and the quasi-maximum likelihood estimator are asymptotically equivalent; when the disturbances are non-normal, the GMM estimator is asymptotically more efficient. Monte Carlo results agree with the theoretical predictions.

3.
Maximum likelihood estimation with missing data in microeconometric analysis   Total citations: 3 (self: 0, other: 3)
Missing data are common in microeconometric analysis. The traditional remedies, deleting observations with missing values on the variables of interest or replacing missing values with the variable mean, often yield biased samples. Maximum likelihood can handle and estimate missing data effectively. This paper first introduces maximum likelihood estimation for missing data, then applies it to the missing values in an actual survey dataset and compares and evaluates the results against the traditional treatments.
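A toy illustration of why a likelihood-based treatment beats deletion (the paper's own estimator and data are not shown here): with X fully observed, Y missing at random given X, and joint normality, the ML estimate of E[Y] factors into a complete-case regression plus the full-sample distribution of X. The variable names and the monotone missing pattern below are illustrative assumptions:

```python
from statistics import mean

def ml_mean_y(x, y):
    """ML estimate of E[Y] under MAR and bivariate normality when some
    y[i] are missing (None): regress Y on X in the complete cases, then
    evaluate the fitted line at the mean of *all* observed X values
    (the factored-likelihood idea)."""
    pairs = [(a, b) for a, b in zip(x, y) if b is not None]
    mx = mean(a for a, _ in pairs)
    my = mean(b for _, b in pairs)
    beta = (sum((a - mx) * (b - my) for a, b in pairs)
            / sum((a - mx) ** 2 for a, _ in pairs))
    return my + beta * (mean(x) - mx)

# Y = 2X exactly, but Y is missing precisely where X is large:
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, None, None]
cc_mean = mean(b for b in y if b is not None)  # complete-case mean: biased low
ml_est = ml_mean_y(x, y)                       # recovers the true mean of Y
```

Here listwise deletion gives 3.0 while the likelihood-based estimate gives 5.0, the true mean of Y over all four cases.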

4.
荆源 《价值工程》2011,30(27):23-24
Based on a progressively type-II censored model, estimation of reliability indices for the two-parameter exponential distribution is discussed. Maximum likelihood estimators (MLEs) and Bayes estimators of the shape parameter, the scale parameter, and the reliability function are derived; finally, the MSEs of the Bayes and maximum likelihood estimators are compared by Monte Carlo simulation.
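For the simple (non-progressive) type-II censoring case, the two-parameter exponential MLE is closed-form, which makes a compact sketch possible; the Bayes estimators and the progressive scheme in the abstract are not reproduced here:

```python
def two_param_exp_mle(times, n):
    """MLE of (location mu, scale sigma) for the two-parameter exponential
    from the r smallest of n lifetimes (simple type-II censoring):
    mu_hat is the first failure time, and sigma_hat is the total time on
    test beyond it divided by the number of observed failures."""
    t = sorted(times)
    r, mu = len(t), t[0]
    ttt = sum(ti - mu for ti in t) + (n - r) * (t[-1] - mu)
    return mu, ttt / r

# demo: 3 observed failures out of 5 units; ttt = (0 + 1 + 3) + 2 * 3 = 10
mu_hat, sigma_hat = two_param_exp_mle([2.0, 3.0, 5.0], n=5)
```

The reliability function then follows by plugging the estimates into R(t) = exp(-(t - mu)/sigma) for t > mu.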

5.
荆源 《价值工程》2011,30(26):315-315
Under type-II censored samples, maximum likelihood and interval estimation of the environmental factor of the exponential distribution are discussed. To study estimation accuracy, the precision of the confidence interval for the environmental factor is examined by stochastic simulation.
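A minimal sketch of the point estimate (the interval construction is not reproduced): under type-II censoring the exponential mean is estimated by total time on test over the number of failures, and one common definition of the environmental factor is the ratio of the mean lifetimes in the two environments. The paper's exact definition may differ, so treat the ratio below as an assumption:

```python
def exp_mean_mle(times, n):
    """MLE of the exponential mean theta from the r smallest of n
    lifetimes: total time on test divided by the number of failures."""
    r, t_r = len(times), max(times)
    return (sum(times) + (n - r) * t_r) / r

def env_factor(times1, n1, times2, n2):
    """Point estimate of the environmental factor k = theta2 / theta1."""
    return exp_mean_mle(times2, n2) / exp_mean_mle(times1, n1)

# demo: theta1_hat = 9/3 = 3, theta2_hat = 30/3 = 10, so k_hat = 10/3
k_hat = env_factor([1.0, 2.0, 3.0], 4, [2.0, 4.0, 6.0], 6)
```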

6.
To better assess the effect of foreign direct investment on the development of the logistics industry, this paper uses Chinese provincial panel data for 2001-2010 and applies pooled cross-section least squares, fixed-effects models, generalized least squares, and maximum likelihood estimation to examine empirically the effect of FDI on regional logistics development in China. The results show that the elasticity of FDI's effect on logistics development lies between 0.1034 and 0.249, and the regression coefficients are significantly positive and robust.

7.
Maximum likelihood estimation of the effective spread   Total citations: 1 (self: 0, other: 1)
The effective spread is an important measure of the transaction costs of financial assets. Based on Roll's price model and the approximately normal distribution of the log-price range, this paper proposes an approximate maximum likelihood estimator of the effective spread, and uses numerical simulation to compare its accuracy under various conditions with Roll's covariance estimator, the Bayesian estimator, and the High-Low estimator from the earlier literature. The simulations show that both in the ideal case of continuous trading and in the non-ideal case where trading is discontinuous and prices are not fully observed, the maximum likelihood and High-Low estimators are more accurate than the covariance and Bayesian estimators; when volatility is relatively low, the maximum likelihood estimator is more accurate than the High-Low estimator; moreover, in the non-ideal case the maximum likelihood estimator is more robust than the High-Low estimator.
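The covariance benchmark the paper compares against is easy to write down (the proposed ML estimator itself relies on the log-range distribution and is not reproduced here). Under Roll's model the first-order autocovariance of price changes equals minus the squared half-spread:

```python
import math
import random

def roll_spread(prices):
    """Roll's serial-covariance estimator of the effective spread:
    s = 2 * sqrt(-cov(dp_t, dp_{t-1})). Returns nan when the sample
    autocovariance comes out positive (estimator undefined)."""
    dp = [b - a for a, b in zip(prices, prices[1:])]
    u, v = dp[:-1], dp[1:]
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (len(u) - 1)
    return 2.0 * math.sqrt(-cov) if cov < 0 else float("nan")

# demo under Roll's model: random-walk efficient price plus a bid-ask bounce
random.seed(7)
s_true, mid = 0.10, 100.0
prices = []
for _ in range(20000):
    mid += random.gauss(0.0, 0.01)
    prices.append(mid + random.choice((-1.0, 1.0)) * s_true / 2.0)
spread_hat = roll_spread(prices)
```

On this simulated series the recovered spread is close to the true 0.10.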

8.
张丽丽 《价值工程》2011,30(30):211-211
Through two concrete examples, this paper shows that when the possible values of a continuous population do not span (-∞, +∞), solving point-estimation problems with the first definition of the likelihood function not only lets students grasp maximum likelihood estimation easily, but also deepens their understanding of the statistical idea that samples come from the population and estimation cannot do without the sample.
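The classic illustration of a parameter-dependent support is U(0, θ); whether this is one of the paper's two examples is an assumption, but it captures the point of the abstract:

```python
def uniform_theta_mle(sample):
    """MLE of theta for U(0, theta): L(theta) = theta**(-n) when
    theta >= max(sample) and 0 otherwise, so the likelihood is
    maximised at the sample maximum. The support constraint, not a
    derivative set to zero, determines the estimate."""
    if min(sample) < 0:
        raise ValueError("a U(0, theta) sample must be non-negative")
    return max(sample)

theta_hat = uniform_theta_mle([0.2, 1.7, 0.9])  # 1.7
```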

9.
Maximum likelihood estimation is another computational method of estimation, first proposed by Gauss and later named and formalized by the British statistician Fisher. The method is widely used; a current topic of laboratory research is to apply random experiments, on the basis of the maximum likelihood principle, to the statistical analysis of the resulting data. This article applies maximum likelihood estimation to study the distribution parameters of coal-seam gas adsorption constants.

10.
《价值工程》2017,(6):114-117
To address fluorescence-value bias caused by overlapping emission spectra in fluorescence detection, a method is proposed that identifies the compensation matrix using a bias-compensated recursive least squares algorithm. First, in a multiple-input multiple-output (MIMO) setting, parameter estimates are derived by recursive least squares iteration from fluorescence values measured for single-dye and multi-dye spectra. A correction term is then introduced to compensate for errors caused by process noise during acquisition, and the bias-compensated recursive least squares iterate is computed. Theoretical analysis and simulation show that the algorithm's parameter estimation error rate is below 1%, a 50% improvement over plain recursive least squares; the algorithm effectively improves estimation accuracy while reducing the influence of noise.
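The recursive least squares core that the paper's method builds on can be sketched as follows; the bias-compensation correction term is the paper's contribution and is not reproduced here, so this is only the plain RLS baseline:

```python
def rls_fit(phis, ys, lam=1.0, p0=1e6):
    """Plain recursive least squares: theta <- theta + K * (y - phi'theta),
    with covariance update P <- (P - K phi'P) / lam. lam = 1 means no
    forgetting; p0 sets a weak prior on theta."""
    p = len(phis[0])
    theta = [0.0] * p
    P = [[p0 if i == j else 0.0 for j in range(p)] for i in range(p)]
    for phi, y in zip(phis, ys):
        Pphi = [sum(P[i][j] * phi[j] for j in range(p)) for i in range(p)]
        denom = lam + sum(phi[i] * Pphi[i] for i in range(p))
        K = [v / denom for v in Pphi]
        err = y - sum(phi[i] * theta[i] for i in range(p))
        theta = [theta[i] + K[i] * err for i in range(p)]
        P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(p)]
             for i in range(p)]
    return theta

# demo: recover y = 2*u1 + 3*u2 from noise-free regressors
phis = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0], [1.0, 3.0]]
ys = [2 * a + 3 * b for a, b in phis]
theta_hat = rls_fit(phis, ys)
```

With noise-free data and a weak prior the recursion converges to the batch least squares solution.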

11.
A new bivariate generalized Poisson distribution   Total citations: 1 (self: 0, other: 1)
In this paper, a new bivariate generalized Poisson distribution (GPD) that allows any type of correlation is defined and studied. The marginal distributions of the bivariate model are the univariate GPDs. The parameters of the bivariate distribution are estimated by using the moment and maximum likelihood methods. Some test statistics are discussed and one numerical data set is used to illustrate the applications of the bivariate model.

12.
A smooth and detailed distribution is fitted to coarsely grouped frequency data by a nonparametric approach, based on penalized maximum likelihood. The estimated distribution conserves mean and variance of the data. The numerical solution is described and a compact and simplified algorithm is given. The procedure is applied to two empirical datasets.

13.
A very well-known model in software reliability theory is that of Littlewood (1980). The (three) parameters in this model are usually estimated by means of the maximum likelihood method. The system of likelihood equations can have more than one solution. Only one of them will be consistent, however. In this paper we present a different, more analytical approach, exploiting the mathematical properties of the log-likelihood function itself. Our belief is that the ideas and methods developed in this paper could also be of interest for statisticians working on the estimation of the parameters of the generalised Pareto distribution. For those more generally interested in maximum likelihood the paper provides a 'practical case', indicating how complex matters may become when only three parameters are involved. Moreover, readers not familiar with counting process theory and software reliability are given a first introduction.

14.
We use extreme-value theory to estimate the ultimate world records for the 100-m running, for both men and women. For this aim we collected the fastest personal best times set between January 1991 and June 2008. Estimators of the extreme-value index are based on a certain number of upper order statistics. To optimize this number of order statistics we minimize the asymptotic mean-squared error of the moment estimator. Using the thus obtained estimate for the extreme-value index, the right endpoint of the speed distribution is estimated. The corresponding time can be interpreted as the estimated ultimate world record: the best possible time that could be run in the near future. We find 9.51 seconds for the 100-m men and 10.33 seconds for the women.
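The moment estimator of the extreme-value index referenced above can be sketched directly; the mean-squared-error-optimal choice of k and the endpoint back-transformation used in the paper are omitted. The demo checks the estimator on the uniform distribution, whose true index is -1:

```python
import math
import random

def moment_evi(sample, k):
    """Dekkers-Einmahl-de Haan moment estimator of the extreme-value
    index gamma, built from the k largest order statistics of a positive
    sample. gamma < 0 signals a finite right endpoint."""
    x = sorted(sample)
    base = math.log(x[-k - 1])
    excess = [math.log(v) - base for v in x[-k:]]  # log-excesses over X_(n-k)
    m1 = sum(excess) / k
    m2 = sum(e * e for e in excess) / k
    return m1 + 1.0 - 0.5 / (1.0 - m1 * m1 / m2)

# demo: U(0,1) has a finite right endpoint and extreme-value index -1
random.seed(3)
gamma_hat = moment_evi([random.random() for _ in range(20000)], k=1000)
```

A negative estimate, as here, is what licenses the finite-endpoint (ultimate-record) interpretation in the paper.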

15.
Pooling of data is often carried out to protect privacy or to save cost, with the claimed advantage that it does not lead to much loss of efficiency. We argue that this does not give the complete picture as the estimation of different parameters is affected to different degrees by pooling. We establish a ladder of efficiency loss for estimating the mean, variance, skewness and kurtosis, and more generally multivariate joint cumulants, in powers of the pool size. The asymptotic efficiency of the pooled data non-parametric/parametric maximum likelihood estimator relative to the corresponding unpooled data estimator is reduced by a factor equal to the pool size whenever the order of the cumulant to be estimated is increased by one. The implications of this result are demonstrated in case-control genetic association studies with interactions between genes. Our findings provide a guideline for the discriminate use of data pooling in practice and the assessment of its relative efficiency. As exact maximum likelihood estimates are difficult to obtain if the pool size is large, we address briefly how to obtain computationally efficient estimates from pooled data and suggest Gaussian estimation and non-parametric maximum likelihood as two feasible methods.

16.
The limit distribution of the quasi-maximum likelihood estimator (QMLE) for parameters in the ARMA-GARCH model remains an open problem when the process has infinite 4th moment. We propose a self-weighted QMLE and show that it is consistent and asymptotically normal under only a fractional moment condition. Based on this estimator, the asymptotic normality of the local QMLE is established for the ARMA model with GARCH (finite variance) and IGARCH errors. Using the self-weighted and the local QMLEs, we construct Wald statistics for testing linear restrictions on the parameters, and their limiting distributions are given. In addition, we show that the tail index of the IGARCH process is always 2, which is of independent interest.

17.
We consider moment based estimation methods for estimating parameters of the negative binomial distribution that are almost as efficient as maximum likelihood estimation and far superior to the celebrated zero term method and the standard method of moments estimator. Maximum likelihood estimators are difficult to compute for dependent samples such as samples generated from the negative binomial first-order autoregressive integer-valued processes. The power method of estimation is suggested as an alternative to maximum likelihood estimation for such samples and a comparison is made of the asymptotic normalized variance between the power method, method of moments and zero term method estimators.
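The standard method-of-moments baseline the abstract benchmarks against can be written down directly (the power method itself is not reproduced here). This uses the parameterisation with mean r(1-p)/p and variance r(1-p)/p²:

```python
def negbin_mom(sample):
    """Method-of-moments estimates (r, p) for the negative binomial with
    mean r(1-p)/p and variance r(1-p)/p**2: p_hat = mean/var and
    r_hat = mean**2 / (var - mean). Requires overdispersion (var > mean)."""
    n = len(sample)
    m = sum(sample) / n
    v = sum((x - m) ** 2 for x in sample) / (n - 1)  # unbiased sample variance
    if v <= m:
        raise ValueError("sample variance <= mean: estimator undefined")
    return m * m / (v - m), m / v

# demo: sample mean 2.5, sample variance 25, so p_hat = 0.1
r_hat, p_hat = negbin_mom([0, 0, 0, 10])
```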

18.
The two-parameter Pareto distribution provides a reasonably good fit to the distributions of income and property value, and explains many empirical phenomena. For censored data, the two parameters are usually estimated by maximum likelihood, which is computationally involved. This investigation proposes a weighted least squares estimator of the parameters. The method is comparatively concise and easy to grasp, and applies to either complete or truncated data. Simulation studies are conducted to show the feasibility of the proposed method; they demonstrate that the weighted least squares estimator outperforms unweighted least squares estimators and is very close to the maximum likelihood estimator.
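A sketch of the unweighted log-survival regression that the paper's weighted estimator refines; the weighting scheme itself is not reproduced. It exploits the Pareto survival identity S(x) = (x_m / x)^alpha, which is linear in log x on the log-survival scale:

```python
import math
import random

def pareto_ls(sample):
    """Unweighted least-squares fit of Pareto(x_m, alpha) on the
    log-survival scale, log S(x) = alpha*log(x_m) - alpha*log(x),
    with plotting positions (i + 0.5)/n so every point stays finite."""
    x = sorted(sample)
    n = len(x)
    lx = [math.log(v) for v in x]
    ls = [math.log(1.0 - (i + 0.5) / n) for i in range(n)]
    mx, ms = sum(lx) / n, sum(ls) / n
    slope = (sum((a - mx) * (b - ms) for a, b in zip(lx, ls))
             / sum((a - mx) ** 2 for a in lx))
    alpha = -slope
    xm = math.exp((ms - slope * mx) / alpha)  # intercept = alpha*log(x_m)
    return xm, alpha

# demo: Pareto(x_m = 1, alpha = 3) via inverse transform X = x_m * U**(-1/alpha)
random.seed(11)
data = [random.random() ** (-1.0 / 3.0) for _ in range(5000)]
xm_hat, alpha_hat = pareto_ls(data)
```

On this simulated sample both parameters come back close to their true values, consistent with the abstract's claim that the regression-style fit is simple yet competitive.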

19.
This paper uses Monte Carlo experimentation to investigate the finite sample properties of the maximum likelihood (ML) and corrected ordinary least squares (COLS) estimators of the half-normal stochastic frontier production function. Results indicate substantial bias in both ML and COLS when the percentage contribution of inefficiency in the composed error (denoted by *) is small, and also that ML should be used in preference to COLS because of large mean square error advantages when * is greater than 50%. The performance of a number of tests of the existence of technical inefficiency is also investigated. The Wald and likelihood ratio (LR) tests are shown to have incorrect size. A one-sided LR test and a test of the significance of the third moment of the OLS residuals are suggested as alternatives, and are shown to have correct size, with the one-sided LR test having the better power of the two. The author would like to thank Bill Griffiths, George Battese, Howard Doran, Bill Greene and two anonymous referees for valuable comments. Any errors which remain are those of the author.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号