Similar Documents
18 similar documents found (search time: 281 ms).
1.
This article examines the application of the empirical likelihood method to GARCH models. The method is used to construct an empirical likelihood ratio statistic that follows a chi-square distribution, from which confidence intervals are then built; finally, numerical simulation is used to illustrate the good performance of empirical likelihood applied to GARCH models.
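A minimal sketch of the empirical likelihood machinery the abstract builds on, applied to a much simpler target than a GARCH model (the mean of an i.i.d. sample); the function names and the choice of parameter are mine, and the chi-square calibration and interval inversion mirror the construction described above.

    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import chi2

    def el_ratio(x, mu):
        """-2 log empirical likelihood ratio for the mean of x, evaluated at mu."""
        d = x - mu
        if d.min() >= 0 or d.max() <= 0:          # mu outside the convex hull: ratio is +inf
            return np.inf
        eps = 1e-8
        lo, hi = -1.0 / d.max() + eps, -1.0 / d.min() - eps   # keep all 1 + lam*d > 0
        g = lambda lam: np.sum(d / (1.0 + lam * d))           # first-order condition in lam
        lam = brentq(g, lo, hi)
        return 2.0 * np.sum(np.log(1.0 + lam * d))

    x = np.random.default_rng(0).standard_t(df=5, size=200)
    cutoff = chi2.ppf(0.95, df=1)                 # chi-square(1) calibration
    grid = np.linspace(x.mean() - 1, x.mean() + 1, 401)
    ci = [mu for mu in grid if el_ratio(x, mu) <= cutoff]     # 95% EL confidence interval
    print(min(ci), max(ci))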

2.
张丽丽 《价值工程》2011,30(30):211-211
Through two concrete worked examples, this paper shows that when the range of possible values of a continuous population is not (-∞, +∞), using the first definition of the likelihood function to solve point estimation problems not only lets students grasp the method of maximum likelihood estimation easily, but also deepens their understanding of the statistical idea that the sample comes from the population and that estimation cannot be separated from the sample.
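A worked example in the spirit of the abstract (my choice of distribution, not necessarily one of the paper's two examples): for a sample x_1, ..., x_n from the uniform distribution on (0, θ), whose support is not the whole real line, writing the likelihood directly as the joint density makes the support restriction explicit; since L(θ) is decreasing in θ wherever it is positive, the maximiser sits at the boundary.

    L(\theta) = \prod_{i=1}^{n} \frac{1}{\theta}\,\mathbf{1}\{0 < x_i \le \theta\}
              = \theta^{-n}\,\mathbf{1}\{\theta \ge \max_i x_i\},
    \qquad
    \hat{\theta}_{\mathrm{MLE}} = \max_i x_i .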

3.
Maximum likelihood estimation of missing data in microeconometric analysis   (Cited by: 3; self-citations: 0; by others: 3)
Missing data are frequently encountered in microeconometric analysis. The traditional remedies are to delete the missing observations on the variables under analysis or to replace missing values with the variable means, and these practices often produce biased samples. Maximum likelihood estimation can handle and estimate missing data effectively. This paper first introduces maximum likelihood estimation for missing data, then applies it to the missing data in an actual survey dataset, and compares and evaluates the results against those obtained with the traditional treatments.
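The paper's survey data and model are not reproduced here; as a hedged sketch of the direct (observed-data) maximum likelihood idea, the snippet below fits a bivariate normal model in which some values of the second variable are missing (assumed missing at random), so complete cases contribute the joint density and incomplete cases contribute only the marginal density of the first variable.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm, multivariate_normal

    def neg_obs_loglik(params, x1, x2):
        m1, m2, log_s1, log_s2, a = params
        s1, s2, rho = np.exp(log_s1), np.exp(log_s2), np.tanh(a)   # keep parameters valid
        cov = np.array([[s1**2, rho*s1*s2], [rho*s1*s2, s2**2]])
        obs = ~np.isnan(x2)
        ll = multivariate_normal([m1, m2], cov).logpdf(np.column_stack([x1[obs], x2[obs]])).sum()
        ll += norm(m1, s1).logpdf(x1[~obs]).sum()                  # incomplete cases: marginal only
        return -ll

    rng = np.random.default_rng(1)
    z = rng.multivariate_normal([0.0, 1.0], [[1.0, 0.6], [0.6, 2.0]], size=500)
    x1, x2 = z[:, 0], z[:, 1].copy()
    x2[rng.random(500) < 0.3] = np.nan                             # 30% of x2 missing

    res = minimize(neg_obs_loglik, x0=np.zeros(5), args=(x1, x2), method="BFGS")
    print(res.x[:2])                                               # estimated means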

4.
Improving the asymptotic efficiency of quasi-maximum likelihood estimation for spatial dynamic panel models   (Cited by: 2; self-citations: 0; by others: 2)
Lee and Yu (2008) studied the large-sample properties of the quasi-maximum likelihood estimator for a class of spatial dynamic panel models with both individual and time fixed effects. This paper shows that when the disturbances are non-normal, the asymptotic efficiency of the quasi-maximum likelihood estimator can be improved further. To this end, we construct a set of moment conditions of general form, involving matrices to be determined, that nests the first-order conditions of the log-likelihood function as a special case. Choosing the optimal matrices from the standpoint of non-redundant moment conditions yields the best generalized method of moments (GMM) estimator. We prove that when the disturbances are normally distributed, the best GMM estimator and the quasi-maximum likelihood estimator are asymptotically equivalent; when the disturbances are non-normal, the GMM estimator is asymptotically more efficient than the quasi-maximum likelihood estimator. Monte Carlo results are consistent with the theoretical predictions.

5.
Logistic regression is the most widely used discrete choice model in econometrics. When the number of variables is large, maximum likelihood estimates become hard to interpret. This paper therefore proposes a non-convex penalized likelihood estimator for logistic regression based on a new penalty function, ArctanLASSO, for simultaneous parameter estimation and variable selection, and proves the n^{1/2}-consistency and oracle property of the estimator. An estimation algorithm combining a second-order approximation, the local linear approximation (LLA) method and gradient descent is given, and the regularization parameter is selected by minimizing the BIC criterion. Simulations show that with large samples the method outperforms the traditional LASSO, SCAD and MCP methods in both parameter estimation and variable selection, and it retains a clear advantage in small samples. A real-data analysis shows that the method strikes a good balance between goodness of fit and the selection of non-zero coefficients and is the preferred candidate model, which is of practical significance.
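A rough sketch of the estimation strategy the abstract outlines (LLA combined with gradient-type steps); the exact ArctanLASSO penalty and tuning used in the paper are not spelled out here, so the snippet assumes a penalty of the form λ·(2/π)·arctan(|β|/γ) and uses a proximal-gradient inner loop for the LLA-weighted L1 surrogate.

    import numpy as np

    def lla_arctan_logistic(X, y, lam, gamma=0.1, outer=5, inner=300):
        """Non-convex penalized logistic regression via LLA + proximal gradient (a sketch)."""
        n, d = X.shape
        step = 4.0 * n / (np.linalg.norm(X, ord=2) ** 2)   # safe step size for the logistic loss
        beta = np.zeros(d)
        for _ in range(outer):
            # LLA: linearise the assumed arctan penalty at the current |beta_j|
            w = lam * (2.0 / np.pi) * gamma / (gamma**2 + beta**2)
            for _ in range(inner):
                p = 1.0 / (1.0 + np.exp(-X @ beta))
                grad = X.T @ (p - y) / n                   # gradient of the average logistic loss
                z = beta - step * grad
                beta = np.sign(z) * np.maximum(np.abs(z) - step * w, 0.0)   # weighted soft-threshold
        return beta

    # usage: beta_hat = lla_arctan_logistic(X, y, lam=0.05); tune lam by BIC as in the abstract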

6.
In spatial lag models with linear parameters, the coefficients of the explanatory variables are usually assumed to be fixed constants. This paper first relaxes that assumption by letting the coefficients be unknown functions of some variable, proposing a new class of semiparametric varying-coefficient spatial lag models; it then derives the cross-sectional maximum likelihood estimator of the model and proves its consistency; finally, Monte Carlo simulation is used to examine the estimator's behaviour in small samples, and the results show that the proposed estimator still performs well under small-sample conditions.

7.
Maximum likelihood estimation of the effective spread   (Cited by: 1; self-citations: 0; by others: 1)
The effective spread is an important measure of the transaction cost of financial assets. Based on Roll's price model and the approximately normal distribution of the log-price range, this paper proposes an approximate maximum likelihood estimator of the effective spread, and uses numerical simulation to compare its accuracy under various conditions with the Roll covariance estimator, the Bayesian estimator and the High-Low estimator proposed in earlier literature. The simulations show that both in the ideal setting of continuous trading and in the non-ideal setting where trading is discontinuous and prices are not fully observed, the maximum likelihood and High-Low estimators are more accurate than the covariance and Bayesian estimators; when volatility is relatively small, the maximum likelihood estimator is more accurate than the High-Low estimator; moreover, in the non-ideal setting the maximum likelihood estimator is more robust than the High-Low estimator.
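For reference, a short sketch of the Roll covariance estimator that serves as one of the baselines in the comparison above (this is the classical estimator, not the paper's approximate maximum likelihood estimator, whose exact form is not reproduced here):

    import numpy as np

    def roll_spread(prices):
        """Roll (1984) covariance estimator of the effective spread from trade prices."""
        dp = np.diff(np.log(prices))                    # log-price changes, so a proportional spread
        cov = np.cov(dp[1:], dp[:-1])[0, 1]             # first-order serial covariance
        return 2.0 * np.sqrt(-cov) if cov < 0 else np.nan   # undefined if the covariance is positive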

8.
Maximum likelihood estimation is another computational approach to estimation; it was first proposed by Gauss and later named and formalised by the British statistician Fisher. The method is widely applied, and an important current topic of laboratory research is to carry out statistical analysis of experimental results on the basis of the maximum likelihood principle through randomised experiments. This article applies maximum likelihood estimation to study the distribution parameters of coal seam gas adsorption constants.

9.
Using methods of mathematical statistics, this paper studies point estimates of the reliability index for hydraulic structures when the performance function is normally distributed and the statistical characteristics of one or more of its parameters are unknown. The unbiasedness and consistency of the reliability index estimates are also discussed to some extent, and the application of the point estimates is examined in order to demonstrate the feasibility and effectiveness of the method for solving practical engineering problems.

10.
Using methods of mathematical statistics, this paper studies point estimates of the reliability index for hydraulic structures when the performance function is normally distributed and the statistical characteristics of one or more of its parameters are unknown. The unbiasedness and consistency of the reliability index estimates are also discussed to some extent, and the application of the point estimates is examined in order to demonstrate the feasibility and effectiveness of the method for solving practical engineering problems.
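In its simplest form, the point estimator that the two abstracts above appear to describe is the plug-in estimator obtained by replacing the unknown mean and standard deviation of the normal performance function Z by their sample counterparts; the notation below is my sketch of that idea, not necessarily the papers' exact estimator.

    \hat{\beta} = \frac{\bar{Z}}{S_Z},
    \qquad
    \bar{Z} = \frac{1}{n}\sum_{i=1}^{n} Z_i,
    \quad
    S_Z^{2} = \frac{1}{n-1}\sum_{i=1}^{n} \bigl(Z_i - \bar{Z}\bigr)^{2}.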

11.
12.
According to the law of likelihood, statistical evidence is represented by likelihood functions and its strength measured by likelihood ratios. This point of view has led to a likelihood paradigm for interpreting statistical evidence, which carefully distinguishes evidence about a parameter from error probabilities and personal belief. Like other paradigms of statistics, the likelihood paradigm faces challenges when data are observed incompletely, due to non-response or censoring, for instance. Standard methods to generate likelihood functions in such circumstances generally require assumptions about the mechanism that governs the incomplete observation of data, assumptions that usually rely on external information and cannot be validated with the observed data. Without reliable external information, the use of untestable assumptions driven by convenience could potentially compromise the interpretability of the resulting likelihood as an objective representation of the observed evidence. This paper proposes a profile likelihood approach for representing and interpreting statistical evidence with incomplete data without imposing untestable assumptions. The proposed approach is based on partial identification and is illustrated with several statistical problems involving missing data or censored data. Numerical examples based on real data are presented to demonstrate the feasibility of the approach.

13.
There is by now a long tradition of using the EM algorithm to find maximum-likelihood estimates (MLEs) when the data are incomplete in any of a wide range of ways, even when the observed-data likelihood can easily be evaluated and numerical maximisation of that likelihood is available as a conceptually simple route to the MLEs. It is rare in the literature to see numerical maximisation employed if EM is possible. But with excellent general-purpose numerical optimisers now available free, there is no longer any reason, as a matter of course, to avoid direct numerical maximisation of likelihood. In this tutorial, I present seven examples of models in which numerical maximisation of likelihood appears to have some advantages over the use of EM as a route to MLEs. The mathematical and coding effort is minimal, as there is no need to derive and code the E and M steps, only a likelihood evaluator. In all the examples, the unconstrained optimiser nlm available in R is used, and transformations are used to impose constraints on parameters. I suggest therefore that the following question be asked of proposed new applications of EM: Can the MLEs be found more simply and directly by using a general-purpose numerical optimiser?
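The tutorial's examples use R's nlm; as a hedged illustration of the same recipe in Python (my translation of the idea, not the tutorial's code), the snippet below fits a gamma distribution by handing a likelihood evaluator to a general-purpose optimiser and imposing positivity through a log transformation of the parameters.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import gamma

    def neg_log_lik(log_params, x):
        shape, scale = np.exp(log_params)        # log transform keeps both parameters positive
        return -np.sum(gamma.logpdf(x, a=shape, scale=scale))

    x = np.random.default_rng(0).gamma(shape=2.0, scale=1.5, size=500)
    res = minimize(neg_log_lik, x0=np.log([1.0, 1.0]), args=(x,), method="BFGS")
    print(np.exp(res.x))                          # MLEs of (shape, scale)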

14.
The widely claimed replicability crisis in science may lead to revised standards of significance. The customary frequentist confidence intervals, calibrated through hypothetical repetitions of the experiment that is supposed to have produced the data at hand, rely on a feeble concept of replicability. In particular, contradictory conclusions may be reached when a substantial enlargement of the study is undertaken. To redefine statistical confidence in such a way that inferential conclusions are non-contradictory, with large enough probability, under enlargements of the sample, we give a new reading of a proposal dating back to the 60s, namely, Robbins' confidence sequences. Directly bounding the probability of reaching, in the future, conclusions that contradict the current ones, Robbins' confidence sequences ensure a clear-cut form of replicability when inference is performed on accumulating data. Their main frequentist property is easy to understand and to prove. We show that Robbins' confidence sequences may be justified under various views of inference: they are likelihood-based, can incorporate prior information and obey the strong likelihood principle. They are easy to compute, even when inference is on a parameter of interest, especially using a closed form approximation from normal asymptotic theory.
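One classical instance of such a confidence sequence, stated here for i.i.d. N(θ, 1) observations with a N(0, τ²) mixing density (my illustration of the construction; the paper treats the idea in greater generality): with probability at least 1 − α the sets C_n contain θ simultaneously for all n, where

    C_n = \Bigl\{\theta : \bigl|\bar{X}_n - \theta\bigr|
          \le \sqrt{\frac{1 + n\tau^{2}}{n^{2}\tau^{2}}\,
                    \log\frac{1 + n\tau^{2}}{\alpha^{2}}}\Bigr\}.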

15.
Two types of probability are discussed, one of which is additive whilst the other is non-additive. Popular theories that attempt to justify the importance of the additivity of probability are then critically reviewed. By making assumptions the two types of probability put forward are utilised to justify a method of inference which involves betting preferences being revised in light of the data. This method of inference can be viewed as a justification for a weighted likelihood approach to inference where the plausibility of different values of a parameter θ based on the data x is measured by the quantity q(θ) = l(x, θ)w(θ), where l(x, θ) is the likelihood function and w(θ) is a weight function. Even though, unlike Bayesian inference, the method has the disadvantageous property that the measure q(θ) is generally non-additive, it is argued that the method has other properties which may be considered very desirable and which have the potential to imply that when everything is taken into account, the method is a serious alternative to the Bayesian approach in many situations. The methodology that is developed is applied to both a toy example and a real example.

16.
When two surveys carried out separately in the same population have common variables, it might be desirable to adjust each survey's weights so that they give equal estimates for the common variables. This problem has been studied extensively and has often been referred to as alignment or numerical consistency. We develop a design-based empirical likelihood approach for alignment and estimation of complex parameters defined by estimating equations. We focus on a general case when a single set of adjusted weights, which can be applied to both common and non-common variables, is produced for each survey. The main contribution of the paper is to show that the empirical log-likelihood ratio statistic is pivotal in the presence of alignment constraints. This pivotal statistic can be used to test hypotheses and derive confidence regions. Hence, the empirical likelihood approach proposed for alignment possesses the self-normalisation property, under a design-based approach. The proposed approach accommodates large sampling fractions, stratification and population level auxiliary information. It is particularly well suited for inference about small domains, when data are skewed. It includes implicit adjustments when the samples considerably differ in size. The confidence regions are constructed without the need for variance estimates, joint-inclusion probabilities, linearisation and re-sampling.

17.
This paper studies the Bahadur efficiency of empirical likelihood for testing moment condition models. It is shown that under mild regularity conditions, the empirical likelihood overidentifying restriction test is Bahadur efficient, i.e., its p-value attains the fastest convergence rate under each fixed alternative hypothesis. Analogous results are derived for parameter hypothesis testing and set inference problems.

18.
This paper considers two empirical likelihood-based estimation, inference, and specification testing methods for quantile regression models. First, we apply the method of conditional empirical likelihood (CEL) by Kitamura et al. [2004. Empirical likelihood-based inference in conditional moment restriction models. Econometrica 72, 1667–1714] and Zhang and Gijbels [2003. Sieve empirical likelihood and extensions of the generalized least squares. Scandinavian Journal of Statistics 30, 1–24] to quantile regression models. Second, to avoid practical problems of the CEL method induced by the discontinuity in parameters of CEL, we propose a smoothed counterpart of CEL, called smoothed conditional empirical likelihood (SCEL). We derive asymptotic properties of the CEL and SCEL estimators, parameter hypothesis tests, and model specification tests. Important features are (i) the CEL and SCEL estimators are asymptotically efficient and do not require preliminary weight estimation; (ii) by inverting the CEL and SCEL ratio parameter hypothesis tests, asymptotically valid confidence intervals can be obtained without estimating the asymptotic variances of the estimators; and (iii) in contrast to CEL, the SCEL method can be implemented by some standard Newton-type optimization. Simulation results demonstrate that the SCEL method in particular compares favorably with existing alternatives.
