Similar Documents
20 similar documents found (search time: 0 ms)
1.
    
In this article, we propose a new identifiability condition using logarithmic calibration for distortion measurement error models, in which neither the response variable nor the covariates can be observed directly but are measured with multiplicative measurement errors. Under the logarithmic calibration, we propose direct plug-in estimators of the parameters and empirical-likelihood-based confidence intervals, and study the asymptotic properties of the proposed estimators. For hypothesis testing of the parameters, a restricted estimator under the null hypothesis and a test statistic are proposed, and their asymptotic properties are established. Simulation studies demonstrate the performance of the proposed procedure, and a real example is analyzed to illustrate its practical use.

2.
    
This paper considers linear regression models in which neither the response variable nor the covariates can be observed directly, but are measured with multiplicative distortion measurement errors. We propose new identifiability conditions for the distortion functions via varying coefficient models; moment-based estimators of the model parameters are then constructed from the estimated varying coefficient functions. The method does not require independence between the confounding variables and the unobserved response and covariates. We establish the connections among the varying-coefficient-based estimators, the conditional mean calibration and the conditional absolute mean calibration, study the asymptotic properties of the proposed estimators, and discuss their asymptotic efficiencies. Finally, we compare the proposed estimators through simulations, and apply the methods to a real dataset for illustration.

3.
    
Probabilistic record linkage is the act of bringing together records believed to belong to the same unit (e.g., person or business) from two or more files. It is a common way to enhance dimensions such as time and breadth or depth of detail. Probabilistic record linkage is not an error-free process and can link records that do not belong to the same unit. Naively treating such a linked file as if it were linked without error can lead to biased inferences. This paper develops a method for making inference with estimating equations when records are linked using algorithms that are widely used in practice; previous methods for dealing with this problem cannot accommodate such linking algorithms. A parametric bootstrap approach to inference is developed in which each bootstrap replicate involves applying the linking algorithm itself. The effectiveness of the method is demonstrated in simulations and in real applications.

4.
    
Consider a linear regression model and suppose that our aim is to find a confidence interval for a specified linear combination of the regression parameters. In practice, it is common to perform a Durbin-Watson pretest of the null hypothesis of zero first-order autocorrelation of the random errors against the alternative hypothesis of positive first-order autocorrelation. If this null hypothesis is accepted then the confidence interval centered on the ordinary least squares estimator is used; otherwise the confidence interval centered on the feasible generalized least squares estimator is used. For any given design matrix and parameter of interest, we compare the confidence interval resulting from this two-stage procedure and the confidence interval that is always centered on the feasible generalized least squares estimator, as follows. First, we compare the coverage probability functions of these confidence intervals. Second, we compute the scaled expected length of the confidence interval resulting from the two-stage procedure, where the scaling is with respect to the expected length of the confidence interval centered on the feasible generalized least squares estimator, with the same minimum coverage probability. These comparisons are used to choose the better confidence interval, prior to any examination of the observed response vector.
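The two-stage rule described in this entry is easy to state in code. The sketch below is illustrative only: `dw_lower` stands in for a tabulated Durbin-Watson critical bound (which depends on the design matrix), and the two confidence intervals are assumed to be precomputed.

```python
def durbin_watson(resid):
    """Durbin-Watson statistic: ~2 under no autocorrelation,
    below 2 under positive first-order autocorrelation."""
    num = sum((resid[t] - resid[t - 1]) ** 2 for t in range(1, len(resid)))
    den = sum(e ** 2 for e in resid)
    return num / den

def two_stage_interval(resid, ci_ols, ci_fgls, dw_lower=1.5):
    """Pretest rule (sketch): accept zero autocorrelation when the DW
    statistic exceeds the critical value and keep the OLS-centred
    interval; otherwise switch to the FGLS-centred interval.
    dw_lower is an illustrative placeholder, not a tabulated bound."""
    return ci_ols if durbin_watson(resid) >= dw_lower else ci_fgls
```

The paper's point is that this data-driven switch changes both the coverage probability and the expected length relative to always using the FGLS-centred interval.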

5.
In this paper we first introduce the idea of record values for a sequence of independent and identically distributed (Lomax, or Pareto II) random variables. Some distributional properties of these record values and their moments up to second order are derived. These moments depend on the location, scale and shape parameters of the Lomax distribution, and two types of estimators of these parameters, based on a series of observed record values, are presented.
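The record-value construction in this entry can be illustrated directly: the sketch below draws Lomax (Pareto II) variates by inverting the survival function S(x) = (1 + x/sigma)^(-alpha) (a two-parameter version, location fixed at zero) and extracts the upper record values. Function names are illustrative.

```python
import random

def lomax_sample(alpha, sigma, n, rng):
    """Draw n iid Lomax(alpha, sigma) variates by inverting the
    survival function S(x) = (1 + x/sigma)**(-alpha)."""
    return [sigma * ((1.0 - rng.random()) ** (-1.0 / alpha) - 1.0)
            for _ in range(n)]

def upper_records(xs):
    """Extract the upper record values: X_1, then every observation
    that exceeds all earlier ones."""
    records, current_max = [], float("-inf")
    for x in xs:
        if x > current_max:
            records.append(x)
            current_max = x
    return records
```

By construction the record sequence starts at the first observation and is strictly increasing.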

6.
This article considers the asymptotic estimation theory for the proportion in a randomized response survey using uncertain prior information (UPI) about the true proportion parameter, assumed to be available on the basis of some realistic conjecture. Three estimators are proposed: the unrestricted estimator, the shrinkage restricted estimator, and an estimator based on a preliminary test. Their asymptotic mean squared errors are derived and compared, and the relative dominance picture of the estimators is presented.

7.
The paper investigates the usefulness of bootstrap methods for small-sample inference in cointegrating regression models. It discusses the standard bootstrap, the recursive bootstrap, the moving block bootstrap and the stationary bootstrap. Guidelines for bootstrap data generation and for the choice of test statistics are provided, and the simulation evidence presented suggests that the bootstrap methods, when properly implemented, can provide significant improvement over asymptotic inference.
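Of the schemes this entry lists, the moving block bootstrap is the simplest to sketch: overlapping blocks of fixed length are drawn with replacement and concatenated, preserving short-range dependence within blocks. The minimal version below is a generic illustration, not the paper's implementation; the block length is a tuning parameter.

```python
import random

def moving_block_bootstrap(series, block_len, rng):
    """One moving-block bootstrap replicate: concatenate overlapping
    blocks of length block_len drawn uniformly with replacement, then
    truncate to the original sample size."""
    n = len(series)
    starts = list(range(n - block_len + 1))  # all overlapping block starts
    out = []
    while len(out) < n:
        s = rng.choice(starts)
        out.extend(series[s:s + block_len])
    return out[:n]
```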

8.
This paper deals with the estimation of P[Y < X] when X and Y are two independent generalized exponential random variables with different shape parameters but the same scale parameter. The maximum likelihood estimator and its asymptotic distribution are obtained, and the asymptotic distribution is used to construct an asymptotic confidence interval for P[Y < X]. Assuming that the common scale parameter is known, the maximum likelihood estimator, the uniformly minimum variance unbiased estimator and the Bayes estimator of P[Y < X] are obtained, and different confidence intervals are proposed. Monte Carlo simulations are performed to compare the proposed methods, and a simulated data set is analyzed for illustrative purposes. Part of this work was supported by a grant from the Natural Sciences and Engineering Research Council.
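In the common-scale case this entry studies, P[Y < X] has a closed form: with X ~ GE(alpha, lambda) and Y ~ GE(beta, lambda), so F(x) = (1 - exp(-lambda*x))**shape, substituting u = 1 - exp(-lambda*x) reduces E[F_Y(X)] to the integral of alpha*u**(alpha+beta-1) over (0,1), giving alpha/(alpha + beta); the scale cancels. A sketch with a Monte Carlo check (function names illustrative):

```python
import math
import random

def ge_sample(shape, scale, rng):
    """One generalized exponential variate via inverse CDF:
    F(x) = (1 - exp(-scale*x))**shape  =>  x = -ln(1 - u**(1/shape))/scale."""
    u = rng.random()
    return -math.log(1.0 - u ** (1.0 / shape)) / scale

def prob_y_less_x(alpha, beta):
    """Closed-form P[Y < X] for a common scale parameter."""
    return alpha / (alpha + beta)

def monte_carlo_check(alpha, beta, scale=1.0, n=100_000, seed=1):
    """Empirical frequency of {Y < X}; should approach alpha/(alpha+beta)."""
    rng = random.Random(seed)
    hits = sum(ge_sample(beta, scale, rng) < ge_sample(alpha, scale, rng)
               for _ in range(n))
    return hits / n
```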

9.
For contingency tables with extensive missing data, the unrestricted MLE under the saturated model, computed by the EM algorithm, is generally unsatisfactory. In this case, it may be better to fit a simpler model by imposing some restrictions on the parameter space. Perlman and Wu (1999) propose lattice conditional independence (LCI) models for contingency tables with arbitrary missing data patterns. When this LCI model fits well, the restricted MLE under the LCI model is more accurate than the unrestricted MLE under the saturated model, but not in general. Here we propose certain empirical Bayes (EB) estimators that adaptively combine the best features of the restricted and unrestricted MLEs. These EB estimators appear to be especially useful when the observed data is sparse, even in cases where the suitability of the LCI model is uncertain. We also study a restricted EM algorithm (called the ER algorithm) with similar desirable features. Received: July 1999

10.
Interest in density forecasts (as opposed to solely modeling the conditional mean) arises from the possibility of dynamics in higher moments of a time series, as well as in forecasting the probability of future events in some applications. By combining the idea of Markov bootstrapping with that of kernel density estimation, this paper presents a simple non-parametric method for estimating out-of-sample multi-step density forecasts. The paper also considers a host of evaluation tests for examining the dynamic misspecification of estimated density forecasts by targeting autocorrelation, heteroskedasticity and neglected non-linearity. These tests are useful, as a rejection of the tests gives insight into ways to improve a particular forecasting model. In an extensive Monte Carlo analysis involving a range of commonly used linear and non-linear time series processes, the non-parametric method is shown to work reasonably well across the simulated models for a suitable choice of the bandwidth (smoothing parameter). Furthermore, an application of the method to the U.S. Industrial Production series provides multi-step density forecasts that show no sign of dynamic misspecification.
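As a rough illustration of combining Markov bootstrapping with kernel density estimation, the sketch below uses a nearest-neighbour Markov bootstrap (a simplified stand-in, not the paper's actual scheme) to simulate multi-step paths, then forms a Gaussian KDE of the horizon-h values as the density forecast. All names and the neighbourhood size `k` are illustrative.

```python
import math
import random

def markov_bootstrap_paths(series, horizon, n_paths, k, rng):
    """Simulate path endpoints: from the current state, pick one of the
    k historical states closest in value and jump to that state's
    observed successor; repeat for `horizon` steps."""
    ends = []
    for _ in range(n_paths):
        x = series[-1]
        for _ in range(horizon):
            nbrs = sorted(range(len(series) - 1),
                          key=lambda t: abs(series[t] - x))[:k]
            x = series[rng.choice(nbrs) + 1]
        ends.append(x)
    return ends

def kde(points, bandwidth):
    """Gaussian kernel density estimate; returns a callable density."""
    c = 1.0 / (len(points) * bandwidth * math.sqrt(2 * math.pi))
    return lambda x: c * sum(math.exp(-0.5 * ((x - p) / bandwidth) ** 2)
                             for p in points)
```

The resulting callable can then be evaluated on a grid to plot the multi-step density forecast.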

11.
Chen Jiangping, Zhou Yadong, Value Engineering, 2014, (31): 314-315
In the analysis of magnetotelluric (MT) data, a large share of the problems arise because horizontal inhomogeneity of the near-surface electrical structure distorts the otherwise horizontally uniform electromagnetic field of two- or three-dimensional regional structures, thereby affecting the MT impedance tensor [1]. Studying MT impedance tensor distortion and decomposition methods is therefore of real significance for the correctness of inversion and interpretation.

12.
13.
Shuangzhe Liu, Metrika, 2000, 51(2): 145-155
We first establish two matrix determinant Kantorovich-type inequalities. Then, based on these two and other inequalities, we introduce new efficiency criteria and present their upper bounds to make efficiency comparisons between the ordinary least squares estimator and the best linear unbiased estimator in the general linear model. We provide numerical examples to examine the upper bounds of some new and old efficiency criteria. Received: June 1999
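The kind of OLS-versus-BLUE efficiency comparison this entry describes can be illustrated in the simplest setting: an intercept-only model with independent heteroscedastic errors, where the relative efficiency obeys the classical Kantorovich lower bound 4*l_min*l_max/(l_min + l_max)^2. The sketch below is that scalar illustration, not the paper's determinant inequalities.

```python
def watson_efficiency(variances):
    """Var(BLUE)/Var(OLS) for estimating the mean of independent
    observations with the given error variances: OLS is the sample
    mean, the BLUE is the inverse-variance-weighted mean."""
    n = len(variances)
    var_ols = sum(variances) / n ** 2
    var_gls = 1.0 / sum(1.0 / v for v in variances)
    return var_gls / var_ols

def kantorovich_lower_bound(variances):
    """Kantorovich-type bound 4*lo*hi/(lo+hi)**2 from the extreme
    eigenvalues of the error covariance (here: extreme variances)."""
    lo, hi = min(variances), max(variances)
    return 4.0 * lo * hi / (lo + hi) ** 2
```

Efficiency equals 1 exactly when all variances coincide, and the bound tightens as the variance spread shrinks.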

14.
Cited by 1 (self-citations: 0; other citations: 1)
The general minimax estimator of the linear regression model is applicable when the whole parameter vector is restricted to an ellipsoid. In many applications, however, it is more realistic to assume that only a part of the parameter set is constrained. For this case an alternative minimax approach is developed.

15.
We consider the problem of component-wise estimation of ordered scale parameters of two gamma populations, when it is known a priori which population corresponds to each ordered parameter. Under the scale equivariant squared error loss function, smooth estimators that improve upon the best scale equivariant estimators are derived. These smooth estimators are shown to be generalized Bayes with respect to a non-informative prior. Finally, using Monte Carlo simulations, these improved smooth estimators are compared with the best scale equivariant estimators, their non-smooth improvements obtained in Vijayasree, Misra & Singh (1995), and the restricted maximum likelihood estimators. Acknowledgments: the authors are grateful to a referee for suggestions leading to improved presentation.

16.
    
We consider kernel smoothed Grenander-type estimators for a monotone hazard rate and a monotone density in the presence of randomly right censored data. We show that they converge at rate n2/5 and that the limit distribution at a fixed point is Gaussian with explicitly given mean and variance. It is well known that standard kernel smoothing leads to inconsistency problems at the boundary points. It turns out that, even with a boundary correction, we can only establish uniform consistency on intervals that stay away from the end point of the support (although we can go arbitrarily close to the right boundary).

17.
Unobservable variables in econometrics are represented in one of three ways: by variables contaminated by measurement errors, by proxy variables, or by various manifest indicators and/or causes. This paper contains a discussion of models involving each of these representations, and highlights certain interesting implications that have been insufficiently emphasized or completely unrecognized in the literature.
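The first representation this entry mentions, a regressor contaminated by classical measurement error, has a well-known consequence worth sketching: regressing on the contaminated proxy w = x + u attenuates the OLS slope toward zero by the reliability ratio var_x/(var_x + var_u). A minimal simulation (all parameter values illustrative):

```python
import random

def reliability_ratio(var_x, var_u):
    """Attenuation factor for the OLS slope under classical
    errors-in-variables."""
    return var_x / (var_x + var_u)

def attenuation_simulation(beta=2.0, var_x=1.0, var_u=1.0, n=100_000, seed=0):
    """Simulate y = beta*x + e, observe w = x + u, and return the OLS
    slope of y on w; it should be near beta * reliability_ratio."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, var_x ** 0.5) for _ in range(n)]
    ws = [x + rng.gauss(0.0, var_u ** 0.5) for x in xs]
    ys = [beta * x + rng.gauss(0.0, 1.0) for x in xs]
    w_bar, y_bar = sum(ws) / n, sum(ys) / n
    num = sum((w - w_bar) * (y - y_bar) for w, y in zip(ws, ys))
    den = sum((w - w_bar) ** 2 for w in ws)
    return num / den
```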

18.
Accounting Measurement Error and Its Implications for Fair Value Accounting Research. Cited by 2 (self-citations: 0; other citations: 2)
During the global financial crisis, fair value accounting came under heavy criticism, reviving academic debate over the merits of different accounting measurement attributes. Following Sunder's approach, an accounting measurement can be treated as an ordinary econometric estimator, which unifies the various measurement attributes within a single framework and allows mean squared error to gauge measurement quality. By decomposing accounting measurement error, and combining this with the differences in measurement error across the three levels of inputs on which fair value estimation relies, some useful implications can be drawn for the design of future fair value accounting research.

19.
    
We consider nonlinear heteroscedastic single-index models where the mean function is a parametric nonlinear model and the variance function depends on a single-index structure. We develop an efficient estimation method for the parameters in the mean function by using the weighted least squares estimation, and we propose a "delete-one-component" estimator for the single-index in the variance function based on absolute residuals. Asymptotic results of estimators are also investigated. The estimation methods for the error distribution based on the classical empirical distribution function and an empirical likelihood method are discussed. The empirical likelihood method allows for incorporation of the assumptions on the error distribution into the estimation. Simulations illustrate the results, and a real chemical data set is analyzed to demonstrate the performance of the proposed estimators.

20.
An Analysis of Measurement Bias in Aggregate Accruals Models. Cited by 1 (self-citations: 0; other citations: 1)
Huang Mei, Communication of Finance and Accounting, 2008, (11): 91-94
The aggregate accruals approach is the most commonly used method for measuring earnings management. This paper comprehensively reviews the specific aggregate accruals models, analyzes in detail the evidence for their measurement bias and its causes, and identifies the main sources of that bias. In light of the current state of domestic research, it recommends developing better aggregate accruals models, refining the existing models, and broadening the range of applications of the aggregate accruals approach.
