Similar literature
A total of 20 similar documents were retrieved.
1.
I. Thomsen, Metrika, 1978, 25(1): 27-35
Summary: The values of a variable x are assumed known for all elements in a finite population. Between this variable and another variable Y, whose values are registered in a sample survey, there is the usual linear regression relationship. This paper considers problems of design and of estimation of the regression coefficient a and the intercept b. The following Godambe-type theorem is proved: there exists no minimum-variance unbiased linear estimator of a and b. We also derive that the usual estimators of a and b have minimum variance if attention is restricted to the class of linear estimators unbiased in any given sample.
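The "usual estimators" of a and b referred to here are the ordinary least-squares slope and intercept computed from the sampled pairs. A minimal Python sketch, in which the simulated finite population, the sample size, and the parameter values are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative finite population: x is known for all N elements.
N = 1000
x = rng.uniform(0, 10, size=N)
Y = 2.0 + 0.5 * x + rng.normal(0, 1, size=N)   # Y = b + a*x + noise (true a = 0.5, b = 2.0)

# Simple random sample of n elements; Y is only registered in the sample.
n = 100
idx = rng.choice(N, size=n, replace=False)
xs, ys = x[idx], Y[idx]

# Usual least-squares estimators of the slope a and the intercept b.
a_hat = np.sum((xs - xs.mean()) * (ys - ys.mean())) / np.sum((xs - xs.mean()) ** 2)
b_hat = ys.mean() - a_hat * xs.mean()
print(f"a_hat = {a_hat:.3f}, b_hat = {b_hat:.3f}")
```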

2.
《Socio》1986,20(3):131-133
Large-scale population surveys are a valuable source of data for epidemiologists and social scientists interested in retrospective studies of health conditions. It may not be apparent that such data sources present many problems, one of which is data reliability. It can be shown statistically that where a relation exists between two variables measured with no or minimum error, this relation vanishes if either or both variables have sizable error components. Thus such data have little utility in assessing simple relations, let alone in making causal inferences. Another problem is that unreliability or measurement error affects the sensitivity and specificity of a measuring instrument in unpredictable ways. Solutions to the problems caused by data unreliability are suggested.
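The attenuation effect described here is easy to demonstrate with a small simulation; a minimal sketch in which the relation strength, the error standard deviations, and the sample size are arbitrary assumptions chosen only to show how an observed relation weakens as measurement error grows:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# True (error-free) variables with a genuine linear relation.
x_true = rng.normal(size=n)
y_true = 0.6 * x_true + rng.normal(scale=0.8, size=n)
print("error-free correlation:", round(np.corrcoef(x_true, y_true)[0, 1], 3))

# Add independent measurement error of increasing size to both variables.
for sigma_err in (0.5, 1.0, 2.0, 4.0):
    x_obs = x_true + rng.normal(scale=sigma_err, size=n)
    y_obs = y_true + rng.normal(scale=sigma_err, size=n)
    r = np.corrcoef(x_obs, y_obs)[0, 1]
    print(f"error sd = {sigma_err}: observed correlation = {r:.3f}")
```

Under the classical error model the observed correlation equals the true correlation times the square root of the product of the two reliabilities, which is what the shrinking values illustrate.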

3.
This article presents a statistical analysis of the problems currently affecting power supply reliability, identifies their causes and weak points, and actively explores preventive measures and solutions, providing important guidance for further improving the management of distribution networks.

4.
A recent software review in this journal (McCullough, 1999) raises serious questions about EViews' performance on nonlinear least squares problems. We demonstrate that, after correcting errors in the paper and adjusting convergence tolerances, EViews' performance was comparable to that of the other benchmarked software. Copyright © 2000 John Wiley & Sons, Ltd.

5.
A very well-known model in software reliability theory is that of Littlewood (1980). The (three) parameters in this model are usually estimated by means of the maximum likelihood method. The system of likelihood equations can have more than one solution. Only one of them will be consistent, however. In this paper we present a different, more analytical approach, exploiting the mathematical properties of the log-likelihood function itself. Our belief is that the ideas and methods developed in this paper could also be of interest for statisticians working on the estimation of the parameters of the generalised Pareto distribution. For those more generally interested in maximum likelihood the paper provides a 'practical case', indicating how complex matters may become when only three parameters are involved. Moreover, readers not familiar with counting process theory and software reliability are given a first introduction.

6.
This paper explains the causes of the expectation gap in accounting data and the related behavior of accountants, concluding that an "expectation gap" persists in practice even when the relevance and reliability of accounting data are adhered to. Latent information content, the complexity and frequent revision of accounting standards, and the mixed operations of enterprises make capturing the information in accounting data extremely difficult. The impaired independence of certified public accountants makes it hard to clearly explain the data they attest to, which carry this "expectation gap". Moreover, the audit report is not a guarantee that the "expectation gap" is free of misstatements or omissions.

7.
Using Excel and Excel merge tools, this article explains how to quickly check the reliability of observational data.

8.
As a consequence of the well-known underidentification of the moving average model unless the parameter space is restricted, Maximum Likelihood and other estimators possess properties which can pose problems for estimation when a root of the process is close to the unit circle. The behaviour of the estimators is studied both through the analytic properties of their criterion functions and by Monte Carlo simulation. Conclusions about the choice of estimator are drawn, in particular regarding the treatment of the pre-sample residuals.
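A small Monte Carlo sketch of the boundary ("pile-up") behaviour near the unit circle, fitting an MA(1) by maximum likelihood with statsmodels; the true coefficient, sample size, and number of replications are illustrative assumptions and this is not the paper's experiment:

```python
import warnings
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

warnings.filterwarnings("ignore")          # ML near the boundary triggers convergence warnings
rng = np.random.default_rng(2)
T, reps, theta_true = 100, 100, 0.95       # MA(1) root close to the unit circle

est = []
for _ in range(reps):
    e = rng.normal(size=T + 1)
    y = e[1:] + theta_true * e[:-1]        # y_t = e_t + theta * e_{t-1}
    res = ARIMA(y, order=(0, 0, 1), trend="n").fit()
    est.append(res.params[0])              # estimated MA coefficient

est = np.array(est)
print("mean estimate:", round(est.mean(), 3))
# Share of replications where the estimate is pushed onto (or very near) the boundary.
print("share with |theta_hat| > 0.99:", round(float(np.mean(np.abs(est) > 0.99)), 3))
```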

9.
Journal of Econometrics, 2003, 117(2): 331-367
Often economic data are discretized or rounded to some extent. This paper proposes a regression and a density estimator that work especially well when discretization causes conventional kernel-based estimators to behave poorly. The estimator proposed here is a weighted average of neighboring frequency estimators, and the weights are composed of cubic B-splines. Interestingly, we show that this estimator can have both a smaller bias and variance than frequency estimators. As a means to obtain asymptotic normality and rates of convergence, we assume that the discreteness becomes finer as the sample size increases.
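A rough sketch of the idea, not the paper's exact estimator: raw relative-frequency estimates on a discrete grid are smoothed by a weighted average of neighbouring frequencies, with weights taken from the standard cubic B-spline. The grid, the smoothing parameter m, and the simulated rounded data are assumptions for illustration:

```python
import numpy as np

def cubic_bspline(t):
    """Standard centred cubic B-spline, supported on [-2, 2]."""
    t = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    out = np.where(t <= 1, 2 / 3 - t ** 2 + t ** 3 / 2, out)
    out = np.where((t > 1) & (t <= 2), (2 - t) ** 3 / 6, out)
    return out

rng = np.random.default_rng(3)
# "Discretized" data: a continuous variable rounded to a coarse grid.
x = np.round(rng.normal(size=2000), 1)

grid = np.round(np.arange(-4.0, 4.0 + 1e-9, 0.1), 1)
freq = np.array([np.isclose(x, g).mean() for g in grid])   # raw frequency estimates

m = 5                                                       # smoothing parameter (assumed)
offsets = np.arange(-2 * m, 2 * m + 1)
w = cubic_bspline(offsets / m)
w /= w.sum()

# Smoothed estimate: weighted average of neighbouring frequency estimates.
padded = np.pad(freq, 2 * m, mode="edge")
smooth = np.array([w @ padded[j:j + len(offsets)] for j in range(len(grid))])
print(np.round(smooth[35:45], 4))
```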

10.
AFT regression-adjusted monitoring of reliability data in cascade processes
Today's competitive market has witnessed a growing interest in improving the reliability of products in both service and industrial operations. A large number of monitoring schemes have been introduced to effectively control reliability-related quality characteristics. These methods have focused on single-stage processes or considered quality variables which are independent. However, the main feature of multistage processes is the cascade property, which must be addressed for optimal process monitoring. The problem becomes more complicated in the presence of censored observations. Therefore, both the effects of influential covariates and censored data must be taken into account when designing a monitoring scheme. In this paper, accelerated failure time models are used and two regression-adjusted control schemes based on Cox-Snell residuals are devised. Two different scenarios, with censored and non-censored data, are considered. The competing control charts are compared in terms of zero-state and steady-state average run length criteria using a Markov chain approach. The comparison study reveals that the cumulative sum (CUSUM) based monitoring procedure is superior and more effective. It should be noted that the application of the proposed monitoring schemes is not restricted to manufacturing processes, and thus service operations such as healthcare systems can also benefit from them.
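A heavily simplified sketch in the spirit of this abstract, covering the non-censored scenario only and not reproducing the paper's charts: lifetimes follow a Weibull accelerated failure time model, Cox-Snell residuals are formed from assumed in-control parameters, and a lower-sided CUSUM watches for residuals that are systematically too small (i.e. lifetimes shorter than the in-control model predicts). All parameter values and chart constants are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# In-control Weibull AFT model: log T = b0 + b1*x + sigma*W, with W standard Gumbel (minimum).
b0, b1, sigma = 1.0, 0.5, 0.4            # assumed known from a Phase I analysis

def simulate(n, b0, b1, sigma):
    x = rng.normal(size=n)
    W = np.log(rng.exponential(size=n))  # P(W > w) = exp(-e^w)
    return x, np.exp(b0 + b1 * x + sigma * W)

def cox_snell(t, x, b0, b1, sigma):
    # Cumulative hazard of the in-control model at the observed time;
    # if the model holds, these residuals are unit exponential.
    return np.exp((np.log(t) - b0 - b1 * x) / sigma)

# Stream of observations: 100 in control, then 100 with a downward shift in the intercept.
x1, t1 = simulate(100, b0, b1, sigma)
x2, t2 = simulate(100, b0 - 0.5, b1, sigma)
r = cox_snell(np.concatenate([t1, t2]), np.concatenate([x1, x2]), b0, b1, sigma)

# Lower-sided CUSUM on the residuals (reference value k and limit h chosen for illustration).
k, h = 0.7, 5.0
C, signal = 0.0, None
for i, ri in enumerate(r, start=1):
    C = max(0.0, C + (k - ri))
    if C > h and signal is None:
        signal = i
print("shift introduced at observation 101; first signal at observation:", signal)
```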

11.
Regular business survey data are published as percentages of firms predicting higher, equal, or lower values of some reference variable. Time series of such percentages do not fit production data very well, and univariate models often produce forecasts which are just as accurate. Still, surveys contain anticipative judgement which, when combined with univariate modeling and proper filtering, may produce a good indicator for business cycle turning points. The way survey data are transformed so as to fit statistics on production seems not to be of much importance. A case study of the Finnish forest industry is offered as an example.

12.
We consider pseudo-panel data models constructed from repeated cross sections in which the number of individuals per group is large relative to the number of groups and time periods. First, we show that, when time-invariant group fixed effects are neglected, the OLS estimator does not converge in probability to a constant but rather to a random variable. Second, we show that, while the fixed-effects (FE) estimator is consistent, the usual t statistic is not asymptotically normally distributed, and we propose a new robust t statistic whose asymptotic distribution is standard normal. Third, we propose efficient GMM estimators using the orthogonality conditions implied by grouping and we provide t tests that are valid even in the presence of time-invariant group effects. Our Monte Carlo results show that the proposed GMM estimator is more precise than the FE estimator and that our new t test has good size and is powerful.
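A minimal sketch of the grouping step and the within (FE) estimator on cohort-time cell means, using pandas; the data-generating process, cohort structure, and variable names are illustrative assumptions, and the sketch implements neither the robust t statistic nor the GMM estimators proposed in the paper:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)

# Repeated cross sections: fresh individuals every period, assigned to G cohorts.
G, T, n_cell = 20, 5, 200                 # many individuals per group, few groups/periods
alpha = rng.normal(size=G)                # time-invariant cohort effects
beta = 1.0
rows = []
for g in range(G):
    for t in range(T):
        x = 0.5 * alpha[g] + rng.normal(size=n_cell)   # x is correlated with the cohort effect
        y = beta * x + alpha[g] + rng.normal(size=n_cell)
        rows.append(pd.DataFrame({"cohort": g, "t": t, "x": x, "y": y}))
df = pd.concat(rows, ignore_index=True)

# Pseudo-panel: cohort-time cell means.
cell = df.groupby(["cohort", "t"], as_index=False)[["x", "y"]].mean()

# Pooled OLS on the cell means neglects the cohort effects and is badly biased here.
b_ols = np.cov(cell["x"], cell["y"])[0, 1] / np.var(cell["x"], ddof=1)

# Within (FE) estimator: demean by cohort, then OLS.
dm = cell[["x", "y"]] - cell.groupby("cohort")[["x", "y"]].transform("mean")
b_fe = (dm["x"] * dm["y"]).sum() / (dm["x"] ** 2).sum()

print(f"pooled OLS: {b_ols:.3f}   within (FE): {b_fe:.3f}   true beta: {beta}")
```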

13.
We present a nonparametric study of current status data in the presence of death. Such data arise from biomedical investigations in which patients are examined for the onset of a certain disease, for example, tumor progression, but may die before the examination. A key difference between such studies on human subjects and the survival-sacrifice model in animal carcinogenicity experiments is that, due to ethical and perhaps technical reasons, deceased human subjects are not examined, so that the information on their disease status is lost. We show that, for current status data with death, only the overall and disease-free survival functions can be identified, whereas the cumulative incidence of the disease is not identifiable. We describe a fast and stable algorithm to estimate the disease-free survival function by maximizing a pseudo-likelihood with plug-in estimates for the overall survival rates. It is then proved that the global rate of convergence for the nonparametric maximum pseudo-likelihood estimator is equal to Op(n^{-1/3}) or the convergence rate of the estimated overall survival function, whichever is slower. Simulation studies show that the nonparametric maximum pseudo-likelihood estimators are fairly accurate in small- to medium-sized samples. Real data from breast cancer studies are analyzed as an illustration.

14.
At present, a number of problems remain in geotechnical engineering investigation in China, and these problems seriously affect the construction safety and economic performance of construction enterprises. This article therefore examines the problems in geotechnical engineering investigation and, on that basis, proposes some countermeasures.

15.
16.
Quality & Quantity - We used an internet-based survey platform to conduct a cross-sectional survey regarding the impact of COVID-19 on the LGBTQ?+?population in the United States....  相似文献   

17.
Recent interest in statistical inference for panel data has focused on the problem of unobservable, individual-specific, random effects and the inconsistencies they introduce in estimation when they are correlated with other exogenous variables. Analysis of this problem has always assumed the variance components to be known. In this paper, we re-examine some of these questions in finite samples when the variance components must be estimated. In particular, when the effects are uncorrelated with other explanatory variables, we show that (i) the feasible Gauss-Markov estimator is more efficient than the within groups estimator for all but the fewest degrees of freedom and its variance is never more than 17% above the Cramér-Rao bound, (ii) the asymptotic approximation to the variance of the feasible Gauss-Markov estimator is similarly within 17% of the true variance but remains significantly smaller for moderately large sample sizes, and (iii) more efficient estimators for the variance components do not necessarily yield more efficient feasible Gauss-Markov estimators.

18.
Micro-aggregation is a frequently used strategy to anonymize data before they are released to the scientific public. A sample of a continuous random variable is individually micro-aggregated by first sorting and grouping the data into groups of equal size and then replacing the values of the variable in each group by their group mean. In a similar way, data with more than one variable can be anonymized by individual micro-aggregation. Data thus distorted may still be used for statistical analysis. We show that if probabilities and quantiles are estimated in the usual way by computing relative frequencies and sample quantiles, respectively, these estimates are consistent and asymptotically normal under mild conditions.
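A minimal sketch of individual micro-aggregation as described above (sort, form groups of equal size, replace each value by its group mean), followed by the usual relative-frequency and sample-quantile estimates on the anonymized data; the group size and the simulated distribution are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.lognormal(mean=0.0, sigma=0.5, size=9000)

def micro_aggregate(values, k=3):
    """Sort, split into groups of size k, replace each value by its group mean."""
    order = np.argsort(values)
    grouped = values[order].reshape(-1, k)          # assumes len(values) is divisible by k
    means = grouped.mean(axis=1, keepdims=True)
    out = np.empty_like(values)
    out[order] = np.repeat(means, k, axis=1).ravel()
    return out

x_anon = micro_aggregate(x, k=3)

# Probabilities and quantiles estimated "in the usual way" from the micro-aggregated
# data stay close to the estimates from the original data.
print("P(X <= 1.5):", round((x <= 1.5).mean(), 4), "vs", round((x_anon <= 1.5).mean(), 4))
print("90% quantile:", round(np.quantile(x, 0.9), 4), "vs", round(np.quantile(x_anon, 0.9), 4))
```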

19.
This paper presents a statistical analysis of time series regression models for longitudinal data with and without lagged dependent variables under a variety of assumptions about the initial conditions of the processes being analyzed. The analysis demonstrates how the asymptotic properties of estimators of longitudinal models are critically dependent on the manner in which samples become large: by expanding the number of observations per person, holding the number of people fixed, or by expanding the number of persons, holding the number of observations per person fixed. The paper demonstrates which parameters can and cannot be identified from data produced by different sampling plans.

20.
Incomplete data, due to missing observations or interval measurement of variables, usually cause parameters of interest in applications to be unidentified except under untestable and often controversial assumptions. However, it is often possible to identify sharp bounds on parameters without making untestable assumptions about the process through which data become incomplete. The bounds contain all logically possible values of the parameters and can be estimated consistently by replacing the population distribution of the data with the empirical distribution. This is straightforward in some circumstances but computationally burdensome in others. This paper describes the general problem and presents an empirical illustration.
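A minimal sketch of the sharp-bounds idea in one simple case, the mean of a binary outcome with missing observations: the bounds are obtained by imputing every missing value as 0 and then as 1, and they are estimated by replacing population quantities with empirical frequencies. The missingness mechanism and all numbers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# Binary outcome observed only for part of the sample; missingness depends on y itself,
# so the complete-case mean is biased, but no assumption about this is needed for the bounds.
y = rng.binomial(1, 0.6, size=n)
observed = rng.random(size=n) < (0.5 + 0.3 * y)

p_obs = observed.mean()          # estimated P(observed)
mean_obs = y[observed].mean()    # estimated E[y | observed]

# Sharp worst-case bounds on E[y]: missing values set to 0 (lower) or to 1 (upper).
lower = mean_obs * p_obs
upper = mean_obs * p_obs + (1 - p_obs)
print(f"complete-case mean: {mean_obs:.3f}")
print(f"bounds on E[y]: [{lower:.3f}, {upper:.3f}]   (true mean: {y.mean():.3f})")
```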
