Similar Literature
20 similar documents found.
1.
Misclassification is found in many of the variables used in the social sciences and, in practice, tends to be ignored in statistical analyses, which can lead to biased results. This paper shows how to correct for differential misclassification in multilevel models and illustrates the extent to which this changes fixed and random parameter estimates. Reliability studies on the self-reported behaviour of pregnant women suggest that there may be differential misclassification related to smoking and, thus, to child exposure to smoke. Models are applied to the Millennium Cohort Study data. The response variable is the child's cognitive development assessed by the British Ability Scales at 3 years of age, and the explanatory variables are child exposure to smoke and family income. The proposed method allows a correction for misclassification when the specificity and sensitivity are known, and an assessment of the potential biases in the multilevel model parameter estimates when a validation data sample is not available, which is often the case.
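As a rough illustration of the basic identity such corrections build on (not the paper's multilevel procedure), here is a minimal Python sketch assuming known sensitivity and specificity; the numbers are purely illustrative.

```python
# Correct an observed proportion for known misclassification:
# p_obs = Se * p_true + (1 - Sp) * (1 - p_true),
# so p_true = (p_obs + Sp - 1) / (Se + Sp - 1).

def corrected_proportion(p_obs, sensitivity, specificity):
    """Return the misclassification-corrected proportion."""
    denom = sensitivity + specificity - 1.0
    if denom <= 0:
        raise ValueError("sensitivity + specificity must exceed 1")
    p_true = (p_obs + specificity - 1.0) / denom
    return min(max(p_true, 0.0), 1.0)  # clip to the unit interval

# Illustrative values: 30% of mothers report smoking, with assumed
# self-report sensitivity 0.80 and specificity 0.95.
print(corrected_proportion(0.30, 0.80, 0.95))  # ~0.33
```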

2.
Summary  Applying the usual minimax criterion in finite sampling theory yields complicated solutions unless the parameter space has certain invariance properties. A conditional minimax criterion is suggested: after a sample is selected, it is reasonable to seek an estimator that has good properties (e.g. minimaxity) for that sample. Explicit solutions are given in the case where the parameter space is described by quadratic forms.

3.
Minimum Kolmogorov distance estimates of arbitrary parameters are considered. They are shown to be strongly consistent if the parameter space metric is topologically weaker than the metric induced by the Kolmogorov distance of distributions from the statistical model. If the parameter space metric can be locally uniformly upper-bounded by the induced metric, then these estimates are shown to be consistent of order n^{-1/2}. Similar results are proved for minimum Kolmogorov distance estimates of densities from parametrized families, where consistency is considered in the L_1-norm. The presented conditions for existence, consistency, and consistency of order n^{-1/2} are much weaker than those established in the literature for estimates with similar properties. It is shown that these assumptions are satisfied, e.g., by all location and scale models with parent distributions different from Dirac, and by all standard exponential models. Supported by the scientific exchange program between the Hungarian Academy of Sciences and the Royal Belgian Academy of Sciences, and by GACR grant 201/93/0232.
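A minimal sketch of how a minimum Kolmogorov distance estimate can be computed numerically, assuming a simple location model with known unit scale (the paper's setting is far more general); the normal family and all values are illustrative.

```python
import numpy as np
from scipy import optimize, stats

def ks_distance(theta, data):
    """Kolmogorov distance between the empirical CDF and N(theta, 1)."""
    x = np.sort(data)
    n = len(x)
    cdf = stats.norm.cdf(x, loc=theta, scale=1.0)
    # supremum is attained at the jump points of the empirical CDF
    d_plus = np.max(np.arange(1, n + 1) / n - cdf)
    d_minus = np.max(cdf - np.arange(0, n) / n)
    return max(d_plus, d_minus)

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=200)
res = optimize.minimize_scalar(ks_distance, bounds=(-10, 10),
                               args=(data,), method="bounded")
print(res.x)  # minimum Kolmogorov distance estimate of the location
```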

4.
This paper discusses a survey where some respondents were asked sensitive questions directly and others were asked the same questions using randomized response. The use of randomized response was a factor in a 2 × 2 factorial design and dice were used to perform the randomization. First, the paper shows that the perturbation due to the dice can be described using the concept of misclassification and known conditional misclassification probabilities. Second, the paper formulates the likelihood for loglinear models and shows that latent class software can be used to analyse the data. An example including a power analysis is discussed.
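A minimal sketch of the moment correction implied by viewing a forced-response design as misclassification with known conditional probabilities; the forced-"yes"/forced-"no" probabilities below are illustrative assumptions, not necessarily the dice design used in the survey.

```python
# Forced-response randomized response viewed as misclassification with
# known conditional probabilities (illustrative design: with prob. 1/6 a
# forced "yes", with prob. 1/12 a forced "no", otherwise truthful).
P_FORCED_YES = 1 / 6
P_FORCED_NO = 1 / 12
P_TRUTH = 1 - P_FORCED_YES - P_FORCED_NO

def moment_estimate(observed_yes_rate):
    """Unbias the observed 'yes' rate for the known perturbation."""
    return (observed_yes_rate - P_FORCED_YES) / P_TRUTH

# The design implies known conditional misclassification probabilities:
p_yes_given_yes = P_FORCED_YES + P_TRUTH   # P(observe yes | true yes)
p_yes_given_no = P_FORCED_YES              # P(observe yes | true no)
print(moment_estimate(0.25))
```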

5.
Ploberger and Phillips (Econometrica, Vol. 71, pp. 627–673, 2003) proved a result that provides a bound on how close a fitted empirical model can get to the true model when the model is represented by a parameterized probability measure on a finite dimensional parameter space. The present note extends that result to cases where the parameter space is infinite dimensional. The results have implications for model choice in infinite dimensional problems and highlight some of the difficulties, including technical difficulties, presented by models of infinite dimension. Some implications for forecasting are considered and some applications are given, including the empirically relevant case of vector autoregression (VAR) models of infinite order.

6.
Using the likelihood depth, new consistent and robust tests for the parameters of the Weibull distribution are developed. Uncensored as well as type-I right-censored data are considered. Tests are given for the shape parameter and also for the scale parameter of the Weibull distribution, covering in each case both the situation where the other parameter is known and the situation where both parameters are unknown. Simulation studies analyse the behaviour for finite sample sizes and for contaminated data, and the new method is compared with existing ones. The new tests based on likelihood depth give quite good results compared with standard methods and are robust against contamination. They also remain robust under right-censoring, in contrast to existing methods such as the method of medians.

7.
8.
Parametric mixture models are commonly used in applied work, especially empirical economics, where they are often employed to learn, for example, about the proportions of various types in a given population. This paper examines inference on the proportions (mixing probabilities) in a simple mixture model in the presence of nuisance parameters when the sample size is large. It is well known that likelihood inference in mixture models is complicated by (1) lack of point identification and (2) parameters (for example, mixing probabilities) whose true value may lie on the boundary of the parameter space. These issues cause the profiled likelihood ratio (PLR) statistic to admit asymptotic limits that differ discontinuously depending on how the true density of the data approaches the regions of singularities where point identification fails. This lack of uniformity in the asymptotic distribution suggests that confidence intervals based on pointwise asymptotic approximations might lead to faulty inferences. This paper examines the problem in detail in a finite mixture model and provides possible fixes based on the parametric bootstrap. We examine the performance of this parametric bootstrap in Monte Carlo experiments and apply it to data from Beauty Contest experiments. We also examine small-sample inference and projection methods.
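A hedged sketch of a parametric bootstrap for the mixing proportion in a two-component normal mixture, using a simple percentile interval rather than the paper's PLR-based procedure; the data, sklearn-based fitting, and all settings are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Illustrative data: a 30/70 mixture of two normal components.
data = np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 1, 700)])[:, None]

def fit_mixing_prob(x):
    """Fit a two-component Gaussian mixture and report the smaller weight."""
    gm = GaussianMixture(n_components=2, random_state=0).fit(x)
    return np.min(gm.weights_), gm

pi_hat, gm_hat = fit_mixing_prob(data)

# Parametric bootstrap: resample from the fitted model and refit.
boot = []
for _ in range(200):
    x_star, _ = gm_hat.sample(len(data))
    boot.append(fit_mixing_prob(x_star)[0])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(pi_hat, (lo, hi))   # point estimate and percentile interval
```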

9.
If misclassification occurs, the standard binomial estimator is usually seriously biased. It is known that an improvement can be achieved by using more than one observer to classify the sample elements. Here it is investigated which number of observers is optimal given the total number of judgements that can be made. An adaptive estimator for the probability of interest is introduced which uses an estimate of this optimal number of observers, obtained without additional cost. Some simulation results are presented which suggest that the adaptive procedure performs quite well.
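A small simulation sketch of the budget trade-off the abstract describes, assuming independent observers with a common error rate and a simple majority-vote rule; the paper's estimator and optimality criterion may differ, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def mse_majority_vote(p_true, error_rate, budget, k, reps=2000):
    """Monte Carlo MSE of the proportion estimate when each of
    n = budget // k sampled elements is classified by k observers
    (independent errors with the given rate) and a majority vote is taken."""
    n = budget // k
    errs = []
    for _ in range(reps):
        truth = rng.random(n) < p_true
        mistakes = rng.random((n, k)) < error_rate
        labels = truth[:, None] ^ mistakes          # observer judgements
        votes = labels.mean(axis=1) > 0.5           # majority vote per element
        errs.append((votes.mean() - p_true) ** 2)
    return np.mean(errs)

# Same total budget of judgements, different numbers of observers.
for k in (1, 3, 5):
    print(k, mse_majority_vote(p_true=0.2, error_rate=0.1, budget=600, k=k))
```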

10.
Covariate Measurement Error in Quadratic Regression
We consider quadratic regression models where the explanatory variable is measured with error. The effect of classical measurement error is to flatten the curvature of the estimated function. The effect on the observed turning point depends on the location of the true turning point relative to the population mean of the true predictor. Two methods for adjusting parameter estimates for the measurement error are compared. First, two versions of regression calibration estimation are considered. This approximates the model between the observed variables using the moments of the true explanatory variable given its surrogate measurement. For certain models an expanded regression calibration approximation is exact. The second approach uses moment-based methods which require no assumptions about the distribution of the covariates measured with error. The estimates are compared in a simulation study, and used to examine the sensitivity to measurement error in models relating income inequality to the level of economic development. The simulations indicate that the expanded regression calibration estimator dominates the other estimators when its distributional assumptions are satisfied. When they fail, a small-sample modification of the method-of-moments estimator performs best. Both estimators are sensitive to misspecification of the measurement error model.
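A toy sketch of expanded regression calibration for a quadratic model, assuming Gaussian variables and a known measurement error variance; it is not the paper's full comparison with the moment-based estimators, and all coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
sigma_x2, sigma_u2 = 1.0, 0.5                    # assumed known for the sketch
x = rng.normal(0.0, np.sqrt(sigma_x2), n)        # true predictor
w = x + rng.normal(0.0, np.sqrt(sigma_u2), n)    # classical measurement error
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0.0, 0.5, n)

# Naive fit: quadratic in the error-prone W (curvature attenuated).
naive = np.polyfit(w, y, 2)

# Expanded regression calibration: replace x and x^2 by their
# conditional moments given W under joint normality with zero means.
lam = sigma_x2 / (sigma_x2 + sigma_u2)           # reliability ratio
m1 = lam * w                                     # E[X | W]
m2 = m1**2 + (1 - lam) * sigma_x2                # E[X^2 | W]
X = np.column_stack([np.ones(n), m1, m2])
beta_rc = np.linalg.lstsq(X, y, rcond=None)[0]
print(naive[::-1], beta_rc)                      # compare to (1, 2, -1.5)
```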

11.
A state-space model is developed which provides estimates of decrements in a dynamic environment. The model integrates the actual unfolding experience and a priori or Bayesian views of the rates. The estimates of present rates and predicted future rates are continually updated and associated standard errors have simple expressions. The model is described and applied in the context of mortality estimation but it should prove useful in other actuarial applications. The approach is particularly suitable for dynamic environments where data are scarce and updated parameter estimates are required on a regular basis. To illustrate the method it is used to monitor the unfolding mortality experience of the retired lives under an actual pension plan.

12.
Suppose we have a sample randomly drawn from one of a given number of distributions. We wish to select the distribution based on the optimal maximum likelihood procedure. In this note, various tight estimates are derived under general conditions for the probability of making the wrong selection. The estimates are also extended to the case of many exponential families, where the general conditions fail. Some of the estimates are illustrated by means of simulation. The practical use of the estimates is discussed.
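A minimal sketch of the selection rule itself (choose the candidate distribution with the largest log-likelihood of the sample); the candidate families and parameters are illustrative, and the paper's bounds on the wrong-selection probability are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
sample = rng.gamma(shape=2.0, scale=1.5, size=100)

# Candidate distributions (fully specified here for simplicity).
candidates = {
    "gamma(2, 1.5)": stats.gamma(a=2.0, scale=1.5),
    "expon(mean 3)": stats.expon(scale=3.0),
    "lognorm(1)":    stats.lognorm(s=1.0),
}

# Maximum likelihood selection: pick the candidate with the largest
# total log-likelihood of the observed sample.
loglik = {name: dist.logpdf(sample).sum() for name, dist in candidates.items()}
best = max(loglik, key=loglik.get)
print(loglik, best)
```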

13.
G. Heinrich, U. Jensen, Metrika, 1995, 42(1): 49–65
Bivariate lifetime distributions are considered which describe physically motivated dependencies like those proposed by Freund (1961) and Marshall and Olkin (1967a). Such distributions arise in reliability problems with two-component systems. Generalizations of some previous models are investigated and the maximum likelihood estimates for a combined bivariate exponential distribution are given. The case of dependent random censorship is considered in connection with two-component series systems. Some simulations show how censorship affects the parameter estimates.
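A short sketch of simulating the Marshall and Olkin (1967a) model via its fatal-shock construction, for intuition about the dependence structure; it does not cover the combined model or the censoring scheme studied in the paper, and the rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def marshall_olkin(lam1, lam2, lam12, size):
    """Simulate the Marshall-Olkin bivariate exponential via its
    fatal-shock construction: X = min(Z1, Z12), Y = min(Z2, Z12)."""
    z1 = rng.exponential(1.0 / lam1, size)
    z2 = rng.exponential(1.0 / lam2, size)
    z12 = rng.exponential(1.0 / lam12, size)
    return np.minimum(z1, z12), np.minimum(z2, z12)

x, y = marshall_olkin(1.0, 0.5, 0.3, 100_000)
print(np.mean(x == y))   # positive mass on {X = Y}, the model's signature
```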

14.
This paper deals with specification, prediction, and the length of the interval between observations in an ARMA model. An AR(1) model is found to be suitable for a specific monthly time series. From this series we construct two types of quarterly series and derive the corresponding ARMA models. The theoretical parameter values of the quarterly models, given the monthly model, are compared with the values found empirically when no monthly series exists. Using the variance of the prediction error, we assess the performance of all specifications in predicting up to one year ahead. We show that while the monthly model performs best in theory, the values computed directly from the estimates show, in our empirical example, that the quarterly models are preferable in most cases when predicting more than one quarter ahead.
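A minimal sketch contrasting two common ways of turning a monthly AR(1) into a quarterly series, under the assumption that these are the two constructions meant (point-in-time skip-sampling and temporal aggregation); the AR parameter and sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
phi, n = 0.7, 120_000
y = np.zeros(n)
for t in range(1, n):                       # simulate a monthly AR(1)
    y[t] = phi * y[t - 1] + rng.standard_normal()

def lag1_autocorr(x):
    x = x - x.mean()
    return np.dot(x[1:], x[:-1]) / np.dot(x, x)

skip = y[::3]                               # point-in-time quarterly series
agg = y.reshape(-1, 3).sum(axis=1)          # aggregated (summed) quarterly series
print(lag1_autocorr(skip), phi**3)          # skip-sampling stays AR(1) with phi^3
print(lag1_autocorr(agg))                   # aggregation yields an ARMA(1,1)-type series
```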

15.
A neglected aspect of the otherwise fairly well developed Bayesian analysis of cointegration is point estimation of the cointegration space. It is pointed out here that, due to the well-known non-identification of the cointegration vectors, the parameter space is not Euclidean and the loss functions underlying the conventional Bayes estimators are therefore questionable. We present a Bayes estimator of the cointegration space which takes the curved geometry of the parameter space into account. This estimate has the interpretation of being the posterior mean cointegration space and is invariant to the ordering of the time series, a property not shared by many of the Bayes estimators in the cointegration literature. An overall measure of cointegration space uncertainty is also proposed. Australian interest rate data are used for illustration. A small simulation study shows that the new Bayes estimator compares favorably to the maximum likelihood estimator.

16.
Previous work on characterising the distribution of forecast errors in time series models by statistics such as the asymptotic mean square error has assumed that observations used in estimating parameters are statistically independent of those used to construct the forecasts themselves. This assumption is quite unrealistic in practical situations and the present paper is intended to tackle the question of how the statistical dependence between the parameter estimates and the final period observations used to generate forecasts affects the sampling distribution of the forecast errors. We concentrate on the first-order autoregression and, for this model, show that the conditional distribution of forecast errors given the final period observation is skewed towards the origin and that this skewness is accentuated in the majority of cases by the statistical dependence between the parameter estimates and the final period observation.

17.
This paper addresses the problem of data errors in discrete variables. When data errors occur, the observed variable is a misclassified version of the variable of interest, whose distribution is not identified. Inferential problems caused by data errors have been conceptualized through convolution and mixture models. This paper introduces the direct misclassification approach. The approach is based on the observation that in the presence of classification errors, the relation between the distribution of the 'true' but unobservable variable and its misclassified representation is given by a linear system of simultaneous equations, in which the coefficient matrix is the matrix of misclassification probabilities. Formalizing the problem in these terms allows one to incorporate any prior information into the analysis through sets of restrictions on the matrix of misclassification probabilities. Such information can have strong identifying power. The direct misclassification approach fully exploits it to derive identification regions for any real functional of the distribution of interest. A method for estimating the identification regions and constructing their confidence sets is given, and illustrated with an empirical analysis of the distribution of pension plan types using data from the Health and Retirement Study.
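A minimal sketch of the linear system at the heart of the approach, in the special case where the misclassification matrix is fully known (so the distribution is point-identified); with only set restrictions on the matrix, as in the paper, one obtains identification regions instead. The numbers are illustrative.

```python
import numpy as np

# Misclassification matrix: Pi[j, i] = P(observe category j | true category i).
# Illustrative 3-category example with fully known error probabilities.
Pi = np.array([
    [0.90, 0.05, 0.02],
    [0.08, 0.90, 0.08],
    [0.02, 0.05, 0.90],
])
p_obs = np.array([0.35, 0.45, 0.20])   # observed (misclassified) distribution

# The relation p_obs = Pi @ p_true is a linear system of simultaneous
# equations; with Pi fully known it can be inverted directly.
p_true = np.linalg.solve(Pi, p_obs)
print(p_true, p_true.sum())
```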

18.
A large sample of twins was used to examine whether conventional estimates of the return to schooling in Sweden are biased because ability is omitted from the earnings–schooling relationship. Ignoring measurement error, the results indicate that omitting ability from the earnings–schooling relationship leads to estimates that are positively biased. However, reasonable estimates of the measurement-error-adjusted returns lie both above and below the unadjusted estimates, showing that the results depend crucially on a parameter not known at this time. An estimate of the reliability ratio was therefore obtained using two measures of educational attainment. With this estimate of the reliability ratio, the measurement-error-adjusted estimate of the return to schooling in the sample of identical twins indicates that there is at most a slight ability bias in the conventional estimates of the return to schooling. The fundamental assumption of this kind of study is that within-pair differences in educational attainment are randomly determined. This assumption was also tested, but no strong evidence to reject it was found.

19.
In practice, inventory decisions depend heavily on demand forecasts, but the literature typically assumes that demand distributions are known. This means that estimates are substituted directly for the unknown parameters, leading to insufficient safety stocks, stock-outs, low service, and high costs. We propose a framework for addressing this estimation uncertainty that is applicable to any inventory model, demand distribution, and parameter estimator. The estimation errors are modeled and a predictive lead time demand distribution obtained, which is then substituted into the inventory model. We illustrate this framework for several different demand models. When the estimates are based on ten observations, the relative savings are typically between 10% and 30% for mean-stationary demand. However, the savings are larger when the estimates are based on fewer observations, when backorders are costlier, or when the lead time is longer. In the presence of a trend, the savings are between 50% and 80% for several scenarios.
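A minimal sketch of the plug-in versus predictive-distribution contrast for a single-period, mean-stationary normal demand with a 95% service target; this only illustrates the idea, not the paper's general framework, and all values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
history = rng.normal(100, 20, size=10)       # ten demand observations
mu_hat, s_hat = history.mean(), history.std(ddof=1)
service = 0.95
n = len(history)

# Plug-in: treat the estimates as the true parameters.
plug_in = mu_hat + stats.norm.ppf(service) * s_hat

# Predictive: a new normal demand given the sample is t-distributed with
# n-1 degrees of freedom and scale s*sqrt(1 + 1/n), widening the safety stock.
predictive = mu_hat + stats.t.ppf(service, df=n - 1) * s_hat * np.sqrt(1 + 1 / n)
print(plug_in, predictive)
```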

20.
Most genetic studies recruit high-risk families, and the discoveries are based on non-randomly selected groups. We must consider the consequences of this ascertainment process in order to apply the results of genetic research to the general population. In addition, in epidemiological studies, binary responses are often misclassified. We propose a binary logistic regression model that provides a novel and flexible way to correct for misclassification in binary responses while taking the ascertainment issues into account. A hierarchical Bayesian analysis using a Markov chain Monte Carlo method is carried out to investigate the effect of covariates on disease status. The focus of this paper is to study the effect of classification errors and non-random ascertainment on the estimates of the model parameters. An extensive simulation study indicates that the proposed model results in a substantial improvement of the estimates. Two data sets are revisited to illustrate the methodology.
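A hedged sketch of a likelihood that corrects a logistic regression for misclassified binary responses with assumed known sensitivity and specificity, fitted here by maximum likelihood rather than the paper's hierarchical Bayesian MCMC, and without the ascertainment correction; all values are illustrative.

```python
import numpy as np
from scipy import optimize, special

rng = np.random.default_rng(9)
SENS, SPEC = 0.9, 0.85                       # assumed known error rates
n = 1000
x = rng.normal(size=n)
p_true = special.expit(-0.5 + 1.2 * x)       # true disease probability
y_true = rng.random(n) < p_true
# Misclassify: miss cases with prob 1-SENS, false-positive with prob 1-SPEC.
flip = np.where(y_true, rng.random(n) > SENS, rng.random(n) > SPEC)
y_obs = np.where(flip, ~y_true, y_true).astype(float)

def negloglik(beta):
    p = special.expit(beta[0] + beta[1] * x)
    p_obs = SENS * p + (1 - SPEC) * (1 - p)  # misclassified-response model
    return -np.sum(y_obs * np.log(p_obs) + (1 - y_obs) * np.log(1 - p_obs))

print(optimize.minimize(negloglik, x0=[0.0, 0.0]).x)  # roughly (-0.5, 1.2)
```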

