Similar Documents
20 similar documents found.
1.
Quantile estimation is important for a wide range of applications. While point estimates based on one or two order statistics are common, constructing confidence intervals around them is a more difficult problem. This paper has two goals. First, it surveys the numerous distribution-free methods for constructing approximate confidence intervals for quantiles. These techniques can be divided roughly into four categories: pivotal-quantity, resampling, interpolation, and empirical likelihood methods. Second, a method based on the pivotal quantity that has received limited attention in the past is extended. Comprehensive simulation studies are used to compare performance across methods. The proposed method is simple and performs similarly to linear interpolation methods and a smoothed empirical likelihood method. While it yields slightly wider intervals, it can be computed for more extreme quantiles even when there are few observations.
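The classical distribution-free construction surveyed here can be sketched in a few lines: for a sample of size n, the pair of order statistics (X_(r), X_(s)) covers the p-th quantile with an exact probability given by the binomial distribution, so one can search for the shortest pair reaching the nominal level. A minimal sketch using scipy; the paper's extended pivotal-quantity method is not reproduced.

```python
# Distribution-free CI for the p-th quantile from order statistics:
# P(X_(r) <= q_p < X_(s)) = P(r <= B <= s-1), where B ~ Binomial(n, p).
import numpy as np
from scipy import stats

def quantile_ci(x, p, level=0.95):
    """Shortest order-statistic interval with exact coverage >= level."""
    x = np.sort(np.asarray(x))
    n = len(x)
    best = None
    for r in range(1, n + 1):
        for s in range(r + 1, n + 1):
            cover = stats.binom.cdf(s - 1, n, p) - stats.binom.cdf(r - 1, n, p)
            if cover >= level and (best is None or s - r < best[1] - best[0]):
                best = (r, s)
    if best is None:
        raise ValueError("sample too small for the requested level")
    r, s = best
    return x[r - 1], x[s - 1]   # 1-based order statistics -> 0-based array

rng = np.random.default_rng(1)
print(quantile_ci(rng.exponential(size=100), p=0.9))
```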

2.
In this article, we construct two likelihood-based confidence intervals (CIs) for a binomial proportion parameter using a double-sampling scheme with misclassified binary data. We use an easy-to-implement closed-form algorithm to obtain maximum likelihood estimators of the model parameters by maximizing the full-likelihood function. The two CIs are a naïve Wald interval and a modified Wald interval. Using simulations, we assess and compare the coverage probabilities and average widths of the two CIs. We conclude that the modified Wald interval, unlike the naïve Wald interval, produces close-to-nominal CIs across various simulation settings and is therefore preferred in practice. Using the expressions derived, we also illustrate the two CIs on a real-data example.
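For reference, the naïve Wald interval named above has the familiar closed form p̂ ± z·sqrt(p̂(1−p̂)/n). The sketch below applies it to an ordinary error-free binomial sample; the article's double-sampling estimator, which corrects p̂ for misclassification via the full likelihood, is not reproduced.

```python
# Naive Wald CI for a binomial proportion: p_hat +/- z * sqrt(p_hat(1-p_hat)/n).
import math
from scipy import stats

def wald_ci(successes, n, level=0.95):
    p_hat = successes / n
    z = stats.norm.ppf(0.5 + level / 2)            # e.g. 1.96 at the 95% level
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return max(0.0, p_hat - half), min(1.0, p_hat + half)

print(wald_ci(37, 120))
```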

3.
In impulse response analysis, estimation uncertainty is typically displayed by constructing bands around estimated impulse response functions. When the bands are based on the joint asymptotic distribution, possibly constructed with bootstrap methods in a frequentist framework, individual confidence intervals are often simply connected to obtain them. Such bands are known to be too narrow, with a joint coverage probability below the desired level. If instead the Wald statistic is used and the joint bootstrap distribution of the impulse response coefficient estimators is mapped into the band, the resulting band is typically rather conservative. It is argued that, by using the Bonferroni method, a band can often be obtained that is narrower than the Wald band.
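The Bonferroni band mentioned here can be read directly off bootstrap output: with H response horizons, pointwise percentile intervals at level 1 − γ/H have joint coverage of at least 1 − γ by the union bound. A minimal sketch assuming a (B × H) array of bootstrap impulse-response draws (a hypothetical input, not taken from the paper).

```python
# Bonferroni joint band over H horizons: pointwise percentile intervals at
# level 1 - gamma/H, so the union bound keeps joint coverage >= 1 - gamma.
import numpy as np

def bonferroni_band(draws, gamma=0.10):
    B, H = draws.shape                     # bootstrap draws x horizons
    a = gamma / H                          # Bonferroni-adjusted pointwise size
    lo = np.quantile(draws, a / 2, axis=0)
    hi = np.quantile(draws, 1 - a / 2, axis=0)
    return lo, hi

rng = np.random.default_rng(0)
lo, hi = bonferroni_band(rng.normal(size=(2000, 12)))   # toy draws
```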

4.
Item nonresponse in survey data can pose significant problems for social scientists carrying out statistical modeling with a large number of explanatory variables. A number of imputation methods exist, but many handle only univariate imputation or relatively simple cases of multivariate imputation, often assuming a monotone pattern of missingness. In this paper we evaluate a tree-based approach to multivariate imputation using real data from the 1970 British Cohort Study, known for its complex pattern of nonresponse. Within a simulation study, the performance of the tree-based approach is compared to mode imputation and a sequential regression-based approach.

5.
The method of generalized confidence intervals is proposed as an alternative for constructing confidence intervals for process capability indices under the one-way random model, for balanced as well as unbalanced data. The generalized lower confidence limits and the coverage probabilities for three commonly used capability indices are studied via simulation, separately for the balanced and unbalanced cases. Simulation results show that the generalized confidence interval procedure is quite satisfactory in both cases. Examples are provided to illustrate the results.
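The generalized confidence interval machinery works by simulating a generalized pivotal quantity (GPQ) and reading limits off its quantiles. The abstract does not give the capability-index GPQs, so the sketch below illustrates the idea on the textbook GPQ for a normal variance instead; the index GPQs combine several variance components in the same spirit.

```python
# Generalized confidence limits via a generalized pivotal quantity (GPQ).
# For a normal variance with observed s^2 and sample size n,
# R = (n - 1) * s^2 / W with W ~ chi2(n - 1) is a GPQ for sigma^2;
# quantiles of simulated R-draws give generalized confidence limits.
import numpy as np

def gpq_variance_ci(s2, n, level=0.95, n_draws=100_000, seed=None):
    rng = np.random.default_rng(seed)
    w = rng.chisquare(n - 1, size=n_draws)
    r = (n - 1) * s2 / w                   # simulated GPQ draws for sigma^2
    a = (1 - level) / 2
    return np.quantile(r, a), np.quantile(r, 1 - a)

print(gpq_variance_ci(s2=2.3, n=25))
```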

6.
Multiple imputation methods properly account for the uncertainty of missing data. One method for creating multiple imputations is predictive mean matching (PMM), a general-purpose method. Little is known about the performance of PMM in imputing non-normal semicontinuous data (skewed data with a point mass at a certain value and otherwise continuously distributed). We investigate the performance of PMM as well as dedicated methods for imputing semicontinuous data through simulation studies under univariate and multivariate missingness mechanisms, and on real-life datasets. We conclude that PMM performs at least as well as the investigated dedicated methods and, in contrast to the other methods, is the only one that yields plausible imputations and preserves the original data distributions.
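A bare-bones version of PMM for a single incomplete variable can be written in a few lines: regress the variable on its predictors among complete cases, then for each missing case sample an observed value from the k donors whose predicted means are closest. This is a hedged sketch of the matching step only; proper multiple imputation would also perturb the regression coefficients (e.g. by drawing them from their posterior), which is omitted here.

```python
# Predictive mean matching (single variable, single imputation), minimal sketch.
import numpy as np

def pmm_impute(y, X, k=5, seed=None):
    rng = np.random.default_rng(seed)
    obs = ~np.isnan(y)
    Xd = np.column_stack([np.ones(len(y)), X])         # add intercept
    beta, *_ = np.linalg.lstsq(Xd[obs], y[obs], rcond=None)
    pred = Xd @ beta                                   # predicted means, all cases
    y_imp = y.copy()
    for i in np.flatnonzero(~obs):
        donors = np.argsort(np.abs(pred[obs] - pred[i]))[:k]
        y_imp[i] = rng.choice(y[obs][donors])          # draw an observed value
    return y_imp

rng = np.random.default_rng(2)
x = rng.normal(size=200)
y = 1 + 2 * x + rng.gamma(1.0, size=200)               # skewed outcome
y[rng.random(200) < 0.3] = np.nan
completed = pmm_impute(y, x)
```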

7.
In this article, we demonstrate by simulation that rich imputation models for incomplete longitudinal datasets produce better-calibrated estimates, with reduced bias and higher coverage rates, without unduly deflating efficiency. We argue that including supplementary variables thought to be potential causes or correlates of missingness or of the outcomes in the imputation process may lead to better inferential results than simpler imputation models. The liberal use of such variables is recommended over the conservative strategy.

8.
In this review paper, we discuss the theoretical background of multiple imputation and describe how to build an imputation model and how to create proper imputations. We also present the rules for drawing repeated-imputation inferences. Three widely used multiple imputation methods, the propensity score method, the predictive model method, and the Markov chain Monte Carlo (MCMC) method, are presented and discussed.
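For concreteness, the repeated-imputation combining rules (Rubin's rules) for a scalar estimand pool the m completed-data estimates and combine within- and between-imputation variability as T = W̄ + (1 + 1/m)B. A minimal sketch:

```python
# Rubin's rules for pooling m completed-data analyses of a scalar estimand.
import numpy as np

def rubin_pool(estimates, variances):
    q = np.asarray(estimates, dtype=float)   # one estimate per imputed dataset
    u = np.asarray(variances, dtype=float)   # its squared standard error
    m = len(q)
    q_bar = q.mean()                         # pooled point estimate
    w = u.mean()                             # within-imputation variance
    b = q.var(ddof=1)                        # between-imputation variance
    t = w + (1 + 1 / m) * b                  # total variance
    return q_bar, t

est, var = rubin_pool([1.02, 0.97, 1.10, 1.05, 0.99],
                      [0.04, 0.05, 0.04, 0.05, 0.04])
```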

9.
This paper discusses the family of life distributions whose failure rate functions decrease initially until a change point and remain constant thereafter. It focuses on estimation of the change point of the failure rate function. While point estimation of this change point has been discussed by some authors, there is hardly any existing work on its interval estimation. In this paper, a method for constructing approximate confidence intervals for the change point is proposed, based on the number of failed test items at or before a fixed inspection time.

10.
This paper outlines a strategy to validate multiple imputation methods. Rubin's criteria for proper multiple imputation are the point of departure. We describe a simulation method that yields insight into various aspects of bias and efficiency of the imputation process. We propose a new method for creating incomplete data under a general Missing At Random (MAR) mechanism. Software implementing the validation strategy is available as a SAS/IML module. The method is applied to investigate the behavior of polytomous regression imputation for categorical data.
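The abstract does not spell out its MAR construction, but the general idea of creating incomplete data under MAR can be illustrated simply: make the missingness probability of y depend only on a fully observed covariate, here through a logistic model with hypothetical coefficients.

```python
# Generic MAR missingness generator: P(y missing) depends on observed x only.
import numpy as np

def make_mar(y, x, intercept=-1.0, slope=1.0, seed=None):
    rng = np.random.default_rng(seed)
    p_miss = 1 / (1 + np.exp(-(intercept + slope * x)))   # logistic in x
    y = y.astype(float).copy()
    y[rng.random(len(y)) < p_miss] = np.nan
    return y

rng = np.random.default_rng(3)
x = rng.normal(size=500)
y = 2 + x + rng.normal(size=500)
y_incomplete = make_mar(y, x, seed=3)
```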

11.
The use of confidence intervals (CIs) is strongly recommended in the 5th edition of the Publication Manual of the American Psychological Association. The CI for the binomial parameter π is customarily obtained using the Wald method, which relies on the normal approximation to the binomial distribution and the estimated standard error. The Wald CI has been shown to be unsatisfactory, and alternative CIs have been proposed, but this literature appears to have gone unnoticed by psychologists. Only one of these alternatives is dual to the conventional score test for π, thus meeting the requirements stated in the Publication Manual. Three examples illustrate the appropriate choice of a CI for π in the context of growing concern with good statistical practice.
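The alternative that is dual to the conventional score test is the Wilson (score) interval, obtained by inverting the score statistic rather than plugging p̂ into the standard error. A minimal sketch for comparison with the Wald interval:

```python
# Wilson (score) interval for a binomial proportion: the interval dual to
# the score test, with center and half-width shrunk toward 1/2.
import math
from scipy import stats

def wilson_ci(successes, n, level=0.95):
    z = stats.norm.ppf(0.5 + level / 2)
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

print(wilson_ci(3, 20))   # behaves sensibly even for small counts
```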

12.
It is well known that there is a large degree of uncertainty around Rogoff's consensus half-life of the real exchange rate. To obtain a more efficient estimator, we develop a system method that combines the Taylor rule and a standard exchange rate model to estimate half-lives. Further, we propose a median-unbiased estimator for the system method based on the generalized method of moments, with non-parametric grid bootstrap confidence intervals. Applying the method to the real exchange rates of 18 developed countries against the US dollar, we find that most half-life estimates from the single-equation method fall in the range of 3–5 years, with wide confidence intervals that extend to positive infinity. In contrast, the system method yields median-unbiased estimates that are typically shorter than 1 year, with much sharper 95% confidence intervals. Our Monte Carlo simulation results are consistent with the interpretation that the true half-lives are short, and that long half-life estimates from single-equation methods reflect the high degree of uncertainty of those methods.
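As a reference point for the magnitudes quoted, the half-life implied by an AR(1) persistence parameter ρ is h = ln(0.5)/ln(ρ); the paper's system GMM estimator and grid bootstrap intervals are not reproduced here.

```python
# Half-life of a deviation under AR(1) persistence rho: h = ln(0.5) / ln(rho).
import math

def half_life(rho):
    if not 0 < rho < 1:
        raise ValueError("half-life is defined for 0 < rho < 1")
    return math.log(0.5) / math.log(rho)

print(half_life(0.85))   # about 4.3 periods; with annual data, ~4.3 years
```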

13.
For the slope parameter of the classical errors-in-variables model, existing interval estimators of finite length have confidence level equal to zero because of the Gleser–Hwang effect. Especially when the reliability ratio is low and the sample size is small, the effect is so serious that existing confidence intervals exhibit very liberal coverage and unacceptable lengths. In this paper, we obtain two new fiducial intervals for the slope. One is based on a fiducial generalized pivotal quantity, and we prove that this interval has the correct asymptotic coverage. The other is based on the method of the generalized fiducial distribution. We also construct these two fiducial intervals for the other parameters of interest in the classical errors-in-variables model and extend them to a hybrid model. We then compare the two fiducial intervals with existing intervals in terms of empirical coverage and average length. Simulation results show that the two proposed fiducial intervals have better frequentist performance. Finally, a real-data example illustrates our approaches.

14.
This paper considers the problem of constructing confidence sets for the date of a single break in a linear time series regression. We establish analytically and by small-sample simulation that the current standard method in econometrics for constructing such confidence intervals has a coverage rate far below nominal levels when breaks are of moderate magnitude. Given that breaks of moderate magnitude are a theoretically and empirically relevant phenomenon, we proceed to develop an appropriate alternative. We suggest constructing confidence sets by inverting a sequence of tests, each of which maintains a specific break date under the null hypothesis and rejects when a break occurs elsewhere. By inverting a certain variant of a locally best invariant test, we ensure that the asymptotic critical value does not depend on the maintained break date. A valid confidence set can hence be obtained by assessing which of the sequence of test statistics exceeds a single number.
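The inversion principle can be sketched generically: compute, for every maintained break date τ, a statistic testing "the break is at τ", and collect the dates not rejected at a single critical value. The sketch below uses a simple likelihood-ratio-style statistic in a mean-shift model as a stand-in; the paper's locally best invariant test, whose critical value is invariant to the maintained date, is not reproduced.

```python
# Confidence set for a single break date by test inversion: keep every
# maintained date whose likelihood-ratio-type statistic stays below cv.
import numpy as np

def ssr_at(y, tau):
    # SSR with separate means before and after a break maintained at tau.
    a, b = y[:tau], y[tau:]
    return ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()

def confidence_set(y, cv, trim=5):
    # Include tau iff its fit is within cv of the best-fitting break date.
    n = len(y)
    dates = range(trim, n - trim)
    ssr = {tau: ssr_at(y, tau) for tau in dates}
    best = min(ssr.values())
    return [tau for tau in dates if ssr[tau] - best <= cv]

rng = np.random.default_rng(4)
y = np.concatenate([rng.normal(0, 1, 60), rng.normal(1, 1, 60)])
print(confidence_set(y, cv=7.0))   # cv is a placeholder, not a tabulated value
```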

15.
By closely examining the examples provided in Nielsen (2003), this paper further explores the relationship between self-efficiency (Meng, 1994) and the validity of Rubin's multiple imputation (RMI) variance combining rule. The RMI variance combining rule rests on the common assumption, and intuition, that the efficiency of our estimators decreases when we have less data. There are, however, estimation procedures that do the opposite, producing more efficient estimators from less data. Self-efficiency is a theoretical formulation for excluding such procedures. When a user, typically unaware of the hidden self-inefficiency of his choice, adopts a self-inefficient complete-data estimation procedure for an RMI inference, the theoretical validity of the inference becomes a complex issue, as we demonstrate. We also propose a diagnostic tool for assessing potential self-inefficiency and the bias in the RMI variance estimator at the outset of RMI inference, by constructing a convenient proxy to the RMI point estimator.

16.
Empirical Bayes methods of estimating the local false discovery rate (LFDR) by maximum likelihood estimation (MLE), originally developed for large numbers of comparisons, are applied here to a single comparison. Specifically, when a lower bound on the mixing proportion of true null hypotheses is assumed, the LFDR MLE can yield reliable hypothesis tests and confidence intervals given as few as one comparison. Simulations indicate that constrained LFDR MLEs perform markedly better than conventional methods, both in testing and in confidence intervals, for high values of the mixing proportion, but not for low values. (A decision-theoretic interpretation of the confidence distribution made those comparisons possible.) In conclusion, the constrained LFDR estimators and the resulting effect-size interval estimates are not only effective multiple comparison procedures but might also replace p-values and confidence intervals more generally. The new methodology is illustrated with the analysis of proteomics data.

17.
The paper deals with some new indices for ordinal data arising from sample surveys. Their aim is to measure the degree of concentration toward the “positive” or “negative” answers to a given question. The properties of these indices are examined. Moreover, methods for constructing confidence limits for the indices are discussed and their performance is evaluated through an extensive simulation study. Finally, the indices and their confidence intervals are computed for an example with real data.

18.
ARCH and GARCH models are widely used to model financial market volatility in risk management applications. Considering a GARCH model with heavy-tailed innovations, we characterize the limiting distribution of an estimator of the conditional value-at-risk (VaR), which corresponds to the extremal quantile of the conditional distribution of the GARCH process. We propose two methods, the normal approximation method and the data tilting method, for constructing confidence intervals for the conditional VaR estimator, and assess their accuracy in simulation studies. Finally, we apply the proposed approach to an energy market data set.
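Of the two interval methods, the normal-approximation route is the easier to sketch: run the GARCH(1,1) variance recursion and set VaR_{t+1} = μ + σ_{t+1}·z_α. The parameter values below are hypothetical stand-ins; in practice they are estimated (e.g. by quasi-maximum likelihood), and the paper's heavy-tailed extremal-quantile refinement is not reproduced.

```python
# Next-period conditional VaR from a GARCH(1,1) filter, normal approximation:
# sigma^2_{t+1} = omega + alpha1 * (r_t - mu)^2 + beta1 * sigma^2_t.
import numpy as np
from scipy import stats

def garch_var(returns, omega, alpha1, beta1, mu=0.0, level=0.01):
    r = np.asarray(returns, dtype=float)
    s2 = np.empty(len(r) + 1)
    s2[0] = r.var()                      # initialize at the sample variance
    for t in range(len(r)):
        s2[t + 1] = omega + alpha1 * (r[t] - mu) ** 2 + beta1 * s2[t]
    z = stats.norm.ppf(level)            # e.g. -2.326 at the 1% level
    return mu + np.sqrt(s2[-1]) * z      # one-step-ahead conditional VaR

rng = np.random.default_rng(7)
print(garch_var(rng.normal(scale=0.01, size=500), 1e-6, 0.08, 0.90))
```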

19.
This paper provides a new approach to constructing confidence intervals for nonparametric drift and diffusion functions in the continuous-time diffusion model via empirical likelihood (EL). The log EL ratios are constructed through the estimating equations satisfied by the local linear estimators. Limit theories are developed under an increasing time span and shrinking observation intervals. The results apply to both stationary and nonstationary recurrent diffusion processes. Simulations show that, for both drift and diffusion functions, the new procedure performs remarkably well in finite samples and clearly dominates the conventional method of constructing confidence intervals based on asymptotic normality. An empirical example illustrates the usefulness of the proposed method.

20.
Several confidence intervals for the regression estimator are surveyed. A Monte Carlo experiment, based on the Neter and Loebbecke (1975) populations, gives estimated coverages and lengths for the different confidence intervals. One interval is exact under the assumption of multivariate normality; it gives longer intervals (and hence better coverage) than the interval based on a popular variance estimator. An interval due to Roberts (1970) is much too long. Jackknifing gives robust intervals. Rules of thumb for practitioners are given.
