Related Articles
A total of 20 related articles were found (search time: 984 ms).
1.
For the slope parameter of the classical errors-in-variables model, existing interval estimators with finite length have confidence level equal to zero because of the Gleser–Hwang effect. Especially when the reliability ratio is low and the sample size is small, the Gleser–Hwang effect is so serious that it leads to very liberal coverage and unacceptable lengths for the existing confidence intervals. In this paper, we obtain two new fiducial intervals for the slope. One is based on a fiducial generalized pivotal quantity, and we prove that this interval has the correct asymptotic coverage. The other fiducial interval is based on the method of the generalized fiducial distribution. We also construct these two fiducial intervals for the other parameters of interest of the classical errors-in-variables model and introduce these intervals for a hybrid model. Then, we compare the two fiducial intervals with the existing intervals in terms of empirical coverage and average length. Simulation results show that the two proposed fiducial intervals have better frequentist performance. Finally, we provide a real data example to illustrate our approaches.

2.
Consider a linear regression model and suppose that our aim is to find a confidence interval for a specified linear combination of the regression parameters. In practice, it is common to perform a Durbin–Watson pretest of the null hypothesis of zero first-order autocorrelation of the random errors against the alternative hypothesis of positive first-order autocorrelation. If this null hypothesis is accepted, the confidence interval centered on the ordinary least squares estimator is used; otherwise, the confidence interval centered on the feasible generalized least squares estimator is used. For any given design matrix and parameter of interest, we compare the confidence interval resulting from this two-stage procedure with the confidence interval that is always centered on the feasible generalized least squares estimator, as follows. First, we compare the coverage probability functions of these confidence intervals. Second, we compute the scaled expected length of the confidence interval resulting from the two-stage procedure, where the scaling is with respect to the expected length of the confidence interval centered on the feasible generalized least squares estimator with the same minimum coverage probability. These comparisons are used to choose the better confidence interval, prior to any examination of the observed response vector.
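To make the two-stage construction concrete, here is a minimal sketch in Python, assuming statsmodels; the Durbin–Watson cutoff dw_upper is a hypothetical placeholder (in practice it comes from the tabulated bounds for the given sample size and number of regressors), and GLSAR's iterative fit is used as one convenient feasible-GLS implementation rather than the authors' exact estimator.

```python
# Minimal sketch of a two-stage interval: pretest for AR(1) errors with the
# Durbin-Watson statistic, then report either the OLS-based or a feasible-GLS-
# based confidence interval for the regression coefficients.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

def two_stage_ci(y, X, dw_upper=1.65, alpha=0.05):
    ols = sm.OLS(y, X).fit()
    if durbin_watson(ols.resid) >= dw_upper:
        # pretest does not reject zero autocorrelation: interval centred on OLS
        return ols.conf_int(alpha=alpha)
    # otherwise: feasible GLS under AR(1) errors (iterative GLSAR used here as
    # one convenient FGLS implementation)
    fgls = sm.GLSAR(y, X, rho=1).iterative_fit(maxiter=10)
    return fgls.conf_int(alpha=alpha)

# Example usage with simulated AR(1) errors
rng = np.random.default_rng(0)
n = 100
X = sm.add_constant(rng.normal(size=(n, 1)))
e = np.zeros(n)
for t in range(1, n):               # AR(1) errors with rho = 0.6
    e[t] = 0.6 * e[t - 1] + rng.normal()
y = X @ np.array([1.0, 2.0]) + e
print(two_stage_ci(y, X))
```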

3.
Receiver operating characteristic (ROC) curves are widely used as a measure of accuracy of diagnostic tests and can be summarised using the area under the ROC curve (AUC). Often, it is useful to construct a confidence interval for the AUC; however, because there are a number of proposed methods for estimating the variance of the AUC, there are many different resulting methods for constructing these intervals. In this article, we compare different methods of constructing Wald-type confidence intervals in the presence of missing data where the missingness mechanism is ignorable. We find that constructing confidence intervals using multiple imputation based on logistic regression gives the most robust coverage probability, and the choice of confidence interval method is then less important. However, when the missingness rate is less severe (e.g. less than 70%), we recommend using Newcombe's Wald method for constructing confidence intervals, along with multiple imputation using predictive mean matching.
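For orientation, a complete-data sketch of one Wald-type interval for the AUC is given below; it uses the Mann–Whitney estimate with the classical Hanley–McNeil variance, which is an illustration only and is neither Newcombe's Wald variant nor the multiple-imputation procedure recommended in the abstract.

```python
# Wald-type interval for the AUC from complete data: Mann-Whitney estimate plus
# the Hanley-McNeil (1982) variance approximation. Illustrative only.
import numpy as np
from scipy.stats import norm

def auc_wald_ci(cases, controls, alpha=0.05):
    cases, controls = np.asarray(cases, float), np.asarray(controls, float)
    m, n = len(cases), len(controls)
    # Mann-Whitney estimate of P(case score > control score); ties count 1/2
    diff = cases[:, None] - controls[None, :]
    A = ((diff > 0) + 0.5 * (diff == 0)).mean()
    q1, q2 = A / (2 - A), 2 * A**2 / (1 + A)
    var = (A * (1 - A) + (m - 1) * (q1 - A**2) + (n - 1) * (q2 - A**2)) / (m * n)
    half = norm.ppf(1 - alpha / 2) * np.sqrt(var)
    return A, (max(0.0, A - half), min(1.0, A + half))

rng = np.random.default_rng(1)
print(auc_wald_ci(rng.normal(1.0, 1, 80), rng.normal(0.0, 1, 120)))
```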

4.
Standard jackknife confidence intervals for a quantile Q_y(β) are usually preferred to confidence intervals based on analytical variance estimators because of their operational simplicity. However, the standard jackknife confidence intervals can give undesirable coverage probabilities for small sample sizes and large or small values of β. In this paper, confidence intervals for a population quantile based on several existing quantile estimators are derived. These intervals are based on an approximation for the cumulative distribution function of a studentized quantile estimator. The confidence intervals are evaluated empirically using real data and some applications are illustrated. Results derived from simulation studies show that the proposed confidence intervals are narrower than confidence intervals based on the standard jackknife technique, which assumes a normal approximation. The proposed confidence intervals also achieve coverage probabilities above their nominal level. This study indicates that the proposed method can be an alternative to asymptotic confidence intervals, which can be difficult to obtain in practice, and to standard jackknife confidence intervals, which can have poor coverage probabilities and give wider intervals.
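As a reference point, the following sketch implements the benchmark the abstract argues against: the standard jackknife confidence interval for a quantile, with a delete-one variance and a normal approximation (the paper's studentized-approximation intervals are not reproduced here).

```python
# Standard jackknife confidence interval for a quantile: delete-one quantile
# estimates, jackknife variance, and a normal-approximation interval.
import numpy as np
from scipy.stats import norm

def jackknife_quantile_ci(y, beta=0.5, alpha=0.05):
    y = np.asarray(y, float)
    n = len(y)
    q_hat = np.quantile(y, beta)
    # delete-one quantile estimates
    loo = np.array([np.quantile(np.delete(y, i), beta) for i in range(n)])
    var_jack = (n - 1) / n * np.sum((loo - loo.mean()) ** 2)
    half = norm.ppf(1 - alpha / 2) * np.sqrt(var_jack)
    return q_hat, (q_hat - half, q_hat + half)

rng = np.random.default_rng(2)
print(jackknife_quantile_ci(rng.exponential(size=200), beta=0.75))
```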

5.
We develop an iterative and efficient information-theoretic estimator for forecasting interval-valued data, and use our estimator to forecast the S&P 500 returns up to five days ahead using moving windows. Our forecasts are based on 13 years of data. We show that our estimator is superior to its competitors under all of the common criteria that are used to evaluate forecasts of interval data. Our approach differs from other methods that are used to forecast interval data in two major ways. First, rather than applying the more traditional methods that use only certain moments of the intervals in the estimation process, our estimator uses the complete sample information. Second, our method simultaneously selects the model (or models) and infers the model's parameters. It is an iterative approach that imposes minimal structure and statistical assumptions.

6.
In frequentist inference, we commonly use a single point (point estimator) or an interval (confidence interval/"interval estimator") to estimate a parameter of interest. A very simple question is: can we also use a distribution function (a "distribution estimator") to estimate a parameter of interest in frequentist inference, in the style of a Bayesian posterior? The answer is affirmative, and the confidence distribution is a natural choice of such a "distribution estimator". The concept of a confidence distribution has a long history, and its interpretation has long been fused with fiducial inference. Historically, it has been misconstrued as a fiducial concept and has not been fully developed in the frequentist framework. In recent years, the confidence distribution has attracted a surge of renewed attention, and several developments have highlighted its promising potential as an effective inferential tool. This article reviews recent developments of confidence distributions, along with a modern definition and interpretation of the concept. It includes distributional inference based on confidence distributions and its extensions, optimality issues and their applications. Based on the new developments, the concept of a confidence distribution subsumes and unifies a wide range of examples, from regular parametric (fiducial distribution) examples to bootstrap distributions, significance (p-value) functions, normalized likelihood functions and, in some cases, Bayesian priors and posteriors. The discussion is entirely within the school of frequentist inference, with emphasis on applications providing useful statistical inference tools for problems where frequentist methods with good properties were previously unavailable or could not be easily obtained. Although it also draws attention to some of the differences and similarities among frequentist, fiducial and Bayesian approaches, the review is not intended to reopen the philosophical debate that has lasted more than two hundred years. On the contrary, it is hoped that the article will help bridge the gaps between these different statistical procedures.
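A standard textbook illustration (not drawn from this article) of a confidence distribution, for a normal mean μ with known σ based on a sample mean x̄_n of size n:

```latex
% Confidence distribution for a normal mean \mu with known \sigma, based on the
% sample mean \bar{x}_n of a sample of size n (textbook example).
\[
  H_n(\mu) \;=\; \Phi\!\left(\frac{\sqrt{n}\,(\mu - \bar{x}_n)}{\sigma}\right).
\]
% For each fixed sample, H_n(\cdot) is a distribution function on the parameter
% space, and its quantiles reproduce the usual confidence limits:
% (H_n^{-1}(\alpha/2),\, H_n^{-1}(1-\alpha/2)) is the standard level-(1-\alpha)
% interval \bar{x}_n \pm z_{1-\alpha/2}\,\sigma/\sqrt{n}.
```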

7.
A nomogram for confidence intervals and exceedance probabilities.
In this paper two problems are considered regarding the probability β that an observation on a normally (μ, σ²)-distributed random variable exceeds a given value W. If μ and σ² are unknown, the two problems are as follows:
1) if W is given, to determine a confidence interval for β, and
2) if β is given, to determine a confidence interval for W.
For these two essentially equivalent problems, graphs are given from which the confidence intervals can be determined. The graphs are based on an approximation for the distribution of x̄ + ks.
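For reference, one standard exact formulation of this exceedance-probability problem is given below; the nomogram itself rests on an approximation to the distribution of x̄ + ks rather than on this noncentral-t form.

```latex
% Exceedance probability for a normal observation and the pivot behind
% intervals of this type (standard exact formulation, for background).
\[
  \beta \;=\; P(X > W) \;=\; 1 - \Phi\!\left(\frac{W-\mu}{\sigma}\right)
  \quad\Longleftrightarrow\quad \frac{\mu - W}{\sigma} \;=\; z_{\beta},
\]
\[
  T \;=\; \frac{\sqrt{n}\,(\bar{x} - W)}{s} \;\sim\; t_{n-1}(\delta),
  \qquad \delta \;=\; \sqrt{n}\,\frac{\mu - W}{\sigma} \;=\; \sqrt{n}\,z_{\beta},
\]
% so a confidence interval for \beta (with W given), or for W (with \beta
% given), follows by inverting the observed value of T in the noncentral-t
% family.
```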

8.
A short t of a one-dimensional probability distribution is defined to be an interval which has probability at least t and minimal length. The length of a short t and its obvious estimator are significant measures of scale of a distribution and of the corresponding random sample, respectively. In this note a non-parametric asymptotic confidence interval for the length of the short t (uniqueness is assumed) is established in the model of random censorship from the right. The estimator of the length of the short t is based on the product-limit (PL) estimator of the unknown distribution function. The proof of the result mainly follows from an appropriate combination of the Glivenko-Cantelli theorem and the functional central limit theorem for the PL estimator.
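The following is a minimal sketch of the plug-in estimator of the length of a short t for an uncensored sample, using the empirical distribution; the note itself works with the product-limit estimator under right censoring, which is not reproduced here.

```python
# Plug-in estimate of the length of a "short t" (shortest interval carrying
# probability at least t) from an uncensored sample.
import numpy as np

def short_t_length(x, t=0.5):
    x = np.sort(np.asarray(x, float))
    n = len(x)
    k = int(np.ceil(t * n))   # smallest number of points with empirical mass >= t
    if k < 1 or k > n:
        raise ValueError("t must lie in (0, 1]")
    # minimal length over all windows of k consecutive order statistics
    return np.min(x[k - 1:] - x[: n - k + 1])

rng = np.random.default_rng(3)
sample = rng.normal(size=1000)
# For N(0,1) and t = 0.5 the true short-t length is about 2 * 0.6745 = 1.35
print(short_t_length(sample, t=0.5))
```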

9.
In this paper, we study the asymptotic properties of the simulation extrapolation (SIMEX) based variance estimation proposed by Wang et al. (J R Stat Soc Series B 71:425–445, 2009). We first investigate the asymptotic normality of the parameter estimator for a general parametric variance function and of the local linear estimator for a nonparametric variance function when permutation SIMEX (PSIMEX) is used. The asymptotically optimal bandwidth selection with respect to the approximate mean integrated squared error (AMISE) for the nonparametric estimator is also studied. We finally discuss constructing confidence intervals/bands for the parameter/function of interest. Rather than applying the asymptotic results so that a normal approximation can be used, we recommend a nonparametric Monte Carlo algorithm that avoids estimating the asymptotic variance of the estimator. Simulation studies are carried out for illustration.

10.
Salima El Kolei, Metrika, 2013, 76(8): 1031-1081
We study a new parametric approach for particular hidden stochastic models. This method is based on contrast minimization and deconvolution and can be applied, for example, to ecological and financial state-space models. After proving consistency and asymptotic normality of the estimator, which lead to asymptotic confidence intervals, we provide a thorough numerical study that compares most of the classical methods used in practice (the quasi-maximum likelihood estimator, the simulated expectation-maximization likelihood estimator and Bayesian estimators) for estimating the stochastic volatility model. We show that our estimator clearly outperforms the maximum likelihood estimator in terms of computing time, and also most of the other methods. We also show that this contrast method is the most robust with respect to non-Gaussianity of the errors and does not need any tuning parameter.

11.
It is well known that there is a large degree of uncertainty around Rogoff's consensus half-life of the real exchange rate. To obtain a more efficient estimator, we develop a system method that combines the Taylor rule and a standard exchange rate model to estimate half-lives. Further, we propose a median unbiased estimator for the system method based on the generalized method of moments with non-parametric grid bootstrap confidence intervals. Applying the method to the real exchange rates of 18 developed countries against the US dollar, we find that most half-life estimates from the single-equation method fall in the range of 3–5 years, with wide confidence intervals that extend to positive infinity. In contrast, the system method yields median-unbiased estimates that are typically shorter than 1 year, with much sharper 95% confidence intervals. Our Monte Carlo simulation results are consistent with the interpretation that the true half-lives are short, and that the long half-life estimates from single-equation methods are caused by the high degree of uncertainty of those methods.

12.
This paper deals with the estimation of P[Y < X] when X and Y are two independent generalized exponential random variables with different shape parameters but the same scale parameter. The maximum likelihood estimator and its asymptotic distribution are obtained. The asymptotic distribution is used to construct an asymptotic confidence interval for P[Y < X]. Assuming that the common scale parameter is known, the maximum likelihood estimator, the uniformly minimum variance unbiased estimator and the Bayes estimator of P[Y < X] are obtained. Different confidence intervals are proposed. Monte Carlo simulations are performed to compare the different proposed methods. Analysis of a simulated data set is also presented for illustrative purposes. Part of this work was supported by a grant from the Natural Sciences and Engineering Research Council.
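A minimal sketch of the known-common-scale case is given below, assuming the parametrization F(x) = (1 − e^(−λx))^α for the generalized exponential distribution: the shape MLEs have closed form, P[Y < X] = α/(α + β), and a delta-method Wald interval follows. The function and variable names are illustrative, and the paper's UMVUE, Bayes and unknown-scale results are not covered.

```python
# Known-common-scale sketch: closed-form shape MLEs, plug-in estimate of
# R = P(Y < X) = a / (a + b), and a delta-method Wald interval.
import numpy as np
from scipy.stats import norm

def shape_mle(x, lam):
    # If X ~ GE(a, lam), then -log(1 - exp(-lam*X)) is Exp(rate a), so the
    # shape MLE is n divided by the sum of the transformed values.
    z = -np.log1p(-np.exp(-lam * np.asarray(x, float)))
    return len(z) / z.sum()

def stress_strength_ci(x, y, lam, alpha=0.05):
    a_hat, b_hat = shape_mle(x, lam), shape_mle(y, lam)
    r_hat = a_hat / (a_hat + b_hat)              # MLE of P(Y < X)
    # delta method with Var(a_hat) ~ a^2/n and Var(b_hat) ~ b^2/m
    var_r = r_hat**2 * (1 - r_hat) ** 2 * (1 / len(x) + 1 / len(y))
    half = norm.ppf(1 - alpha / 2) * np.sqrt(var_r)
    return r_hat, (max(0.0, r_hat - half), min(1.0, r_hat + half))

def rgenexp(a, lam, size, rng):
    # inverse-CDF sampling from F(x) = (1 - exp(-lam*x))^a
    u = rng.uniform(size=size)
    return -np.log1p(-u ** (1 / a)) / lam

rng = np.random.default_rng(4)
x = rgenexp(2.0, 1.0, 100, rng)   # true R = 2 / (2 + 1) = 2/3
y = rgenexp(1.0, 1.0, 100, rng)
print(stress_strength_ci(x, y, lam=1.0))
```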

13.
In this paper, using the known expressions for the Best Linear Unbiased Estimators (BLUEs) of the location and scale parameters of the Laplace distribution based on a progressively Type-II right censored sample, we derive the exact moment generating function (MGF) of a linear combination of standard Laplace order statistics. Using this MGF, we obtain the exact density function of the linear combination. This density function is then utilized to develop exact marginal confidence intervals (CIs) for the location and scale parameters through some pivotal quantities. Next, we derive the exact density of the BLUEs-based quantile estimator and use it to develop exact CIs for the population quantile. A brief mention is also made of the reliability and cumulative hazard functions and of how exact CIs can be constructed for these functions based on the BLUEs. A Monte Carlo simulation study is then carried out to evaluate the performance of the developed inferential results. Finally, an example is presented to illustrate the point and interval estimation methods developed here.

14.
A new method to derive confidence intervals for medians in a finite population is presented. This method uses multi-auxiliary information through a multivariate regression-type estimator of the population distribution function. A simulation study based on four real populations compares its behaviour with that of other known methods.

15.
We consider conditional moment models under semi-strong identification. Identification strength is defined directly through the conditional moments, which flatten as the sample size increases. Our new minimum distance estimator is consistent, asymptotically normal, robust to semi-strong identification, and does not rely on a user-chosen parameter such as the number of instruments or a smoothing parameter. Heteroskedasticity-robust inference is possible through Wald testing without prior knowledge of the identification pattern. Simulations show that our estimator is competitive with alternative estimators based on many instruments, being well centered with better coverage rates for confidence intervals.

16.
Subsampling and the m out of n bootstrap have been suggested in the literature as methods for carrying out inference based on post-model-selection estimators and shrinkage estimators. In this paper we consider a subsampling confidence interval (CI) that is based on an estimator that can be viewed either as a post-model-selection estimator that employs a consistent model selection procedure or as a super-efficient estimator. We show that the subsampling CI (of nominal level 1−α for any α ∈ (0,1)) has asymptotic confidence size (defined to be the limit of the finite-sample confidence size) equal to zero in a very simple regular model. The same result holds for the m out of n bootstrap provided m²/n → 0 and the observations are i.i.d. Similar zero-asymptotic-confidence-size results hold in more complicated models that are covered by the general results given in the paper, and for super-efficient and shrinkage estimators that are not post-model-selection estimators. Based on these results, subsampling and the m out of n bootstrap are not recommended for obtaining inference based on estimators that follow consistent model selection or on shrinkage estimators.

17.
In this article, we propose a new identifiability condition using logarithmic calibration for distortion measurement error models, in which neither the response variable nor the covariates can be directly observed but are measured with multiplicative measurement errors. Under logarithmic calibration, direct plug-in estimators of the parameters and empirical-likelihood-based confidence intervals are proposed, and we study the asymptotic properties of the proposed estimators. For hypothesis testing of the parameters, a restricted estimator under the null hypothesis and a test statistic are proposed, and their asymptotic properties are established. Simulation studies demonstrate the performance of the proposed procedures, and a real example is analysed to illustrate their practical usage.

18.
In forecasting a time series, one may be asked to communicate the likely distribution of the future actual value, often expressed as a confidence interval. Whilst the accuracy (calibration) of these intervals has dominated most studies to date, this paper is concerned with other possible characteristics of the intervals. It reports a study in which the prevalence and determinants of the symmetry of judgemental confidence intervals in time series forecasting were examined. Most prior work has assumed that this interval is placed symmetrically around the forecast. However, this study shows that people generally estimate asymmetric confidence intervals in which the forecast is not the midpoint of the estimated interval, and many of these intervals are grossly asymmetric. Results indicate that the placement of the forecast in relation to the last actual value of the time series is a major determinant of the direction and size of the asymmetry.

19.
This paper deals with the estimation of the mean of a spatial population. Under a design-based approach to inference, an estimator assisted by a penalized spline regression model is proposed and studied. Proof that the estimator is design-consistent and has a normal limiting distribution is provided. A simulation study is carried out to investigate the performance of the new estimator and its variance estimator in terms of relative bias, efficiency and confidence interval coverage rate. The results show that gains in efficiency over standard estimators in classical sampling theory may be impressive.

20.
Journal of Econometrics, 1987, 36(3): 369-376
This paper shows that the Cochrane-Orcutt transformation, which deletes the initial observation of the AR(1) regression model with known autocorrelation, is strictly less efficient than a weighted generalized least squares estimator that gives the initial observation less weight than the true model requires, and may be more or less efficient than an estimator that gives the initial observation more weight than required. It also shows that the estimator based on the Cochrane-Orcutt transformation is strictly less efficient than one based on the Prais-Winsten transformation if the AR(1) process has a finite past. These results give further support to the conclusion that, whenever possible, the estimator based on the Prais-Winsten transformation should be used.
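For reference, the two transformations being compared take the following standard textbook forms (stated here for y_t = x_t'β + u_t with u_t = ρu_{t−1} + ε_t and known ρ; this is background, not the paper's derivation):

```latex
% Cochrane-Orcutt quasi-differences observations t = 2,...,n and drops t = 1,
% while Prais-Winsten retains a rescaled first observation.
\[
  \text{Cochrane--Orcutt:}\qquad
  y_t - \rho y_{t-1} \;=\; (x_t - \rho x_{t-1})'\beta + \varepsilon_t,
  \qquad t = 2,\dots,n,
\]
\[
  \text{Prais--Winsten adds:}\qquad
  \sqrt{1-\rho^2}\,y_1 \;=\; \sqrt{1-\rho^2}\,x_1'\beta + \sqrt{1-\rho^2}\,u_1,
\]
% where \sqrt{1-\rho^2}\,u_1 has the same variance \sigma_\varepsilon^2 as
% \varepsilon_t when the AR(1) process is stationary, so OLS on the full set of
% transformed observations is the exact GLS estimator.
```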
