Similar Articles
20 similar articles found (search time: 15 ms)
1.
In frequentist inference, we commonly use a single point (point estimator) or an interval (confidence interval/“interval estimator”) to estimate a parameter of interest. A very simple question is: Can we also use a distribution function (“distribution estimator”) to estimate a parameter of interest in frequentist inference in the style of a Bayesian posterior? The answer is affirmative, and confidence distribution is a natural choice of such a “distribution estimator”. The concept of a confidence distribution has a long history, and its interpretation has long been fused with fiducial inference. Historically, it has been misconstrued as a fiducial concept, and has not been fully developed in the frequentist framework. In recent years, confidence distribution has attracted a surge of renewed attention, and several developments have highlighted its promising potential as an effective inferential tool. This article reviews recent developments of confidence distributions, along with a modern definition and interpretation of the concept. It includes distributional inference based on confidence distributions and its extensions, optimality issues and their applications. Based on the new developments, the concept of a confidence distribution subsumes and unifies a wide range of examples, from regular parametric (fiducial distribution) examples to bootstrap distributions, significance (p‐value) functions, normalized likelihood functions, and, in some cases, Bayesian priors and posteriors. The discussion is entirely within the school of frequentist inference, with emphasis on applications providing useful statistical inference tools for problems where frequentist methods with good properties were previously unavailable or could not be easily obtained. 
Although it also draws attention to some of the differences and similarities among frequentist, fiducial and Bayesian approaches, the review is not intended to re‐open the philosophical debate that has lasted more than two hundred years. On the contrary, it is hoped that the article will help bridge the gaps between these different statistical procedures.
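The central object of the review above, a distribution estimator, is easy to sketch for the textbook case of a normal mean with known variance. The code below is a hypothetical illustration (not from the article): it builds the confidence distribution H_n(θ) = Φ(√n(θ − x̄)/σ) and reads off a point estimate (the CD median) and the classical 95% interval (the CD quantiles).

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def confidence_distribution(data, sigma):
    """Confidence distribution H_n(theta) = Phi(sqrt(n)*(theta - xbar)/sigma)
    for a normal mean with known standard deviation sigma."""
    n = len(data)
    xbar = sum(data) / n
    return lambda theta: norm_cdf(math.sqrt(n) * (theta - xbar) / sigma)

def cd_interval(data, sigma):
    """95% equal-tailed interval read off the CD quantiles; it coincides
    with the classical z-interval xbar +/- 1.96 * sigma / sqrt(n)."""
    n = len(data)
    xbar = sum(data) / n
    z = 1.959963984540054  # Phi^{-1}(0.975)
    half = z * sigma / math.sqrt(n)
    return (xbar - half, xbar + half)

data = [4.8, 5.1, 5.3, 4.9, 5.2, 5.0, 5.1, 4.7]
H = confidence_distribution(data, sigma=0.2)
lo, hi = cd_interval(data, sigma=0.2)
print(round(H(lo), 3), round(H(hi), 3))    # 0.025 0.975: the CD tails
print(round(H(sum(data) / len(data)), 3))  # 0.5: the CD median is xbar
```

The same function H also serves as a p-value function: H(θ₀) is a one-sided p-value for testing θ ≤ θ₀, illustrating how one object subsumes estimates, intervals and significance functions.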

2.
The problem of estimating a linear combination, μ, of the means of p independent first-order autoregressive models is considered. Sequential procedures are derived (i) to estimate μ pointwise using the linear combination of sample means, subject to a loss function (squared error plus cost per observation), and (ii) to arrive at a fixed-width confidence interval for μ. It is observed that in the case of point estimation no special sampling scheme is required, whereas in the case of interval estimation a sampling scheme is required, and a scheme similar to the one given in Mukhopadhyay and Liberman (1989) is proposed. All the first-order efficiency properties of the sequential procedures involved are derived. This paper extends the results of Sriram (1987) from a single time series to multiple time series. Research supported by AFOSR Grant number 89-0225.

3.
Standard jackknife confidence intervals for a quantile Q_y(β) are usually preferred to confidence intervals based on analytical variance estimators because of their operational simplicity. However, standard jackknife confidence intervals can give undesirable coverage probabilities for small sample sizes and for large or small values of β. In this paper, confidence intervals for a population quantile based on several existing quantile estimators are derived. These intervals are based on an approximation to the cumulative distribution function of a studentized quantile estimator. The confidence intervals are evaluated empirically using real data, and some applications are illustrated. Simulation studies show that the proposed confidence intervals are narrower than confidence intervals based on the standard jackknife technique, which relies on a normal approximation, and that they achieve coverage probabilities above their nominal level. The study indicates that the proposed method can be an alternative to asymptotic confidence intervals, which can be unknown in practice, and to standard jackknife confidence intervals, which can have poor coverage probabilities and give wider intervals.
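For contrast with the studentized approach proposed above, the standard delete-one jackknife interval that the abstract criticizes can be sketched as follows. This is illustrative code, not from the paper; note that the delete-one jackknife is known to be inconsistent for quantiles, one source of the poor coverage discussed.

```python
import math

def quantile(xs, beta):
    """Simple order-statistic estimator: the ceil(n*beta)-th smallest value."""
    s = sorted(xs)
    k = max(1, math.ceil(len(s) * beta))
    return s[k - 1]

def jackknife_se(xs, beta):
    """Delete-one jackknife standard error of the quantile estimator.
    Inconsistent for quantiles, which degrades coverage for extreme beta
    and small n."""
    n = len(xs)
    loo = [quantile(xs[:i] + xs[i + 1:], beta) for i in range(n)]
    mean_loo = sum(loo) / n
    var = (n - 1) / n * sum((v - mean_loo) ** 2 for v in loo)
    return math.sqrt(var)

data = [2.1, 3.4, 1.8, 5.6, 4.2, 3.9, 2.7, 6.1, 3.3, 4.8]
q = quantile(data, 0.5)
se = jackknife_se(data, 0.5)
ci = (q - 1.96 * se, q + 1.96 * se)  # normal-approximation jackknife CI
print(q, se, ci)
```

Because the leave-one-out quantiles take only a couple of distinct values, the jackknife variance is driven by a few jumps of the order statistic, which is exactly why the normal approximation behind this interval can fail.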

4.
Empirical Bayes methods of estimating the local false discovery rate (LFDR) by maximum likelihood estimation (MLE), originally developed for large numbers of comparisons, are applied to a single comparison. Specifically, when assuming a lower bound on the mixing proportion of true null hypotheses, the LFDR MLE can yield reliable hypothesis tests and confidence intervals given as few as one comparison. Simulations indicate that constrained LFDR MLEs perform markedly better than conventional methods, both in testing and in confidence intervals, for high values of the mixing proportion, but not for low values. (A decision‐theoretic interpretation of the confidence distribution made those comparisons possible.) In conclusion, the constrained LFDR estimators and the resulting effect‐size interval estimates are not only effective multiple comparison procedures but might also replace p‐values and confidence intervals more generally. The new methodology is illustrated with the analysis of proteomics data.

5.
L. Kuo  N. Mukhopadhyay 《Metrika》1990,37(1):291-300
Summary  We have k independent normal populations with unknown means μ_1, …, μ_k and a common unknown variance σ². Both point and interval estimation procedures for the largest mean are proposed by means of sequential and three-stage procedures. For the point estimation problem, we require that the maximal risk be at most W, a preassigned positive number. For the interval estimation problem, we wish to construct a fixed-width confidence interval having confidence coefficient at least 1 − α, a preassigned number between zero and one. Asymptotic second-order expansions are provided for various characteristics, such as average sample size and associated risks, of the suggested multi-stage estimation procedures.

6.
We study the problem of building confidence sets for ratios of parameters from an identification-robust perspective. In particular, we address the simultaneous confidence set estimation of a finite number of ratios. Results apply to a wide class of models suitable for estimation by consistent asymptotically normal procedures. Conventional methods (e.g. the delta method), derived by excluding the parameter discontinuity regions entailed by the ratio functions and typically yielding bounded confidence limits, break down even if the sample size is large (Dufour, 1997). One solution to this problem, which we take in this paper, is to use variants of Fieller’s (1940, 1954) method. By inverting a joint test that does not require identifying the ratios, Fieller-based confidence regions are formed for the full set of ratios. Simultaneous confidence sets for individual ratios are then derived by applying projection techniques, which allow for possibly unbounded outcomes. In this paper, we provide simple explicit closed-form analytical solutions for projection-based simultaneous confidence sets in the case of linear transformations of ratios. Our solution further provides a formal proof for the expressions in Zerbe et al. (1982) pertaining to individual ratios. We apply the geometry of quadrics, as introduced in earlier work, in a different although related context. The confidence sets so obtained are exact if the inverted test statistic admits a tractable exact distribution, for instance in the normal linear regression context. The proposed procedures are applied and assessed via illustrative Monte Carlo and empirical examples, with a focus on discrete choice models estimated by exact or simulation-based maximum likelihood. Our results underscore the superiority of Fieller-based methods.
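Fieller's construction for a single ratio, the building block of the simultaneous sets above, can be sketched directly. This is a minimal illustration assuming two independent normal estimates; the paper's joint, projection-based sets are more general.

```python
import math

def fieller_interval(m1, v1, m2, v2, z=1.96):
    """Fieller confidence set for theta = mu1 / mu2, given independent
    estimates m1 ~ N(mu1, v1) and m2 ~ N(mu2, v2).
    Inverts (m1 - theta*m2)^2 <= z^2 * (v1 + theta^2 * v2), i.e. the
    quadratic a*theta^2 - 2*b*theta + c <= 0."""
    a = m2 * m2 - z * z * v2
    b = m1 * m2
    c = m1 * m1 - z * z * v1
    disc = b * b - a * c
    if a > 0 and disc >= 0:
        r = math.sqrt(disc)
        return ((b - r) / a, (b + r) / a)  # bounded interval
    return None  # unbounded set: denominator not identified

# Well-identified denominator: a bounded interval around m1/m2.
print(fieller_interval(10.0, 0.25, 5.0, 0.25))
# Denominator indistinguishable from zero: no bounded interval exists.
print(fieller_interval(10.0, 0.25, 0.2, 0.25))
```

The `None` branch is the point of Dufour's critique: when the denominator is weakly identified, any procedure that always returns a bounded interval must have coverage failures, whereas the Fieller set is allowed to be unbounded.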

7.
Dr. H. Vogt 《Metrika》1977,24(1):229-259
Summary  A theorem of Takács concerning interchangeable random variables is used to derive a simple method for the construction of confidence regions. Applying this method to a location parameter a, we get a.s. convergence of the confidence interval to a as the sample size n increases, while its coverage probability is (n−1)/(n+1). Under certain conditions the interval always contains the maximum-likelihood estimate and another estimate resulting from a least-squares postulate. Lower bounds are given for the probability that our intervals become shorter than the intervals obtained by relying on the central limit theorem. In order to avoid an assumption of finite support, needed initially to derive the a.s. convergence, we modify our method by omitting extreme values. The modified intervals converge as n → ∞ with probability 1 to the true parameter value under weaker conditions. A lower bound for the coverage probability and, using a result due to Rényi, the asymptotic coverage probability of the modified interval are given. For the two kinds of intervals, a formula concerning the speed of their convergence to length 0 is derived. Finally, the results are extended to a shift parameter in the two-sample case. Here we derive, for equal sample sizes, the exact coverage probability of the modified interval and give upper and lower bounds for its asymptotic coverage probability. The method is also practicable if one sample size is an integer multiple of the other.

8.
This paper applies a large number of models to three previously analyzed data sets, and compares the point estimates and confidence intervals for technical efficiency levels. Classical procedures include multiple comparisons with the best, based on the fixed-effects estimates; a univariate version, marginal comparisons with the best; bootstrapping of the fixed-effects estimates; and maximum likelihood given a distributional assumption. Bayesian procedures include a Bayesian version of the fixed-effects model, and various Bayesian models with informative priors for efficiencies. We find that fixed-effects models generally perform poorly; there is a large payoff to distributional assumptions for efficiencies. We do not find much difference between Bayesian and classical procedures, in the sense that the classical MLE based on a distributional assumption for efficiencies gives results that are rather similar to a Bayesian analysis with the corresponding prior.

9.
Na Li  Xingzhong Xu  Xuhua Liu 《Metrika》2011,74(3):409-438
Two hypothesis testing problems are considered in this paper to check the constancy of the coefficients in the varying-coefficient regression model. Tests for the two problems are derived from two p-values. The proposed p-values can be thought of as generalized p-values, obtained by linear interpolation based on the fiducial method. When all of the coefficients are constant, the p-value is uniformly distributed on the interval (0, 1). Furthermore, a bound is given on the difference between the cumulative distribution function of the p-value and the uniform distribution on (0, 1), and this bound tends to 0 under some conditions. The proposed tests are also proved to be consistent under mild conditions, and the new method can be extended to a broader range of hypotheses. Good finite-sample performance of the tests is demonstrated by simulations, including a comparison with another test. Finally, a simple example based on real data illustrates the application of our test, for which the proposed test yields a different result.

10.
R. Magiera 《Metrika》1992,39(1):1-20
The problem of Bayes sequential estimation of the mean-value parameter of continuous-time processes with stationary independent increments having exponential-type likelihood functions is considered. Using a weighted squared-error loss and an observation cost involving both a time cost and a state cost, explicit solutions to the problem are derived. A discrete-time approach is taken, in which decisions are made at the end of time intervals of length t. Examples of optimal procedures in the case when the cost of observation includes a squared state cost are also given.

11.
Jie Mi 《Metrika》2010,71(3):353-359
Consider a family of distribution functions $\{F(x, \theta),\ \theta \in \Theta\}$. Suppose that there exists an estimator of the unknown parameter vector θ based on a given data set. Then it is straightforward to obtain an estimator of any quantity given as an explicit function g(θ); in particular, this is the case when the maximum likelihood estimator of θ is available. However, a quantity of interest often cannot be expressed as an explicit function, but is instead determined as an implicit function of θ. The present article studies this problem. Sufficient conditions are given for deriving estimators of such quantities. The results are then applied to estimate the change point of a failure rate function and the change point of a mean residual life function.

12.
Erhard Cramer  Udo Kamps 《Metrika》1997,46(1):93-121
Based on two independent samples from Weinman multivariate exponential distributions with unknown scale parameters, uniformly minimum variance unbiased estimators of P(X < Y) are obtained for both unknown and known common location parameters. The samples are permitted to be Type-II censored with possibly different numbers of observations. Since sampling from two-parameter exponential distributions is contained in the model as a particular case, known results for complete and censored samples are generalized. In the case of an unknown common location parameter, with a certain restriction of the model, the UMVUE is shown to have a Gauss hypergeometric distribution, which is further examined. Moreover, explicit expressions for the variances of the estimators are derived and used to calculate the relative efficiency.
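For intuition about the reliability quantity P(X < Y), here is a hedged sketch for the simplest one-parameter exponential case (rate parametrization), where P(X < Y) has the closed form rate_x / (rate_x + rate_y). The plug-in MLE shown is a simple consistent estimator, not the UMVUE derived in the paper.

```python
def p_x_less_y_rates(rate_x, rate_y):
    """For independent X ~ Exp(rate_x), Y ~ Exp(rate_y):
    P(X < Y) = rate_x / (rate_x + rate_y)."""
    return rate_x / (rate_x + rate_y)

def plug_in_estimate(xs, ys):
    """MLE plug-in: rate_hat = 1 / sample mean. A simple consistent
    estimator, NOT the UMVUE studied in the paper."""
    rx = len(xs) / sum(xs)
    ry = len(ys) / sum(ys)
    return p_x_less_y_rates(rx, ry)

# If X has mean 1 (rate 1) and Y has mean 2 (rate 0.5), P(X < Y) = 2/3.
print(p_x_less_y_rates(1.0, 0.5))
xs = [0.9, 1.1, 1.0, 0.8, 1.2]  # sample mean 1.0
ys = [2.2, 1.8, 2.0, 2.1, 1.9]  # sample mean 2.0
print(round(plug_in_estimate(xs, ys), 4))  # 0.6667
```

The UMVUE corrects the small-sample bias of this plug-in; the paper's contribution is doing so under censoring and a common location parameter.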

13.
M. Yaqub  A. H. Khan 《Metrika》1980,27(1):145-151
Summary  The distribution of the square of the distance between a random point and a fixed point on a p-dimensional unit sphere is derived for the cases where (i) the two points lie on the whole sphere and (ii) the two points lie in the positive quadrant, assuming that the random point is distributed proportionally to exp(ky_1), where k is a concentration parameter. The n-th order moment in both cases is also obtained.

14.
The problem of sequentially estimating an unknown distribution parameter of a particular exponential family of distributions is considered under a LINEX loss function for estimation error and a cost c > 0 for each of an i.i.d. sequence of potential observations X_1, X_2, …. A Bayesian approach is adopted and conjugate prior distributions are assumed. Asymptotically pointwise optimal and asymptotically optimal procedures are derived.
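The LINEX loss and its Bayes estimator admit a short worked check. For a normal posterior N(m, s²), the Bayes estimate under LINEX with asymmetry parameter a is m − a·s²/2. The sketch below (an illustration with assumed values, not the paper's sequential procedure) verifies this by minimizing the exact posterior expected loss on a grid.

```python
import math

def linex(delta, theta, a=1.0, b=1.0):
    """LINEX loss: b*(exp(a*(delta-theta)) - a*(delta-theta) - 1).
    Asymmetric: for a > 0, overestimation is penalized exponentially."""
    d = delta - theta
    return b * (math.exp(a * d) - a * d - 1.0)

def expected_linex_normal(delta, m, s2, a=1.0, b=1.0):
    """Exact posterior expected LINEX loss when theta | data ~ N(m, s2),
    using E[exp(-a*theta)] = exp(-a*m + a*a*s2/2)."""
    d = delta - m
    return b * (math.exp(a * d + a * a * s2 / 2.0) - a * d - 1.0)

m, s2, a = 2.0, 0.5, 1.0
# Closed-form Bayes estimator under LINEX: delta* = m - a*s2/2.
delta_star = m - a * s2 / 2.0
# Numerical check: grid-minimize the exact expected loss around m.
grid = [m - 2.0 + i * 1e-4 for i in range(40001)]
delta_grid = min(grid, key=lambda d: expected_linex_normal(d, m, s2, a))
print(delta_star, round(delta_grid, 4))  # 1.75 1.75
```

Note how the estimator shrinks below the posterior mean when a > 0: the exponential penalty on overestimation pulls the optimal decision downward, which is the practical appeal of LINEX over squared error.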

15.
In this paper we discuss a statistical method called multiple comparisons with the best, or MCB. Suppose that we have N populations, and population i has parameter value θ_i. Let $\theta_{(N)} = \max_{i=1,\ldots,N} \theta_i$, the parameter value for the ‘best’ population. Then MCB constructs joint confidence intervals for the differences $[\theta_{(N)}-\theta_{1},\ \theta_{(N)}-\theta_{2},\ \ldots,\ \theta_{(N)}-\theta_{N}]$. It is not assumed that it is known which population is best, and part of the problem is to say whether any population is so identified, at the given confidence level. This paper is meant to introduce MCB to economists. We discuss possible uses of MCB in economics. The application that we treat in most detail is the construction of confidence intervals for inefficiency measures from stochastic frontier models with panel data. We also consider an application to the analysis of labour market wage gaps. Copyright © 2000 John Wiley & Sons, Ltd.
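A stripped-down version of the MCB construction can be sketched as follows. This is a hypothetical illustration with a placeholder critical value q; Hsu's actual MCB method derives q from a multivariate t distribution and constrains the interval limits at zero.

```python
import math

def mcb_intervals(samples, q=2.5):
    """Simplified sketch of multiple-comparisons-with-the-best intervals:
    for each group i, an interval for theta_i - max_{j != i} theta_j,
    using plug-in standard errors. q = 2.5 is a placeholder critical
    value, not the multivariate-t quantile Hsu's method would use."""
    stats = []
    for xs in samples:
        n = len(xs)
        mean = sum(xs) / n
        var = sum((x - mean) ** 2 for x in xs) / (n - 1)
        stats.append((mean, var / n))
    out = []
    for i, (mi, vi) in enumerate(stats):
        # Best of the other groups, by sample mean.
        mj, vj = max((s for j, s in enumerate(stats) if j != i),
                     key=lambda s: s[0])
        d = mi - mj
        half = q * math.sqrt(vi + vj)
        out.append((d - half, d + half))
    return out

groups = [[5.1, 4.9, 5.3, 5.0], [6.2, 6.0, 6.4, 6.1], [4.2, 4.0, 4.3, 4.1]]
ivs = mcb_intervals(groups)
for lo_i, hi_i in ivs:
    print(round(lo_i, 3), round(hi_i, 3))
```

With these data, the interval for group 2 lies entirely above zero, so it is identified as best at this (illustrative) confidence level, while the other groups' intervals lie below zero.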

16.
Gábor Szűcs 《Metrika》2008,67(1):63-81
Statistical procedures based on the estimated empirical process are well known for testing goodness of fit to parametric distribution families. These methods are usually not distribution-free, so the asymptotic critical values of the test statistics depend on unknown parameters. This difficulty may be overcome by the use of parametric bootstrap procedures. The aim of this paper is to prove a weak approximation theorem for the bootstrapped estimated empirical process under very general conditions, which allow both the most important continuous and discrete distribution families, along with most parameter estimation methods. The emphasis is on families of discrete distributions, and simulation results for families of negative binomial distributions are also presented.

17.
Strategies pushing firms to adopt plural forms, and the heterogeneity of the solutions they endorse, have attracted increasing attention. This paper proposes a theoretical framework that combines asset specificity and uncertainty to explain why plural forms exist, and focuses on the key role of uncertainty, within a given range of asset specificity, in predicting what and when specific types of plural forms should be observed. Propositions derived from this model are tested on an extensive set of cases from the agribusiness sector. The empirical richness of these cases allows us to go beyond the existing literature, which has essentially focused on franchising.

18.
In epidemiology and clinical research there is often a proportion of unexposed individuals, resulting in zero values of exposure: some individuals are not exposed, while the exposed follow some continuous distribution. Examples are smoking and alcohol consumption. We call these variables with a spike at zero (SAZ). In this paper, we perform a systematic investigation of how to model covariates with a SAZ and derive theoretical odds ratio functions for selected bivariate distributions. We consider the bivariate normal and bivariate log-normal distributions with a SAZ. Both confounding and effect modification can be elegantly described by formalizing the covariance matrix given the binary outcome variable Y. To model the effect of these variables, we use a procedure based on fractional polynomials, first introduced by Royston and Altman (1994, Applied Statistics 43: 429–467) and modified for the SAZ situation (Royston and Sauerbrei, 2008, Multivariable model‐building: a pragmatic approach to regression analysis based on fractional polynomials for modelling continuous variables, Wiley; Becher et al., 2012, Biometrical Journal 54: 686–700). We aim to contribute to theory, practical procedures and applications in epidemiology and clinical research for deriving multivariable models for variables with a SAZ. As an example, we use data from a case–control study on lung cancer.

19.
The preliminary test ridge regression estimators (PTRRE) based on the Wald (W), Likelihood Ratio (LR) and Lagrangian Multiplier (LM) tests for estimating the regression parameters are considered in this paper. We consider the multiple regression model with Student t error distribution. The bias and mean square errors (MSE) of the proposed estimators are derived under both the null and alternative hypotheses. By studying the MSE criterion, the regions of optimality of the estimators are determined. Under the null hypothesis, the PTRRE based on the LM test has the smallest risk, followed by the estimators based on the LR and W tests. However, the PTRRE based on the W test performs best, followed by the LR- and LM-based estimators, when the parameter moves away from the subspace of the restrictions. The conditions of superiority of the proposed estimators for both the shrinkage parameter k and the departure parameter are provided. Some tables for the maximum and minimum guaranteed efficiency of the proposed estimators are given, which allow us to determine the optimum level of significance corresponding to the optimum estimator. Finally, we conclude that the estimator based on the Wald test dominates the other two estimators in the sense of having the highest minimum guaranteed efficiency.

20.
This paper examines whether indicators of consumer and business confidence can predict movements in GDP over the business cycle for four European economies. The empirical methodology comprises cross‐correlation statistics, implementing an approach developed by den Haan [Journal of Monetary Economics (2000), Vol. 46, pp. 3–30]. The predictive power of confidence indicators is also examined, investigating whether they can predict discrete events, namely economic downturns, and whether they can quantitatively forecast point estimates of economic activity. The results indicate that both consumer and business confidence indicators are procyclical and generally play a significant role in predicting downturns.
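The den Haan cross-correlation approach amounts to computing corr(x_t, y_{t+k}) across leads and lags k. Below is a minimal sketch with synthetic data (not the paper's data), in which the confidence series leads the "GDP" series by two periods by construction.

```python
def corr(xs, ys):
    """Pearson correlation of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def cross_correlations(x, y, max_lag):
    """corr(x_t, y_{t+k}) for k = -max_lag..max_lag; positive k means
    x leads y, i.e. the indicator predicts future activity."""
    out = {}
    for k in range(-max_lag, max_lag + 1):
        if k >= 0:
            out[k] = corr(x[:len(x) - k], y[k:])
        else:
            out[k] = corr(x[-k:], y[:len(y) + k])
    return out

# Synthetic check: 'GDP' y is the confidence series x shifted by 2 periods.
x = [0.3, 1.1, -0.4, 0.8, 2.0, -1.2, 0.5, 1.7, -0.9, 0.2,
     1.4, -0.6, 0.9, 2.1, -1.0, 0.4]
y = [0.0, 0.0] + x[:-2]  # y_t = x_{t-2}
cc = cross_correlations(x, y, 3)
best = max(cc, key=lambda k: cc[k])
print(best, round(cc[best], 3))  # 2 1.0
```

In the paper's application, a peak of the cross-correlation at a positive lead k is the evidence that the confidence indicator is procyclical and predictive.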
