Similar Literature
20 similar documents found.
1.
In frequentist inference, we commonly use a single point (point estimator) or an interval (confidence interval/“interval estimator”) to estimate a parameter of interest. A very simple question is: Can we also use a distribution function (“distribution estimator”) to estimate a parameter of interest in frequentist inference in the style of a Bayesian posterior? The answer is affirmative, and confidence distribution is a natural choice of such a “distribution estimator”. The concept of a confidence distribution has a long history, and its interpretation has long been fused with fiducial inference. Historically, it has been misconstrued as a fiducial concept, and has not been fully developed in the frequentist framework. In recent years, confidence distribution has attracted a surge of renewed attention, and several developments have highlighted its promising potential as an effective inferential tool. This article reviews recent developments of confidence distributions, along with a modern definition and interpretation of the concept. It includes distributional inference based on confidence distributions and its extensions, optimality issues and their applications. Based on the new developments, the concept of a confidence distribution subsumes and unifies a wide range of examples, from regular parametric (fiducial distribution) examples to bootstrap distributions, significance (p‐value) functions, normalized likelihood functions, and, in some cases, Bayesian priors and posteriors. The discussion is entirely within the school of frequentist inference, with emphasis on applications providing useful statistical inference tools for problems where frequentist methods with good properties were previously unavailable or could not be easily obtained. Although it also draws attention to some of the differences and similarities among frequentist, fiducial and Bayesian approaches, the review is not intended to re‐open the philosophical debate that has lasted more than two hundred years. On the contrary, it is hoped that the article will help bridge the gaps between these different statistical procedures.
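To make the notion of a "distribution estimator" concrete, here is a minimal sketch (an illustration, not taken from the article) of the textbook confidence distribution for a normal mean with known variance, H_n(theta) = Phi(sqrt(n)(theta - xbar)/sigma); its quantiles reproduce one-sided confidence limits of every level.

```python
import numpy as np
from scipy.stats import norm

# Illustrative sketch: for X_1,...,X_n ~ N(theta, sigma^2) with sigma known,
# H_n(theta) = Phi(sqrt(n) * (theta - xbar) / sigma) is a distribution
# function on the parameter space: a confidence distribution for theta.
rng = np.random.default_rng(0)
sigma, theta_true, n = 2.0, 1.5, 50
x = rng.normal(theta_true, sigma, size=n)
xbar = x.mean()

def cd(theta):
    """Confidence distribution H_n evaluated at theta."""
    return norm.cdf(np.sqrt(n) * (theta - xbar) / sigma)

# The 0.95-quantile of H_n is exactly the usual 95% upper confidence limit.
upper95 = xbar + norm.ppf(0.95) * sigma / np.sqrt(n)
print(cd(upper95))  # 0.95 by construction
```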

2.
In this paper, we develop a family of bivariate beta distributions that encapsulate both positive and negative correlations, and which can be of general interest for Bayesian inference. We then invoke a use of these bivariate distributions in two contexts. The first is diagnostic testing in medicine, threat detection and signal processing. The second is system survivability assessment, relevant to engineering reliability and to survival analysis in biomedicine. In diagnostic testing, one encounters two parameters that characterize the efficacy of the testing mechanism: test sensitivity and test specificity. These tend to be adversarial when their values are interpreted as utilities. In system survivability, the parameters of interest are the component reliabilities, whose values when interpreted as utilities tend to exhibit co‐operative (amiable) behavior. Besides probability modeling and Bayesian inference, this paper has a foundational import. Specifically, it advocates a conceptual change in how one may think about reliability and survival analysis. The philosophical writings of de Finetti, Kolmogorov, Popper and Savage, when brought to bear on these topics, constitute the essence of this change. Its consequence is that we have at hand a defensible framework for invoking Bayesian inferential methods in diagnostics, reliability and survival analysis. Another consequence is a deeper appreciation of the judgment of independent lifetimes. Specifically, we make the important point that independent lifetimes entail, at a minimum, a two‐stage hierarchical construction.

3.
This paper is a survey of estimation techniques for stationary and ergodic diffusion processes observed at discrete points in time. The reader is introduced to the following techniques: (i) estimating functions with special emphasis on martingale estimating functions and so-called simple estimating functions; (ii) analytical and numerical approximations of the likelihood function which can in principle be made arbitrarily accurate; (iii) Bayesian analysis and MCMC methods; and (iv) indirect inference and EMM, which both introduce auxiliary (but wrong) models and correct for the implied bias by simulation.
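As a minimal illustration of the simplest case behind technique (ii) (a sketch under assumed parameter values, not code from the survey): the Ornstein-Uhlenbeck diffusion dX_t = -theta X_t dt + sigma dW_t has exact Gaussian transitions, so discrete observations form an AR(1) series and the likelihood needs no approximation.

```python
import numpy as np

# Sketch: X_{t+dt} | X_t ~ N(X_t * exp(-theta*dt),
#                            sigma^2 * (1 - exp(-2*theta*dt)) / (2*theta)),
# so the discretely observed OU process is exactly an AR(1) model.
rng = np.random.default_rng(1)
theta, sigma, dt, n = 0.8, 0.5, 0.1, 5_000

a = np.exp(-theta * dt)                   # AR(1) coefficient
s2 = sigma**2 * (1 - a**2) / (2 * theta)  # exact transition variance
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + np.sqrt(s2) * rng.standard_normal()

# Exact MLE of the AR(1) coefficient, mapped back to theta.
a_hat = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
theta_hat = -np.log(a_hat) / dt
print(theta_hat)  # close to 0.8 for large n
```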

4.
The Rule of Three, its Variants and Extensions
The Rule of Three (R3) states that 3/n is an approximate 95% upper limit for the binomial parameter when there are no events in n trials. This rule is based on the one-sided Clopper–Pearson exact limit, but it is shown that none of the other popular frequentist methods lead to it. It can be seen as a special case of a Bayesian R3, but it is shown that among common choices for a non-informative prior, only the Bayes–Laplace and Zellner priors conform with it. R3 has also, incorrectly, been extended to make 3 a "reasonable" upper limit for the number of events in a future experiment of the same (large) size, when, instead, it applies to the binomial mean. In Bayesian estimation, such a limit should follow from the posterior predictive distribution. This method seems to give more natural results than the method of prediction limits, which indicates between 87.5% and 93.75% confidence for this extended R3 (although, when based on the Bayes–Laplace prior, the two technically converge). These results shed light on R3 in general, suggest an extended Rule of Four for a number of events, provide a unique comparison of Bayesian and frequentist limits, and support the choice of the Bayes–Laplace prior among non-informative contenders.
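A quick numerical check of where the "3" comes from (a sketch, not from the paper): with zero events in n trials, the exact one-sided Clopper–Pearson 95% upper limit solves (1 - p)^n = 0.05, so p = 1 - 0.05^(1/n), which is approximately -ln(0.05)/n with -ln(0.05) = 2.996.

```python
# Numerical check of the Rule of Three (illustrative sketch).
for n in (30, 100, 1000):
    exact = 1 - 0.05 ** (1 / n)       # exact Clopper-Pearson upper limit at x = 0
    print(n, round(exact, 5), 3 / n)  # 3/n slightly exceeds the exact limit
```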

5.
Recent developments in Markov chain Monte Carlo (MCMC) methods have increased the popularity of Bayesian inference in many fields of research in economics, such as marketing research and financial econometrics. Gibbs sampling in combination with data augmentation allows inference in statistical/econometric models with many unobserved variables. The likelihood functions of these models may contain many integrals, which often makes a standard classical analysis difficult or even infeasible. The advantage of the Bayesian approach using MCMC is that one only has to consider the likelihood function conditional on the unobserved variables. In many cases this implies that Bayesian parameter estimation is faster than classical maximum likelihood estimation. In this paper we illustrate the computational advantages of Bayesian estimation using MCMC in several popular latent variable models.
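A minimal sketch of the Gibbs-plus-data-augmentation idea, using the canonical probit example in the style of Albert and Chib (1993) with a flat prior on the coefficients (choices made here for brevity; the paper's latent variable models are richer):

```python
import numpy as np
from scipy.stats import truncnorm

# Sketch: augment binary y_i with latents z_i ~ N(x_i'beta, 1) whose sign
# is tied to y_i; both full conditionals are then standard distributions.
rng = np.random.default_rng(2)
n, beta_true = 500, np.array([0.5, -1.0])
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(float)

XtX_inv = np.linalg.inv(X.T @ X)
beta = np.zeros(2)
draws = []
for it in range(2000):
    # 1. Draw latents z | beta, y from truncated normals.
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)  # z > 0 when y = 1
    hi = np.where(y == 1, np.inf, -mu)   # z <= 0 when y = 0
    z = mu + truncnorm.rvs(lo, hi, random_state=rng)
    # 2. Draw beta | z from its Gaussian full conditional (flat prior).
    beta_hat = XtX_inv @ (X.T @ z)
    beta = rng.multivariate_normal(beta_hat, XtX_inv)
    if it >= 500:
        draws.append(beta)
print(np.mean(draws, axis=0))  # posterior mean near beta_true
```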

6.
Continuous-time stochastic volatility models are becoming an increasingly popular way to describe moderate and high-frequency financial data. Barndorff-Nielsen and Shephard (2001a) proposed a class of models where the volatility behaves according to an Ornstein–Uhlenbeck (OU) process, driven by a positive Lévy process without a Gaussian component. These models introduce discontinuities, or jumps, into the volatility process. They also considered superpositions of such processes; we extend that framework to include a jump component in the returns. In addition, we allow for leverage effects and we introduce separate risk pricing for the volatility components. We design and implement practically relevant inference methods for such models, within the Bayesian paradigm. The algorithm is based on Markov chain Monte Carlo (MCMC) methods and we use a series representation of Lévy processes. MCMC methods for such models are complicated by the fact that parameter changes will often induce a change in the distribution of the representation of the process and the associated problem of overconditioning. We avoid this problem by using dependent thinning methods. An application to stock price data shows the models perform very well, even in the face of data with rapid changes, especially if a superposition of processes with different risk premiums and a leverage effect is used.
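A hedged sketch of one building block of such models: if the stationary law of the volatility sigma^2 is Gamma(nu, alpha), the background driving Lévy process is compound Poisson with Exp(alpha) jumps, so a Gamma-OU path can be simulated directly (an illustration only; it does not reproduce the paper's series representation or MCMC scheme).

```python
import numpy as np

# Sketch of a Gamma-OU volatility path: between jumps the process decays
# exponentially at rate lam; jumps of the driving process z(lam*t) arrive
# at rate lam*nu with Exp(alpha) sizes, giving a Gamma(nu, alpha) marginal.
rng = np.random.default_rng(3)
lam, nu, alpha, T, dt = 1.0, 2.0, 4.0, 10.0, 0.01

n = int(T / dt)
sigma2 = np.empty(n)
sigma2[0] = rng.gamma(nu, 1 / alpha)      # start in stationarity
for t in range(1, n):
    decay = np.exp(-lam * dt) * sigma2[t - 1]
    k = rng.poisson(lam * nu * dt)        # jumps during this step
    ages = rng.uniform(0.0, dt, size=k)   # time since each jump occurred
    jumps = rng.exponential(1 / alpha, size=k)
    sigma2[t] = decay + np.sum(np.exp(-lam * ages) * jumps)
print(sigma2.mean(), nu / alpha)          # empirical vs stationary mean
```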

7.
Markov chain Monte Carlo (MCMC) methods have become a ubiquitous tool in Bayesian analysis. This paper implements MCMC methods for Bayesian analysis of stochastic frontier models using the WinBUGS package, freely available software. General code for cross-sectional and panel data is presented, and various ways of summarizing posterior inference are discussed. Several examples illustrate that analyses with models of genuine practical interest can be performed straightforwardly and that model changes are easily implemented. Although WinBUGS may not be that efficient for more complicated models, it does make Bayesian inference with stochastic frontier models easily accessible to applied researchers, and its generic structure allows for considerable flexibility in model specification.
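The paper's own code is in the WinBUGS language; as a rough stand-in, the following Python sketch evaluates the cross-sectional normal/half-normal frontier log-likelihood of Aigner, Lovell and Schmidt (1977) and explores it with a random-walk Metropolis step under flat priors (simplifying assumptions, not the paper's implementation).

```python
import numpy as np
from scipy.stats import norm

# Stochastic production frontier y = X*beta + v - u, v ~ N(0, sv^2),
# u ~ half-normal(su^2).  Composed-error log-density:
#   ln f(e) = ln 2 - ln sigma + ln phi(e/sigma) + ln Phi(-e*lam/sigma),
# with sigma^2 = sv^2 + su^2 and lam = su/sv.
def loglik(params, y, X):
    beta, log_sv, log_su = params[:-2], params[-2], params[-1]
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma, lam = np.hypot(sv, su), su / sv
    e = y - X @ beta
    return np.sum(np.log(2) - np.log(sigma)
                  + norm.logpdf(e / sigma)
                  + norm.logcdf(-e * lam / sigma))

rng = np.random.default_rng(4)
n = 300
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = X @ np.array([1.0, 0.7]) + rng.normal(0, 0.3, n) - np.abs(rng.normal(0, 0.5, n))

# Random-walk Metropolis with flat priors (an assumption made for brevity).
cur = np.array([1.0, 0.5, np.log(0.3), np.log(0.5)])
cur_ll = loglik(cur, y, X)
draws = []
for it in range(5000):
    prop = cur + 0.05 * rng.standard_normal(cur.size)
    prop_ll = loglik(prop, y, X)
    if np.log(rng.uniform()) < prop_ll - cur_ll:
        cur, cur_ll = prop, prop_ll
    if it >= 1000:
        draws.append(cur)
print(np.mean(draws, axis=0))
```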

8.
The traditional Bühlmann–Straub credibility model cannot effectively handle missing claims data. Using Bayesian statistical methods, this paper constructs a new class of Bayesian credibility models and introduces Markov chain Monte Carlo (MCMC) computation based on Gibbs sampling for the numerical work. A hierarchical normal posterior model for claims is built for an empirical analysis that demonstrates the model's effectiveness. The results show that the MCMC-based Bayesian credibility model can dynamically simulate the posterior distributions of the model parameters and improve the precision of the model estimates, which is of real practical significance for improving the experience-rating methods of insurance companies.

9.
This paper studies an alternative quasi likelihood approach under possible model misspecification. We derive a filtered likelihood from a given quasi likelihood (QL), called a limited information quasi likelihood (LI-QL), that contains relevant but limited information on the data generation process. Our LI-QL approach, on the one hand, extends the robustness of the QL approach to inference problems for which the existing approach does not apply. Our study in this paper, on the other hand, builds a bridge between the classical and Bayesian approaches for statistical inference under possible model misspecification. We can establish a large sample correspondence between the classical QL approach and our LI-QL based Bayesian approach. An interesting finding is that the asymptotic distribution of an LI-QL based posterior and that of the corresponding quasi maximum likelihood estimator share the same “sandwich”-type second moment. Based on the LI-QL we can develop inference methods that are useful for practical applications under possible model misspecification. In particular, we can develop the Bayesian counterparts of classical QL methods that carry all the nice features of the latter studied in White (1982). In addition, we can develop a Bayesian method for analyzing model specification based on an LI-QL.

10.
We propose a general class of models and a unified Bayesian inference methodology for flexibly estimating the density of a response variable conditional on a possibly high-dimensional set of covariates. Our model is a finite mixture of component models with covariate-dependent mixing weights. The component densities can belong to any parametric family, with each model parameter being a deterministic function of covariates through a link function. Our MCMC methodology allows for Bayesian variable selection among the covariates in the mixture components and in the mixing weights. The model’s parameterization and variable selection prior are chosen to prevent overfitting. We use simulated and real data sets to illustrate the methodology.

11.
An important issue in models of technical efficiency measurement concerns the temporal behaviour of inefficiency. Consideration of dynamic models is necessary, but inference in such models is complicated. In this paper we propose a stochastic frontier model that allows for technical inefficiency effects and dynamic technical inefficiency, and use Bayesian inference procedures organized around data augmentation techniques to provide inferences. Also provided are firm‐specific efficiency measures. The new methods are applied to a panel of large US commercial banks over the period 1989–2000.

12.
The paper takes up Bayesian inference in time series models when essentially nothing is known about the distribution of the dependent variable given past realizations or other covariates. It proposes the use of kernel quasi likelihoods upon which formal inference can be based. Gibbs sampling with data augmentation is used to perform the computations related to numerical Bayesian analysis of the model. The method is illustrated with artificial and real data sets.

13.
This paper considers the problem of defining a time-dependent nonparametric prior for use in Bayesian nonparametric modelling of time series. A recursive construction allows the definition of priors whose marginals have a general stick-breaking form. The processes with Poisson-Dirichlet and Dirichlet process marginals are investigated in some detail. We develop a general conditional Markov chain Monte Carlo (MCMC) method for inference in the wide subclass of these models where the parameters of the marginal stick-breaking process are nondecreasing sequences. We derive a generalised Pólya urn scheme type representation of the Dirichlet process construction, which allows us to develop a marginal MCMC method for this case. We apply the proposed methods to financial data to develop a semi-parametric stochastic volatility model with a time-varying nonparametric returns distribution. Finally, we present two examples concerning the analysis of regional GDP and its growth.
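For readers new to the construction, a minimal sketch of a truncated stick-breaking draw from a Dirichlet process prior, with a standard normal base measure chosen purely for illustration:

```python
import numpy as np

# Truncated stick-breaking draw from DP(alpha, H), H = N(0, 1):
#   v_k ~ Beta(1, alpha),  w_k = v_k * prod_{j<k} (1 - v_j),  atoms ~ H.
rng = np.random.default_rng(5)
alpha, K = 2.0, 100                # truncation level K

v = rng.beta(1.0, alpha, size=K)
w = v * np.cumprod(np.concatenate(([1.0], 1 - v[:-1])))
atoms = rng.standard_normal(K)     # draws from the base measure H

# A sample from the (truncated) random measure: draw cluster labels.
labels = rng.choice(K, size=1000, p=w / w.sum())
print(len(np.unique(labels)))      # a few dominant atoms carry most weight
```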

14.
Assuming that the innovations of each univariate time series follow a standardized Student-t distribution, this paper proposes a multivariate time-varying Copula-GARCH-t model and uses a Markov chain Monte Carlo (MCMC) algorithm for Bayesian inference on the model parameters. Measures of portfolio risk (VaR and CVaR) for multiple assets are derived, and an optimal asset-allocation model is established on the principle of risk minimization. Empirical analysis shows that the MCMC method outperforms the classical IFM method and adequately captures the time-varying dependence structure of the Chinese and US stock markets, as well as the dynamics of their correlation coefficients and tail indices.
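As a hedged illustration of the risk measures involved (toy Student-t returns, not the paper's Copula-GARCH-t model), VaR and CVaR can be read off a Monte Carlo sample of portfolio returns:

```python
import numpy as np

# Sketch: VaR at level a is the negated a-quantile of the return sample;
# CVaR is the negated mean of returns at or below that quantile.
rng = np.random.default_rng(6)
returns = rng.standard_t(df=5, size=100_000) * 0.01  # heavy-tailed toy returns

a = 0.05
var = -np.quantile(returns, a)
cvar = -returns[returns <= -var].mean()
print(f"VaR(95%) = {var:.4f}, CVaR(95%) = {cvar:.4f}")  # CVaR >= VaR
```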

15.
Zellner (1976) proposed a regression model in which the data vector (or the error vector) is represented as a realization from the multivariate Student t distribution. This model has attracted considerable attention because it seems to broaden the usual Gaussian assumption to allow for heavier-tailed error distributions. A number of results in the literature indicate that the standard inference procedures for the Gaussian model remain appropriate under the broader distributional assumption, leading to claims of robustness of the standard methods. We show that, although mathematically the two models are different, for purposes of statistical inference they are indistinguishable. The empirical implications of the multivariate t model are precisely the same as those of the Gaussian model. Hence the suggestion of a broader distributional representation of the data is spurious, and the claims of robustness are misleading. These conclusions are reached from both frequentist and Bayesian perspectives.

16.
We establish the inferential properties of the mean-difference estimator for the average treatment effect in randomised experiments where each unit in a population is randomised to one of two treatments and then units within treatment groups are randomly sampled. The properties of this estimator are well understood in the experimental design scenario where first units are randomly sampled and then treatment is randomly assigned, but not for the aforementioned scenario where the sampling and treatment assignment stages are reversed. We find that the inferential properties of the mean-difference estimator under this experimental design scenario are identical to those under the more common sample-first-randomise-second design. This finding clarifies the behaviour of sampling-based randomised designs for causal inference, particularly for settings where there is a finite super-population. Finally, we explore to what extent pre-treatment measurements can be used to improve upon the mean-difference estimator for this randomise-first-sample-second design. Unfortunately, we find that pre-treatment measurements are often unhelpful in improving the precision of average treatment effect estimators under this design, unless a large number of pre-treatment measurements that are highly associated with the post-treatment measurements can be obtained. We confirm these results using a simulation study based on a real experiment in nanomaterials.
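A small simulation sketch of the randomise-first-sample-second design with a constant treatment effect (hypothetical numbers, not the paper's nanomaterials experiment):

```python
import numpy as np

# Randomise a finite population to two arms, then sample within each arm,
# and compute the mean-difference estimator with its Neyman standard error.
rng = np.random.default_rng(7)
N = 10_000
y0 = rng.normal(0.0, 1.0, N)   # potential outcomes under control
y1 = y0 + 0.3                  # constant treatment effect of 0.3

arm = rng.permutation(np.repeat([0, 1], N // 2))            # randomise first
s1 = rng.choice(np.where(arm == 1)[0], 500, replace=False)  # then sample
s0 = rng.choice(np.where(arm == 0)[0], 500, replace=False)
t_obs, c_obs = y1[s1], y0[s0]

tau_hat = t_obs.mean() - c_obs.mean()
se = np.sqrt(t_obs.var(ddof=1) / len(t_obs) + c_obs.var(ddof=1) / len(c_obs))
print(tau_hat, se)  # estimate near 0.3 with the usual Neyman standard error
```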

17.
The widely claimed replicability crisis in science may lead to revised standards of significance. The customary frequentist confidence intervals, calibrated through hypothetical repetitions of the experiment that is supposed to have produced the data at hand, rely on a feeble concept of replicability. In particular, contradictory conclusions may be reached when a substantial enlargement of the study is undertaken. To redefine statistical confidence in such a way that inferential conclusions are non-contradictory, with large enough probability, under enlargements of the sample, we give a new reading of a proposal dating back to the 1960s, namely, Robbins' confidence sequences. Directly bounding the probability of reaching, in the future, conclusions that contradict the current ones, Robbins' confidence sequences ensure a clear-cut form of replicability when inference is performed on accumulating data. Their main frequentist property is easy to understand and to prove. We show that Robbins' confidence sequences may be justified under various views of inference: they are likelihood-based, can incorporate prior information and obey the strong likelihood principle. They are easy to compute, even when inference is on a parameter of interest, especially using a closed form approximation from normal asymptotic theory.
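As a hedged illustration, one classical normal-mixture confidence sequence in the spirit of Robbins (1970) can be checked by simulation: for unit-variance normal data, the running intervals xbar_n ± sqrt(((n+1)/n^2) log((n+1)/alpha^2)) are claimed to contain theta simultaneously for all n with probability at least 1 - alpha.

```python
import numpy as np

# Monte Carlo check of simultaneous coverage (illustrative sketch): count
# how often the running interval ever fails to contain theta = 0.
rng = np.random.default_rng(8)
a, n_max, reps = 0.05, 5_000, 2_000
ns = np.arange(1, n_max + 1)
radius = np.sqrt((ns + 1) / ns**2 * np.log((ns + 1) / a**2))

miss = 0
for _ in range(reps):
    x = rng.standard_normal(n_max)  # theta = 0, unit variance
    xbar = np.cumsum(x) / ns
    miss += np.any(np.abs(xbar) > radius)
print(miss / reps)  # empirical crossing rate, below a = 0.05
```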

18.
Statistical Inference in Nonparametric Frontier Models: The State of the Art
Efficiency scores of firms are measured by their distance to an estimated production frontier. The economic literature proposes several nonparametric frontier estimators based on the idea of enveloping the data (FDH and DEA-type estimators). Many have claimed that FDH and DEA techniques are non-statistical, as opposed to econometric approaches where particular parametric expressions are posited to model the frontier. We can now define a statistical model allowing determination of the statistical properties of the nonparametric estimators in the multi-output and multi-input case. New results provide the asymptotic sampling distribution of the FDH estimator in a multivariate setting and of the DEA estimator in the bivariate case. Sampling distributions may also be approximated by bootstrap distributions in very general situations. Consequently, statistical inference based on DEA/FDH-type estimators is now possible. These techniques allow correction for the bias of the efficiency estimators and estimation of confidence intervals for the efficiency measures. This paper summarizes the results which are now available, and provides a brief guide to the existing literature. Emphasizing the role of hypotheses and inference, we show how the results can be used or adapted for practical purposes.
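For concreteness, a minimal sketch of the input-oriented FDH efficiency score (an illustration; the paper itself surveys the sampling and bootstrap theory for such estimators):

```python
import numpy as np

# Input-oriented FDH score for unit (x0, y0):
#   theta = min over observed units j with Y_j >= y0 of max_k X_{jk} / x0_k.
# theta = 1 on the FDH frontier; theta < 1 means the unit is dominated.
def fdh_input_efficiency(X, Y, x0, y0):
    """X: (n, p) inputs, Y: (n, q) outputs of the observed units."""
    dominates = np.all(Y >= y0, axis=1)          # units producing at least y0
    ratios = np.max(X[dominates] / x0, axis=1)   # input scaling factors
    return ratios.min()

rng = np.random.default_rng(9)
n = 200
X = rng.uniform(1, 2, size=(n, 2))               # two inputs
Y = (X.min(axis=1) * rng.uniform(0.5, 1.0, n))[:, None]  # one output
scores = np.array([fdh_input_efficiency(X, Y, X[i], Y[i]) for i in range(n)])
print((scores == 1.0).sum(), "units on the FDH frontier")
```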

19.
This paper studies the semiparametric binary response model with interval data investigated by Manski and Tamer (2002) (hereafter MT). In this partially identified model, we propose a new estimator based on MT's modified maximum score (MMS) method by introducing density weights to the objective function, which allows us to develop asymptotic properties of the proposed set estimator for inference. We show that the density-weighted MMS estimator converges at a nearly cube-root-n rate. We propose an asymptotically valid inference procedure for the identified region based on subsampling. Monte Carlo experiments provide support for our inference procedure.

20.
In this paper, we study a Bayesian approach to flexible modeling of conditional distributions. The approach uses a flexible model for the joint distribution of the dependent and independent variables and then extracts the conditional distributions of interest from the estimated joint distribution. We use a finite mixture of multivariate normals (FMMN) to estimate the joint distribution. The conditional distributions can then be assessed analytically or through simulations. The discrete variables are handled through the use of latent variables. The estimation procedure employs an MCMC algorithm. We provide a characterization of the Kullback–Leibler closure of FMMN and show that the joint and conditional predictive densities implied by the FMMN model are consistent estimators for a large class of data generating processes with continuous and discrete observables. The method can be used as a robust regression model with discrete and continuous dependent and independent variables and as a Bayesian alternative to semi- and non-parametric models such as quantile and kernel regression. In experiments, the method compares favorably with classical nonparametric and alternative Bayesian methods.
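A hedged two-component illustration of the key computation (not the paper's MCMC machinery): conditioning a bivariate normal mixture on x gives another normal mixture whose weights are each component's prior weight reweighted by that component's marginal density at x.

```python
import numpy as np
from scipy.stats import norm

# Two-component bivariate normal mixture over (x, y); within each component
# the marginal standard deviations are sds[k] and the correlation is rho[k].
means = np.array([[0.0, 0.0], [2.0, 3.0]])
sds = np.array([[1.0, 1.0], [0.5, 0.8]])
rho = np.array([0.6, -0.4])
pi = np.array([0.5, 0.5])

def conditional_density(y, x):
    wx = pi * norm.pdf(x, means[:, 0], sds[:, 0])  # p(k) * p(x | k)
    wx = wx / wx.sum()                             # p(k | x)
    # Within component k: y | x ~ N(mu_y + rho*sd_y/sd_x*(x - mu_x),
    #                               sd_y^2 * (1 - rho^2)).
    mu_y = means[:, 1] + rho * sds[:, 1] / sds[:, 0] * (x - means[:, 0])
    sd_y = sds[:, 1] * np.sqrt(1 - rho**2)
    return np.sum(wx * norm.pdf(y, mu_y, sd_y))    # p(y | x)

print(conditional_density(1.0, 0.5))
```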
