Similar Documents
20 similar documents retrieved.
1.
The likelihood ratio (LR) is widely used to evaluate the relative weight of forensic data with respect to two hypotheses, and Bayesian methods are widespread in the forensic field for its assessment. However, the Bayesian ‘recipe’ for the LR presented in most of the literature consists of plugging Bayesian estimates of the involved nuisance parameters into a frequentist-defined LR: frequentist and Bayesian methods are thus mixed, giving rise to solutions obtained by hybrid reasoning. This paper provides the derivation of a proper Bayesian approach to assess LRs for the ‘rare type match problem’, the situation in which the expert wants to evaluate a match between the DNA profile of a suspect and that of a trace from the crime scene, and this profile has never been observed before in the database of reference. LR assessment using the two most popular Bayesian models (beta-binomial and Dirichlet-multinomial) is discussed and compared with corresponding plug-in versions.
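As a concrete illustration of the gap between hybrid and fully Bayesian reasoning, the following sketch contrasts a plug-in LR with a proper Bayesian LR in a deliberately simplified version of the rare type match setting: a single characteristic with unknown population frequency p under a Beta prior, never observed in a reference database of n profiles. The uniform prior, the database size, and the independence structure are illustrative assumptions, not the paper's exact specification.

```python
# Toy sketch of the 'rare type match' LR under a beta-binomial model.
# Assumptions (not the paper's exact model): a single characteristic with
# unknown population frequency p ~ Beta(a, b); the type was seen 0 times
# in a reference database of n profiles; under Hd the trace donor is an
# unrelated individual whose type is independent of the suspect's given p.

a, b = 1.0, 1.0   # illustrative uniform prior on p
n = 1000          # database size; the rare type has 0 occurrences in it

# Plug-in LR: estimate p by its posterior mean given the database alone,
# then treat that estimate as the true frequency (hybrid reasoning).
p_hat = a / (a + b + n)          # posterior mean of Beta(a, b + n)
lr_plugin = 1.0 / p_hat

# Proper Bayesian LR: under Hd the suspect's own profile is one extra
# observation of the rare type, so it too updates the posterior for p:
# P(trace = t | suspect = t, database) = E[p | one success in n+1 trials].
lr_bayes = (a + b + n + 1) / (a + 1)

print(f"plug-in LR:  {lr_plugin:10.1f}")   # 1002.0
print(f"Bayesian LR: {lr_bayes:10.1f}")    #  501.5
```

The two answers differ by roughly a factor of two here because the proper Bayesian treatment lets the suspect's own rare profile update the posterior for p, whereas the plug-in version estimates p from the database alone.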

2.
Summary "Learning by experience" is a well-known part of the theory of subjective probabilities; the learning process is often derived from some prior distribution F(p ) where p is a parameter of unknown value of a binomial process for instance. In this paper, the learning process is explicitly formulated and the corresponding prior distribution is derived from it. In this interpretation, subjective probabilities are part of an inference methodology, rather than a subjective evaluation of frequentistic probabilities. Implications are considered for a concept like the "non-informative prior; the situation is considered in which the learning process seems to be in contact with some objectively determined prior.  相似文献   

3.
Bayesian and Frequentist Inference for Ecological Inference: The R×C Case
In this paper we propose Bayesian and frequentist approaches to ecological inference, based on R×C contingency tables, including a covariate. The proposed Bayesian model extends the binomial-beta hierarchical model developed by King, Rosen and Tanner (1999) from the 2×2 case to the R×C case. As in the 2×2 case, the inferential procedure employs Markov chain Monte Carlo (MCMC) methods. As such, the resulting MCMC analysis is rich but computationally intensive. The frequentist approach, based on first moments rather than on the entire likelihood, provides quick inference via nonlinear least-squares, while retaining good frequentist properties. The two approaches are illustrated with simulated data, as well as with real data on voting patterns in Weimar Germany. In the final section of the paper we provide an overview of a range of alternative inferential approaches which trade off computational intensity for statistical efficiency.

4.
Capital allocation decisions are made on the basis of an assessment of creditworthiness. Default is a rare event for most segments of a bank's portfolio and data information can be minimal. Inference about default rates is essential for efficient capital allocation, for risk management and for compliance with the requirements of the Basel II rules on capital standards for banks. Expert information is crucial in inference about defaults. A Bayesian approach is proposed and illustrated using prior distributions assessed from industry experts. A maximum entropy approach is used to represent expert information. The binomial model, most common in applications, is extended to allow correlated defaults yet remain consistent with Basel II. The application shows that probabilistic information can be elicited from experts and econometric methods can be useful even when data information is sparse.

5.
In frequentist inference, we commonly use a single point (point estimator) or an interval (confidence interval/“interval estimator”) to estimate a parameter of interest. A very simple question is: Can we also use a distribution function (“distribution estimator”) to estimate a parameter of interest in frequentist inference in the style of a Bayesian posterior? The answer is affirmative, and confidence distribution is a natural choice of such a “distribution estimator”. The concept of a confidence distribution has a long history, and its interpretation has long been fused with fiducial inference. Historically, it has been misconstrued as a fiducial concept, and has not been fully developed in the frequentist framework. In recent years, confidence distribution has attracted a surge of renewed attention, and several developments have highlighted its promising potential as an effective inferential tool. This article reviews recent developments of confidence distributions, along with a modern definition and interpretation of the concept. It includes distributional inference based on confidence distributions and its extensions, optimality issues and their applications. Based on the new developments, the concept of a confidence distribution subsumes and unifies a wide range of examples, from regular parametric (fiducial distribution) examples to bootstrap distributions, significance (p-value) functions, normalized likelihood functions, and, in some cases, Bayesian priors and posteriors. The discussion is entirely within the school of frequentist inference, with emphasis on applications providing useful statistical inference tools for problems where frequentist methods with good properties were previously unavailable or could not be easily obtained. Although it also draws attention to some of the differences and similarities among frequentist, fiducial and Bayesian approaches, the review is not intended to re-open the philosophical debate that has lasted more than two hundred years. On the contrary, it is hoped that the article will help bridge the gaps between these different statistical procedures.
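For concreteness, a minimal sketch of a confidence distribution in the most standard case, the mean of a normal sample with unknown variance; the simulated data are placeholders. The CD is a sample-dependent distribution function on the parameter space from which confidence intervals at any level and one-sided p-values can be read off directly.

```python
# A confidence distribution for a normal mean, the textbook case:
# H_n(mu) = F_{t_{n-1}}((mu - xbar) / (s / sqrt(n))) is a distribution
# *estimator* on the parameter space. Sketch only, assuming i.i.d. normal data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=30)
n, xbar, s = len(x), x.mean(), x.std(ddof=1)

def cd(mu):
    """Confidence distribution evaluated at mu."""
    return stats.t.cdf((mu - xbar) / (s / np.sqrt(n)), df=n - 1)

# A 95% CI is the interval between the 2.5% and 97.5% 'quantiles' of the CD:
lo = xbar + stats.t.ppf(0.025, n - 1) * s / np.sqrt(n)
hi = xbar + stats.t.ppf(0.975, n - 1) * s / np.sqrt(n)
print(f"95% CI: ({lo:.3f}, {hi:.3f})")

# The one-sided p-value for H0: mu <= 0 is the CD evaluated at 0:
print(f"p-value for H0: mu <= 0 : {cd(0.0):.4g}")
```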

6.
Non-invariance to the scale of the data is discussed for the Abraham and Box (1978) Bayesian outlier model and for the model that generalizes the Guttman, Dutter and Freeman (1978) outlier analysis. This drawback is due to the non-informative prior taken for the parameters. Freeman (1980) expected that most posterior weight would be put on the model with the most outliers if the improper prior were used. We show that this may not be correct. An illustrative example is given.

7.
C. E. Särndal, Metrika, 1968, 13(1): 170–190
This paper deals with a class of frequency distributions consisting of the negative hypergeometric distribution and its limit cases, namely, the negative binomial, binomial, Poisson, gamma, beta and normal distributions. In view of the relationships existing between the various members of this class, it is convenient to discuss certain topics jointly for all the distributions, including additivity properties, a property related to the information measure in point estimation, and a comparison of frequentist and Bayesian interpretations of interval estimation.

8.
In the Bayesian approach to model selection and hypothesis testing, the Bayes factor plays a central role. However, the Bayes factor is very sensitive to the prior distributions of the parameters, which is especially problematic in the presence of weak prior information on the parameters of the models. The most radical consequence of this fact is that the Bayes factor is undetermined when improper priors are used. Nonetheless, extending the non-informative approach of Bayesian analysis to model selection/testing procedures is important from both a theoretical and an applied viewpoint. The need to develop automatic and robust methods for model comparison has led to the introduction of several alternative Bayes factors. In this paper we review one of these methods: the fractional Bayes factor (O'Hagan, 1995). We discuss general properties of the method, such as consistency and coherence. Furthermore, in addition to the original, essentially asymptotic justifications of the fractional Bayes factor, we provide further finite-sample motivations for its use. Connections and comparisons to other automatic methods are discussed, and several issues of robustness with respect to priors and data are considered. Finally, we focus on some open problems in the fractional Bayes factor approach, and outline some possible answers and directions for future research.
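The following numerical sketch shows the mechanism that makes the fractional Bayes factor usable with improper priors: a fraction b of the likelihood is used to convert the improper prior into a proper one, and the arbitrary constant of the improper prior cancels in the ratio. The point-null normal-mean setup, the flat prior, and the choice b = 1/n are illustrative assumptions, not taken from the paper.

```python
# Numerical sketch of the fractional Bayes factor (FBF) for
# H0: mu = 0 vs H1: mu != 0, normal data with known sigma and a flat
# (improper) prior on mu under H1. With q_i(b) = m_i(x) / m_i^b(x),
# where m_i^b integrates the likelihood raised to the fraction b,
# the FBF is B01 = q_0(b) / q_1(b); the improper prior's arbitrary
# constant cancels between m_1 and m_1^b. Toy illustration only.
import numpy as np
from scipy import stats
from scipy.integrate import quad

rng = np.random.default_rng(1)
x = rng.normal(loc=0.3, scale=1.0, size=20)
sigma, n = 1.0, len(x)
b = 1.0 / n                               # a common minimal-fraction choice

def loglik(mu):
    return stats.norm.logpdf(x, loc=mu, scale=sigma).sum()

def marginal(power):
    """Integral over mu of the likelihood raised to `power` (flat prior)."""
    return quad(lambda mu: np.exp(power * loglik(mu)), -10, 10)[0]

q1 = marginal(1.0) / marginal(b)          # fractional marginal, H1
q0 = np.exp((1.0 - b) * loglik(0.0))      # H0 is a point null
print(f"FBF B01 = {q0 / q1:.4f}")
```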

9.
The problem of comparing frequentist evidence and Bayesian evidence in one-sided testing problems has been widely treated, and much of this research has revealed that the two approaches can reach approximate agreement. However, most previous work dealt mainly with situations without nuisance parameters. Since the presence of nuisance parameters is very common in practice, whether the two kinds of evidence still agree is a problem worth studying. In this article, we establish systematically, for exponential distributions, the agreement of the Bayesian evidence and the generalized frequentist evidence (the generalized P-value) for a variety of one-sided testing problems in which nuisance parameters are involved.
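The kind of agreement at stake is easiest to see in the classical nuisance-free case, sketched below: for a normal mean with known variance and a flat prior, the posterior probability of the one-sided null equals the one-sided p-value exactly. This is background illustration only; the paper's contribution is extending such agreement to exponential distributions with nuisance parameters.

```python
# Textbook frequentist/Bayesian agreement in one-sided testing
# (no nuisance parameters): X_i ~ N(mu, sigma^2), sigma known, flat
# prior on mu. Then P(mu <= 0 | data) equals the one-sided p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sigma, n = 1.0, 25
x = rng.normal(loc=0.2, scale=sigma, size=n)
z = np.sqrt(n) * x.mean() / sigma

p_value = 1 - stats.norm.cdf(z)    # P(Xbar >= observed | mu = 0)
post_h0 = stats.norm.cdf(-z)       # P(mu <= 0 | data), flat prior
print(p_value, post_h0)            # identical
```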

10.
The paper discusses the asymptotic validity of posterior inference of pseudo-Bayesian quantile regression methods with complete or censored data when an asymmetric Laplace likelihood is used. The asymmetric Laplace likelihood has a special place in the Bayesian quantile regression framework because the usual quantile regression estimator can be derived as the maximum likelihood estimator under such a model, and this working likelihood enables highly efficient Markov chain Monte Carlo algorithms for posterior sampling. However, it seems to be under-recognised that the stationary distribution for the resulting posterior does not provide valid posterior inference directly. We demonstrate that a simple adjustment to the covariance matrix of the posterior chain leads to asymptotically valid posterior inference. Our simulation results confirm that the posterior inference, when appropriately adjusted, is an attractive alternative to other asymptotic approximations in quantile regression, especially in the presence of censored data.
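A minimal sketch of what a sandwich-type covariance adjustment of this kind can look like under standard quantile-regression asymptotics: the working posterior covariance approximates an inverse Hessian of the check loss, while the valid asymptotic covariance is the sandwich with score variance proportional to tau*(1-tau)*X'X. The helper name and the exact scaling below are illustrative assumptions, not the paper's verbatim formula.

```python
# Sandwich-type adjustment of the working-posterior covariance from an
# asymmetric Laplace (ALD) quantile-regression fit. Logic: the ALD
# posterior covariance approximates J^{-1} (inverse check-loss Hessian);
# the valid asymptotic covariance is J^{-1} I J^{-1}, with score variance
# I proportional to tau*(1-tau)*X'X. Illustrative form only; consult the
# paper for the precise expression.
import numpy as np

def adjusted_cov(post_draws: np.ndarray, X: np.ndarray,
                 tau: float, sigma: float) -> np.ndarray:
    """post_draws: (num_draws, p) MCMC draws of beta from the ALD posterior;
    X: (n, p) design matrix; tau: quantile level; sigma: ALD scale."""
    cov_post = np.cov(post_draws, rowvar=False)        # approx. J^{-1}, up to sigma
    score_var = tau * (1.0 - tau) * (X.T @ X)          # approx. I, up to sigma^2
    return cov_post @ score_var @ cov_post / sigma**2  # sandwich J^{-1} I J^{-1}

# Usage: cred_sd = np.sqrt(np.diag(adjusted_cov(draws, X, 0.5, sigma_hat)))
```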

11.
We propose a quasi-Bayesian nonparametric approach to estimating the structural relationship φ among endogenous variables when instruments are available. We show that the posterior distribution of φ is inconsistent in the frequentist sense. We interpret this fact as the ill-posedness of the Bayesian inverse problem defined by the relation that characterizes the structural function φ. To solve this problem, we construct a regularized posterior distribution, based on a Tikhonov regularization of the inverse of the marginal variance of the sample, which is justified by a penalized projection argument. This regularized posterior distribution is consistent in the frequentist sense, and its mean can be interpreted as the mean of the exact posterior distribution resulting from a Gaussian prior distribution with a shrinking covariance operator.
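To convey the role of Tikhonov regularization here, the sketch below works a finite-dimensional caricature: recovering φ from y = Kφ + noise with an ill-conditioned K. The design, noise level, and regularization constants are all illustrative assumptions; in the paper the regularization is applied to an operator (the marginal covariance) in function space.

```python
# Finite-dimensional caricature of a Tikhonov-regularized solution to an
# ill-posed linear inverse problem y = K @ phi + noise. The term
# alpha * I stabilizes the near-singular normal equations; alpha = 0
# reproduces the unregularized (noise-amplifying) solution.
import numpy as np

rng = np.random.default_rng(3)
n, p = 100, 50
K = rng.normal(size=(n, p)) @ np.diag(1.0 / np.arange(1, p + 1)**3)  # ill-conditioned
phi_true = np.sin(np.linspace(0, np.pi, p))
y = K @ phi_true + 0.01 * rng.normal(size=n)

def tikhonov(K, y, alpha):
    p = K.shape[1]
    return np.linalg.solve(alpha * np.eye(p) + K.T @ K, K.T @ y)

for alpha in (0.0, 1e-6, 1e-2):
    err = np.linalg.norm(tikhonov(K, y, alpha) - phi_true)
    print(f"alpha={alpha:8.0e}  error={err:10.3f}")
```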

12.
We develop a novel Bayesian doubly adaptive elastic-net Lasso (DAELasso) approach for VAR shrinkage. DAELasso achieves variable selection and coefficient shrinkage in a data-based manner. It deals constructively with explanatory variables that tend to be highly collinear by encouraging the grouping effect, and it also allows for different degrees of shrinkage for different coefficients. Rewriting the multivariate Laplace distribution as a scale mixture, we establish closed-form conditional posteriors that can be drawn from a Gibbs sampler. An empirical analysis shows that the forecasts produced by DAELasso and its variants are comparable to those from other popular Bayesian methods, which provides further evidence that the forecast performance of large and medium-sized Bayesian VARs is relatively robust to prior choices and that, in practice, simple Minnesota-type priors can be more attractive than their complex and well-designed alternatives.
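The scale-mixture device that yields closed-form conditionals in Lasso-type samplers is worth spelling out: a Laplace prior is a normal with exponentially distributed variance. The Monte Carlo check below verifies the univariate representation; it is the building block, not the full multivariate DAELasso sampler, and the rate and sample size are placeholders.

```python
# Laplace as a scale mixture of normals with exponential mixing:
#   beta | tau2 ~ N(0, tau2),  tau2 ~ Exp(rate = lam^2 / 2)
#   =>  beta ~ Laplace(rate = lam), i.e. density (lam/2) * exp(-lam*|beta|).
import numpy as np

rng = np.random.default_rng(4)
lam, m = 2.0, 500_000

tau2 = rng.exponential(scale=2.0 / lam**2, size=m)   # Exp(rate = lam^2/2)
beta = rng.normal(0.0, np.sqrt(tau2))                # N(0, tau2) given tau2

direct = rng.laplace(loc=0.0, scale=1.0 / lam, size=m)
print("mixture var:", beta.var())     # theory: 2 / lam^2 = 0.5
print("Laplace var:", direct.var())   # matches
```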

13.
Standard model-based small area estimates perform poorly in the presence of outliers. Sinha & Rao (2009) developed robust frequentist predictors of small area means. In this article, we present a robust Bayesian method to handle outliers in unit-level data by extending the nested error regression model. We consider a finite mixture of normal distributions for the unit-level error to model outliers and produce noninformative Bayes predictors of small area means. Our modelling approach generalises that of Datta & Ghosh (1991) under the normality assumption. Application of our method to a data set which is suspected to contain an outlier confirms this suspicion, correctly identifies the suspected outlier and produces robust predictors and posterior standard deviations of the small area means. Evaluation of several procedures, including the M-quantile method of Chambers & Tzavidis (2006), via simulations shows that our proposed method is as good as the other procedures in terms of bias, variability and coverage probability of confidence and credible intervals when there are no outliers. In the presence of outliers, while our method and the Sinha–Rao method perform similarly, they improve over the other methods. This superior performance of our procedure shows its dual (Bayes and frequentist) dominance, which should make it attractive to all practitioners of small area estimation, Bayesians and frequentists alike.

14.
Journal of Econometrics, 2004, 123(2): 307–325
This paper presents a method for estimating the posterior probability density of the cointegrating rank of a multivariate error correction model. A second contribution is the careful elicitation of the prior for the cointegrating vectors derived from a prior on the cointegrating space. This prior obtains naturally from treating the cointegrating space as the parameter of interest in inference and overcomes problems previously encountered in Bayesian cointegration analysis. Using this new prior and Laplace approximation, an estimator for the posterior probability of the rank is given. The approach performs well compared with information criteria in Monte Carlo experiments.

15.
This paper develops a Bayesian method for quantile regression for dichotomous response data. The frequentist approach to this type of regression has proven problematic in both optimizing the objective function and making inferences on the parameters. By accepting additional distributional assumptions on the error terms, the Bayesian method proposed sets the problem in a parametric framework in which these problems are avoided. To test the applicability of the method, we ran two Monte Carlo experiments and applied it to Horowitz's (1993) often-studied work-trip mode choice dataset. Compared to previous estimates for the latter dataset, the method proposed leads to a different economic interpretation.

16.
This paper considers the location-scale quantile autoregression in which the location and scale parameters are subject to regime shifts. The regime changes in the lower and upper tails are determined by the outcome of a latent, discrete-state Markov process. The new method provides direct inference and estimates for different parts of a non-stationary time series distribution. Bayesian inference for switching regimes within a quantile, via a three-parameter asymmetric Laplace distribution, is adapted and designed for parameter estimation. Using the Bayesian output, the marginal likelihood is readily available for testing the presence and the number of regimes. The simulation study shows that regime and conditional quantile predictions obtained with the asymmetric Laplace likelihood are fairly comparable with those from the true model distributions. However, ignoring that autoregressive coefficients might be quantile dependent leads to substantial bias in both regime inference and quantile prediction. The potential of this new approach is illustrated in empirical applications to US inflation and real exchange rates for asymmetric dynamics, and to S&P 500 index returns of different frequencies for financial market risk assessment.
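For reference, the three-parameter asymmetric Laplace density that serves as the working likelihood for quantile level tau is sketched below; maximizing it in the location parameter reproduces the usual quantile-regression estimator. This shows the likelihood only, not the Markov-switching machinery built on top of it, and the test values are arbitrary.

```python
# Three-parameter asymmetric Laplace working likelihood for quantile tau:
#   f(y; mu, sigma, tau) = tau*(1-tau)/sigma * exp(-rho_tau((y-mu)/sigma)),
# with check function rho_tau(u) = u * (tau - 1{u < 0}).
import numpy as np

def ald_logpdf(y, mu, sigma, tau):
    u = (y - mu) / sigma
    rho = u * (tau - (u < 0))                 # check-loss function
    return np.log(tau * (1 - tau) / sigma) - rho

y = np.array([-1.2, 0.3, 0.8, 2.5])
print(ald_logpdf(y, mu=0.0, sigma=1.0, tau=0.9).sum())
```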

17.
We consider the problem of component-wise estimation of ordered scale parameters of two gamma populations, when it is known a priori which population corresponds to each ordered parameter. Under the scale-equivariant squared error loss function, smooth estimators that improve upon the best scale-equivariant estimators are derived. These smooth estimators are shown to be generalized Bayes with respect to a non-informative prior. Finally, using Monte Carlo simulations, the improved smooth estimators are compared with the best scale-equivariant estimators, their non-smooth improvements obtained in Vijayasree, Misra & Singh (1995), and the restricted maximum likelihood estimators.

18.
Journal of Econometrics, 2005, 124(2): 311–334
We introduce a set of new Markov chain Monte Carlo algorithms for Bayesian analysis of the multinomial probit model. Our Bayesian representation of the model places a new, and possibly improper, prior distribution directly on the identifiable parameters and thus is relatively easy to interpret and use. Our algorithms, which are based on the method of marginal data augmentation, involve only draws from standard distributions and dominate other available Bayesian methods in that they are as quick to converge as the fastest methods but with a more attractive prior specification. C code along with an R interface for our algorithms is publicly available.

19.
We propose two data-based priors for vector error correction models. Both priors lead to highly automatic approaches which require only minimal user input. For the first, we propose a reduced-rank prior which encourages shrinkage towards a low-rank, row-sparse, and column-sparse long-run matrix. For the second, we propose the use of the horseshoe prior, which shrinks all elements of the long-run matrix towards zero. Two empirical investigations reveal that Bayesian vector error correction (BVEC) models equipped with our proposed priors scale well to higher dimensions and forecast well. In comparison to VARs in first differences, they are able to exploit the information in the level variables; this turns out to be relevant for improving the forecasts of some macroeconomic variables. A simulation study shows that the BVEC with data-based priors possesses good frequentist estimation properties.

20.
The use of control charts in statistical quality control is based on several assumptions: for instance, the process output is assumed to follow a specified probability distribution (normal for continuous measurements, binomial or Poisson for attribute data), and the process is assumed to involve large production runs. These assumptions are not always fulfilled in practice. This paper focuses on the case in which the monitored process has an output of unknown distribution, or the production run is short, or both. The five-parameter generalized lambda distribution (GLD), a very flexible family of statistical distributions suited to modelling data distributions, is presented and proposed as the basis for estimating the control parameters. The proposed chart is of the Shewhart type, and simple equations are given for calculating the lower and upper control limits (LCL and UCL) for data of unknown distribution type. When the underlying distribution cannot be modeled sufficiently accurately, the proposed control chart comes into the picture. We develop a computationally efficient method for accurate calculation of the control limits. As the key measure of performance of SPC methods, we compute average run lengths (ARLs) and compare them to demonstrate the advantage of the proposed method.
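The sketch below shows the general idea of quantile-based Shewhart limits from a fitted GLD. It uses the common four-parameter Ramberg–Schmeiser quantile function for illustration (the paper works with a five-parameter family), and the lambda values are hypothetical placeholders standing in for estimates fitted to process data.

```python
# Distribution-free Shewhart-style limits from a fitted generalized
# lambda distribution, via its quantile function
#   Q(u) = l1 + (u**l3 - (1-u)**l4) / l2   (Ramberg-Schmeiser form).
# The control limits are the quantiles matching 3-sigma coverage.

def gld_quantile(u, l1, l2, l3, l4):
    return l1 + (u**l3 - (1.0 - u)**l4) / l2

lams = dict(l1=10.0, l2=0.2, l3=0.13, l4=0.13)   # hypothetical fitted values
LCL = gld_quantile(0.00135, **lams)              # 3-sigma-equivalent lower tail
UCL = gld_quantile(0.99865, **lams)              # 3-sigma-equivalent upper tail
print(f"LCL = {LCL:.3f}, UCL = {UCL:.3f}")
```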
