Similar articles
Found 20 similar articles (search time: 560 ms)
1.
In Bayesian analysis of vector autoregressive models, and especially in forecasting applications, the Minnesota prior of Litterman is frequently used. In many cases other prior distributions provide better forecasts and are preferable from a theoretical standpoint. Several of these priors require numerical methods in order to evaluate the posterior distribution. Different ways of implementing Monte Carlo integration are considered. It is found that Gibbs sampling performs as well as, or better than, importance sampling and that the Gibbs sampling algorithms are less adversely affected by model size. We also report on the forecasting performance of the different prior distributions. © 1997 by John Wiley & Sons, Ltd.
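As a concrete illustration of the Gibbs sampling idea compared above, here is a minimal sketch for a toy bivariate-normal target, not the paper's VAR posterior; the function name and target are purely illustrative assumptions:

```python
import random

def gibbs_bivariate_normal(rho, n_draws=20000, burn_in=1000, seed=1):
    """Gibbs sampler for a standard bivariate normal with correlation rho.
    Full conditionals: x | y ~ N(rho*y, 1 - rho^2), and symmetrically for y."""
    rng = random.Random(seed)
    sd = (1.0 - rho * rho) ** 0.5
    x, y = 0.0, 0.0
    draws = []
    for i in range(n_draws + burn_in):
        x = rng.gauss(rho * y, sd)   # draw x from its full conditional
        y = rng.gauss(rho * x, sd)   # draw y from its full conditional
        if i >= burn_in:
            draws.append((x, y))
    return draws

draws = gibbs_bivariate_normal(rho=0.8)
mean_x = sum(d[0] for d in draws) / len(draws)   # should be near 0
corr_est = sum(d[0] * d[1] for d in draws) / len(draws)   # estimates E[xy] = rho
```

Unlike importance sampling, no proposal distribution has to be tuned; each update conditions on the current value of the other block, which is one reason Gibbs schemes scale more gracefully with model size.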

2.
Markov Chain Monte Carlo (MCMC) methods are used to sample from complicated multivariate distributions with normalizing constants that may not be computable in practice and from which direct sampling is not feasible. A fundamental problem is to determine convergence of the chains. Propp & Wilson (1996) devised a Markov chain algorithm called Coupling From The Past (CFTP) that solves this problem, as it produces exact samples from the target distribution and determines automatically how long it needs to run. Exact sampling by CFTP and other methods is currently a thriving research topic. This paper gives a review of some of these ideas, with emphasis on the CFTP algorithm. The concepts of coupling and monotone CFTP are introduced, and results on the running time of the algorithm are presented. The interruptible method of Fill (1998) and the method of Murdoch & Green (1998) for exact sampling from continuous distributions are presented. Novel simulation experiments are reported for exact sampling from the Ising model in the setting of Bayesian image restoration, and the results are compared to standard MCMC. The results show that CFTP works at least as well as standard MCMC, with convergence monitored by the method of Raftery & Lewis (1992, 1996).
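A minimal sketch of monotone CFTP, using a lazy symmetric random walk on {0, ..., m} rather than the Ising model treated in the paper; the update rule and names are illustrative assumptions, but the doubling scheme and sandwich (top/bottom) chains follow Propp & Wilson:

```python
import random

def update(state, u, m):
    """Monotone update rule: move down if u < 1/3, up if u > 2/3, else stay.
    For a fixed u, the map preserves the ordering of states."""
    if u < 1 / 3:
        return max(state - 1, 0)
    if u > 2 / 3:
        return min(state + 1, m)
    return state

def monotone_cftp(m=4, seed=0):
    """Coupling From The Past with chains started at the bottom (0) and top (m).
    The look-back horizon doubles until both chains coalesce at time 0; the
    common value is an exact draw from the stationary law (uniform here)."""
    rng = random.Random(seed)
    randoms = []          # shared randomness u_{-1}, u_{-2}, ... (reused, never redrawn)
    t = 1
    while True:
        while len(randoms) < t:
            randoms.append(rng.random())
        lo, hi = 0, m
        for s in range(t - 1, -1, -1):   # apply u_{-t}, ..., u_{-1} in order
            lo = update(lo, randoms[s], m)
            hi = update(hi, randoms[s], m)
        if lo == hi:
            return lo                    # coalesced: exact stationary sample
        t *= 2                           # not yet: go further into the past

samples = [monotone_cftp(m=4, seed=s) for s in range(2000)]
sample_mean = sum(samples) / len(samples)   # stationary law is uniform on {0,...,4}
```

The crucial points this toy example preserves: past randomness is stored and reused across restarts, and monotonicity lets two bounding chains certify coalescence for all 5 possible start states at once.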

3.
Social and economic studies are often implemented as complex survey designs. For example, multistage, unequal probability sampling designs utilised by federal statistical agencies are typically constructed to maximise the efficiency of the target domain level estimator (e.g. indexed by geographic area) within cost constraints for survey administration. Such designs may induce dependence between the sampled units; for example, through a sampling step that selects geographically indexed clusters of units. A sampling‐weighted pseudo‐posterior distribution may be used to estimate the population model on the observed sample. The dependence induced between co‐clustered units inflates the scale of the resulting pseudo‐posterior covariance matrix, which has been shown to induce undercoverage of the credibility sets. By bridging results across Bayesian model misspecification and survey sampling, we demonstrate that the scale and shape of the asymptotic distributions differ between the pseudo‐maximum likelihood estimate (MLE), the pseudo‐posterior and the MLE under simple random sampling. Through insights from survey‐sampling variance estimation and recent advances in computational methods, we devise a correction applied as a simple and fast postprocessing step to Markov chain Monte Carlo draws of the pseudo‐posterior distribution. This adjustment projects the pseudo‐posterior covariance matrix such that the nominal coverage is approximately achieved. We use the National Survey on Drug Use and Health as a motivating application and demonstrate the efficacy of our scale and shape projection procedure on synthetic data for several common archetypes of survey designs.

4.
We propose a natural conjugate prior for the instrumental variables regression model. The prior is a natural conjugate one since the marginal prior and posterior of the structural parameter have the same functional expressions which directly reveal the update from prior to posterior. The Jeffreys prior results from a specific setting of the prior parameters and leads to a marginal posterior of the structural parameter that has the same functional form as the sampling density of the limited information maximum likelihood estimator. We construct informative priors for the Angrist–Krueger [1991. Does compulsory school attendance affect schooling and earnings? Quarterly Journal of Economics 106, 979–1014] data and show that the marginal posterior of the return on education in the US coincides with the marginal posterior from the Southern region when we use the Jeffreys prior. This result occurs since the instruments are the strongest in the Southern region and the posterior using the Jeffreys prior, identical to maximum likelihood, focusses on the strongest available instruments. We construct informative priors for the other regions that make their posteriors of the return on education similar to those of the US and the Southern region. These priors show the amount of prior information needed to obtain comparable results for all regions.

5.
Inferences about unobserved random variables, such as future observations, random effects and latent variables, are often of interest. In this paper, to make probability statements about unobserved random variables without assuming priors on fixed parameters, we propose the use of the confidence distribution for fixed parameters. We focus on their interval estimators and related probability statements. In random‐effect models, intervals can be formed either for future (yet‐to‐be‐realised) random effects or for realised values of random effects. The consistency of intervals for these two cases requires different regularity conditions. Their finite-sample properties are investigated via numerical studies.

6.
We describe a method for estimating the marginal likelihood, based on Chib (1995) and Chib and Jeliazkov (2001), when simulation from the posterior distribution of the model parameters is by the accept–reject Metropolis–Hastings (ARMH) algorithm. The method is developed for one-block and multiple-block ARMH algorithms and does not require the (typically) unknown normalizing constant of the proposal density. The problem of calculating the numerical standard error of the estimates is also considered and a procedure based on batch means is developed. Two examples, dealing with a multinomial logit model and a Gaussian regression model with non-conjugate priors, are provided to illustrate the efficiency and applicability of the method.
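The basic marginal likelihood identity underlying Chib's approach can be sketched on a conjugate normal model, where every ordinate is available in closed form (in Chib (1995) the posterior ordinate is instead estimated from MCMC output); the model and all names below are illustrative assumptions:

```python
import math

def log_normal_pdf(x, mean, var):
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def log_marginal_chib(y, theta_star, mu0=0.0, tau0_sq=4.0, sigma_sq=1.0):
    """Chib's basic marginal likelihood identity:
        log m(y) = log f(y|theta*) + log pi(theta*) - log pi(theta*|y),
    evaluated at an arbitrary ordinate theta*. Model: y_i ~ N(theta, sigma_sq)
    with known variance and a conjugate N(mu0, tau0_sq) prior on theta."""
    n = len(y)
    ybar = sum(y) / n
    # closed-form conjugate posterior for theta
    tau_n_sq = 1.0 / (1.0 / tau0_sq + n / sigma_sq)
    mu_n = tau_n_sq * (mu0 / tau0_sq + n * ybar / sigma_sq)
    log_lik = sum(log_normal_pdf(yi, theta_star, sigma_sq) for yi in y)
    log_prior = log_normal_pdf(theta_star, mu0, tau0_sq)
    log_post = log_normal_pdf(theta_star, mu_n, tau_n_sq)
    return log_lik + log_prior - log_post

y = [0.3, -0.1, 0.8, 1.2, 0.5]
lm0 = log_marginal_chib(y, theta_star=0.0)
lm1 = log_marginal_chib(y, theta_star=1.0)
```

Because the identity holds at every theta*, the two evaluations agree exactly; in the ARMH setting of the paper the difficulty is precisely that log pi(theta*|y) must be estimated from the sampler's output.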

7.
Bayesian and empirical Bayesian estimation methods are reviewed and proposed for the row and column parameters in two-way contingency tables without interaction. Rasch's multiplicative Poisson model for misreadings is discussed in an example. The case is treated where assumptions of exchangeability are reasonable a priori for the unknown parameters. Two different types of prior distributions are compared. It appears that gamma priors yield more tractable results than lognormal priors.

8.
In recent years, graphical models have become a popular tool for representing dependencies among variables in many scientific areas. Typically, the objective is to discover dependence relationships that can be represented through a directed acyclic graph (DAG). The set of all conditional independencies encoded by a DAG determines its Markov property. In general, DAGs encoding the same conditional independencies are not distinguishable from observational data and can be collected into equivalence classes, each one represented by a chain graph called an essential graph (EG). However, both the DAG and EG spaces grow super-exponentially in the number of variables, and so graph structural learning requires the adoption of Markov chain Monte Carlo (MCMC) techniques. In this paper, we review some recent results on Bayesian model selection of Gaussian DAG models under a unified framework. These results are based on closed-form expressions for the marginal likelihood of a DAG and EG structure, obtained from a few suitable assumptions on the prior for the model parameters. We then introduce a general MCMC scheme that can be adopted for model selection of both DAGs and EGs, together with applications to real data sets.

9.
Bayesian hypothesis testing in latent variable models
Hypothesis testing using Bayes factors (BFs) is known not to be well defined under improper priors. In the context of latent variable models, an additional problem with BFs is that they are difficult to compute. In this paper, a new Bayesian method, based on decision theory and the EM algorithm, is introduced to test a point hypothesis in latent variable models. The new statistic is a by-product of the Bayesian MCMC output and, hence, easy to compute. It is shown that the new statistic is appropriately defined under improper priors because the method employs a continuous loss function. In addition, it is easy to interpret. The method is illustrated using a one-factor asset pricing model and a stochastic volatility model with jumps.

10.
Contaminated or corrupted data typically require strong assumptions to identify parameters of interest. However, weaker assumptions often identify bounds on these parameters. This paper addresses when covariate data—variables in addition to the one of interest—tighten these bounds. First, we construct the identification region for the distribution of the variable of interest. This region demonstrates that covariate data are useless without knowledge about the distribution of erroneous data conditional on the covariates. Then, we develop bounds both on probabilities and on parameters of this distribution that respect stochastic dominance.

11.
A classic statistical problem is the optimal construction of sampling plans to accept or reject a lot based on a small sample. We propose a new asymptotically optimal solution for the acceptance sampling by variables setting, where we allow for an arbitrary unknown underlying distribution. In the course of this, we assume that additional sampling information is available, which is often the case in real applications. That information is given by additional measurements, which may be affected by a calibration error. Our results show, first, that the proposed decision rule is asymptotically valid under fairly general assumptions and, secondly, that the estimated optimal sample size is asymptotically normal. Furthermore, we illustrate our method with a real data analysis and investigate its finite-sample properties and the sharpness of our assumptions by simulations.

12.
In this article, we propose new Monte Carlo methods for computing a single marginal likelihood or several marginal likelihoods for the purpose of Bayesian model comparison. The methods are motivated by Bayesian variable selection, in which the marginal likelihoods for all subset variable models must be computed. The proposed estimators use only a single Markov chain Monte Carlo (MCMC) output from the joint posterior distribution and do not require the specific structure or form of the MCMC sampling algorithm used to generate the sample to be known. The theoretical properties of the proposed method are examined in detail. Its applicability and usefulness are demonstrated via ordinal data probit regression models, and a real dataset involving ordinal outcomes is used to further illustrate the methodology.

13.
This paper presents a Bayesian analysis of an ordered probit model with endogenous selection. The model can be applied when analyzing ordered outcomes that depend on endogenous covariates that are discrete choice indicators modeled by a multinomial probit model. The model is illustrated by analyzing the effects of different types of medical insurance plans on the level of hospital utilization, allowing for potential endogeneity of insurance status. The estimation is performed using Markov chain Monte Carlo (MCMC) methods to approximate the posterior distribution of the parameters in the model.

14.
This paper considers the problem of defining a time-dependent nonparametric prior for use in Bayesian nonparametric modelling of time series. A recursive construction allows the definition of priors whose marginals have a general stick-breaking form. The processes with Poisson-Dirichlet and Dirichlet process marginals are investigated in some detail. We develop a general conditional Markov Chain Monte Carlo (MCMC) method for inference in the wide subclass of these models where the parameters of the marginal stick-breaking process are nondecreasing sequences. We derive a generalised Pólya urn scheme type representation of the Dirichlet process construction, which allows us to develop a marginal MCMC method for this case. We apply the proposed methods to financial data to develop a semi-parametric stochastic volatility model with a time-varying nonparametric returns distribution. Finally, we present two examples concerning the analysis of regional GDP and its growth.
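The static stick-breaking construction that the paper's marginals generalise can be sketched in a few lines; this is the standard truncated Dirichlet process weight construction, not the paper's time-dependent version, and the names are illustrative:

```python
import random

def stick_breaking_weights(alpha, n_atoms, seed=0):
    """Truncated stick-breaking construction of Dirichlet process weights:
    v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k} (1 - v_j).
    Each draw breaks off a fraction v_k of the remaining stick."""
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(n_atoms):
        v = rng.betavariate(1.0, alpha)
        weights.append(v * remaining)
        remaining *= 1.0 - v   # length of stick still unbroken
    return weights

w = stick_breaking_weights(alpha=2.0, n_atoms=50)
```

Larger alpha spreads mass over more atoms; the truncated weights sum to slightly less than one, with the remainder shrinking geometrically in the number of sticks.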

15.
In contrast to a posterior analysis given a particular sampling model, posterior model probabilities in the context of model uncertainty are typically rather sensitive to the specification of the prior. In particular, ‘diffuse’ priors on model-specific parameters can lead to quite unexpected consequences. Here we focus on the practically relevant situation where we need to entertain a (large) number of sampling models and we have (or wish to use) little or no subjective prior information. We aim at providing an ‘automatic’ or ‘benchmark’ prior structure that can be used in such cases. We focus on the normal linear regression model with uncertainty in the choice of regressors. We propose a partly non-informative prior structure related to a natural conjugate g-prior specification, where the amount of subjective information requested from the user is limited to the choice of a single scalar hyperparameter g0j. The consequences of different choices for g0j are examined. We investigate theoretical properties, such as consistency of the implied Bayesian procedure. Links with classical information criteria are provided. More importantly, we examine the finite sample implications of several choices of g0j in a simulation study. The use of the MC3 algorithm of Madigan and York (Int. Stat. Rev. 63 (1995) 215), combined with efficient coding in Fortran, makes it feasible to conduct large simulations. In addition to posterior criteria, we shall also compare the predictive performance of different priors. A classic example concerning the economics of crime will also be provided and contrasted with results in the literature. The main findings of the paper will lead us to propose a ‘benchmark’ prior specification in a linear regression context with model uncertainty.
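A sketch of why a single scalar g drives model comparison under a g-prior: for a normal linear model, the Bayes factor against the intercept-only null has a closed form depending only on n, the number of regressors p, R-squared and g (the formula below follows the common convention, as in e.g. Liang et al.; constant conventions vary across papers, and the data and names here are illustrative assumptions, not the paper's simulation design):

```python
import math

def log_bf_gprior(r_squared, n, p, g):
    """Log Bayes factor of a p-regressor linear model (plus intercept)
    against the intercept-only null under Zellner's g-prior:
        BF = (1+g)^((n-1-p)/2) / (1 + g*(1 - R^2))^((n-1)/2)
    """
    return 0.5 * (n - 1 - p) * math.log1p(g) \
         - 0.5 * (n - 1) * math.log1p(g * (1.0 - r_squared))

def r_squared_simple(x, y):
    """R^2 of a one-regressor linear regression = squared correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# toy data: y is driven by x1, while x2 is unrelated noise
x1 = [0.1, 0.9, 1.7, 2.2, 3.1, 3.8, 4.6, 5.3]
x2 = [1.0, -0.4, 0.7, 0.1, -0.9, 0.5, -0.2, 0.8]
y  = [0.4, 2.1, 3.2, 4.6, 6.0, 7.9, 9.1, 10.8]
n, g = len(y), 8.0   # a 'unit information'-style choice would set g = n
bf1 = log_bf_gprior(r_squared_simple(x1, y), n, p=1, g=g)
bf2 = log_bf_gprior(r_squared_simple(x2, y), n, p=1, g=g)
```

With a fixed g the criterion trades a dimensionality penalty (first term) against fit (second term), which is what links such benchmark priors to classical information criteria.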

16.
This paper presents the Bayesian analysis of a general multivariate exponential smoothing model that allows us to forecast time series jointly, subject to correlated random disturbances. The general multivariate model, which can be formulated as a seemingly unrelated regression model, includes the previously studied homogeneous multivariate Holt-Winters’ model as a special case when all of the univariate series share a common structure. MCMC simulation techniques are required in order to approach the non-analytically tractable posterior distribution of the model parameters. The predictive distribution is then estimated using Monte Carlo integration. A Bayesian model selection criterion is introduced into the forecasting scheme for selecting the most adequate multivariate model for describing the behaviour of the time series under study. The forecasting performance of this procedure is tested using some real examples.  相似文献   

17.
Large Bayesian VARs with stochastic volatility are increasingly used in empirical macroeconomics. The key to making these highly parameterized VARs useful is the use of shrinkage priors. We develop a family of priors that captures the best features of two prominent classes of shrinkage priors: adaptive hierarchical priors and Minnesota priors. Like adaptive hierarchical priors, these new priors ensure that only ‘small’ coefficients are strongly shrunk to zero, while ‘large’ coefficients remain intact. At the same time, these new priors can also incorporate many useful features of the Minnesota priors such as cross-variable shrinkage and shrinking coefficients on higher lags more aggressively. We introduce a fast posterior sampler to estimate BVARs with this family of priors—for a BVAR with 25 variables and 4 lags, obtaining 10,000 posterior draws takes about 3 min on a standard desktop computer. In a forecasting exercise, we show that these new priors outperform both adaptive hierarchical priors and Minnesota priors.
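The two Minnesota-prior features mentioned, cross-variable shrinkage and aggressive shrinkage of higher lags, can be sketched by building the prior variance for each VAR coefficient; hyperparameter names and default values here are illustrative, not the paper's specification:

```python
def minnesota_prior_variances(sigma, n_lags, lam1=0.2, lam2=0.5, decay=2.0):
    """Prior variances for VAR coefficients under a Minnesota-style prior.
    sigma[i] is a residual scale for variable i (e.g. from univariate ARs).
    Own-lag coefficients on lag l get variance (lam1 / l**decay)**2; cross-
    variable coefficients are shrunk harder by lam2 and rescaled by
    (sigma_i / sigma_j)**2 to account for differing units."""
    k = len(sigma)
    V = {}  # V[(i, j, l)] = prior variance of the coef on lag l of var j in eq i
    for i in range(k):
        for j in range(k):
            for l in range(1, n_lags + 1):
                base = lam1 / l ** decay
                if i == j:
                    V[(i, j, l)] = base ** 2
                else:
                    V[(i, j, l)] = (base * lam2) ** 2 * (sigma[i] / sigma[j]) ** 2
    return V

V = minnesota_prior_variances(sigma=[1.0, 2.0], n_lags=2)
```

Higher lags get quadratically tighter priors via `decay`, and cross-variable terms are tighter than own lags via `lam2 < 1`; the adaptive hierarchical part of the paper's proposal would put priors on such hyperparameters rather than fixing them.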

18.
A tutorial derivation of the reversible jump Markov chain Monte Carlo (MCMC) algorithm is given. Various examples illustrate how reversible jump MCMC provides a general framework for Metropolis–Hastings algorithms in which the proposal and target distributions may have densities on spaces of varying dimension. Finally, it is discussed how reversible jump MCMC can be applied in genetics to compute the posterior distribution of the number, locations, effects, and genotypes of putative quantitative trait loci.

19.
We describe exact inference based on group-invariance assumptions that specify various forms of symmetry in the distribution of a disturbance vector in a general nonlinear model. It is shown that such mild assumptions can be equivalently formulated in terms of exact confidence sets for the parameters of the functional form. When applied to the linear model, this exact inference provides a unified approach to a variety of parametric and distribution-free tests. In particular, we consider exact instrumental variable inference, based on symmetry assumptions. The unboundedness of exact confidence sets is related to the power to reject a hypothesis of underidentification. In a multivariate instrumental variables context, generalizations of Anderson–Rubin confidence sets are considered.

20.
In this paper we compare classical econometrics, calibration and Bayesian inference in the context of the empirical analysis of factor demands. Our application is based on a popular flexible functional form for the firm's cost function, namely Diewert's Generalized Leontief function, and uses the well-known Berndt and Wood 1947–1971 KLEM data on the US manufacturing sector. We illustrate how the Gibbs sampling methodology can be easily used to calibrate parameter values and elasticities on the basis of previous knowledge from alternative studies on the same data, but with different functional forms. We rely on a system of mixed non-informative diffuse priors for some key parameters and informative tight priors for others. Within the Gibbs sampler, we employ rejection sampling to incorporate parameter restrictions, which are suggested by economic theory but in general rejected by economic data. Our results show that values of those parameters that relate to non-informative priors are almost equal to the standard SUR estimates, whereas differences come out for those parameters to which we have assigned informative priors. Moreover, discrepancies can be appreciated in some crucial parameter estimates obtained with or without rejection sampling.

