Similar Documents
20 similar documents found.
1.
The main goal of both Bayesian model selection and classical hypothesis testing is to make inferences with respect to the state of affairs in a population of interest. The main differences between the two approaches are the explicit use of prior information by Bayesians, and the explicit use of null distributions by the classicists. Formalization of prior information in prior distributions is often difficult. In this paper two practical approaches to specifying prior distributions (encompassing priors and training data) will be presented. The computation of null distributions is relatively easy. However, as will be illustrated, a straightforward interpretation of the resulting p-values is not always easy. Bayesian model selection can be used to compute posterior probabilities for each of a number of competing models. This provides an alternative to the currently prevalent testing of hypotheses using p-values. Both approaches will be compared and illustrated using case studies. Each case study fits in the framework of the normal linear model, that is, analysis of variance and multiple regression.
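The posterior model probabilities referred to here follow from Bayes' theorem applied at the model level; in standard notation (reconstructed for reference, not quoted from the paper):

\[
P(M_i \mid y) = \frac{p(y \mid M_i)\, P(M_i)}{\sum_{j=1}^{J} p(y \mid M_j)\, P(M_j)},
\qquad
p(y \mid M_i) = \int p(y \mid \theta_i, M_i)\, \pi(\theta_i \mid M_i)\, d\theta_i ,
\]

where the marginal likelihood p(y | M_i) integrates the likelihood against the prior over the model's parameters, which is why the specification of the prior matters so much for model selection.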

2.
In this paper we propose an approach to both estimate and select unknown smooth functions in an additive model with potentially many functions. Each function is written as a linear combination of basis terms, with coefficients regularized by a proper linearly constrained Gaussian prior. Given any potentially rank deficient prior precision matrix, we show how to derive linear constraints so that the corresponding effect is identified in the additive model. This allows for the use of a wide range of bases and precision matrices in priors for regularization. By introducing indicator variables, each constrained Gaussian prior is augmented with a point mass at zero, thus allowing for function selection. Posterior inference is calculated using Markov chain Monte Carlo and the smoothness in the functions is both the result of shrinkage through the constrained Gaussian prior and model averaging. We show how using non-degenerate priors on the shrinkage parameters enables the application of substantially more computationally efficient sampling schemes than would otherwise be the case. We show the favourable performance of our approach when compared to two contemporary alternative Bayesian methods. To highlight the potential of our approach in high-dimensional settings we apply it to estimate two large seemingly unrelated regression models for intra-day electricity load. Both models feature a variety of different univariate and bivariate functions which require different levels of smoothing, and where component selection is meaningful. Priors for the error disturbance covariances are selected carefully and the empirical results provide a substantive contribution to the electricity load modelling literature in their own right.
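In generic spike-and-slab notation (the abstract gives no formulas; this is a sketch), augmenting a constrained Gaussian prior with a point mass at zero via an indicator γ_j can be written as

\[
\pi(\beta_j \mid \gamma_j) = (1 - \gamma_j)\,\delta_0(\beta_j) + \gamma_j\, \mathcal{N}\!\bigl(\beta_j \mid 0,\ \tau_j^2 P_j^{-}\bigr),
\qquad \gamma_j \in \{0, 1\},
\]

where β_j collects the basis coefficients of the j-th function, P_j^- is a generalized inverse of the (possibly rank-deficient) prior precision matrix, and δ_0 is a point mass at zero; setting γ_j = 0 removes the function from the model.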

3.
Bayesian model selection with posterior probabilities and no subjective prior information is generally not possible because the Bayes factors are ill-defined. Using careful consideration of the parameter of interest in cointegration analysis and a re-specification of the triangular model of Phillips (Econometrica, Vol. 59, pp. 283–306, 1991), this paper presents an approach that allows for Bayesian comparison of models of cointegration with ‘ignorance’ priors. Using the concept of Stiefel and Grassmann manifolds, diffuse priors are specified on the dimension and direction of the cointegrating space. The approach is illustrated using a simple term structure of interest rates model.
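For reference, Phillips' triangular representation of a cointegrated system (standard form, reconstructed here) partitions y_t and writes

\[
y_{1t} = \beta' y_{2t} + u_{1t}, \qquad \Delta y_{2t} = u_{2t},
\]

so that the first block of equations carries the cointegrating relations and the second the common stochastic trends; the diffuse priors in this paper concern the dimension and orientation of the space spanned by β.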

4.
In the Bayesian approach to model selection and hypothesis testing, the Bayes factor plays a central role. However, the Bayes factor is very sensitive to prior distributions of parameters. This is a problem especially in the presence of weak prior information on the parameters of the models. The most radical consequence of this fact is that the Bayes factor is undetermined when improper priors are used. Nonetheless, extending the non-informative approach of Bayesian analysis to model selection/testing procedures is important both from a theoretical and an applied viewpoint. The need to develop automatic and robust methods for model comparison has led to the introduction of several alternative Bayes factors. In this paper we review one of these methods: the fractional Bayes factor (O'Hagan, 1995). We discuss general properties of the method, such as consistency and coherence. Furthermore, in addition to the original, essentially asymptotic justifications of the fractional Bayes factor, we provide further finite-sample motivations for its use. Connections and comparisons to other automatic methods are discussed and several issues of robustness with respect to priors and data are considered. Finally, we focus on some open problems in the fractional Bayes factor approach, and outline some possible answers and directions for future research.
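O'Hagan's fractional Bayes factor uses a fraction b of the likelihood to train the improper prior; in the usual notation (reconstructed, not quoted from the abstract) it is

\[
B^{F}_{10}(b) = \frac{q_1(b, x)}{q_0(b, x)},
\qquad
q_i(b, x) = \frac{\int \pi_i(\theta_i)\, L_i(\theta_i; x)\, d\theta_i}
                 {\int \pi_i(\theta_i)\, L_i(\theta_i; x)^{b}\, d\theta_i},
\]

with 0 < b < 1, often taken as a minimal training fraction m/n. Because the same improper prior appears in both the numerator and the denominator of each q_i, its arbitrary normalizing constant cancels, which is what makes the Bayes factor well defined.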

5.
This paper will present a Bayes factor for the comparison of an inequality constrained hypothesis with its complement or an unconstrained hypothesis. Equivalent sets of hypotheses form the basis for the quantification of the complexity of an inequality constrained hypothesis. It will be shown that the prior distribution can be chosen such that one of the terms in the Bayes factor is the quantification of the complexity of the hypothesis of interest. The other term in the Bayes factor represents a measure of the fit of the hypothesis. Using a vague prior distribution, this fit value is essentially determined by the data. The result is an objective Bayes factor.
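A minimal Monte Carlo sketch of the fit/complexity construction for a toy one-parameter problem; the constraint θ > 0, the normal likelihood, and the encompassing prior below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: y_i ~ N(theta, 1), encompassing prior theta ~ N(0, 10^2),
# inequality-constrained hypothesis H: theta > 0.
y = rng.normal(loc=0.3, scale=1.0, size=50)
prior_mu, prior_sd = 0.0, 10.0

# Complexity c: prior probability that the constraint holds under the
# unconstrained (encompassing) prior, estimated by simulation.
prior_draws = rng.normal(prior_mu, prior_sd, size=100_000)
complexity = np.mean(prior_draws > 0)

# Fit f: posterior probability of the constraint under the unconstrained
# posterior (conjugate normal-normal update with known variance).
n, sigma2 = len(y), 1.0
post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma2)
post_mean = post_var * (prior_mu / prior_sd**2 + y.sum() / sigma2)
post_draws = rng.normal(post_mean, np.sqrt(post_var), size=100_000)
fit = np.mean(post_draws > 0)

# Bayes factor of the constrained hypothesis against the unconstrained one.
print(f"complexity = {complexity:.3f}, fit = {fit:.3f}, BF = {fit / complexity:.2f}")
```

With a vague prior the complexity is fixed by symmetry (here 0.5) while the fit is essentially determined by the data, matching the decomposition described in the abstract.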

6.
In two recent articles, Sims (1988) and Sims and Uhlig (1988/1991) question the value of much of the ongoing literature on unit roots and stochastic trends. They characterize the seeds of this literature as ‘sterile ideas’, the application of nonstationary limit theory as ‘wrongheaded and unenlightening’, and the use of classical methods of inference as ‘unreasonable’ and ‘logically unsound’. They advocate in place of classical methods an explicit Bayesian approach to inference that utilizes a flat prior on the autoregressive coefficient. DeJong and Whiteman adopt a related Bayesian approach in a group of papers (1989a,b,c) that seek to re-evaluate the empirical evidence from historical economic time series. Their results appear to be conclusive in turning around the earlier, influential conclusions of Nelson and Plosser (1982) that most aggregate economic time series have stochastic trends. So far these criticisms of unit root econometrics have gone unanswered; the assertions about the impropriety of classical methods and the superiority of flat prior Bayesian methods have been unchallenged; and the empirical re-evaluation of evidence in support of stochastic trends has been left without comment. This paper breaks that silence and offers a new perspective. We challenge the methods, the assertions, and the conclusions of these articles on the Bayesian analysis of unit roots. Our approach is also Bayesian but we employ what are known in the statistical literature as objective ignorance priors in our analysis. These are developed in the paper to accommodate explicitly time series models in which no stationarity assumption is made. Ignorance priors are intended to represent a state of ignorance about the value of a parameter and in many models are very different from flat priors. We demonstrate that in time series models flat priors do not represent ignorance but are actually informative (sic) precisely because they neglect generically available information about how autoregressive coefficients influence observed time series characteristics. Contrary to their apparent intent, flat priors unwittingly bias inferences towards stationary and i.i.d. alternatives where they do represent ignorance, as in the linear regression model. This bias helps to explain the outcome of the simulation experiments in Sims and Uhlig and some of the empirical results of DeJong and Whiteman. Under both flat priors and ignorance priors this paper derives posterior distributions for the parameters in autoregressive models with a deterministic trend and an arbitrary number of lags. Marginal posterior distributions are obtained by using the Laplace approximation for multivariate integrals along the lines suggested by the author (Phillips, 1983) in some earlier work. The bias towards stationary models that arises from the use of flat priors is shown in our simulations to be substantial; and we conclude that it is unacceptably large in models with a fitted deterministic trend, for which the expected posterior probability of a stochastic trend is found to be negligible even though the true data generating mechanism has a unit root. Under ignorance priors, Bayesian inference is shown to accord more closely with the results of classical methods. An interesting outcome of our simulations and our empirical work is the bimodal Bayesian posterior, which demonstrates that Bayesian confidence sets can be disjoint, just like classical confidence intervals that are based on asymptotic theory. 
The paper concludes with an empirical application of our Bayesian methodology to the Nelson-Plosser series. Seven of the 14 series show evidence of stochastic trends under ignorance priors, whereas under flat priors on the coefficients all but three of the series appear trend stationary. The latter result corresponds closely with the conclusion reached by DeJong and Whiteman (1989b) (based on truncated flat priors). We argue that the DeJong-Whiteman inferences are biased towards trend stationarity through the use of flat priors on the autoregressive coefficients, and that their inferences for some of the series (especially stock prices) are fragile (i.e. not robust) not only to the prior but also to the lag length chosen in the time series specification.
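The ‘ignorance priors' here are Jeffreys-type objective priors built from the Fisher information; in general notation (a standard definition, not quoted from the paper),

\[
\pi(\theta) \propto \left| I(\theta) \right|^{1/2},
\qquad
I(\theta) = -\,\mathbb{E}\!\left[ \frac{\partial^2 \log L(\theta)}{\partial \theta\, \partial \theta'} \right].
\]

For autoregressions this prior is not flat: it rises as the autoregressive root approaches and passes unity, which is the formal sense in which a flat prior downweights nonstationary regions relative to genuine ignorance.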

7.
Bayesian hypothesis testing in latent variable models
Hypothesis testing using Bayes factors (BFs) is known not to be well defined under improper priors. In the context of latent variable models, an additional problem with BFs is that they are difficult to compute. In this paper, a new Bayesian method, based on decision theory and the EM algorithm, is introduced to test a point hypothesis in latent variable models. The new statistic is a by-product of the Bayesian MCMC output and, hence, easy to compute. It is shown that the new statistic is appropriately defined under improper priors because the method employs a continuous loss function. In addition, it is easy to interpret. The method is illustrated using a one-factor asset pricing model and a stochastic volatility model with jumps.

8.
The likelihood of the parameters in structural macroeconomic models typically has non-identification regions over which it is constant. When sufficiently diffuse priors are used, the posterior piles up in such non-identification regions. Use of informative priors can lead to the opposite, so both can generate spurious inference. We propose priors/posteriors on the structural parameters that are implied by priors/posteriors on the parameters of an embedding reduced-form model. An example of such a prior is the Jeffreys prior. We use it to conduct Bayesian limited-information inference on the new Keynesian Phillips curve with a VAR reduced form for US data. Copyright © 2014 John Wiley & Sons, Ltd.

9.
In Bayesian analysis of vector autoregressive models, and especially in forecasting applications, the Minnesota prior of Litterman is frequently used. In many cases other prior distributions provide better forecasts and are preferable from a theoretical standpoint. Several of these priors require numerical methods in order to evaluate the posterior distribution. Different ways of implementing Monte Carlo integration are considered. It is found that Gibbs sampling performs as well as, or better than, importance sampling, and that the Gibbs sampling algorithms are less adversely affected by model size. We also report on the forecasting performance of the different prior distributions. © 1997 by John Wiley & Sons, Ltd.
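For reference, one common parameterization of the Minnesota prior (reconstructed; details vary across implementations) centres each equation on a random walk and shrinks longer lags harder:

\[
\mathbb{E}\bigl[(A_\ell)_{ij}\bigr] = \begin{cases} 1, & i = j,\ \ell = 1,\\ 0, & \text{otherwise,} \end{cases}
\qquad
\operatorname{Var}\bigl[(A_\ell)_{ij}\bigr] = \begin{cases} \lambda^2 / \ell^2, & i = j,\\ \lambda^2 \theta\, \sigma_i^2 / (\ell^2 \sigma_j^2), & i \neq j, \end{cases}
\]

where A_ℓ is the lag-ℓ coefficient matrix, λ controls overall tightness, θ < 1 tightens cross-variable lags, and the ratio σ_i²/σ_j² adjusts for differing units across variables.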

10.
In a 1974 paper, the author indicated how natural conjugate priors for multi-dimensional exponential family likelihoods could be enriched in certain cases through linear transformations of independent marginal priors. In particular, it was shown how the usual Normal-Wishart prior for the multinormal distribution with unknown mean vector and precision matrix could have the number of hyperparameters increased; the ‘thinness’ of the traditional prior is well-known. The new, linearly dependent prior leads to full-dimensional credibility prediction formulae for the observational mean vector and covariance matrix, as contrasted with the simpler, self-dimensional forecasts obtained in prior literature. However, there was an error made in the sufficient-statistics term of the covariance predictor which is corrected in this work. In addition, this paper explains in detail the properties of the enriched multinormal prior and why revised statistics are needed, and interprets the important relationship between the linear transformation matrix and the matrix of credibility time constants. An enumeration of the additional number of hyperparameters needed for the enriched prior shows its value in modelling multinormal problems; it is shown that the estimation of these hyperparameters can be carried out in a natural way, in the space of the observable variables.
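The ‘usual’ Normal-Wishart conjugate prior referred to here has the standard form (stated for orientation; the enrichment in the paper generalizes it):

\[
\Lambda \sim \mathcal{W}(\nu_0, W_0),
\qquad
\mu \mid \Lambda \sim \mathcal{N}\!\bigl(\mu_0,\ (\kappa_0 \Lambda)^{-1}\bigr),
\]

so only the hyperparameters (ν₀, W₀, μ₀, κ₀) are available; its ‘thinness’ is that a single scalar κ₀ ties the prior precision of the mean to the data precision Λ.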

11.
This paper presents a Bayesian analysis of an ordered probit model with endogenous selection. The model can be applied when analyzing ordered outcomes that depend on endogenous covariates that are discrete choice indicators modeled by a multinomial probit model. The model is illustrated by analyzing the effects of different types of medical insurance plans on the level of hospital utilization, allowing for potential endogeneity of insurance status. The estimation is performed using Markov chain Monte Carlo (MCMC) methods to approximate the posterior distribution of the parameters in the model.
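In generic latent-variable notation (a sketch, not the paper's exact specification), the ordered outcome is generated by

\[
y_i^* = x_i'\beta + d_i'\gamma + \varepsilon_i,
\qquad
y_i = j \iff \mu_{j-1} < y_i^* \le \mu_j,
\]

where d_i are the discrete choice indicators from the multinomial probit stage, the cutpoints μ_j are ordered, and endogeneity enters through correlation between ε_i and the selection-equation errors.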

12.
This paper develops methods of Bayesian inference in a sample selection model. The main feature of this model is that the outcome variable is only partially observed. We first present a Gibbs sampling algorithm for a model in which the selection and outcome errors are normally distributed. The algorithm is then extended to analyze models that are characterized by nonnormality. Specifically, we use a Dirichlet process prior and model the distribution of the unobservables as a mixture of normal distributions with a random number of components. The posterior distribution in this model can simultaneously detect the presence of selection effects and departures from normality. Our methods are illustrated using some simulated data and data from the RAND health insurance experiment.
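The Dirichlet process mixture on the error distribution can be sketched in generic notation (not quoted from the paper) as

\[
f(\varepsilon_i) = \int \mathcal{N}(\varepsilon_i \mid \mu, \Sigma)\, dG(\mu, \Sigma),
\qquad
G \sim \mathrm{DP}(\alpha, G_0),
\]

where the almost-sure discreteness of a Dirichlet process draw G gives the mixture a random, data-determined number of normal components.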

13.
Lipman [Lipman, B., 2003. Finite order implications of common priors, Econometrica, 71 (July), 1255–1267] shows that in a finite model, the common prior assumption has weak implications for finite orders of beliefs about beliefs. In particular, the only such implications are those stemming from the weaker assumption of a common support. To explore the role of the finite model assumption in generating this conclusion, this paper considers the finite order implications of common priors in the simplest possible infinite model, namely, a countable model. I show that in countable models, the common prior assumption also implies a tail consistency condition regarding beliefs. More specifically, I show that in a countable model, the finite order implications of the common prior assumption are the same as those stemming from the assumption that priors have a common support and have tail probabilities converging to zero at the same rate.

14.
A class of global-local hierarchical shrinkage priors for estimating large Bayesian vector autoregressions (BVARs) has recently been proposed. We question whether three such priors (Dirichlet-Laplace, Horseshoe, and Normal-Gamma) can systematically improve the forecast accuracy of two commonly used benchmarks, the hierarchical Minnesota prior and the stochastic search variable selection (SSVS) prior, when predicting key macroeconomic variables. Using small and large data sets, both point and density forecasts suggest that the answer is no. Instead, our results indicate that a hierarchical Minnesota prior remains a solid practical choice when forecasting macroeconomic variables. In light of existing optimality results, a possible explanation for our finding is that macroeconomic data is not sparse, but instead dense.
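As a concrete instance of the global-local form, the horseshoe prior in its standard parameterization (stated for reference, not quoted from the abstract) is

\[
\beta_j \mid \lambda_j, \tau \sim \mathcal{N}(0,\ \lambda_j^2 \tau^2),
\qquad
\lambda_j \sim C^{+}(0, 1),
\qquad
\tau \sim C^{+}(0, 1),
\]

where τ is the global and λ_j the local shrinkage scale, and C⁺ denotes the half-Cauchy distribution; such priors shrink most coefficients strongly while leaving a few nearly untouched, which is the sparsity-oriented behaviour the abstract calls into question for macroeconomic data.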

15.
Bayesian priors are often used to restrain the otherwise highly over-parametrized vector autoregressive (VAR) models. The currently available Bayesian VAR methodology does not allow the user to specify prior beliefs about the unconditional mean, or steady state, of the system. This is unfortunate as the steady state is something that economists usually claim to know relatively well. This paper develops easily implemented methods for analyzing both stationary and cointegrated VARs, in reduced or structural form, with an informative prior on the steady state. We document that prior information on the steady state leads to substantial gains in forecasting accuracy on Swedish macro data. A second example illustrates the use of informative steady-state priors in a cointegration model of the consumption-wealth relationship in the USA. Copyright © 2009 John Wiley & Sons, Ltd.
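The key device is a mean-adjusted VAR parameterization in which the steady state appears as an explicit parameter (standard form, reconstructed here):

\[
\Pi(L)\,(y_t - \Psi d_t) = \varepsilon_t,
\qquad
\Pi(L) = I - \Pi_1 L - \cdots - \Pi_p L^p,
\]

where d_t collects deterministic terms, so Ψd_t is the unconditional mean of y_t and an informative prior can be placed directly on Ψ rather than on the intercept of the usual regression form.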

16.
We develop a novel Bayesian doubly adaptive elastic-net Lasso (DAELasso) approach for VAR shrinkage. DAELasso achieves variable selection and coefficient shrinkage in a data-based manner. It deals constructively with explanatory variables which tend to be highly collinear by encouraging the grouping effect. In addition, it also allows for different degrees of shrinkage for different coefficients. Rewriting the multivariate Laplace distribution as a scale mixture, we establish closed-form conditional posteriors that can be drawn from a Gibbs sampler. An empirical analysis shows that the forecast results produced by DAELasso and its variants are comparable to those from other popular Bayesian methods, which provides further evidence that the forecast performances of large and medium sized Bayesian VARs are relatively robust to prior choices, and, in practice, simple Minnesota types of priors can be more attractive than their complex and well-designed alternatives.
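The scale-mixture device is the standard normal-exponential representation of the Laplace density (a known identity, stated here for reference):

\[
\frac{\lambda}{2}\, e^{-\lambda |\beta|}
= \int_0^{\infty} \mathcal{N}(\beta \mid 0, \tau)\,
\frac{\lambda^2}{2}\, e^{-\lambda^2 \tau / 2}\, d\tau,
\]

so conditioning on the latent variance τ turns the Lasso-type prior into a conditionally Gaussian one, which is what yields closed-form full conditionals for the Gibbs sampler.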

17.
This paper is a comment on P. C. B. Phillips, ‘To criticise the critics: an objective Bayesian analysis of stochastic trends’ [Phillips (1991)]. Departing from the likelihood of a univariate autoregressive model, different routes that lead to a posterior odds analysis of the unit root hypothesis are explored, where the differences between the routes are due to different choices of the prior. Improper priors like the uniform and the Jeffreys prior are less suited for Bayesian inference on a sharp null hypothesis such as the unit root. A proper normal prior on the mean of the process is analysed and empirical results using extended Nelson-Plosser data are presented.
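The posterior odds analysis referred to combines the Bayes factor with the prior odds (standard identity, for orientation):

\[
\frac{P(H_0 \mid y)}{P(H_1 \mid y)}
= \frac{p(y \mid H_0)}{p(y \mid H_1)} \times \frac{P(H_0)}{P(H_1)},
\]

which makes explicit why an improper prior on the parameters under either hypothesis leaves the Bayes factor, and hence the posterior odds of the sharp unit-root null, determined only up to an arbitrary constant.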

18.
Hedonic price models are widely employed to estimate implicit prices for bundled attributes. Residential property value studies dominate these applications. Using a representative cross-sectional property value data set, we employ Bayesian methods to translate a range of priors in covariate selection typical of hedonic property value studies into a range of posterior estimates. We also formulate priors regarding measurement error in individual covariates and compute the ranges of resulting posterior means. Finally, we empirically demonstrate that a greater and more systematic use of prior information drawn from one's own data and from other studies can break the collinearity deadlock in this data.
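For orientation, the implicit price of a bundled attribute in a hedonic model is the partial derivative of the price function (generic notation, not from the abstract):

\[
P_i = f(z_{i1}, \ldots, z_{iK}) + \varepsilon_i,
\qquad
p_k = \frac{\partial f}{\partial z_{k}},
\]

where P_i is the property price and z_{ik} its attributes; because attributes are often highly collinear across properties, the posterior for each p_k can be fragile, which is the deadlock the priors above are meant to break.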

19.
Bayesian and empirical Bayesian estimation methods are reviewed and proposed for the row and column parameters in two-way contingency tables without interaction. Rasch's multiplicative Poisson model for misreadings is discussed in an example. The case is treated where assumptions of exchangeability are reasonable a priori for the unknown parameters. Two different types of prior distributions are compared. It appears that gamma priors yield more tractable results than lognormal priors.
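The tractability of the gamma prior stems from Poisson-gamma conjugacy (a standard result, stated for reference):

\[
x_1, \ldots, x_n \mid \lambda \sim \mathrm{Poisson}(\lambda),
\quad
\lambda \sim \mathrm{Gamma}(\alpha, \beta)
\ \Longrightarrow\
\lambda \mid x \sim \mathrm{Gamma}\!\Bigl(\alpha + \textstyle\sum_{i} x_i,\ \beta + n\Bigr),
\]

with β a rate parameter; the lognormal prior admits no such closed-form update and requires numerical integration.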

20.
We consider the problem of variable selection in linear regression models. Bayesian model averaging has become an important tool in empirical settings with large numbers of potential regressors and relatively limited numbers of observations. We examine the effect of a variety of prior assumptions on the inference concerning model size, posterior inclusion probabilities of regressors and on predictive performance. We illustrate these issues in the context of cross-country growth regressions using three datasets with 41–67 potential drivers of growth and 72–93 observations. Finally, we recommend priors for use in this and related contexts. Copyright © 2009 John Wiley & Sons, Ltd.
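The posterior inclusion probability of a regressor is the total posterior mass on models containing it (standard BMA notation, reconstructed):

\[
P(M_j \mid y) = \frac{p(y \mid M_j)\, P(M_j)}{\sum_{m=1}^{2^K} p(y \mid M_m)\, P(M_m)},
\qquad
\mathrm{PIP}_k = \sum_{j\,:\,\beta_k \in M_j} P(M_j \mid y),
\]

with K potential regressors and 2^K candidate models; the prior assumptions examined above act both through P(M_j), which governs expected model size, and through p(y | M_j) via the parameter priors.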
