Similar Literature
20 similar documents found (search time: 31 ms).
1.
Bayesian model selection using encompassing priors (total citations: 1; self-citations: 0; cited by others: 1)
This paper deals with Bayesian selection of models that can be specified using inequality constraints among the model parameters. The concept of encompassing priors is introduced: a prior distribution for an unconstrained model from which the prior distributions of the constrained models can be derived. It is shown that the Bayes factor for the encompassing model and a constrained model has a simple interpretation: it is the ratio of the proportions of the prior and posterior distributions of the encompassing model that are in agreement with the constrained model. It is also shown that, for a specific class of models, selection based on encompassing priors will render a virtually objective selection procedure. The paper concludes with three illustrative examples: an analysis of variance with ordered means; a contingency table analysis with ordered odds-ratios; and a multilevel model with ordered slopes.
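
A minimal Monte Carlo sketch of this proportion-based Bayes factor for a hypothetical two-group ordered-means problem; the data, prior scale, and known error variance below are illustrative assumptions rather than anything taken from the paper:

# Sketch: encompassing-prior Bayes factor for an order constraint (illustrative only).
# Encompassing model: y_gi ~ N(mu_g, 1) for groups g = 1, 2 with unconstrained priors
# mu_g ~ N(0, tau2). Constrained model M_c: mu_1 < mu_2.
# BF(M_c vs encompassing) is approximated by (posterior proportion with mu_1 < mu_2) /
# (prior proportion with mu_1 < mu_2).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: two groups of 20 observations each.
y1 = rng.normal(0.0, 1.0, 20)
y2 = rng.normal(0.5, 1.0, 20)

tau2, sigma2 = 100.0, 1.0                 # prior variance and known error variance

def posterior_draws(y, size):
    # Conjugate normal posterior for a N(0, tau2) prior with known sigma2.
    n = len(y)
    post_var = 1.0 / (n / sigma2 + 1.0 / tau2)
    post_mean = post_var * y.sum() / sigma2
    return rng.normal(post_mean, np.sqrt(post_var), size)

m = 200_000
prior_1 = rng.normal(0.0, np.sqrt(tau2), m)
prior_2 = rng.normal(0.0, np.sqrt(tau2), m)
post_1, post_2 = posterior_draws(y1, m), posterior_draws(y2, m)

c = np.mean(prior_1 < prior_2)            # prior mass in agreement with the constraint (about 0.5)
f = np.mean(post_1 < post_2)              # posterior mass in agreement with the constraint
print("BF(constrained vs encompassing) is approximately", f / c)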

2.
In a 1974 paper, the author indicated how natural conjugate priors for multi-dimensional exponential family likelihoods could be enriched in certain cases through linear transformations of independent marginal priors. In particular, it was shown how the usual Normal-Wishart prior for the multinormal distribution with unknown mean vector and precision matrix could have the number of hyperparameters increased; the ‘thinness’ of the traditional prior is well-known. The new, linearly dependent prior leads to full-dimensional credibility prediction formulae for the observational mean vector and covariance matrix, as contrasted with the simpler, self-dimensional forecasts obtained in prior literature. However, there was an error made in the sufficient-statistics term of the covariance predictor, which is corrected in this work. In addition, this paper explains in detail the properties of the enriched multinormal prior and why revised statistics are needed, and interprets the important relationship between the linear transformation matrix and the matrix of credibility time constants. An enumeration of the additional number of hyperparameters needed for the enriched prior shows its value in modelling multinormal problems; it is shown that the estimation of these hyperparameters can be carried out in a natural way, in the space of the observable variables.

3.
We propose two data-based priors for vector error correction models. Both priors lead to highly automatic approaches which require only minimal user input. For the first one, we propose a reduced rank prior which encourages shrinkage towards a low-rank, row-sparse, and column-sparse long-run matrix. For the second one, we propose the use of the horseshoe prior, which shrinks all elements of the long-run matrix towards zero. Two empirical investigations reveal that Bayesian vector error correction (BVEC) models equipped with our proposed priors scale well to higher dimensions and forecast well. In comparison to VARs in first differences, they are able to exploit the information in the level variables. This turns out to be relevant to improve the forecasts for some macroeconomic variables. A simulation study shows that the BVEC with data-based priors possesses good frequentist estimation properties.

4.
This article reflects upon Willmott's 1993 article (‘Strength is ignorance; slavery is freedom: managing culture in modern organizations’) by revisiting the idea of ‘Corporate Culturism’ and its relevance for contemporary developments in management and organization, including higher education. It incorporates a commentary on how 1984 and ‘Strength is ignorance’ have been read and some reflections on the genesis of the original 1993 article. It then expands on themes in ‘Strength is ignorance’ which are of continuing relevance, and draws out implications for our research and professional lives, as scholars, working in business schools.

5.
Traditional clustering algorithms are deterministic in the sense that a given dataset always leads to the same output partition. This article modifies traditional clustering algorithms whereby data are associated with a probability model, and clustering is carried out on the stochastic model parameters rather than the data. This is done in a principled way using a Bayesian approach which allows the assignment of posterior probabilities to output partitions. In addition, the approach incorporates prior knowledge of the output partitions using Bayesian melding. The methodology is applied to two substantive problems: (i) a question of stylometry involving a simulated dataset and (ii) the assessment of potential champions of the 2010 FIFA World Cup.

6.
In contrast to a posterior analysis given a particular sampling model, posterior model probabilities in the context of model uncertainty are typically rather sensitive to the specification of the prior. In particular, ‘diffuse’ priors on model-specific parameters can lead to quite unexpected consequences. Here we focus on the practically relevant situation where we need to entertain a (large) number of sampling models and we have (or wish to use) little or no subjective prior information. We aim at providing an ‘automatic’ or ‘benchmark’ prior structure that can be used in such cases. We focus on the normal linear regression model with uncertainty in the choice of regressors. We propose a partly non-informative prior structure related to a natural conjugate g-prior specification, where the amount of subjective information requested from the user is limited to the choice of a single scalar hyperparameter g0j. The consequences of different choices for g0j are examined. We investigate theoretical properties, such as consistency of the implied Bayesian procedure. Links with classical information criteria are provided. More importantly, we examine the finite sample implications of several choices of g0j in a simulation study. The use of the MC3 algorithm of Madigan and York (Int. Stat. Rev. 63 (1995) 215), combined with efficient coding in Fortran, makes it feasible to conduct large simulations. In addition to posterior criteria, we shall also compare the predictive performance of different priors. A classic example concerning the economics of crime will also be provided and contrasted with results in the literature. The main findings of the paper will lead us to propose a ‘benchmark’ prior specification in a linear regression context with model uncertainty.
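
A rough numerical sketch of posterior model probabilities under a Zellner-style g-prior with a uniform prior over regressor subsets. The simulated data, the common value of g, and the parameterization (g here plays the role of 1/g0j under one common convention, which is my reading and should be treated as an assumption) are illustrative choices, not the paper's specification:

# Sketch: model probabilities under a g-prior in a linear regression with
# uncertain regressor choice (illustrative only).
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, K = 100, 4
X = rng.normal(size=(n, K))
y = 1.0 + 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(size=n)   # true model uses x1 and x3

def r_squared(y, Xj):
    # Ordinary R-squared of the regression of y on an intercept plus the columns in Xj.
    Z = np.column_stack([np.ones(len(y)), Xj])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    tss = np.sum((y - y.mean()) ** 2)
    return 1.0 - resid @ resid / tss

g = max(n, K**2)                      # one commonly cited 'benchmark' choice (assumption)
log_bf = {}                           # log Bayes factor of each model against the null model
for k in range(K + 1):
    for subset in itertools.combinations(range(K), k):
        if k == 0:
            log_bf[subset] = 0.0
            continue
        R2 = r_squared(y, X[:, subset])
        log_bf[subset] = 0.5 * (n - 1 - k) * np.log1p(g) \
                         - 0.5 * (n - 1) * np.log1p(g * (1.0 - R2))

# Uniform prior over models: posterior probabilities proportional to the Bayes factors.
mx = max(log_bf.values())
post = {s: np.exp(v - mx) for s, v in log_bf.items()}
total = sum(post.values())
for s in sorted(post, key=post.get, reverse=True)[:3]:
    print(s, round(post[s] / total, 3))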

7.
Bayesian priors are often used to restrain the otherwise highly over-parametrized vector autoregressive (VAR) models. The currently available Bayesian VAR methodology does not allow the user to specify prior beliefs about the unconditional mean, or steady state, of the system. This is unfortunate as the steady state is something that economists usually claim to know relatively well. This paper develops easily implemented methods for analyzing both stationary and cointegrated VARs, in reduced or structural form, with an informative prior on the steady state. We document that prior information on the steady state leads to substantial gains in forecasting accuracy on Swedish macro data. A second example illustrates the use of informative steady-state priors in a cointegration model of the consumption-wealth relationship in the USA.
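
For orientation, a common way to make the steady state an explicit parameter (a sketch of the general idea in my own notation, not necessarily the paper's exact specification) is the mean-adjusted VAR form

    y_t - mu = Pi_1 (y_{t-1} - mu) + ... + Pi_p (y_{t-p} - mu) + eps_t,   eps_t ~ N(0, Sigma),

where mu is the unconditional mean (steady state) of y_t when the process is stable. An informative normal prior mu ~ N(mu_0, Omega_0) can then be placed directly on the steady state, while conventional priors are kept on the lag matrices Pi_i and on Sigma.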

8.
We propose a natural conjugate prior for the instrumental variables regression model. The prior is a natural conjugate one since the marginal prior and posterior of the structural parameter have the same functional expressions which directly reveal the update from prior to posterior. The Jeffreys prior results from a specific setting of the prior parameters and results in a marginal posterior of the structural parameter that has an identical functional form as the sampling density of the limited information maximum likelihood estimator. We construct informative priors for the Angrist–Krueger [1991. Does compulsory school attendance affect schooling and earnings? Quarterly Journal of Economics 106, 979–1014] data and show that the marginal posterior of the return on education in the US coincides with the marginal posterior from the Southern region when we use the Jeffreys prior. This result occurs since the instruments are the strongest in the Southern region and the posterior using the Jeffreys prior, identical to maximum likelihood, focusses on the strongest available instruments. We construct informative priors for the other regions that make their posteriors of the return on education similar to that of the US and the Southern region. These priors show the amount of prior information needed to obtain comparable results for all regions.

9.
We focus on Bayesian model selection for the variable selection problem in large model spaces. The challenge is to search the huge model space adequately, while accurately approximating model posterior probabilities for the visited models. The issue of choice of prior distributions for the visited models is also important.

10.
Lipman [Lipman, B., 2003. Finite order implications of common priors, Econometrica, 71 (July), 1255–1267] shows that in a finite model, the common prior assumption has weak implications for finite orders of beliefs about beliefs. In particular, the only such implications are those stemming from the weaker assumption of a common support. To explore the role of the finite model assumption in generating this conclusion, this paper considers the finite order implications of common priors in the simplest possible infinite model, namely, a countable model. I show that in countable models, the common prior assumption also implies a tail consistency condition regarding beliefs. More specifically, I show that in a countable model, the finite order implications of the common prior assumption are the same as those stemming from the assumption that priors have a common support and have tail probabilities converging to zero at the same rate.

11.
This note contains first thoughts on awareness of unawareness in a simple dynamic context where a decision situation is repeated over time. The main consequence of increasing awareness is that the model the decision maker uses, and the prior which it contains, become richer over time. The decision maker is prepared for this change, and we show that if a projection-consistency axiom is satisfied, unawareness does not affect the value of her estimate of a payoff-relevant conditional probability (although it may weaken her confidence in that estimate). Probability-zero events, however, pose a challenge to this axiom; if it fails, even the estimated values will differ when the decision maker takes unawareness into account. In examining the evolution of knowledge about the relevant variables through time, we distinguish between the transition from uncertainty to certainty and the direct transition from unawareness to certainty, and argue that new knowledge may cause posteriors to jump more when it is also new awareness. Some preliminary considerations on the convergence of estimates are included.

12.
A class of global-local hierarchical shrinkage priors for estimating large Bayesian vector autoregressions (BVARs) has recently been proposed. We question whether three such priors (Dirichlet-Laplace, Horseshoe, and Normal-Gamma) can systematically improve upon the forecast accuracy of two commonly used benchmarks, the hierarchical Minnesota prior and the stochastic search variable selection (SSVS) prior, when predicting key macroeconomic variables. Using small and large data sets, both point and density forecasts suggest that the answer is no. Instead, our results indicate that a hierarchical Minnesota prior remains a solid practical choice when forecasting macroeconomic variables. In light of existing optimality results, a possible explanation for our finding is that macroeconomic data is not sparse, but instead dense.
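
As a small illustration of the global-local idea behind one of these priors, the sketch below draws from the marginal horseshoe prior for a single coefficient and compares it with a standard normal; the scales and thresholds are arbitrary choices for illustration, not the settings used in the paper:

# Sketch: draws from a horseshoe prior vs. a normal prior, to show the
# 'spike at zero plus heavy tails' shape that global-local shrinkage relies on.
# Purely illustrative; this is not the estimation algorithm used in the paper.
import numpy as np

rng = np.random.default_rng(2)
m = 100_000

tau = np.abs(rng.standard_cauchy(m))        # global scale, half-Cauchy(0, 1)
lam = np.abs(rng.standard_cauchy(m))        # local scales, half-Cauchy(0, 1)
beta_hs = rng.normal(0.0, 1.0, m) * lam * tau
beta_normal = rng.normal(0.0, 1.0, m)

for name, b in [("horseshoe", beta_hs), ("normal", beta_normal)]:
    near_zero = np.mean(np.abs(b) < 0.05)   # mass concentrated near the origin
    far_out = np.mean(np.abs(b) > 10.0)     # heavy-tail mass left essentially unshrunk
    print(f"{name:9s}  P(|b|<0.05) = {near_zero:.3f}   P(|b|>10) = {far_out:.4f}")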

13.
We construct a “Google Recession Index” (GRI) using Google Trends data on internet search popularity, which tracks the public’s attention to recession-related keywords in real time. We then compare nowcasts made with and without this index using both a standard dynamic factor model and a Bayesian approach with alternative prior setups. Our results indicate that using the Bayesian model with GRI-based “popularity priors,” we could identify the 2008Q3 turning point in real time, without sacrificing the accuracy of the nowcasts over the rest of the sample periods.

14.
In two recent articles, Sims (1988) and Sims and Uhlig (1988/1991) question the value of much of the ongoing literature on unit roots and stochastic trends. They characterize the seeds of this literature as ‘sterile ideas’, the application of nonstationary limit theory as ‘wrongheaded and unenlightening’, and the use of classical methods of inference as ‘unreasonable’ and ‘logically unsound’. They advocate in place of classical methods an explicit Bayesian approach to inference that utilizes a flat prior on the autoregressive coefficient. DeJong and Whiteman adopt a related Bayesian approach in a group of papers (1989a,b,c) that seek to re-evaluate the empirical evidence from historical economic time series. Their results appear to be conclusive in turning around the earlier, influential conclusions of Nelson and Plosser (1982) that most aggregate economic time series have stochastic trends. So far these criticisms of unit root econometrics have gone unanswered; the assertions about the impropriety of classical methods and the superiority of flat prior Bayesian methods have been unchallenged; and the empirical re-evaluation of evidence in support of stochastic trends has been left without comment. This paper breaks that silence and offers a new perspective. We challenge the methods, the assertions, and the conclusions of these articles on the Bayesian analysis of unit roots. Our approach is also Bayesian but we employ what are known in the statistical literature as objective ignorance priors in our analysis. These are developed in the paper to accommodate explicitly time series models in which no stationarity assumption is made. Ignorance priors are intended to represent a state of ignorance about the value of a parameter and in many models are very different from flat priors. We demonstrate that in time series models flat priors do not represent ignorance but are actually informative (sic) precisely because they neglect generically available information about how autoregressive coefficients influence observed time series characteristics. Contrary to their apparent intent, flat priors unwittingly bias inferences towards stationary and i.i.d. alternatives where they do represent ignorance, as in the linear regression model. This bias helps to explain the outcome of the simulation experiments in Sims and Uhlig and some of the empirical results of DeJong and Whiteman. Under both flat priors and ignorance priors this paper derives posterior distributions for the parameters in autoregressive models with a deterministic trend and an arbitrary number of lags. Marginal posterior distributions are obtained by using the Laplace approximation for multivariate integrals along the lines suggested by the author (Phillips, 1983) in some earlier work. The bias towards stationary models that arises from the use of flat priors is shown in our simulations to be substantial; and we conclude that it is unacceptably large in models with a fitted deterministic trend, for which the expected posterior probability of a stochastic trend is found to be negligible even though the true data generating mechanism has a unit root. Under ignorance priors, Bayesian inference is shown to accord more closely with the results of classical methods. An interesting outcome of our simulations and our empirical work is the bimodal Bayesian posterior, which demonstrates that Bayesian confidence sets can be disjoint, just like classical confidence intervals that are based on asymptotic theory. 
The paper concludes with an empirical application of our Bayesian methodology to the Nelson-Plosser series. Seven of the 14 series show evidence of stochastic trends under ignorance priors, whereas under flat priors on the coefficients all but three of the series appear trend stationary. The latter result corresponds closely with the conclusion reached by DeJong and Whiteman (1989b) (based on truncated flat priors). We argue that the DeJong-Whiteman inferences are biased towards trend stationarity through the use of flat priors on the autoregressive coefficients, and that their inferences for some of the series (especially stock prices) are fragile (i.e. not robust) not only to the prior but also to the lag length chosen in the time series specification.
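
A toy numerical sketch of the core point (a flat prior on the autoregressive coefficient is informative, while a Jeffreys-type prior upweights the nonstationary region). It uses a stripped-down AR(1) with known error variance and no fitted trend, which is my own simplification rather than a reconstruction of the priors derived in the paper:

# Sketch: posterior for the AR(1) coefficient under a flat prior vs. a
# Jeffreys-type 'ignorance' prior (illustrative only).
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 100, 1.0

# Simulate a unit-root series: y_t = y_{t-1} + eps_t, y_0 = 0.
eps = rng.normal(0.0, sigma, n)
y = np.concatenate([[0.0], np.cumsum(eps)])

def log_likelihood(rho, y):
    resid = y[1:] - rho * y[:-1]
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

def fisher_info(rho, n):
    # I_n(rho) = E[sum_t y_{t-1}^2] / sigma^2 with y_0 = 0, computed recursively.
    m, total = 0.0, 0.0
    for _ in range(n):
        total += m
        m = rho ** 2 * m + sigma ** 2
    return total / sigma ** 2

grid = np.linspace(0.7, 1.1, 2001)
loglik = np.array([log_likelihood(r, y) for r in grid])
log_jeffreys = 0.5 * np.log([fisher_info(r, n) for r in grid])

for name, logpost in [("flat prior", loglik),
                      ("Jeffreys-type prior", loglik + log_jeffreys)]:
    w = np.exp(logpost - logpost.max())
    w /= w.sum()
    print(f"{name:20s}  P(rho >= 1 | y) is approximately {w[grid >= 1.0].sum():.3f}")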

15.
In the case of absolute error loss, we investigate, for an arbitrary class of probability distributions, whether or not a two-point prior can be least favourable and whether a corresponding Bayes estimator can be minimax when the parameter is restricted to a closed and bounded interval of ℝ. The general results are applied to several examples; for instance, location and scale parameter families are considered. We give examples for which, regardless of the length of the parameter interval, no least favourable two-point prior exists. On the other hand, examples are given that have a least favourable two-point prior when the parameter interval is sufficiently small.

16.
Pischke (1995) uses both microeconomic and macroeconomic US data to test the idea that, within an otherwise standard PIH framework, ignorance by agents of aggregate labour income can account for the observed degree of excess smoothness and sensitivity in consumption. His tests involve only the second moments of aggregate consumption and labour income. In this paper our main aim is to identify and test the restrictions his model implies for aggregate consumption dynamics, using US quarterly data over the period 1959–1996, but our framework allows us also to test an earlier, related model of Goodfriend (1992). We find that both models can be formally rejected: ignorance of aggregate labour income cannot by itself account for aggregate consumption dynamics; some other relaxation of the assumptions of the standard PIH is required. We give an example of one possible such relaxation and present evidence indicating that Pischke's version of imperfect information may, within that framework, have a significant role to play.

17.
The functional defined as the ‘min’ of integrals with respect to probabilities in a given non-empty closed and convex class appears prominently in recent work on uncertainty in economics. In general, such a functional violates the additivity of the expectations operator. We characterize the types of functions over which additivity of this functional is preserved. This happens exactly when ‘integrating’ functions which are positive affine transformations of each other (or when one is constant). We show that this result is quite general by restricting the types of classes of probabilities considered. Finally, we prove that with a very peculiar exception, all the results hold more generally for functionals which are linear combinations of the ‘min’ and the ‘max’ functional.
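
A toy numerical check of the non-additivity being characterized, using a class generated by two priors over three states (the particular numbers are my own example, not from the paper):

# Sketch: the 'min of expectations' functional is not additive in general,
# but is additive for positive affine transformations of the same function.
import numpy as np

P = np.array([[0.6, 0.3, 0.1],       # two extreme priors over three states;
              [0.1, 0.3, 0.6]])      # the class is their convex hull

def v(f):
    # min over the convex hull of expectations = min over the extreme points
    return np.min(P @ f)

f = np.array([1.0, 0.0, 2.0])
g = np.array([2.0, 0.0, 0.0])        # not a positive affine transform of f
h = 2.0 * f + 3.0                    # positive affine transform of f

print(v(f + g), "vs", v(f) + v(g))   # 1.5 vs 1.0: additivity fails here
print(v(f + h), "vs", v(f) + v(h))   # 5.4 vs 5.4: additivity preserved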

18.
The take-up of management training is lower in small than in large firms. A case is often made in the UK that this reflects an ignorance on the part of owner-managers in small firms of the benefits of training. In other words, the argument is often put that the market is subject to failure and so justifies the provision of subsidies. This paper critiques this view. The analysis shows that market forces explain rather better the lower take-up of management training by small firms. The authors therefore suggest the assumption of ignorance leading to market failure is unreasonable, and hence they question the case for further public-funded subsidies.

19.
20.
Zheng Wei (郑卫), Urban Problems (《城市问题》), 2011(1): 83-88
Taking the Chunshen high-voltage power line incident in Shanghai as a case study, this paper analyses the defects of China's mechanism for managing urban-planning conflicts. It argues that the escalation of the incident from an initial disagreement of opinion into collective conflict of a considerable scale was closely related to the government's neglect of planning conflicts and to its administrative, top-down handling of conflict events. The traditional mechanism for managing planning conflicts suffers from the serious defects of internalized decision-making and administration-dominated management, which deprive it of its error-correction function and cause its risk-management function to fail. Against a social background of increasingly diverse interests and a growing awareness of the rule of law, the mechanism for managing urban-planning conflicts must be reformed.

