1.
This paper deals with Bayesian selection of models that can be specified using inequality constraints among the model parameters. The concept of encompassing priors is introduced, that is, a prior distribution for an unconstrained model from which the prior distributions of the constrained models can be derived. It is shown that the Bayes factor for the encompassing and a constrained model has a very nice interpretation: it is the ratio of the proportions of the prior and posterior distributions of the encompassing model that are in agreement with the constrained model. It is also shown that, for a specific class of models, selection based on encompassing priors will render a virtually objective selection procedure. The paper concludes with three illustrative examples: an analysis of variance with ordered means; a contingency table analysis with ordered odds-ratios; and a multilevel model with ordered slopes.
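To make that interpretation concrete, here is a minimal Monte Carlo sketch that estimates the Bayes factor of an order-constrained ANOVA model against the encompassing model as the ratio of the posterior to the prior proportion of draws satisfying the constraint. The Gaussian likelihood with known variance, the vague normal prior, and the specific ordering μ1 < μ2 < μ3 are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated three-group data (illustrative only).
y = [rng.normal(loc, 1.0, size=30) for loc in (0.0, 0.4, 0.9)]

# Encompassing (unconstrained) model: mu_k ~ N(0, 10^2), sigma^2 known = 1.
prior_sd = 10.0
n_draws = 200_000

# Draws from the encompassing prior.
prior_mu = rng.normal(0.0, prior_sd, size=(n_draws, 3))

# Conjugate posterior for each group mean under the encompassing model.
post_mu = np.empty_like(prior_mu)
for k, yk in enumerate(y):
    n, ybar = len(yk), yk.mean()
    post_var = 1.0 / (1.0 / prior_sd**2 + n)   # known sigma^2 = 1
    post_mean = post_var * n * ybar
    post_mu[:, k] = rng.normal(post_mean, np.sqrt(post_var), size=n_draws)

def in_constraint(mu):
    # Ordered-means constraint mu1 < mu2 < mu3.
    return (mu[:, 0] < mu[:, 1]) & (mu[:, 1] < mu[:, 2])

c = in_constraint(prior_mu).mean()   # prior mass in agreement with the constraint
f = in_constraint(post_mu).mean()    # posterior mass in agreement with the constraint
print("BF (constrained vs encompassing) ~", f / c)
```

Under a vague prior the prior proportion c is close to 1/3! = 1/6, which is what makes the procedure nearly objective for this class of constraints.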
2.
We describe a method for estimating the marginal likelihood, based on Chib (1995) and Chib and Jeliazkov (2001), when simulation from the posterior distribution of the model parameters is by the accept–reject Metropolis–Hastings (ARMH) algorithm. The method is developed for one-block and multiple-block ARMH algorithms and does not require the (typically) unknown normalizing constant of the proposal density. The problem of calculating the numerical standard error of the estimates is also considered and a procedure based on batch means is developed. Two examples, dealing with a multinomial logit model and a Gaussian regression model with non-conjugate priors, are provided to illustrate the efficiency and applicability of the method.
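For orientation, the basic marginal likelihood identity of Chib (1995) on which such methods build can be stated as follows; how the posterior ordinate is estimated from ARMH output is the technical contribution of the paper and is not reproduced here.

```latex
\log m(y) \;=\; \log f(y \mid \theta^{*}) \;+\; \log \pi(\theta^{*}) \;-\; \log \pi(\theta^{*} \mid y),
```

where θ* is any point of high posterior density (for example the posterior mean or mode), f is the likelihood, π(θ*) the prior ordinate, and π(θ*|y) the posterior ordinate to be estimated from the simulation output.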
3.
International Journal of Forecasting, 2019, 35(2): 580–600
We compare real-time density forecasts for the euro area using three DSGE models. The benchmark is the Smets and Wouters model, and its forecasts of real GDP growth and inflation are compared with those from two extensions. The first adds financial frictions and expands the observables to include a measure of the external finance premium. The second allows for the extensive labor-market margin and adds the unemployment rate to the observables. The main question that we address is whether these extensions improve the density forecasts of real GDP and inflation and their joint forecasts up to an eight-quarter horizon. We find that adding financial frictions leads to a deterioration in the forecasts, with the exception of longer-term inflation forecasts and the period around the Great Recession. Compared with the benchmark model, the labor-market extension yields only weak improvements in the medium- to longer-term real GDP growth forecasts and the shorter- to medium-term inflation forecasts.
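As a point of reference, density forecasts of this kind are most commonly ranked by the average log predictive score over the evaluation sample; a generic definition (not necessarily the exact metric used in the paper) is:

```latex
\mathrm{LS}_h \;=\; \frac{1}{T}\sum_{t=1}^{T} \log p\!\left(y^{o}_{t+h} \,\middle|\, \mathcal{I}_t, M\right),
```

where y^o_{t+h} is the realized value at horizon h, I_t the real-time information set, and M the model; higher values indicate better calibrated and sharper densities.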
4.
In this paper we propose Bayesian and frequentist approaches to ecological inference, based on R × C contingency tables, including a covariate. The proposed Bayesian model extends the binomial-beta hierarchical model developed by King, Rosen and Tanner (1999) from the 2×2 case to the R × C case. As in the 2×2 case, the inferential procedure employs Markov chain Monte Carlo (MCMC) methods. As such, the resulting MCMC analysis is rich but computationally intensive. The frequentist approach, based on first moments rather than on the entire likelihood, provides quick inference via nonlinear least-squares, while retaining good frequentist properties. The two approaches are illustrated with simulated data, as well as with real data on voting patterns in Weimar Germany. In the final section of the paper we provide an overview of a range of alternative inferential approaches which trade off computational intensity for statistical efficiency.
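For readers unfamiliar with the setup, the 2×2 ecological inference problem can be summarized by the accounting identity below, with Beta priors placed on the unobserved row-specific fractions; this is a generic sketch of the 2×2 case, not the R × C extension developed in the paper.

```latex
T_i \;=\; \beta_i^{b} X_i \;+\; \beta_i^{w}\,(1 - X_i),
\qquad
\beta_i^{b} \sim \mathrm{Beta}(c_b, d_b),
\qquad
\beta_i^{w} \sim \mathrm{Beta}(c_w, d_w),
```

where T_i is the observed aggregate proportion in unit i (e.g. turnout), X_i the observed share of group b, and the β's are the unobserved group-specific proportions that ecological inference seeks to recover.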
5.
Fisher and "Student" quarreled in the early days of statistics about the design of experiments, meant to measure the difference in yield between to breeds of corn. This discussion comes down to randomization versus model building. More than half a century has passed since, but the different views remain. In this paper the discussion is put in terms of artificial randomization and natural randomization, the latter being what remains after appropriate modeling. Also the Bayesian position is discussed. An example in terms of the old corn-breeding discussion is given, showing that a simple robust model may lead to inference and experimental design that outperforms the inference from randomized experiments by far. Finally similar possibilities are suggested in statistical auditing. 相似文献
6.
Research on the Theory and Methods of Modern Bayesian Statistics
This article compares the differences and connections between Bayesian statistics and traditional classical frequentist statistics from two perspectives, the theoretical foundations and the statistical methods, in order to arrive at a comprehensive understanding of Bayesian statistical theory and methods. It concludes with suggestions and an outlook on the application and further study of the two approaches in statistical practice.
7.
This paper criticizes the use of regression in audit samples to obtain confidence intervals for the error rate. Also the methodology to evaluate methods by simulation studies using real-life populations is criticized. This is done from a Bayesian viewpoint, which goes as far as stating that in this type of research the wrong questions are answered. A fundamental discussion on the role of model building, illustrated by the use of models in auditing, forms the centre of the paper.
8.
Luis Raúl Pericchi Guerra was born in Caracas, Venezuela, on 11 March 1952. He completed a B.S. in Mathematics in 1975 at the Universidad Simón Bolívar in Caracas, an M.S. in Statistics at the University of California, Berkeley in 1978 and a Ph.D. in Statistics at Imperial College London in 1981. After graduating from Imperial College, Luis Raúl went back to Universidad Simón Bolívar. There, he played a key role in the development of graduate programmes in Statistics and single-handedly built an internationally recognised group focused on Bayesian statistics. In 2001, he moved to the Universidad de Puerto Rico in Rio Piedras to become the Chair of the Mathematics Department. At Universidad de Puerto Rico, he was instrumental in the establishment of a Ph.D. track in Computational Mathematics and Statistics. Luis Raúl has published over 120 papers in statistical and domain-specific journals, making significant contributions to several areas of Bayesian statistics (especially model selection and Bayesian robustness) and their application (especially in hydrology). He is a Fellow of the American Statistical Association, the International Society for Bayesian Analysis and the John Simon Guggenheim Memorial Foundation, and an Elected Member of the International Statistical Institute. This conversation took place over multiple sessions during the 2022 O'Bayes meeting in Santa Cruz, California, and the months that followed.
9.
Robert B. Gramacy, Revue internationale de statistique, 2020, 88(2): 326–329
I thoroughly enjoyed reading the article by Bhadra et al. (2020) and convey my congratulations to the authors for providing a comprehensive and coherent review of horseshoe-based regularization approaches for machine learning models. I am thankful to the editors for providing this opportunity to write a discussion of this useful article, which I expect will turn out to be a good guide in the future for statisticians and practitioners alike. It is quite amazing to see the rapid progress and the magnitude of work advancing the horseshoe regularization approach since the seminal paper by Carvalho et al. (2010). The current review article is a testimony to this. While I have been primarily working with continuous spike and slab priors for high-dimensional Bayesian modeling, I have been following the literature on horseshoe regularization with a keen interest. For my comments on this article, I will focus on some comparisons between these two approaches, particularly in terms of model building and methodology, and some computational considerations. I would first like to provide some comments on performing valid inference based on the horseshoe prior framework.
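For context, the horseshoe prior of Carvalho et al. (2010) referred to throughout the discussion places a global-local scale mixture on each regression coefficient:

```latex
\beta_j \mid \lambda_j, \tau \;\sim\; \mathcal{N}\!\left(0,\ \lambda_j^{2}\tau^{2}\right),
\qquad
\lambda_j \;\sim\; \mathrm{C}^{+}(0,1),
\qquad
\tau \;\sim\; \mathrm{C}^{+}(0,1),
```

where C⁺ denotes the half-Cauchy distribution, the λ_j are local shrinkage parameters and τ is the global shrinkage parameter; the heavy tails of the local scales are what distinguish it from spike-and-slab constructions.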
10.
International Journal of Forecasting, 2021, 37(4): 1355–1375
Estimation and prediction in high dimensional multivariate factor stochastic volatility models is an important and active research area, because such models allow a parsimonious representation of multivariate stochastic volatility. Bayesian inference for factor stochastic volatility models is usually done by Markov chain Monte Carlo methods (often particle Markov chain Monte Carlo methods), which are usually slow for high dimensional or long time series because of the large number of parameters and latent states involved. Our article makes two contributions. The first is to propose a fast and accurate variational Bayes method to approximate the posterior distribution of the states and parameters in factor stochastic volatility models. The second is to extend this batch methodology to develop fast sequential variational updates for prediction as new observations arrive. The methods are applied to simulated and real datasets, and are shown to produce good approximate inference and prediction compared to the latest particle Markov chain Monte Carlo approaches, but are much faster.
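A canonical factor stochastic volatility specification of the kind referred to here is sketched below; the exact parameterisation used in the article may differ.

```latex
y_t = \Lambda f_t + u_t,
\qquad
f_{kt} = \exp(h_{kt}/2)\,\varepsilon_{kt},
\qquad
h_{kt} = \mu_k + \phi_k\,(h_{k,t-1} - \mu_k) + \sigma_k\,\eta_{kt},
```

with y_t the p-dimensional observation, f_t a small number of latent factors, Λ the loading matrix, and h_kt latent log-volatilities following AR(1) processes; the idiosyncratic errors u_t typically carry their own stochastic volatilities, which is what makes the latent state vector so large.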
11.
John Aldrich, Revue internationale de statistique, 2002, 70(1): 79–98
This paper considers how the concepts of likelihood and identification became part of Bayesian theory. This makes a nice study in the development of concepts in statistical theory. Likelihood slipped in easily but there was a protracted debate about how identification should be treated. Initially there was no agreement on whether identification involved the prior, the likelihood or the posterior.
12.
The problem of forecasting a time series with only a small amount of data is addressed within a Bayesian framework. The quantity to be predicted is the accumulated value of a positive and continuous variable for which partially accumulated data are available. These conditions appear in a natural way in many situations. A simple model is proposed to describe the relationship between the partial and total values of the variable to be forecasted assuming stable seasonality, which is specified in stochastic terms. Analytical results are obtained for both the point forecast and the entire posterior predictive distribution. The proposed technique does not involve approximations. It allows the use of non-informative priors so that implementation may be automatic. The procedure works well when standard methods cannot be applied due to the reduced number of observations. It also improves on previous results published by the authors.
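To fix ideas with a deliberately simplified illustration (not the authors' model): if seasonality is stable, the expected fraction c_j of the total accumulated by the end of sub-period j can be treated as known or estimated, and a naive point forecast of the total T given the partial accumulation X_j is

```latex
\widehat{T} \;=\; \frac{X_j}{c_j}, \qquad 0 < c_j \le 1 .
```

The paper's contribution is to treat this partial-to-total relationship fully probabilistically, yielding the entire posterior predictive distribution of the total rather than only such a ratio-type point forecast.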
13.
International Journal of Forecasting, 2014, 30(4): 928–933
Croston’s method is generally viewed as being superior to exponential smoothing when the demand is intermittent, but it has the drawbacks of bias and an inability to deal with obsolescence, where the demand for an item ceases altogether. Several variants have been reported, some of which are unbiased on certain types of demand, but only one recent variant addresses the problem of obsolescence. We describe a new hybrid of Croston’s method and Bayesian inference called Hyperbolic-Exponential Smoothing, which is unbiased on non-intermittent and stochastic intermittent demand, decays hyperbolically when obsolescence occurs, and performs well in experiments.
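As background for the hybrid described above, a minimal sketch of classical Croston's method is given below (the Hyperbolic-Exponential Smoothing variant itself is not reproduced): nonzero demand sizes and inter-demand intervals are smoothed separately, and the forecast is their ratio.

```python
def croston(demand, alpha=0.1):
    """Classical Croston forecast for an intermittent demand series.

    demand : sequence of non-negative numbers (many zeros expected)
    alpha  : smoothing constant, used here for both sizes and intervals
    Returns the per-period demand-rate forecast after the last observation.
    """
    size = None      # smoothed nonzero demand size
    interval = None  # smoothed inter-demand interval
    periods_since = 1
    for d in demand:
        if d > 0:
            if size is None:                       # initialise on the first demand
                size, interval = d, periods_since
            else:
                size = size + alpha * (d - size)
                interval = interval + alpha * (periods_since - interval)
            periods_since = 1
        else:
            periods_since += 1
    if size is None:                               # no demand observed at all
        return 0.0
    return size / interval


print(croston([0, 0, 3, 0, 0, 0, 5, 0, 2, 0, 0]))
```

Because the forecast is updated only when demand occurs, it never decays during a run of zeros; this is the obsolescence problem that the hyperbolic decay of the proposed hybrid is designed to address.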
14.
The use of stochastic models and performance measures for the analysis of real-life queuing scenarios is based on the fundamental premise that parameter values are known. This is rarely the case: more often than not, parameters are unknown and must be estimated. This paper presents techniques for doing so from a Bayesian perspective. The queue we deal with is the M/M/1 queuing model. Several closed-form expressions for posterior inference and prediction are presented which can be readily implemented using standard spreadsheet tools. Previous work in this direction resulted in the non-existence of posterior moments; a way out is suggested. Interval estimates and tests of hypotheses on performance measures are also presented.
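An illustrative sketch of the kind of conjugate analysis involved is below: with exponential inter-arrival and service-time observations and Gamma priors, the posteriors of the arrival rate λ and service rate μ are again Gamma, and posterior quantities for the traffic intensity ρ = λ/μ follow by simulation. The specific priors and the closed-form expressions of the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Observed exponential inter-arrival and service times (illustrative data).
interarrival = rng.exponential(1 / 0.8, size=50)   # true arrival rate lambda = 0.8
service      = rng.exponential(1 / 1.2, size=50)   # true service rate mu     = 1.2

# Gamma(shape a, rate b) priors on the rates; the conjugate update adds the
# number of observations to the shape and their sum to the rate.
a_l, b_l = 1.0, 1.0
a_m, b_m = 1.0, 1.0
post_l = (a_l + len(interarrival), b_l + interarrival.sum())
post_m = (a_m + len(service),      b_m + service.sum())

# Monte Carlo on the traffic intensity rho = lambda / mu.
lam = rng.gamma(post_l[0], 1 / post_l[1], size=100_000)
mu  = rng.gamma(post_m[0], 1 / post_m[1], size=100_000)
rho = lam / mu

print("P(stable queue, rho < 1) ~", (rho < 1).mean())
print("Posterior median of rho  ~", np.median(rho))
# Note: posterior moments of measures such as L = rho / (1 - rho) need not
# exist, which is the issue the paper addresses.
```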
15.
The Dirichlet-multinomial process can be seen as the generalisation of the binomial model with beta prior distribution when the number of categories is larger than two. In such a scenario, setting informative prior distributions becomes difficult when the number of categories is large, so the need for an objective approach arises. However, what does objective mean in the Dirichlet-multinomial process? To deal with this question, we study the sensitivity of the posterior distribution to the choice of an objective Dirichlet prior from those presented in the available literature. We illustrate the impact of the selection of the prior distribution in several scenarios and discuss the most sensible ones.
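The kind of sensitivity analysis described can be sketched as follows: for multinomial counts n = (n_1, …, n_K) and a symmetric Dirichlet(a, …, a) prior, the posterior is Dirichlet(a + n_1, …, a + n_K), and common "objective" choices of a can be compared directly. The choices listed are standard defaults from the literature, not necessarily the exact set studied in the paper.

```python
import numpy as np

counts = np.array([0, 3, 12, 5])          # illustrative multinomial data, K = 4
K = len(counts)

objective_priors = {
    "Haldane (a ~ 0)":    1e-6,
    "Perks (a = 1/K)":    1.0 / K,
    "Jeffreys (a = 1/2)": 0.5,
    "Uniform (a = 1)":    1.0,
}

for name, a in objective_priors.items():
    alpha_post = a + counts                    # Dirichlet posterior parameters
    post_mean = alpha_post / alpha_post.sum()  # posterior mean of each category
    print(f"{name:20s}", np.round(post_mean, 3))
```

The differences are largest for categories with zero counts, which is where the choice of objective prior matters most.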
16.
This paper studies the role of non-pervasive shocks when forecasting with factor models. To this end, we first introduce a new model that incorporates the effects of non-pervasive shocks, an Approximate Dynamic Factor Model with a sparse model for the idiosyncratic component. Then, we test the forecasting performance of this model both in simulations, and on a large panel of US quarterly data. We find that, when the goal is to forecast a disaggregated variable, which is usually affected by regional or sectorial shocks, it is useful to capture the dynamics generated by non-pervasive shocks; however, when the goal is to forecast an aggregate variable, which responds primarily to macroeconomic, i.e. pervasive, shocks, accounting for non-pervasive shocks is not useful.
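For reference, the generic factor decomposition underlying the proposal has the form below; the paper's contribution, a sparse model for the dynamics of the idiosyncratic component, is not spelled out in the abstract and is not reproduced here.

```latex
x_t \;=\; \Lambda f_t \;+\; \xi_t ,
```

with x_t the n-dimensional panel, f_t a small number of common factors driven by pervasive shocks, and ξ_t the idiosyncratic component driven by non-pervasive (regional or sectoral) shocks.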
17.
Hyperparameter estimation in dynamic linear models leads to inference that is not available analytically. The most common approach in recent years has been approximation by MCMC. A number of sampling schemes proposed in the literature, which differ basically in their blocking structure, are compared. In this paper, the comparison between the most common schemes is performed in terms of different efficiency criteria, including the efficiency ratio and processing time. A sample of time series was simulated to reflect different relevant features such as series length and system volatility.
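The dynamic linear model referred to has the standard state-space form below; the hyperparameters whose estimation requires MCMC are the variances V and W (and any unknown parameters inside F_t and G_t), while the blocking choices concern how the states θ_1, …, θ_T are sampled jointly or one at a time.

```latex
y_t = F_t^{\top}\theta_t + v_t, \quad v_t \sim \mathcal{N}(0, V),
\qquad
\theta_t = G_t\,\theta_{t-1} + w_t, \quad w_t \sim \mathcal{N}(0, W).
```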
18.
On some properties of Markov chain Monte Carlo simulation methods based on the particle filter
Andrieu et al. (2010) prove that Markov chain Monte Carlo samplers still converge to the correct posterior distribution of the model parameters when the likelihood estimated by the particle filter (with a finite number of particles) is used in place of the exact likelihood. A critical issue for performance is the choice of the number of particles. We add the following contributions. First, we provide analytically derived, practical guidelines on the optimal number of particles to use. Second, we show that a fully adapted auxiliary particle filter is unbiased and can drastically decrease computing time compared to a standard particle filter. Third, we introduce a new estimator of the likelihood based on the output of the auxiliary particle filter and use the framework of Del Moral (2004) to provide a direct proof of the unbiasedness of the estimator. Fourth, we show that the results in the article apply more generally to Markov chain Monte Carlo sampling schemes with the likelihood estimated in an unbiased manner.
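The mechanism behind the convergence result of Andrieu et al. (2010) can be summarized by the pseudo-marginal Metropolis–Hastings acceptance probability, in which the particle-filter estimate p̂ replaces the exact likelihood:

```latex
\alpha(\theta, \theta') \;=\; \min\!\left\{1,\;
\frac{\hat{p}(y \mid \theta')\,\pi(\theta')\,q(\theta \mid \theta')}
     {\hat{p}(y \mid \theta)\,\pi(\theta)\,q(\theta' \mid \theta)}\right\},
```

and, provided p̂(y|θ) is a non-negative unbiased estimator of the likelihood, the resulting chain still has the exact posterior π(θ|y) as its invariant distribution; the number of particles only affects the variance of p̂ and hence the mixing of the chain.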
19.
Parameter estimation under model uncertainty is a difficult and fundamental issue in econometrics. This paper compares the performance of various model averaging techniques. In particular, it contrasts Bayesian model averaging (BMA) — currently one of the standard methods used in growth empirics — with a new method called weighted-average least squares (WALS). The new method has two major advantages over BMA: its computational burden is trivial and it is based on a transparent definition of prior ignorance. The theory is applied to and sheds new light on growth empirics where a high degree of model uncertainty is typically present.
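For comparison with WALS, the Bayesian model averaging estimator combines model-specific posteriors with posterior model probabilities in the usual way:

```latex
E[\beta \mid y] \;=\; \sum_{m=1}^{M} P(M_m \mid y)\, E[\beta \mid y, M_m],
\qquad
P(M_m \mid y) \;\propto\; p(y \mid M_m)\, P(M_m),
```

where p(y|M_m) is the marginal likelihood of model m; the computational burden of BMA comes from having to evaluate (or approximate) these quantities over a potentially very large model space.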
20.
In this paper we compare classical econometrics, calibration and Bayesian inference in the context of the empirical analysis of factor demands. Our application is based on a popular flexible functional form for the firm's cost function, namely Diewert's Generalized Leontief function, and uses the well-known Berndt and Wood 1947–1971 KLEM data on the US manufacturing sector. We illustrate how the Gibbs sampling methodology can be easily used to calibrate parameter values and elasticities on the basis of previous knowledge from alternative studies on the same data, but with different functional forms. We rely on a system of mixed non-informative diffuse priors for some key parameters and informative tight priors for others. Within the Gibbs sampler, we employ rejection sampling to incorporate parameter restrictions, which are suggested by economic theory but in general rejected by economic data. Our results show that values of those parameters that relate to non-informative priors are almost equal to the standard SUR estimates, whereas differences come out for those parameters to which we have assigned informative priors. Moreover, discrepancies can be appreciated in some crucial parameter estimates obtained with or without rejection sampling.
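As a reminder of the functional form involved, one common statement of Diewert's Generalized Leontief cost function (written here without technical-change terms, which the application may include) is:

```latex
C(y, p) \;=\; y \sum_{i=1}^{n}\sum_{j=1}^{n} b_{ij}\, p_i^{1/2} p_j^{1/2},
\qquad b_{ij} = b_{ji},
```

so that, by Shephard's lemma, the input demand per unit of output is x_i / y = Σ_j b_ij (p_j / p_i)^{1/2}, the system of share-type equations typically estimated from the KLEM data.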