Similar Articles
20 similar articles found (search time: 15 ms).
1.
We develop a Bayesian median autoregressive (BayesMAR) model for time series forecasting. The proposed method utilizes time-varying quantile regression at the median, favorably inheriting the robustness of median regression in contrast to the widely used mean-based methods. Motivated by a working Laplace likelihood approach in Bayesian quantile regression, BayesMAR adopts a parametric model bearing the same structure as autoregressive models by altering the Gaussian error to Laplace, leading to a simple, robust, and interpretable modeling strategy for time series forecasting. We estimate model parameters by Markov chain Monte Carlo. Bayesian model averaging is used to account for model uncertainty, including the uncertainty in the autoregressive order, in addition to a Bayesian model selection approach. The proposed methods are illustrated using simulations and real data applications. An application to U.S. macroeconomic data forecasting shows that BayesMAR leads to favorable and often superior predictive performance compared to the selected mean-based alternatives under various loss functions that encompass both point and probabilistic forecasts. The proposed methods are generic and can be used to complement a rich class of methods that build on autoregressive models.
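The working-likelihood idea is concrete enough to sketch. The snippet below is a minimal illustration, not the authors' implementation: an AR(2) model with Laplace errors sampled by random-walk Metropolis under flat priors. The simulated data, parameter values, and tuning constants are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(2) series with heavy-tailed (Laplace) noise.
n = 300
beta_true = np.array([0.2, 0.5, -0.3])        # intercept, lag-1, lag-2
y = np.zeros(n)
for t in range(2, n):
    y[t] = (beta_true[0] + beta_true[1] * y[t - 1]
            + beta_true[2] * y[t - 2] + rng.laplace(scale=0.5))

def log_post(theta, y, p=2):
    """Working Laplace log-likelihood of an AR(p) model, flat priors.
    theta = (intercept, lag coefficients..., log-scale)."""
    beta, log_b = theta[:-1], theta[-1]
    X = np.column_stack([np.ones(n - p)] +
                        [y[p - k - 1:n - k - 1] for k in range(p)])
    resid = y[p:] - X @ beta
    b = np.exp(log_b)
    return -(n - p) * np.log(2 * b) - np.abs(resid).sum() / b

# Random-walk Metropolis over (beta, log b).
theta = np.zeros(4)
lp = log_post(theta, y)
samples = []
for it in range(6000):
    prop = theta + 0.05 * rng.standard_normal(4)
    lp_prop = log_post(prop, y)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[2000:])                # discard burn-in
beta_hat = post[:, :3].mean(axis=0)            # posterior means of AR coefficients
```

The only change relative to a Gaussian AR(2) is the absolute-error term in `log_post`, which is what delivers the median-regression robustness described above.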

2.
This article develops a new portfolio selection method using Bayesian theory. The proposed method accounts for the uncertainties in estimation parameters and the model specification itself, both of which are ignored by the standard mean-variance method. The critical issue in constructing an appropriate predictive distribution for asset returns is evaluating the goodness of individual factors and models. This problem is investigated from a statistical point of view; we propose using the Bayesian predictive information criterion. Two Bayesian methods and the standard mean-variance method are compared through Monte Carlo simulations and a real financial data set. The Bayesian methods perform very well compared to the standard mean-variance method.

3.
This study develops two space-varying coefficient simultaneous autoregressive (SVC-SAR) models for areal data and applies them to the discrete/continuous choice model, which is an econometric model based on the consumer's utility maximization problem. The space-varying coefficient model is a statistical model in which the coefficients vary depending on their location. This study introduces the simultaneous autoregressive model for the underlying spatial dependence across coefficients, where the coefficients for one observation are affected by the sum of those for the other observations. This model is named the SVC-SAR model. Because of its flexibility, we use the Bayesian approach and construct its estimation method based on the Markov chain Monte Carlo simulation. The proposed models are applied to estimate the Japanese residential water demand function, which is an example of the discrete/continuous choice model.

4.
Recent developments in Markov chain Monte Carlo (MCMC) methods have increased the popularity of Bayesian inference in many fields of research in economics, such as marketing research and financial econometrics. Gibbs sampling in combination with data augmentation allows inference in statistical/econometric models with many unobserved variables. The likelihood functions of these models may contain many integrals, which often makes a standard classical analysis difficult or even infeasible. The advantage of the Bayesian approach using MCMC is that one only has to consider the likelihood function conditional on the unobserved variables. In many cases this implies that Bayesian parameter estimation is faster than classical maximum likelihood estimation. In this paper we illustrate the computational advantages of Bayesian estimation using MCMC in several popular latent variable models.
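A concrete instance of Gibbs sampling with data augmentation is the well-known Albert–Chib sampler for a probit model, sketched below: the latent Gaussian utilities play the role of the augmented data, and each conditional draw is standard. The simulated dataset, sample size, and chain settings are invented for illustration.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)

# Simulated probit data: y = 1{X beta + e > 0}, e ~ N(0, 1).
n = 500
beta_true = np.array([-0.5, 1.0])
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(int)

XtX_inv = np.linalg.inv(X.T @ X)
chol = np.linalg.cholesky(XtX_inv)

beta = np.zeros(2)
draws = []
for it in range(2000):
    # 1. Data augmentation: draw latent z_i ~ N(x_i' beta, 1), truncated
    #    to (0, inf) if y_i = 1 and to (-inf, 0) if y_i = 0.
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)   # standardized lower bound
    hi = np.where(y == 1, np.inf, -mu)    # standardized upper bound
    z = mu + truncnorm.rvs(lo, hi, random_state=rng)
    # 2. Conditional on z, beta is exactly Gaussian (flat prior):
    beta = XtX_inv @ X.T @ z + chol @ rng.standard_normal(2)
    draws.append(beta)
beta_hat = np.array(draws[500:]).mean(axis=0)
```

The probit likelihood itself involves an intractable product of normal integrals, but conditional on the latent `z` both full conditionals are standard distributions, which is exactly the point made in the abstract.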

5.
The paper develops a general Bayesian framework for robust linear static panel data models using ε-contamination. A two-step approach is employed to derive the conditional type-II maximum likelihood (ML-II) posterior distribution of the coefficients and individual effects. The ML-II posterior means are weighted averages of the Bayes estimator under a base prior and the data-dependent empirical Bayes estimator. Two-stage and three-stage hierarchy estimators are developed and their finite sample performance is investigated through a series of Monte Carlo experiments. These include standard random effects as well as Mundlak-type, Chamberlain-type and Hausman–Taylor-type models. The simulation results underscore the relatively good performance of the three-stage hierarchy estimator. Within a single theoretical framework, our Bayesian approach encompasses a variety of specifications, while conventional methods require separate estimators for each case.

6.
This paper extends the existing fully parametric Bayesian literature on stochastic volatility to allow for more general return distributions. Instead of specifying a particular distribution for the return innovation, nonparametric Bayesian methods are used to flexibly model the skewness and kurtosis of the distribution while the dynamics of volatility continue to be modeled with a parametric structure. Our semiparametric Bayesian approach provides a full characterization of parametric and distributional uncertainty. A Markov chain Monte Carlo sampling approach to estimation is presented, along with the theoretical and computational issues involved in simulation from the posterior predictive distributions. An empirical example compares the new model to standard parametric stochastic volatility models.

7.
In this article, we propose new Monte Carlo methods for computing a single marginal likelihood or several marginal likelihoods for the purpose of Bayesian model comparisons. The methods are motivated by Bayesian variable selection, in which the marginal likelihoods for all candidate subsets of variables must be computed. The proposed estimates use only a single Markov chain Monte Carlo (MCMC) output from the joint posterior distribution and do not require knowledge of the specific structure or form of the MCMC sampling algorithm used to generate the sample. The theoretical properties of the proposed method are examined in detail. The applicability and usefulness of the proposed method are demonstrated via ordinal data probit regression models. A real dataset involving ordinal outcomes is used to further illustrate the proposed methodology.
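The abstract does not spell out the estimator, so the sketch below only illustrates the general task: it applies Chib's basic marginal-likelihood identity to a conjugate Gaussian model, where the exact answer is available for comparison. For brevity, draws are taken directly from the known posterior in place of genuine MCMC output; the model and all settings are invented for the example.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

rng = np.random.default_rng(2)

# Conjugate model: y_i ~ N(theta, 1), theta ~ N(0, 1).
n = 50
y = rng.normal(0.7, 1.0, n)

# Exact log marginal likelihood: y is jointly Gaussian with
# covariance I + 11', so |Sigma| = 1 + n and
# y' Sigma^{-1} y = y'y - (sum y)^2 / (1 + n).
log_m_exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(1 + n)
               - 0.5 * (y @ y - y.sum() ** 2 / (1 + n)))

# Posterior is N(post_mu, post_var); these draws stand in for MCMC output.
post_var = 1.0 / (n + 1.0)
post_mu = post_var * y.sum()
draws = rng.normal(post_mu, np.sqrt(post_var), 20000)

# Chib's identity: log m(y) = log p(y|t*) + log p(t*) - log p(t*|y),
# with the posterior ordinate at t* estimated from the draws by a KDE.
t_star = post_mu
log_lik = norm.logpdf(y, t_star, 1.0).sum()
log_prior = norm.logpdf(t_star, 0.0, 1.0)
log_post_ord = np.log(gaussian_kde(draws)(t_star)[0])
log_m_chib = log_lik + log_prior - log_post_ord
```

The identity holds at any point `t_star`, but a high-posterior-density point keeps the ordinate estimate stable; only that ordinate needs to come from the simulation output.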

8.
We propose estimation of a stochastic production frontier model within a Bayesian framework to obtain the posterior distribution of single-input-oriented technical efficiency at the firm level. All computations are carried out using Markov chain Monte Carlo methods. The approach is illustrated by applying it to production data obtained from a survey of Ukrainian collective farms. We show that looking at the changes in single-input-oriented technical efficiency in addition to the changes in output-oriented technical efficiency improves the understanding of the dynamics of technical efficiency over the first years of transition in the former Soviet Union.

9.
Estimation and prediction in high dimensional multivariate factor stochastic volatility models constitute an important and active research area, because such models allow a parsimonious representation of multivariate stochastic volatility. Bayesian inference for factor stochastic volatility models is usually done by Markov chain Monte Carlo methods (often by particle Markov chain Monte Carlo methods), which are usually slow for high dimensional or long time series because of the large number of parameters and latent states involved. Our article makes two contributions. The first is to propose a fast and accurate variational Bayes method to approximate the posterior distribution of the states and parameters in factor stochastic volatility models. The second is to extend this batch methodology to develop fast sequential variational updates for prediction as new observations arrive. The methods are applied to simulated and real datasets, and shown to produce good approximate inference and prediction compared to the latest particle Markov chain Monte Carlo approaches, but are much faster.
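Variational Bayes itself can be shown in miniature. The sketch below runs coordinate-ascent mean-field updates for a Gaussian model with unknown mean and precision, a standard textbook example rather than the factor stochastic volatility model of the article; the data and prior settings are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy data: y_i ~ N(2, 0.5^2); we infer the mean and the precision.
n = 500
y = rng.normal(2.0, 0.5, n)
ybar, yy = y.mean(), (y ** 2).sum()

# Priors: mu | tau ~ N(mu0, 1/(lam0 * tau)), tau ~ Gamma(a0, b0).
mu0, lam0, a0, b0 = 0.0, 1.0, 1.0, 1.0
lam_n = lam0 + n
m = (lam0 * mu0 + n * ybar) / lam_n   # mean of q(mu), fixed across iterations
a = a0 + (n + 1) / 2.0                # shape of q(tau), also fixed

# Coordinate ascent: alternate the q(mu) and q(tau) updates to convergence.
E_tau = 1.0
for _ in range(50):
    s2 = 1.0 / (lam_n * E_tau)        # variance of q(mu)
    E_mu2 = m ** 2 + s2
    b = b0 + 0.5 * (yy - 2 * m * y.sum() + n * E_mu2
                    + lam0 * (E_mu2 - 2 * mu0 * m + mu0 ** 2))
    E_tau = a / b                     # expected precision under q(tau)

post_mean, post_prec = m, E_tau       # approximate posterior means
```

Each update is a closed-form expectation rather than a simulation step, which is the source of the speed advantage the abstract claims over particle MCMC in far larger models.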

10.
Bayesian experimental design is a fast-growing area of research with many real-world applications. As computational power has increased over the years, so has the development of simulation-based design methods, which involve a number of algorithms, such as Markov chain Monte Carlo, sequential Monte Carlo and approximate Bayes methods, allowing more complex design problems to be solved. The Bayesian framework provides a unified approach for incorporating prior information and/or uncertainties regarding the statistical model with a utility function which describes the experimental aims. In this paper, we provide a general overview of the concepts involved in Bayesian experimental design, and focus on describing some of the more commonly used Bayesian utility functions and methods for their estimation, as well as a number of algorithms that are used to search over the design space to find the Bayesian optimal design. We also discuss other computational strategies for further research in Bayesian optimal design.
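One commonly used Bayesian utility is the expected information gain, which for a toy linear design problem can be estimated by nested Monte Carlo as sketched below. The model y ~ N(theta * x, 1) with theta ~ N(0, 1), the grid of candidate designs, and the sample sizes are all invented for illustration (here the exact utility is 0.5 * log(1 + x^2), so the estimate can be checked).

```python
import numpy as np

rng = np.random.default_rng(3)

def eig(x, sigma=1.0, N=2000, M=500):
    """Nested Monte Carlo estimate of expected information gain for the
    toy design problem y ~ N(theta * x, sigma^2), theta ~ N(0, 1)."""
    theta = rng.standard_normal(N)                 # outer prior draws
    y = theta * x + sigma * rng.standard_normal(N)
    # log p(y_i | theta_i, x): likelihood at the generating parameter
    ll = (-0.5 * np.log(2 * np.pi * sigma ** 2)
          - (y - theta * x) ** 2 / (2 * sigma ** 2))
    # log p(y_i | x): marginal estimated from inner prior draws
    theta_in = rng.standard_normal(M)
    ll_in = (-0.5 * np.log(2 * np.pi * sigma ** 2)
             - (y[:, None] - theta_in[None, :] * x) ** 2 / (2 * sigma ** 2))
    log_marg = np.logaddexp.reduce(ll_in, axis=1) - np.log(M)
    return (ll - log_marg).mean()                  # Monte Carlo EIG

designs = np.array([0.5, 1.0, 2.0])
utilities = np.array([eig(x) for x in designs])
best = designs[np.argmax(utilities)]   # the largest |x| is most informative here
```

The search over the design space here is a trivial grid maximization; the algorithms surveyed in the paper address exactly this step when the design space is too large to enumerate.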

11.
Bayesian and Frequentist Inference for Ecological Inference: The R×C Case
In this paper we propose Bayesian and frequentist approaches to ecological inference, based on R×C contingency tables, including a covariate. The proposed Bayesian model extends the binomial-beta hierarchical model developed by King, Rosen and Tanner (1999) from the 2×2 case to the R×C case. As in the 2×2 case, the inferential procedure employs Markov chain Monte Carlo (MCMC) methods. As such, the resulting MCMC analysis is rich but computationally intensive. The frequentist approach, based on first moments rather than on the entire likelihood, provides quick inference via nonlinear least-squares, while retaining good frequentist properties. The two approaches are illustrated with simulated data, as well as with real data on voting patterns in Weimar Germany. In the final section of the paper we provide an overview of a range of alternative inferential approaches which trade off computational intensity for statistical efficiency.

12.
The interplay between the Bayesian and Frequentist approaches: a general nesting spatial panel-data model (Spatial Economic Analysis)
An econometric framework mixing the frequentist and Bayesian approaches is proposed in order to estimate a general nesting spatial model. First, it avoids specific dependency structures between unobserved heterogeneity and regressors, which improves the mixing properties of Markov chain Monte Carlo (MCMC) procedures in the presence of unobserved heterogeneity. Second, it allows model selection based on a strong statistical framework, a feature that is not easily obtained using a frequentist approach. We perform some simulation exercises, finding good performance of our approach, and apply the methodology to analyse the relation between productivity and public investment in the United States.

13.
Likelihoods and posteriors of instrumental variable (IV) regression models with strong endogeneity and/or weak instruments may exhibit rather non-elliptical contours in the parameter space. This may seriously affect inference based on Bayesian credible sets. When approximating posterior probabilities and marginal densities using Monte Carlo integration methods like importance sampling or Markov chain Monte Carlo procedures, the speed of the algorithm and the quality of the results greatly depend on the choice of the importance or candidate density. Such a density has to be 'close' to the target density in order to yield accurate results with numerically efficient sampling. For this purpose we introduce neural networks, which seem to be natural importance or candidate densities, as they have a universal approximation property and are easy to sample from. A key step in the proposed class of methods is the construction of a neural network that approximates the target density. The methods are tested on a set of illustrative IV regression models. The results indicate the possible usefulness of the neural network approach.

14.
This paper investigates whether there is time variation in the excess sensitivity of aggregate consumption growth to anticipated aggregate disposable income growth using quarterly US data over the period 1953–2014. Our empirical framework contains the possibility of stickiness in aggregate consumption growth and takes into account measurement error and time aggregation. Our empirical specification is cast into a Bayesian state-space model and estimated using Markov chain Monte Carlo (MCMC) methods. We use a Bayesian model selection approach to deal with the non-regular test for the null hypothesis of no time variation in the excess sensitivity parameter. Anticipated disposable income growth is calculated by incorporating an instrumental variables estimation approach into our MCMC algorithm. Our results suggest that the excess sensitivity parameter in the USA is stable at around 0.23 over the entire sample period. Copyright © 2016 John Wiley & Sons, Ltd.

15.
We describe a flexible geo-additive Bayesian survival model that controls, simultaneously, for spatial dependence and possible nonlinear or time-varying effects of other variables. Inference is fully Bayesian and is based on recently developed Markov chain Monte Carlo techniques. In illustrating the model we introduce a spatial dimension in modelling under-five mortality among Malawian children using data from the 2000 Malawi Demographic and Health Survey. The results show that district-level socioeconomic characteristics are important determinants of childhood mortality. More importantly, a separate spatial process produces district clustering of childhood mortality, indicating the importance of spatial effects. The visual nature of the maps presented in this paper highlights relationships that would otherwise be overlooked in standard methods.

16.
In recent years, graphical models have become a popular tool to represent dependencies among variables in many scientific areas. Typically, the objective is to discover dependence relationships that can be represented through a directed acyclic graph (DAG). The set of all conditional independencies encoded by a DAG determines its Markov property. In general, DAGs encoding the same conditional independencies are not distinguishable from observational data and can be collected into equivalence classes, each one represented by a chain graph called the essential graph (EG). However, both the DAG and EG spaces grow super-exponentially in the number of variables, and so graph structural learning requires the adoption of Markov chain Monte Carlo (MCMC) techniques. In this paper, we review some recent results on Bayesian model selection of Gaussian DAG models under a unified framework. These results are based on closed-form expressions for the marginal likelihood of a DAG and EG structure, obtained from a few suitable assumptions on the prior for the model parameters. We then introduce a general MCMC scheme that can be adopted both for model selection of DAGs and EGs, together with a couple of applications on real data sets.

17.
This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates from a high dimensional set of psychological measurements.

18.
Qualitative response models (QRMs) are analyzed from the Bayesian point of view, using diffuse and informative prior distributions. Exact finite-sample Bayesian and large-sample Bayesian and non-Bayesian estimation results are compared. In addition, the paper provides: (1) plots and discussion of the properties of likelihood functions for QRMs, (2) posterior distributions for logit models' derivatives and elasticities, (3) Bayesian prediction procedures for QRMs, (4) new estimates for the median and other fractiles of the logistic distribution, (5) posterior odds ratios for model selection problems, and (6) comparisons of two alternative Monte Carlo numerical integration procedures. It is concluded that asymptotic approximations are not accurate for small- to moderate-sized samples even when only a single input variable is used, and that operational Bayesian methods are available for providing both exact small-sample and approximate large-sample inferences for QRMs.

19.
The evaluation of seeding rules requires the use of probabilistic forecasting models both for individual matches and for the tournament. Prior papers have employed a match-level forecasting model and then used a Monte Carlo simulation of the tournament for estimating outcome probabilities, thus allowing, for example, an outcome uncertainty measure to be attached to each proposed seeding regime. However, this approach does not take into account the uncertainty that may surround parameter estimates in the underlying match-level forecasting model. We propose a Bayesian approach for addressing this problem, and illustrate it by simulating the UEFA Champions League under alternative seeding regimes. We find that the changes made in 2015 tended to increase the uncertainty over progression to the knock-out stage, but made limited difference to which clubs would contest the final.
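In the spirit of this approach, the sketch below propagates posterior parameter uncertainty through a knockout-tournament simulation by redrawing team strengths for every simulated tournament. The four-team bracket, the Gaussian "posterior" over strengths, and the logistic match model are all invented for illustration; they stand in for a fitted match-level model such as Bradley–Terry.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical posterior over four teams' strengths: in practice the mean
# and spread would come from fitting a match-level forecasting model.
mu = np.array([0.8, 0.5, 0.2, 0.0])
sd = np.array([0.3, 0.3, 0.3, 0.3])

def win_prob(s_i, s_j):
    """Logistic match model: P(team i beats team j)."""
    return 1.0 / (1.0 + np.exp(-(s_i - s_j)))

def simulate(seeding, n_sims=20000):
    """Championship probabilities under a 4-team knockout bracket,
    redrawing the strength vector each tournament so that parameter
    uncertainty correlates the simulated match outcomes."""
    wins = np.zeros(4)
    for _ in range(n_sims):
        s = rng.normal(mu, sd)           # fresh posterior draw
        a, b, c, d = seeding
        f1 = a if rng.uniform() < win_prob(s[a], s[b]) else b
        f2 = c if rng.uniform() < win_prob(s[c], s[d]) else d
        champ = f1 if rng.uniform() < win_prob(s[f1], s[f2]) else f2
        wins[champ] += 1
    return wins / n_sims

p_seeded = simulate([0, 3, 1, 2])   # seeded: strongest pair kept apart
p_random = simulate([0, 1, 2, 3])   # strongest two meet in the semi-final
```

Comparing `p_seeded` and `p_random` is the kind of seeding-regime contrast described above, with the parameter redraw inside the loop supplying the uncertainty that a fixed-parameter simulation ignores.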

20.
Markov chain Monte Carlo (MCMC) methods have become a ubiquitous tool in Bayesian analysis. This paper implements MCMC methods for Bayesian analysis of stochastic frontier models using the WinBUGS package, freely available software. General code for cross-sectional and panel data is presented and various ways of summarizing posterior inference are discussed. Several examples illustrate that analyses with models of genuine practical interest can be performed straightforwardly and model changes are easily implemented. Although WinBUGS may not be that efficient for more complicated models, it does make Bayesian inference with stochastic frontier models easily accessible for applied researchers, and its generic structure allows for a lot of flexibility in model specification.
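The paper's WinBUGS code is not reproduced here. As a sketch of the model's ingredients only, the snippet below simulates a normal/half-normal stochastic frontier and computes the classical Jondrow et al. (JLMS) point estimate of firm-level inefficiency, a non-Bayesian shortcut used purely for illustration; all parameter values are invented, and the coefficients are treated as known rather than estimated.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# Normal/half-normal stochastic frontier: y = b0 + b1*x + v - u,
# with noise v ~ N(0, sv^2) and inefficiency u ~ |N(0, su^2)|.
sv, su = 0.2, 0.4
n = 200
x = rng.uniform(1, 3, n)
u = np.abs(rng.normal(0, su, n))
y = 1.0 + 0.5 * x + rng.normal(0, sv, n) - u
eps = y - (1.0 + 0.5 * x)   # composed error at the true coefficients

# JLMS estimate of inefficiency E[u | eps]: u | eps is truncated normal
# with the moments below, so its mean has a closed form.
s2 = su ** 2 + sv ** 2
mu_star = -eps * su ** 2 / s2
s_star = su * sv / np.sqrt(s2)
r = mu_star / s_star
u_hat = s_star * (norm.pdf(r) / norm.cdf(r) + r)
te = np.exp(-u_hat)         # technical efficiency scores in (0, 1)
```

A Bayesian treatment, as in the paper, would instead report the full posterior of each firm's efficiency; the decomposition of the residual into noise and inefficiency is the common core.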
