Similar Literature
20 similar records found
1.
The paper considers an elementary New-Keynesian three-equation model and compares its Bayesian estimation based on conventional priors to the results from the method of moments (MM), which seeks to match a finite set of the model-generated second moments of inflation, output and the interest rate to their empirical counterparts. It is found that in the Great Inflation (GI) period—though not quite in the Great Moderation (GM)—the two estimations imply a significantly different covariance structure. Regarding the parameters, special emphasis is placed on the degree of backward-looking behaviour in the Phillips curve. While, in line with much of the literature, it plays a minor role in the Bayesian estimations, MM yields values of the price indexation parameter close to or even at its maximal value of unity. For both GI and GM, these results are worth noticing since in (strong or, respectively, weak) contrast to the Bayesian parameters, the covariance matching thus achieved appears rather satisfactory.
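The moment-matching idea can be sketched in a few lines: choose the structural parameters that minimize a weighted distance between model-implied and empirical second moments. The sketch below is a minimal illustration, not the paper's exact setup: `simulate_model(theta)` is an assumed user-supplied simulator returning series of inflation, output and the interest rate, and the choice of lag-1 moments and weighting matrix is illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def empirical_moments(series):
    """Stack the contemporaneous covariances and lag-1 autocovariances of
    inflation, output and the interest rate into a single moment vector."""
    d = series - series.mean(axis=0)
    cov0 = d.T @ d / len(d)              # contemporaneous second moments
    cov1 = d[1:].T @ d[:-1] / len(d)     # first-order autocovariances
    return np.concatenate([cov0[np.triu_indices(3)], cov1.ravel()])

def mm_objective(theta, data, simulate_model, W):
    """Weighted distance between model-implied and empirical moments;
    `simulate_model(theta)` stands in for the user's model simulator."""
    gap = empirical_moments(simulate_model(theta)) - empirical_moments(data)
    return gap @ W @ gap

# Usage sketch:
# result = minimize(mm_objective, theta0,
#                   args=(data, simulate_model, W), method="Nelder-Mead")
```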

2.
State space models play an important role in macroeconometric analysis and the Bayesian approach has been shown to have many advantages. This paper outlines recent developments in state space modelling applied to macroeconomics using Bayesian methods. We outline the directions of recent research, specifically the problems being addressed and the solutions proposed. After presenting a general form for the linear Gaussian model, we discuss the interpretations and virtues of alternative estimation routines and their outputs. This discussion includes the Kalman filter and smoother, and precision-based algorithms. As the advantages of using large models have become better understood, a focus has developed on dimension reduction and computational advances to cope with high-dimensional parameter spaces. We give an overview of a number of recent advances in these directions. Many models suggested by economic theory are either non-linear or non-Gaussian, or both. We discuss work on the particle filtering approach to such models as well as other techniques that use various approximations – to either the state and measurement equations or to the full posterior for the states – to obtain draws.
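Since the Kalman filter is the workhorse this survey builds on, a minimal textbook implementation for the linear Gaussian model may help fix ideas. The matrix names (T, Z, Q, H) follow one common state space convention and are an assumption here, not necessarily the survey's notation.

```python
import numpy as np

def kalman_filter(y, T, Z, Q, H, a0, P0):
    """Kalman filter for the linear Gaussian state space model
        alpha_t = T @ alpha_{t-1} + eta_t,   eta_t ~ N(0, Q)
        y_t     = Z @ alpha_t     + eps_t,   eps_t ~ N(0, H).
    Returns the filtered state means and the log-likelihood."""
    a, P, loglik, filtered = a0, P0, 0.0, []
    for yt in y:
        a, P = T @ a, T @ P @ T.T + Q           # prediction step
        v = yt - Z @ a                          # one-step-ahead forecast error
        F = Z @ P @ Z.T + H                     # forecast error variance
        Finv = np.linalg.inv(F)
        K = P @ Z.T @ Finv                      # Kalman gain
        a, P = a + K @ v, P - K @ Z @ P         # update step
        loglik -= 0.5 * (np.log(np.linalg.det(2 * np.pi * F)) + v @ Finv @ v)
        filtered.append(a)
    return np.array(filtered), loglik
```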

3.
We propose and study the finite‐sample properties of a modified version of the self‐perturbed Kalman filter of Park and Jun (Electronics Letters 1992; 28: 558–559) for the online estimation of models subject to parameter instability. The perturbation term in the updating equation of the state covariance matrix is weighted by the estimate of the measurement error variance. This avoids the calibration of a design parameter as the perturbation term is scaled by the amount of uncertainty in the data. It is shown by Monte Carlo simulations that this perturbation method is associated with a good tracking of the dynamics of the parameters compared to other online algorithms and to classical and Bayesian methods. The standardized self‐perturbed Kalman filter is adopted to forecast the equity premium on the S&P 500 index under several model specifications, and to determine the extent to which realized variance can be used to predict excess returns. Copyright © 2016 John Wiley & Sons, Ltd.
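A rough reading of the mechanism: at each step, add a perturbation term to the state covariance scaled by a running estimate of the measurement error variance, so the data's own noise level replaces a hand-calibrated design constant. The sketch below applies this idea to a regression with drifting coefficients; `perturb` and `decay` are illustrative tuning choices, and the paper's exact update may differ.

```python
import numpy as np

def self_perturbed_kf(y, X, perturb=0.01, decay=0.99, sigma2=1.0):
    """Online tracking of drifting coefficients in y_t = x_t' beta_t + e_t,
    with the covariance perturbation standardized by an estimate of the
    measurement error variance (illustrative sketch)."""
    n, k = X.shape
    beta, P = np.zeros(k), np.eye(k)
    path = np.empty((n, k))
    for t in range(n):
        x = X[t]
        e = y[t] - x @ beta                            # one-step prediction error
        sigma2 = decay * sigma2 + (1 - decay) * e**2   # noise variance estimate
        P = P + perturb * sigma2 * np.eye(k)           # standardized perturbation
        S = x @ P @ x + sigma2                         # innovation variance
        K = P @ x / S                                  # gain
        beta = beta + K * e
        P = P - np.outer(K, x) @ P
        path[t] = beta
    return path
```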

4.
We describe procedures for Bayesian estimation and testing in cross-sectional, panel data and nonlinear smooth coefficient models. The smooth coefficient model is a generalization of the partially linear or additive model wherein coefficients on linear explanatory variables are treated as unknown functions of an observable covariate. In the approach we describe, points on the regression lines are regarded as unknown parameters and priors are placed on differences between adjacent points to introduce the potential for smoothing the curves. The algorithms we describe are quite simple to implement—for example, estimation, testing and smoothing parameter selection can be carried out analytically in the cross-sectional smooth coefficient model.
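One way to see the "prior on differences between adjacent points" device: treat beta(z) as a step function on a grid and let a random-walk prior shrink neighbouring values together, which turns the posterior mean into a penalized least-squares solve. The sketch below is a loose reading under those assumptions; the grid construction, `n_grid` and `tau` are all illustrative.

```python
import numpy as np

def smooth_coefficient_fit(y, x, z, n_grid=50, tau=10.0):
    """Posterior-mean sketch for y_i = beta(z_i) * x_i + e_i with a
    random-walk prior (precision tau) on adjacent grid values of beta."""
    grid = np.linspace(z.min(), z.max(), n_grid)
    idx = np.clip(np.searchsorted(grid, z), 0, n_grid - 1)
    B = np.zeros((len(y), n_grid))
    B[np.arange(len(y)), idx] = x               # x enters with coefficient beta(z_i)
    D = np.diff(np.eye(n_grid), axis=0)         # first-difference (random-walk) matrix
    beta = np.linalg.solve(B.T @ B + tau * D.T @ D, B.T @ y)
    return grid, beta
```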

5.
We develop a Bayesian random compressed multivariate heterogeneous autoregressive (BRC-MHAR) model to forecast the realized covariance matrices of stock returns. The proposed model randomly compresses the predictors and reduces the number of parameters. We also construct several competing multivariate volatility models with alternative shrinkage methods to compress the parameter dimensions. We compare the forecast performances of the proposed models with the competing models based on both statistical and economic evaluations. The results of statistical evaluation suggest that the BRC-MHAR models have better forecast precision than the competing models for the short-term horizon. The results of economic evaluation suggest that the BRC-MHAR models are superior to the competing models in terms of average return, the Sharpe ratio and economic value.
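Random compression in its simplest form replaces a wide predictor matrix X with X @ Phi for a random projection Phi, fits a small regression, and averages forecasts over draws of Phi. The sketch below is a stand-in for the paper's Bayesian averaging over random compressions of the MHAR predictors; `m` and `n_draws` are illustrative.

```python
import numpy as np

def random_compressed_forecast(y, X, m=10, n_draws=100, seed=0):
    """Average in-sample fit across random projections of the predictors
    (a simplified, non-Bayesian stand-in for random compression)."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_draws):
        Phi = rng.standard_normal((X.shape[1], m)) / np.sqrt(m)
        Z = X @ Phi                              # compressed predictors
        coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
        preds.append(Z @ coef)
    return np.mean(preds, axis=0)
```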

6.
Journal of Econometrics, 2005, 124(2): 311–334
We introduce a set of new Markov chain Monte Carlo algorithms for Bayesian analysis of the multinomial probit model. Our Bayesian representation of the model places a new, and possibly improper, prior distribution directly on the identifiable parameters and thus is relatively easy to interpret and use. Our algorithms, which are based on the method of marginal data augmentation, involve only draws from standard distributions and dominate other available Bayesian methods in that they are as quick to converge as the fastest methods but with a more attractive prior specification. C-code along with an R interface for our algorithms is publicly available.
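Marginal data augmentation for the multinomial probit is involved, but its simpler cousin — Albert and Chib's data-augmentation Gibbs sampler for the binary probit — shows the basic draw-latent-utilities-then-draw-coefficients loop. This is a stand-in illustration under a flat prior on beta, not the paper's algorithm.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(y, X, n_iter=2000, seed=0):
    """Albert-Chib data-augmentation Gibbs sampler for binary probit
    with a flat prior on the coefficients."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = np.zeros(k)
    draws = np.empty((n_iter, k))
    for it in range(n_iter):
        mu = X @ beta
        # latent utilities z ~ N(mu, 1), truncated to match observed outcomes
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
        # conditional posterior of beta given z is Gaussian
        beta = rng.multivariate_normal(XtX_inv @ X.T @ z, XtX_inv)
        draws[it] = beta
    return draws
```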

7.
This paper analyzes the endogeneity bias problem caused by associations of members within a network when the spatial autoregressive (SAR) model is used to study social interactions. When there are unobserved factors that affect both friendship decisions and economic outcomes, the spatial weight matrix (sociomatrix; adjacency matrix) in the SAR model, which represents the structure of a friendship network, might correlate with the disturbance term of the model, and consequently result in an endogenous selection problem in the outcomes. We consider this problem of selection bias with a modeling approach. In this approach, a statistical network model is adopted to explain the endogenous network formation process. By specifying unobserved components in both the network model and the SAR model, we capture the correlation between the processes of network and outcome formation, and propose a proper estimation procedure for the system. We demonstrate that the estimation of this system can be effectively done by using the Bayesian method. We provide a Monte Carlo experiment and an empirical application of this modeling approach on the friendship networks of high school students and their interactions on academic performance in the Add Health data. Copyright © 2015 John Wiley & Sons, Ltd.
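For reference, the SAR model in question has the reduced form y = (I − ρW)⁻¹(Xβ + ε). A minimal Gaussian log-likelihood sketch — treating the sociomatrix W as fixed and exogenous, which is exactly the assumption the paper's joint network/outcome model relaxes — looks like this:

```python
import numpy as np

def sar_loglik(y, W, X, beta, rho, sigma2):
    """Gaussian log-likelihood of the SAR model y = rho*W*y + X*beta + eps,
    with W treated as exogenous (the baseline the paper critiques)."""
    n = len(y)
    A = np.eye(n) - rho * W
    u = A @ y - X @ beta
    _, logdet = np.linalg.slogdet(A)             # Jacobian term log|I - rho*W|
    return logdet - 0.5 * n * np.log(2 * np.pi * sigma2) - u @ u / (2 * sigma2)
```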

8.
We examine the difference between Bayesian and frequentist statistics in making statements about the relationship between observable values. We show how standard models under both paradigms can be based on an assumption of exchangeability and we derive useful covariance and correlation results for values from an exchangeable sequence. We find that such values are never negatively correlated, and are generally positively correlated under the models used in Bayesian statistics. We discuss the significance of this result as well as a phenomenon which often follows from the differing methodologies and practical applications of these paradigms – a phenomenon we call Bayes' effect.
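The non-negativity result can be reconstructed from the law of total covariance under a de Finetti-type representation in which the values are i.i.d. given a parameter θ; for i ≠ j, conditional independence kills the first term (a sketch of the standard argument, which the abstract appears to invoke):

```latex
\operatorname{Cov}(X_i, X_j)
  = \mathbb{E}\!\left[\operatorname{Cov}(X_i, X_j \mid \theta)\right]
  + \operatorname{Cov}\!\left(\mathbb{E}[X_i \mid \theta], \mathbb{E}[X_j \mid \theta]\right)
  = \operatorname{Var}\!\left(\mathbb{E}[X_1 \mid \theta]\right) \;\ge\; 0.
```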

9.
We propose a general class of models and a unified Bayesian inference methodology for flexibly estimating the density of a response variable conditional on a possibly high-dimensional set of covariates. Our model is a finite mixture of component models with covariate-dependent mixing weights. The component densities can belong to any parametric family, with each model parameter being a deterministic function of covariates through a link function. Our MCMC methodology allows for Bayesian variable selection among the covariates in the mixture components and in the mixing weights. The model’s parameterization and variable selection prior are chosen to prevent overfitting. We use simulated and real data sets to illustrate the methodology.
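To make the structure concrete, here is a sketch of evaluating such a conditional density for one special case: K Gaussian components whose means are linear in x and whose mixing weights follow a multinomial logit in x. The parameter arrays (`W_gate`, `B_mu`, `log_sigma`) are an illustrative parameterization, not the paper's general link-function setup.

```python
import numpy as np
from scipy.stats import norm

def smooth_mixture_density(y, X, W_gate, B_mu, log_sigma):
    """Conditional density of a finite mixture with covariate-dependent
    (multinomial-logit) mixing weights and Gaussian components."""
    a = X @ W_gate                               # (n, K) gating scores
    a -= a.max(axis=1, keepdims=True)
    w = np.exp(a)
    w /= w.sum(axis=1, keepdims=True)            # covariate-dependent weights
    mu = X @ B_mu                                # (n, K) component means
    comp = norm.pdf(y[:, None], loc=mu, scale=np.exp(log_sigma))
    return (w * comp).sum(axis=1)
```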

10.
The paper discusses the asymptotic validity of posterior inference of pseudo‐Bayesian quantile regression methods with complete or censored data when an asymmetric Laplace likelihood is used. The asymmetric Laplace likelihood has a special place in the Bayesian quantile regression framework because the usual quantile regression estimator can be derived as the maximum likelihood estimator under such a model, and this working likelihood enables highly efficient Markov chain Monte Carlo algorithms for posterior sampling. However, it seems to be under‐recognised that the stationary distribution for the resulting posterior does not provide valid posterior inference directly. We demonstrate that a simple adjustment to the covariance matrix of the posterior chain leads to asymptotically valid posterior inference. Our simulation results confirm that the posterior inference, when appropriately adjusted, is an attractive alternative to other asymptotic approximations in quantile regression, especially in the presence of censored data.  
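The working likelihood itself is simple to write down: the asymmetric Laplace log-density reduces to the check (pinball) loss, so its maximizer is the usual quantile regression estimator. A minimal sketch (the covariance adjustment the paper proposes is a separate post-processing step, not shown here):

```python
import numpy as np

def al_loglik(beta, y, X, tau, sigma=1.0):
    """Asymmetric Laplace working log-likelihood for quantile regression
    at quantile tau: f(u) = tau*(1-tau)/sigma * exp(-rho_tau(u)/sigma)."""
    u = y - X @ beta
    rho = u * (tau - (u < 0))                    # check (pinball) loss
    return len(y) * np.log(tau * (1 - tau) / sigma) - rho.sum() / sigma
```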

11.
We propose a new methodology for designing flexible proposal densities for the joint posterior density of parameters and states in a nonlinear, non‐Gaussian state space model. We show that a highly efficient Bayesian procedure emerges when these proposal densities are used in an independent Metropolis–Hastings algorithm or in importance sampling. Our method provides a computationally more efficient alternative to several recently proposed algorithms. We present extensive simulation evidence for stochastic intensity and stochastic volatility models based on Ornstein–Uhlenbeck processes. For our empirical study, we analyse the performance of our methods for corporate default panel data and stock index returns. Copyright © 2016 John Wiley & Sons, Ltd.

12.
Bayesian model selection with posterior probabilities and no subjective prior information is generally not possible because the Bayes factors are ill‐defined. Using careful consideration of the parameter of interest in cointegration analysis and a re‐specification of the triangular model of Phillips (Econometrica, Vol. 59, pp. 283–306, 1991), this paper presents an approach that allows for Bayesian comparison of models of cointegration with ‘ignorance’ priors. Using the concept of Stiefel and Grassmann manifolds, diffuse priors are specified on the dimension and direction of the cointegrating space. The approach is illustrated using a simple term structure of interest rates model.

13.
Predicting the evolution of mortality rates plays a central role for life insurance and pension funds. Various stochastic frameworks have been developed to model mortality patterns by taking into account the main stylized facts driving these patterns. However, relying on the prediction of one specific model can be too restrictive and can lead to some well-documented drawbacks, including model misspecification, parameter uncertainty, and overfitting. To address these issues we first consider mortality modeling in a Bayesian negative-binomial framework to account for overdispersion and the uncertainty about the parameter estimates in a natural and coherent way. Model averaging techniques are then considered as a response to model misspecifications. In this paper, we propose two methods based on leave-future-out validation and compare them to standard Bayesian model averaging (BMA) based on marginal likelihood. An intensive numerical study is carried out over a large range of simulation setups to compare the performances of the proposed methodologies. An illustration is then proposed on real-life mortality datasets, along with a sensitivity analysis to a Covid-type scenario. Overall, we found that both methods based on an out-of-sample criterion outperform the standard BMA approach in terms of prediction performance and robustness.
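One natural way to turn leave-future-out validation into model weights is to exponentiate each model's summed out-of-sample log predictive density (a pseudo-BMA-style rule). The sketch below shows that construction; it is one plausible variant, not necessarily either of the paper's two methods.

```python
import numpy as np

def lfo_weights(log_pred_dens):
    """Model weights from leave-future-out validation, where
    log_pred_dens[k, t] is model k's log predictive density for the
    held-out observation at time t."""
    scores = log_pred_dens.sum(axis=1)           # total out-of-sample log score
    scores -= scores.max()                       # stabilize the exponentials
    w = np.exp(scores)
    return w / w.sum()
```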

14.
This paper develops methods for estimating and forecasting in Bayesian panel vector autoregressions of large dimensions with time‐varying parameters and stochastic volatility. We exploit a hierarchical prior that takes into account possible pooling restrictions involving both VAR coefficients and the error covariance matrix, and propose a Bayesian dynamic learning procedure that controls for various sources of model uncertainty. We tackle computational concerns by means of a simulation‐free algorithm that relies on analytical approximations to the posterior. We use our methods to forecast inflation rates in the eurozone and show that these forecasts are superior to alternative methods for large vector autoregressions.

15.
A. S. Young, Metrika, 1987, 34(1): 325–339
We treat the model selection problem in regression as a decision problem in which the decisions are the alternative predictive distributions based on the different sub-models and the parameter space is the set of possible future values of the regressand. The loss function balances out the conflicting needs for a predictive distribution with mean close to the true value of y but without too great a variation. The treatment is Bayesian and the criterion derived is a Bayesian generalization of Mallows' (1973) C_p, the Bivar criterion (Young 1982) and AIC (Akaike 1974). An application using a graphical sensitivity analysis is presented.
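For orientation, the classical criterion being generalized is Mallows' C_p for a sub-model with p regressors, with the error variance estimated from the full model; sub-models whose C_p is close to p are judged to predict well:

```latex
C_p = \frac{\mathrm{RSS}_p}{\hat{\sigma}^2} - n + 2p .
```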

16.
A Bayesian hierarchical mixed model is developed for multiple comparisons under a simple order restriction. The model facilitates inferences on the successive differences of the population means, for which we choose independent prior distributions that are mixtures of an exponential distribution and a discrete distribution with its entire mass at zero. We employ Markov Chain Monte Carlo (MCMC) techniques to obtain parameter estimates and estimates of the posterior probabilities that any two of the means are equal. The latter estimates allow one both to determine if any two means are significantly different and to test the homogeneity of all of the means. We investigate the performance of the model-based inferences with simulated data sets, focusing on parameter estimation and successive-mean comparisons using posterior probabilities. We then illustrate the utility of the model in an application based on data from a study designed to reduce lead blood concentrations in children with elevated levels. Our results show that the proposed hierarchical model can effectively unify parameter estimation, tests of hypotheses and multiple comparisons in one setting.
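The prior structure is easy to visualize by sampling from it: each successive difference of means is exactly zero with some probability (the point mass that allows ties) and exponential otherwise, which enforces the simple order restriction μ₁ ≤ … ≤ μ_{k+1}. Hyperparameter values below are purely illustrative.

```python
import numpy as np

def sample_prior_differences(k, p_zero=0.5, rate=1.0, size=1000, seed=0):
    """Draws from a mixture prior on k successive mean differences:
    zero with probability p_zero, Exponential(rate) otherwise."""
    rng = np.random.default_rng(seed)
    is_zero = rng.random((size, k)) < p_zero
    expo = rng.exponential(1.0 / rate, size=(size, k))
    return np.where(is_zero, 0.0, expo)
```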

17.
Bayesian experimental design is a fast growing area of research with many real‐world applications. As computational power has increased over the years, so has the development of simulation‐based design methods, which involve a number of algorithms, such as Markov chain Monte Carlo, sequential Monte Carlo and approximate Bayes methods, facilitating more complex design problems to be solved. The Bayesian framework provides a unified approach for incorporating prior information and/or uncertainties regarding the statistical model with a utility function which describes the experimental aims. In this paper, we provide a general overview on the concepts involved in Bayesian experimental design, and focus on describing some of the more commonly used Bayesian utility functions and methods for their estimation, as well as a number of algorithms that are used to search over the design space to find the Bayesian optimal design. We also discuss other computational strategies for further research in Bayesian optimal design.
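One of the most common utilities in this literature is the expected information gain, typically estimated by nested Monte Carlo. The sketch below assumes user-supplied callbacks `sample_prior`, `simulate` and `log_lik` (assumed interfaces, not from the paper); searching over designs is the separate step the survey also covers.

```python
import numpy as np

def expected_info_gain(design, sample_prior, simulate, log_lik,
                       n_outer=500, n_inner=500):
    """Nested Monte Carlo estimate of the expected information gain
    U(d) = E[log p(y|theta,d) - log p(y|d)] for a candidate design."""
    thetas = sample_prior(n_outer)
    inner = sample_prior(n_inner)
    total = 0.0
    for theta in thetas:
        y = simulate(theta, design)
        ll = log_lik(y, theta, design)
        # log p(y | d): average the likelihood over inner prior draws
        lls = np.array([log_lik(y, th, design) for th in inner])
        log_marg = np.logaddexp.reduce(lls) - np.log(n_inner)
        total += ll - log_marg
    return total / n_outer
```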

18.
In this work, we propose a novel framework for density forecast combination by constructing time-varying weights based on time-varying features. Our framework estimates weights in the forecast combination via Bayesian log predictive scores, in which the optimal forecast combination is determined by time series features from historical information. In particular, we use an automatic Bayesian variable selection method to identify the importance of different features. To this end, our approach has better interpretability compared to other black-box forecasting combination schemes. We apply our framework to stock market data and M3 competition data. Based on our structure, a simple maximum-a-posteriori scheme outperforms benchmark methods, and Bayesian variable selection can further enhance the accuracy for both point forecasts and density forecasts.
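One illustrative reading of "time-varying weights built from time-varying features" is a softmax mapping from the feature vector at each date to per-model combination weights; the paper estimates such a mapping via Bayesian log predictive scores with variable selection on the features, which the sketch below does not attempt.

```python
import numpy as np

def feature_based_weights(features, gamma):
    """Time-varying combination weights as a softmax of time series
    features: features[t] is the feature vector at time t and gamma
    maps features to per-model scores (illustrative construction)."""
    a = features @ gamma                         # (T, K) model scores
    a -= a.max(axis=1, keepdims=True)
    w = np.exp(a)
    return w / w.sum(axis=1, keepdims=True)
```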

19.
In this paper we investigate a spatial Durbin error model with finite distributed lags and consider the Bayesian MCMC estimation of the model with a smoothness prior. We also study the corresponding Bayesian model selection procedure for the spatial Durbin error model, the spatial autoregressive model and the matrix exponential spatial specification model. We derive expressions of the marginal likelihood of the three models, which greatly simplify the model selection procedure. Simulation results suggest that the Bayesian estimates of high order spatial distributed lag coefficients are more precise than the maximum likelihood estimates. When the data are generated with a general declining pattern or a unimodal pattern for lag coefficients, the spatial Durbin error model can better capture the pattern than the SAR and the MESS models in most cases. We apply the procedure to study the effect of right to work (RTW) laws on manufacturing employment.

20.
This paper investigates whether there is time variation in the excess sensitivity of aggregate consumption growth to anticipated aggregate disposable income growth using quarterly US data over the period 1953–2014. Our empirical framework contains the possibility of stickiness in aggregate consumption growth and takes into account measurement error and time aggregation. Our empirical specification is cast into a Bayesian state‐space model and estimated using Markov chain Monte Carlo (MCMC) methods. We use a Bayesian model selection approach to deal with the non‐regular test for the null hypothesis of no time variation in the excess sensitivity parameter. Anticipated disposable income growth is calculated by incorporating an instrumental variables estimation approach into our MCMC algorithm. Our results suggest that the excess sensitivity parameter in the USA is stable at around 0.23 over the entire sample period. Copyright © 2016 John Wiley & Sons, Ltd.
