Similar Articles
20 similar articles found.
1.
We describe a method for estimating the marginal likelihood, based on Chib (1995) and Chib and Jeliazkov (2001), when simulation from the posterior distribution of the model parameters is by the accept–reject Metropolis–Hastings (ARMH) algorithm. The method is developed for one-block and multiple-block ARMH algorithms and does not require the (typically) unknown normalizing constant of the proposal density. The problem of calculating the numerical standard error of the estimates is also considered and a procedure based on batch means is developed. Two examples, dealing with a multinomial logit model and a Gaussian regression model with non-conjugate priors, are provided to illustrate the efficiency and applicability of the method.
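
A minimal, generic sketch of the batch-means calculation of a numerical standard error referred to above; the function name, batch count, and AR(1) stand-in data are assumptions made for illustration, not the authors' code:

    import numpy as np

    def batch_means_nse(draws, n_batches=20):
        """Numerical standard error of an MCMC average via batch means: split the
        correlated simulation output into consecutive batches and use the spread
        of the batch averages to approximate the variance of the overall average."""
        draws = np.asarray(draws, dtype=float)
        n = len(draws) - len(draws) % n_batches   # drop a remainder so batches are equal-sized
        batch_avgs = draws[:n].reshape(n_batches, -1).mean(axis=1)
        return np.sqrt(batch_avgs.var(ddof=1) / n_batches)

    # stand-in for correlated MCMC output: an AR(1) series
    rng = np.random.default_rng(0)
    e = rng.normal(size=5000)
    ar1 = np.empty_like(e)
    ar1[0] = e[0]
    for t in range(1, len(e)):
        ar1[t] = 0.8 * ar1[t - 1] + e[t]
    print(ar1.mean(), "+/-", batch_means_nse(ar1))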

2.
We develop a sequential Monte Carlo (SMC) algorithm for estimating Bayesian dynamic stochastic general equilibrium (DSGE) models, wherein a particle approximation to the posterior is built iteratively through tempering the likelihood. Using two empirical illustrations consisting of the Smets and Wouters model and a larger news shock model, we show that the SMC algorithm is better suited for multimodal and irregular posterior distributions than the widely used random walk Metropolis–Hastings algorithm. We find that a more diffuse prior for the Smets and Wouters model improves its marginal data density and that a slight modification of the prior for the news shock model leads to drastic changes in the posterior inference about the importance of news shocks for fluctuations in hours worked. Unlike standard Markov chain Monte Carlo (MCMC) techniques, the SMC algorithm is well suited for parallel computing. Copyright © 2014 John Wiley & Sons, Ltd.
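
The likelihood-tempering idea can be sketched in a few lines for a toy problem; the linear bridge schedule, single random-walk mutation step, and bimodal example below are illustrative assumptions rather than the authors' implementation:

    import numpy as np

    def tempered_smc(log_prior, log_lik, prior_draw, n_particles=1000,
                     n_stages=50, rw_scale=0.25, rng=None):
        """Likelihood-tempered SMC: at stage n the particles target
        p(theta | y, phi_n) proportional to p(y | theta)**phi_n * p(theta);
        each stage reweights, resamples, and applies one random-walk MH mutation."""
        rng = rng or np.random.default_rng(0)
        phis = np.linspace(0.0, 1.0, n_stages + 1)
        theta = prior_draw(n_particles, rng)                        # particles from the prior
        loglik = np.array([log_lik(t) for t in theta])
        for phi_prev, phi in zip(phis[:-1], phis[1:]):
            logw = (phi - phi_prev) * loglik                        # incremental weights
            w = np.exp(logw - logw.max()); w /= w.sum()
            idx = rng.choice(n_particles, size=n_particles, p=w)    # multinomial resampling
            theta, loglik = theta[idx], loglik[idx]
            prop = theta + rw_scale * rng.normal(size=theta.shape)  # RW-MH mutation
            loglik_prop = np.array([log_lik(t) for t in prop])
            log_acc = (phi * loglik_prop + np.array([log_prior(t) for t in prop])
                       - phi * loglik - np.array([log_prior(t) for t in theta]))
            accept = np.log(rng.uniform(size=n_particles)) < log_acc
            theta[accept], loglik[accept] = prop[accept], loglik_prop[accept]
        return theta

    # toy bimodal posterior that a single random walk MH chain would struggle to traverse
    log_prior = lambda t: -0.5 * np.sum(t**2) / 25.0
    log_lik = lambda t: np.logaddexp(-0.5 * np.sum((t - 2)**2) / 0.1,
                                     -0.5 * np.sum((t + 2)**2) / 0.1)
    draws = tempered_smc(log_prior, log_lik, lambda n, r: r.normal(0, 5, size=(n, 1)))
    print(draws.mean(), draws.std())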

3.
The paper asks how state-of-the-art DSGE models that account for the conditional response of hours following a positive neutral technology shock compare in a marginal likelihood race. To that end we construct and estimate several competing small-scale DSGE models that extend the standard real business cycle model. In particular, we identify from the literature six different hypotheses that generate the empirically observed decline in hours worked after a positive technology shock. These models alternatively exhibit (i) sticky prices; (ii) firm entry and exit with time to build; (iii) habit in consumption and costly adjustment of investment; (iv) persistence in the permanent technology shocks; (v) labor market friction with procyclical hiring costs; and (vi) Leontief production function with labor-saving technology shocks. In terms of model posterior probabilities, impulse responses, and autocorrelations, the model favored is the one that exhibits habit formation in consumption and investment adjustment costs. A robustness test shows that the sticky price model becomes as competitive as the habit formation and costly adjustment of investment model when sticky wages are included.

4.
We improve the accuracy and speed of particle filtering for non-linear DSGE models with potentially non-normal shocks. This is done by introducing a new proposal distribution which (i) incorporates information from new observables and (ii) has a small optimization step that minimizes the distance to the optimal proposal distribution. A particle filter with this proposal distribution is shown to deliver a high level of accuracy even with relatively few particles, and it is therefore much more efficient than the standard particle filter.
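
For contrast with the improved proposal described above, a plain bootstrap particle filter (the baseline whose proposal ignores the new observation) can be sketched on a toy linear-Gaussian state space model; the model and parameter values are assumptions chosen only for illustration:

    import numpy as np

    def bootstrap_particle_filter(y, n_particles=500, rho=0.9, sigma_x=1.0,
                                  sigma_y=1.0, rng=None):
        """Bootstrap filter for x_t = rho*x_{t-1} + sigma_x*eps_t, y_t = x_t + sigma_y*eta_t:
        particles are propagated from the transition density alone, so no information
        from the new observation enters the proposal distribution."""
        rng = rng or np.random.default_rng(0)
        x = rng.normal(0.0, sigma_x / np.sqrt(1.0 - rho**2), n_particles)
        loglik = 0.0
        for yt in y:
            x = rho * x + sigma_x * rng.normal(size=n_particles)   # propagate
            logw = -0.5 * ((yt - x) / sigma_y) ** 2                # weight by observation density
            c = logw.max()
            w = np.exp(logw - c)
            loglik += c + np.log(w.mean()) - 0.5 * np.log(2 * np.pi * sigma_y**2)
            x = rng.choice(x, size=n_particles, p=w / w.sum())     # resample
        return loglik

    y_demo = np.random.default_rng(4).normal(size=50)              # stand-in observations
    print(bootstrap_particle_filter(y_demo))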

5.
This paper develops an efficient approach to modelling and forecasting time series data with an unknown number of change-points. Using a conjugate prior and conditioning on time-invariant parameters, the predictive density and the posterior distribution of the change-points have closed forms. Furthermore, the conjugate prior is modelled as hierarchical in order to exploit the information across regimes. This framework allows breaks in the variance, the regression coefficients, or both. The regime duration can be modelled as a Poisson distribution. A new, efficient Markov chain Monte Carlo sampler draws the parameters from the posterior distribution as one block. An application to a Canadian inflation series shows the gains in forecasting precision that our model provides.

6.
Sample autocorrelation coefficients are widely used to test the randomness of a time series. Despite its unsatisfactory performance, the asymptotic normal distribution is often used to approximate the distribution of the sample autocorrelation coefficients. This is mainly due to the lack of an efficient approach in obtaining the exact distribution of sample autocorrelation coefficients. In this paper, we provide an efficient algorithm for evaluating the exact distribution of the sample autocorrelation coefficients. Under the multivariate elliptical distribution assumption, the exact distribution as well as exact moments and joint moments of sample autocorrelation coefficients are presented. In addition, the exact mean and variance of various autocorrelation-based tests are provided. Actual size properties of the Box–Pierce and Ljung–Box tests are investigated, and they are shown to be poor when the number of lags is moderately large relative to the sample size. Using the exact mean and variance of the Box–Pierce test statistic, we propose an adjusted Box–Pierce test that has far superior size properties to the traditional Box–Pierce and Ljung–Box tests.
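
The two portmanteau statistics discussed above are straightforward to compute; the sketch below uses the usual chi-square(m) reference distribution, i.e., exactly the asymptotic approximation whose small-sample size distortions the paper documents (function name and lag choice are illustrative):

    import numpy as np
    from scipy import stats

    def portmanteau_tests(x, m=10):
        """Box-Pierce and Ljung-Box statistics for the first m sample autocorrelations,
        with p-values from the asymptotic chi-square(m) approximation."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        xc = x - x.mean()
        denom = np.sum(xc**2)
        r = np.array([np.sum(xc[k:] * xc[:-k]) / denom for k in range(1, m + 1)])
        bp = n * np.sum(r**2)                                           # Box-Pierce Q
        lb = n * (n + 2) * np.sum(r**2 / (n - np.arange(1, m + 1)))     # Ljung-Box Q*
        return {"BP": (bp, stats.chi2.sf(bp, m)), "LB": (lb, stats.chi2.sf(lb, m))}

    # number of lags deliberately large relative to the sample size
    print(portmanteau_tests(np.random.default_rng(1).normal(size=100), m=30))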

7.
This paper is a study of the application of Bayesian exponentially tilted empirical likelihood to inference about quantile regressions. In the case of simple quantiles we show the exact form for the likelihood implied by this method and compare it with the Bayesian bootstrap and with Jeffreys' method. For regression quantiles we derive the asymptotic form of the posterior density. We also examine Markov chain Monte Carlo simulations with a proposal density formed from an overdispersed version of the limiting normal density. We show that the algorithm works well even in models with an endogenous regressor when the instruments are not too weak. Copyright © 2009 John Wiley & Sons, Ltd.

8.
We propose a way of testing a subset of equations of a DSGE model. The test draws on statistical inference for limited information models and the use of indirect inference to test DSGE models. Using the numerical small sample distribution of our test for two subsets of equations of the Smets–Wouters model, we show that the test has accurate size and good power in small samples, and better power than using asymptotic distribution theory. In a test of the Smets–Wouters model on US Great Moderation data, we reject the specification of the wage‐price but not the expenditure sector. This points to the wage‐price sector as the source of overall model rejection.

9.
We propose a new methodology for designing flexible proposal densities for the joint posterior density of parameters and states in a nonlinear, non‐Gaussian state space model. We show that a highly efficient Bayesian procedure emerges when these proposal densities are used in an independent Metropolis–Hastings algorithm or in importance sampling. Our method provides a computationally more efficient alternative to several recently proposed algorithms. We present extensive simulation evidence for stochastic intensity and stochastic volatility models based on Ornstein–Uhlenbeck processes. For our empirical study, we analyse the performance of our methods for corporate default panel data and stock index returns. Copyright © 2016 John Wiley & Sons, Ltd.
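
The independent Metropolis–Hastings step into which such proposal densities are plugged can be written generically; the Student-t target and overdispersed normal proposal below are illustrative stand-ins, not the models or proposals constructed in the paper:

    import numpy as np

    def independence_mh(log_target, log_proposal, proposal_draw, n_draws=5000, rng=None):
        """Independence MH: candidates come from a fixed proposal density, and the
        acceptance ratio depends on the importance weights log_target - log_proposal."""
        rng = rng or np.random.default_rng(0)
        x = proposal_draw(rng)
        lw = log_target(x) - log_proposal(x)
        out = []
        for _ in range(n_draws):
            cand = proposal_draw(rng)
            lw_cand = log_target(cand) - log_proposal(cand)
            if np.log(rng.uniform()) < lw_cand - lw:
                x, lw = cand, lw_cand
            out.append(x)
        return np.array(out)

    log_target = lambda x: -2.5 * np.log(1.0 + x**2 / 4.0)   # Student-t(4) kernel
    log_proposal = lambda x: -0.5 * x**2 / 9.0               # overdispersed N(0, 3^2) kernel
    draws = independence_mh(log_target, log_proposal, lambda r: r.normal(0.0, 3.0))
    print(draws.mean(), draws.std())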

10.
In this paper, we introduce a new flexible mixed model for multinomial discrete choice where the key individual- and alternative-specific parameters of interest are allowed to follow an assumption-free nonparametric density specification, while other alternative-specific coefficients are assumed to be drawn from a multivariate Normal distribution, which eliminates the independence of irrelevant alternatives assumption at the individual level. A hierarchical specification of our model allows us to break down a complex data structure into a set of submodels with the desired features that are naturally assembled in the original system. We estimate the model using a Bayesian Markov chain Monte Carlo technique with a multivariate Dirichlet Process (DP) prior on the coefficients with nonparametrically estimated density. We employ a “latent class” sampling algorithm, which is applicable to a general class of models, including non-conjugate DP base priors. The model is applied to supermarket choices of a panel of Houston households whose shopping behavior was observed over a 24-month period in 2004–2005. We estimate the nonparametric density of two key variables of interest: the price of a basket of goods based on scanner data, and driving distance to the supermarket based on their respective locations. Our semi-parametric approach allows us to identify a complex multi-modal preference distribution, which distinguishes between inframarginal consumers and consumers who strongly value either lower prices or shopping convenience.

11.
This paper investigates the accuracy of forecasts from four dynamic stochastic general equilibrium (DSGE) models for inflation, output growth and the federal funds rate using a real‐time dataset synchronized with the Fed's Greenbook projections. Conditioning the model forecasts on the Greenbook nowcasts leads to forecasts that are as accurate as the Greenbook projections for output growth and the federal funds rate. Only for inflation are the model forecasts dominated by the Greenbook projections. A comparison with forecasts from Bayesian vector autoregressions shows that the economic structure of the DSGE models which is useful for the interpretation of forecasts does not lower the accuracy of forecasts. Combining forecasts of several DSGE models increases precision in comparison to individual model forecasts. Comparing density forecasts with the actual distribution of observations shows that DSGE models overestimate uncertainty around point forecasts. Copyright © 2013 John Wiley & Sons, Ltd.

12.
This paper develops and applies tools to assess multivariate aspects of Bayesian Dynamic Stochastic General Equilibrium (DSGE) model forecasts and their ability to predict comovements among key macroeconomic variables. We construct posterior predictive checks to evaluate conditional and unconditional density forecasts, in addition to checks for root-mean-squared errors and event probabilities associated with these forecasts. The checks are implemented on a three-equation DSGE model as well as the Smets and Wouters (2007) model using real-time data. We find that the additional features incorporated into the Smets–Wouters model do not lead to a uniform improvement in the quality of density forecasts and prediction of comovements of output, inflation, and interest rates.

13.
Dynamic Stochastic General Equilibrium (DSGE) models are now considered attractive by the profession not only from the theoretical perspective but also from an empirical standpoint. As a consequence of this development, methods for diagnosing the fit of these models are being proposed and implemented. In this article we illustrate how the concept of statistical identification, which was introduced and used by Spanos [Spanos, Aris, 1990. The simultaneous-equations model revisited: Statistical adequacy and identification. Journal of Econometrics 44, 87–105] to criticize traditional evaluation methods of Cowles Commission models, could be relevant for DSGE models. We conclude that the recently proposed model evaluation method, based on the DSGE–VAR(λ), might not satisfy the condition for statistical identification. However, our application also shows that the adoption of a FAVAR as a statistically identified benchmark leaves unaltered the support of the data for the DSGE model and that a DSGE–FAVAR can be an optimal forecasting model.

14.
In this study we examine Lewellen’s (Rev Financ Stud 15:533–563, 2002) claim that momentum in stock returns is not due to positive autocorrelation as behavioral models suggest. Using portfolio-specific data, we find the autocovariance component of the momentum profit to be negative, suggesting no return continuations. However, we also find that the autocorrelations calculated from short-term (e.g., monthly) returns are quite different from long-horizon (e.g., annual) autocorrelations. While the first-order autocorrelations of 6- and 12-month returns tend to be negative, the autocorrelations across twelve lags in monthly returns of the industry, size, and B/M portfolios are in general positive. Our results show that these portfolios exhibit return continuations when returns are measured on a monthly basis. Therefore, our finding appears to be consistent with the behavioral models, which suggest positive autocorrelation in stock returns.

15.
Large Bayesian VARs with stochastic volatility are increasingly used in empirical macroeconomics. The key to making these highly parameterized VARs useful is the use of shrinkage priors. We develop a family of priors that captures the best features of two prominent classes of shrinkage priors: adaptive hierarchical priors and Minnesota priors. Like adaptive hierarchical priors, these new priors ensure that only ‘small’ coefficients are strongly shrunk to zero, while ‘large’ coefficients remain intact. At the same time, these new priors can also incorporate many useful features of the Minnesota priors such as cross-variable shrinkage and shrinking coefficients on higher lags more aggressively. We introduce a fast posterior sampler to estimate BVARs with this family of priors—for a BVAR with 25 variables and 4 lags, obtaining 10,000 posterior draws takes about 3 min on a standard desktop computer. In a forecasting exercise, we show that these new priors outperform both adaptive hierarchical priors and Minnesota priors.
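
The Minnesota-style ingredients the new prior family builds on (lag-decaying shrinkage and tighter cross-variable shrinkage) can be sketched as prior variances for the VAR coefficients; the parameterization and hyperparameter values below are one common convention chosen for illustration, not the paper's new prior:

    import numpy as np

    def minnesota_prior_variances(sigma2, p, kappa1=0.04, kappa2=0.0025, lag_decay=2.0):
        """Prior variance of the coefficient on lag l of variable j in equation i:
        own lags get kappa1 / l**lag_decay, cross-variable lags get the tighter
        kappa2 / l**lag_decay scaled by the ratio of residual variances."""
        n = len(sigma2)
        V = np.empty((n, n, p))
        for l in range(1, p + 1):
            for i in range(n):
                for j in range(n):
                    base = kappa1 if i == j else kappa2 * sigma2[i] / sigma2[j]
                    V[i, j, l - 1] = base / l**lag_decay
        return V

    # example: 3-variable VAR(4) with unit residual variances; print the lag-1 block
    print(minnesota_prior_variances(np.ones(3), p=4)[:, :, 0])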

16.
Addressing practical problems that arise in wartime transportation of maintenance materiel, such as multimodal transport and mandatory transit points, and treating the travel time on each road segment and the time needed to switch transport modes as random variables, we build a stochastic chance-constrained model that minimizes time and cost subject to a maximum-confidence-level constraint on a given time limit. Based on the characteristics of the model, an ant colony algorithm driven by stochastic simulation is designed to solve it. A numerical example shows that the model and the algorithm are effective and can provide useful decision support for selecting wartime transportation routes.
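
The stochastic-simulation check at the heart of such a chance-constrained model can be sketched as follows; the segment time distributions, confidence level, and time limit are invented purely for illustration and are not taken from the paper:

    import numpy as np

    def chance_constraint_satisfied(time_samplers, t_max, alpha=0.9, n_sim=5000, rng=None):
        """Monte Carlo check of a chance constraint: estimate
        P(total travel + transfer time of a candidate route <= t_max)
        and compare it with the required confidence level alpha."""
        rng = rng or np.random.default_rng(0)
        totals = sum(sampler(n_sim, rng) for sampler in time_samplers)
        return float(np.mean(totals <= t_max)) >= alpha

    # a candidate route: two road segments plus one mode transfer, all with random durations
    route = [lambda n, r: r.normal(5.0, 1.0, n),       # segment 1 travel time
             lambda n, r: r.normal(7.0, 2.0, n),       # segment 2 travel time
             lambda n, r: r.exponential(1.5, n)]       # mode-transfer time
    print(chance_constraint_satisfied(route, t_max=18.0, alpha=0.9))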

17.
We take as a starting point the existence of a joint distribution implied by different dynamic stochastic general equilibrium (DSGE) models, all of which are potentially misspecified. Our objective is to compare “true” joint distributions with ones generated by given DSGEs. This is accomplished via comparison of the empirical joint distributions (or confidence intervals) of historical and simulated time series. The tool draws on recent advances in the theory of the bootstrap, Kolmogorov type testing, and other work on the evaluation of DSGEs, aimed at comparing the second order properties of historical and simulated time series. We begin by fixing a given model as the “benchmark” model, against which all “alternative” models are to be compared. We then test whether at least one of the alternative models provides a more “accurate” approximation to the true cumulative distribution than does the benchmark model, where accuracy is measured in terms of distributional square error. Bootstrap critical values are discussed, and an illustrative example is given, in which it is shown that alternative versions of a standard DSGE model in which calibrated parameters are allowed to vary slightly perform equally well. On the other hand, there are stark differences between models when the shocks driving the models are assigned non-plausible variances and/or distributional assumptions.
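
A simplified stand-in for the distributional-square-error comparison described above, using empirical CDFs of a historical series and of two simulated series on a common grid (the data and grid are illustrative, and the bootstrap critical values are omitted):

    import numpy as np

    def cdf_square_error(historical, simulated, grid_size=200):
        """Mean squared distance between the empirical CDFs of historical and
        model-simulated data, evaluated on a common grid."""
        lo = min(historical.min(), simulated.min())
        hi = max(historical.max(), simulated.max())
        grid = np.linspace(lo, hi, grid_size)
        F_hist = np.searchsorted(np.sort(historical), grid, side="right") / len(historical)
        F_sim = np.searchsorted(np.sort(simulated), grid, side="right") / len(simulated)
        return np.mean((F_hist - F_sim) ** 2)

    rng = np.random.default_rng(2)
    hist = rng.normal(0.0, 1.0, 500)
    print(cdf_square_error(hist, rng.normal(0.0, 1.0, 500)),   # "benchmark" model
          cdf_square_error(hist, rng.normal(0.5, 2.0, 500)))   # badly misspecified alternative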

18.
In spite of the current availability of numerous methods of cluster analysis, evaluating a clustering configuration is questionable without the definition of a true population structure, representing the ideal partition that clustering methods should try to approximate. A precise statistical notion of cluster, unshared by most of the mainstream methods, is provided by the density‐based approach, which assumes that clusters are associated to some specific features of the probability distribution underlying the data. The non‐parametric formulation of this approach, known as modal clustering, draws a correspondence between the groups and the modes of the density function. An appealing implication is that the ill‐specified task of cluster detection can be regarded as a more circumscribed problem of estimation, and the number of clusters is also conceptually well defined. In this work, modal clustering is critically reviewed from both conceptual and operational standpoints. The main directions of current research are outlined, as well as some challenges and avenues for further research.
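
A minimal operational example of modal clustering: estimate the density nonparametrically and assign each point to the mode it climbs to, here via mean shift (the simulated data and bandwidth rule are illustrative, and this is just one possible implementation among those the review discusses):

    import numpy as np
    from sklearn.cluster import MeanShift, estimate_bandwidth

    # three well-separated Gaussian groups in two dimensions
    rng = np.random.default_rng(3)
    X = np.vstack([rng.normal(loc, 0.4, size=(100, 2)) for loc in (-2.0, 0.0, 2.5)])

    # the bandwidth controls how many density modes -- and hence clusters -- are found
    bw = estimate_bandwidth(X, quantile=0.2)
    labels = MeanShift(bandwidth=bw).fit_predict(X)
    print("modes found:", len(np.unique(labels)))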

19.
A tutorial derivation of the reversible jump Markov chain Monte Carlo (MCMC) algorithm is given. Various examples illustrate how reversible jump MCMC is a general framework for Metropolis-Hastings algorithms where the proposal and the target distribution may have densities on spaces of varying dimension. It is finally discussed how reversible jump MCMC can be applied in genetics to compute the posterior distribution of the number, locations, effects, and genotypes of putative quantitative trait loci.
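
In the notation commonly used for this algorithm (written here for illustration, not quoted from the tutorial), the acceptance probability for a proposed move from (k, θ_k) to (k', θ_{k'}), generated by drawing auxiliary variables u and applying an invertible dimension-matching map (θ_{k'}, u') = T(θ_k, u), is

    \alpha = \min\left\{ 1,\;
        \frac{\pi(k',\theta_{k'}\mid y)\, j(k\mid k')\, g_{k'\to k}(u')}
             {\pi(k,\theta_k\mid y)\, j(k'\mid k)\, g_{k\to k'}(u)}
        \left| \frac{\partial(\theta_{k'},u')}{\partial(\theta_k,u)} \right| \right\},

where j(· | k) is the model-jump proposal probability, g is the density of the auxiliary variables, and the last factor is the Jacobian of the dimension-matching map T.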

20.
Contrasting sharply with a recent trend in DSGE modeling, we propose a business cycle model where frictions and shocks are chosen with parsimony. The model emphasizes a few labor-market frictions and shocks to monetary policy and technology. The model, estimated from U.S. quarterly postwar data, accounts well for important differences in the serial correlation of the growth rates of aggregate quantities, the size of aggregate fluctuations and key comovements, including the correlation between hours and labor productivity. Despite its simplicity, the model offers an answer to the persistence problem (Chari et al., 2000) that does not rely on multiple frictions and adjustment lags or ad hoc backward-looking components. We conclude that modern DSGE models need not embed large batteries of frictions and shocks to account for the salient features of postwar business cycles.
