Similar literature
 20 similar documents found (search time: 31 ms)
1.
Bayesian priors are often used to restrain the otherwise highly over‐parametrized vector autoregressive (VAR) models. The currently available Bayesian VAR methodology does not allow the user to specify prior beliefs about the unconditional mean, or steady state, of the system. This is unfortunate as the steady state is something that economists usually claim to know relatively well. This paper develops easily implemented methods for analyzing both stationary and cointegrated VARs, in reduced or structural form, with an informative prior on the steady state. We document that prior information on the steady state leads to substantial gains in forecasting accuracy on Swedish macro data. A second example illustrates the use of informative steady‐state priors in a cointegration model of the consumption‐wealth relationship in the USA. Copyright © 2009 John Wiley & Sons, Ltd.
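The parameterization behind such steady-state priors can be illustrated with a toy simulation (our own sketch, not the paper's code): writing the model in mean-adjusted form, y_t − μ = φ(y_{t−1} − μ) + ε_t, makes the unconditional mean μ an explicit parameter on which an informative prior can be placed directly.

```python
import random

def simulate_mean_adjusted_ar(mu, phi, sigma, n, seed=0):
    """Simulate y_t - mu = phi * (y_{t-1} - mu) + eps_t (steady-state form).

    In this parameterization the long-run mean of the process is mu itself,
    so prior beliefs about the steady state map onto a single parameter."""
    rng = random.Random(seed)
    y, dev = [], 0.0
    for _ in range(n):
        dev = phi * dev + rng.gauss(0.0, sigma)
        y.append(mu + dev)
    return y

# Hypothetical values: steady state 2.0 (e.g., an inflation target), persistence 0.8
path = simulate_mean_adjusted_ar(mu=2.0, phi=0.8, sigma=0.1, n=50_000)
mean = sum(path) / len(path)
```

The sample mean of a long simulated path settles near μ, which is why an economist's view of the steady state translates so cleanly into a prior in this form.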

2.
Vector autoregressions (VARs) with informative steady‐state priors are standard forecasting tools in empirical macroeconomics. This study proposes (i) an adaptive hierarchical normal‐gamma prior on steady states, (ii) a time‐varying steady‐state specification which accounts for structural breaks in the unconditional mean, and (iii) a generalization of steady‐state VARs with fat‐tailed and heteroskedastic error terms. Empirical analysis, based on a real‐time dataset of 14 macroeconomic variables, shows that, overall, the hierarchical steady‐state specifications materially improve out‐of‐sample forecasting for forecasting horizons longer than 1 year, while the time‐varying specifications generate superior forecasts for variables with significant changes in their unconditional mean.

3.
Empirical work in macroeconometrics has been mostly restricted to using vector autoregressions (VARs), even though there are strong theoretical reasons to consider general vector autoregressive moving averages (VARMAs). A number of articles in the last two decades have conjectured that this is because estimation of VARMAs is perceived to be challenging and proposed various ways to simplify it. Nevertheless, VARMAs continue to be largely dominated by VARs, particularly in terms of developing useful extensions. We address these computational challenges with a Bayesian approach. Specifically, we develop a Gibbs sampler for the basic VARMA, and demonstrate how it can be extended to models with time‐varying vector moving average (VMA) coefficients and stochastic volatility. We illustrate the methodology through a macroeconomic forecasting exercise. We show that in a class of models with stochastic volatility, VARMAs produce better density forecasts than VARs, particularly for short forecast horizons.

4.
In this paper we introduce a nonparametric estimation method for a large Vector Autoregression (VAR) with time‐varying parameters. The estimators and their asymptotic distributions are available in closed form. This makes the method computationally efficient and capable of handling information sets as large as those typically handled by factor models and Factor Augmented VARs. When applied to the problem of forecasting key macroeconomic variables, the method outperforms constant parameter benchmarks and compares well with large (parametric) Bayesian VARs with time‐varying parameters. The tool can also be used for structural analysis. As an example, we study the time‐varying effects of oil price shocks on sectoral U.S. industrial output. According to our results, the increased role of global demand in shaping oil price fluctuations largely explains the diminished recessionary effects of global energy price increases.

5.
We develop importance sampling methods for computing two popular Bayesian model comparison criteria, namely, the marginal likelihood and the deviance information criterion (DIC), for time‐varying parameter vector autoregressions (TVP‐VARs), where both the regression coefficients and volatilities are drifting over time. The proposed estimators are based on the integrated likelihood and are substantially more reliable than alternatives. Using US data, we find overwhelming support for the TVP‐VAR with stochastic volatility compared to a conventional constant‐coefficient VAR with homoskedastic innovations. Most of the gains, however, appear to have come from allowing for stochastic volatility rather than time variation in the VAR coefficients or contemporaneous relationships. Indeed, according to both criteria, a constant‐coefficient VAR with stochastic volatility outperforms the more general model with time‐varying parameters.
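The importance-sampling idea for the marginal likelihood can be shown on a toy conjugate model (our own illustration, much simpler than the TVP-VAR setting; all values hypothetical): draw parameters from a proposal, average the weights likelihood × prior / proposal, and compare with the analytically known answer.

```python
import math
import random

def log_ml_analytic(y):
    """Exact log marginal likelihood of y_i ~ N(theta, 1), theta ~ N(0, 1):
    y is jointly normal with covariance I + 11', whose determinant is n + 1."""
    n = len(y)
    s1 = sum(y)
    s2 = sum(v * v for v in y)
    return -0.5 * n * math.log(2 * math.pi) - 0.5 * math.log(n + 1) \
           - 0.5 * (s2 - s1 * s1 / (n + 1))

def log_ml_importance(y, n_draws=50_000, seed=0):
    """Importance-sampling estimate of the same quantity, using a slightly
    inflated posterior as the proposal so the weights have finite variance."""
    rng = random.Random(seed)
    n = len(y)
    m, v = sum(y) / (n + 1), 1.0 / (n + 1)   # exact posterior mean and variance
    sd = math.sqrt(2 * v)                    # inflated proposal scale
    logs = []
    for _ in range(n_draws):
        th = m + sd * rng.gauss(0.0, 1.0)
        log_lik = sum(-0.5 * math.log(2 * math.pi) - 0.5 * (yi - th) ** 2 for yi in y)
        log_prior = -0.5 * math.log(2 * math.pi) - 0.5 * th * th
        log_prop = -0.5 * math.log(2 * math.pi * sd * sd) - 0.5 * (th - m) ** 2 / (sd * sd)
        logs.append(log_lik + log_prior - log_prop)
    mx = max(logs)                           # log-sum-exp for numerical stability
    return mx + math.log(sum(math.exp(l - mx) for l in logs) / n_draws)

y = [0.3, -0.1, 0.8, 0.5, 0.2]              # made-up data
exact = log_ml_analytic(y)
est = log_ml_importance(y)
```

The paper's point is that basing such weights on the integrated likelihood (volatilities integrated out) keeps them well behaved in the far richer TVP-VAR setting.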

6.
The likelihood of the parameters in structural macroeconomic models typically has non‐identification regions over which it is constant. When sufficiently diffuse priors are used, the posterior piles up in such non‐identification regions. Use of informative priors can lead to the opposite, so both can generate spurious inference. We propose priors/posteriors on the structural parameters that are implied by priors/posteriors on the parameters of an embedding reduced‐form model. An example of such a prior is the Jeffreys prior. We use it to conduct Bayesian limited‐information inference on the new Keynesian Phillips curve with a VAR reduced form for US data. Copyright © 2014 John Wiley & Sons, Ltd.

7.
Many recent papers in macroeconomics have used large vector autoregressions (VARs) involving 100 or more dependent variables. With so many parameters to estimate, Bayesian prior shrinkage is vital to achieve reasonable results. Computational concerns currently limit the range of priors used and render difficult the addition of empirically important features such as stochastic volatility to the large VAR. In this paper, we develop variational Bayesian methods for large VARs that overcome the computational hurdle and allow for Bayesian inference in large VARs with a range of hierarchical shrinkage priors and with time-varying volatilities. We demonstrate the computational feasibility and good forecast performance of our methods in an empirical application involving a large quarterly US macroeconomic data set.

8.
Adding multivariate stochastic volatility of a flexible form to large vector autoregressions (VARs) involving over 100 variables has proved challenging owing to computational considerations and overparametrization concerns. The existing literature works with either homoskedastic models or smaller models with restrictive forms for the stochastic volatility. In this paper, we develop composite likelihood methods for large VARs with multivariate stochastic volatility. These involve estimating large numbers of parsimonious models and then taking a weighted average across these models. We discuss various schemes for choosing the weights. In our empirical work involving VARs of up to 196 variables, we show that composite likelihood methods forecast much better than the most popular large VAR approach, which is computationally practical in very high dimensions: the homoskedastic VAR with Minnesota prior. We also compare our methods to various popular approaches that allow for stochastic volatility using medium and small VARs involving up to 20 variables. We find our methods to forecast appreciably better than these as well.

9.
This paper is motivated by the recent interest in the use of Bayesian VARs for forecasting, even in cases where the number of dependent variables is large. In such cases factor methods have been traditionally used, but recent work using a particular prior suggests that Bayesian VAR methods can forecast better. In this paper, we consider a range of alternative priors which have been used with small VARs, discuss the issues which arise when they are used with medium and large VARs and examine their forecast performance using a US macroeconomic dataset containing 168 variables. We find that Bayesian VARs do tend to forecast better than factor methods and provide an extensive comparison of the strengths and weaknesses of various approaches. Typically, we find that the simple Minnesota prior forecasts well in medium and large VARs, which makes this prior attractive relative to computationally more demanding alternatives. Our empirical results show the importance of using forecast metrics based on the entire predictive density, instead of relying solely on those based on point forecasts. Copyright © 2011 John Wiley & Sons, Ltd.
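The Minnesota prior's shrinkage pattern is simple to sketch (our own illustration; the hyperparameter names `lam` and `theta` and the values used are assumptions, not from the paper): prior variances shrink coefficients toward zero more aggressively at longer lags and on cross-variable terms than on own lags.

```python
def minnesota_prior_variances(sigmas, n_lags, lam=0.2, theta=0.5):
    """Minnesota-style prior variance for the coefficient on lag l of
    variable j in the equation for variable i:
      own lag   (i == j):  (lam / l)^2
      cross lag (i != j):  (lam * theta / l)^2 * (sigma_i / sigma_j)^2
    lam is overall tightness, theta < 1 extra cross-variable shrinkage,
    and sigmas are residual standard deviations used for scaling."""
    n = len(sigmas)
    V = {}
    for i in range(n):
        for l in range(1, n_lags + 1):
            for j in range(n):
                if i == j:
                    V[(i, l, j)] = (lam / l) ** 2
                else:
                    V[(i, l, j)] = (lam * theta / l) ** 2 * (sigmas[i] / sigmas[j]) ** 2
    return V

# Hypothetical two-variable system with residual scales 1.0 and 2.0
V = minnesota_prior_variances(sigmas=[1.0, 2.0], n_lags=2)
```

With a large VAR, this deterministic variance structure is what keeps estimation cheap: no hyperparameters need to be simulated, which is one reason the abstract finds the Minnesota prior attractive relative to computationally heavier alternatives.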

10.
This paper reviews the long‐run event‐study debate by outlining the strengths and weaknesses of the most commonly used alternative techniques. The first part of the discussion highlights that prior literature has failed to provide a single bias‐free, risk‐adjusted model of long‐run abnormal returns. Subsequently, the paper provides guidance on how one can choose among pertinent alternative techniques. In conclusion, researchers ought to choose among alternative techniques after considering issues such as (i) the nature of the dataset and market of interest, (ii) the event type (regulatory or corporate), (iii) the time interval of returns, (iv) the association of the event with accounting data, (v) sample characteristics and prior evidence regarding similar events, and (vi) risk changes following the event. Robustness tests are essential, and the road for further research regarding the appropriate technique(s) remains open.

11.
This paper merges two specifications recently developed in the forecasting literature: the MS‐MIDAS model (Guérin and Marcellino, 2013) and the factor‐MIDAS model (Marcellino and Schumacher, 2010). The MS‐factor MIDAS model that we introduce incorporates the information provided by a large data set consisting of mixed frequency variables and captures regime‐switching behaviours. Monte Carlo simulations show that this specification tracks the dynamics of the process and predicts the regime switches successfully, both in‐sample and out‐of‐sample. We apply this model to US data from 1959 to 2010 and properly detect recessions by exploiting the link between GDP growth and higher frequency financial variables.

12.
We propose imposing data‐driven identification constraints to alleviate the multimodality problem arising in the estimation of poorly identified dynamic stochastic general equilibrium models under non‐informative prior distributions. We also devise an iterative procedure based on the posterior density of the parameters for finding these constraints. An empirical application to the Smets and Wouters (2007) model demonstrates the properties of the estimation method, and shows how the problem of multimodal posterior distributions caused by parameter redundancy is eliminated by identification constraints. Out‐of‐sample forecast comparisons as well as Bayes factors lend support to the constrained model.

13.
This paper develops methods for estimating and forecasting in Bayesian panel vector autoregressions of large dimensions with time‐varying parameters and stochastic volatility. We exploit a hierarchical prior that takes into account possible pooling restrictions involving both VAR coefficients and the error covariance matrix, and propose a Bayesian dynamic learning procedure that controls for various sources of model uncertainty. We tackle computational concerns by means of a simulation‐free algorithm that relies on analytical approximations to the posterior. We use our methods to forecast inflation rates in the eurozone and show that these forecasts are superior to alternative methods for large vector autoregressions.

14.
There are both theoretical and empirical reasons for believing that the parameters of macroeconomic models may vary over time. However, work with time-varying parameter models has largely involved vector autoregressions (VARs), ignoring cointegration. This is despite the fact that cointegration plays an important role in informing macroeconomists on a range of issues. In this paper, we develop a new time-varying parameter model which permits cointegration. We use a specification which allows the cointegrating space to evolve over time in a manner comparable to the random walk variation used with TVP–VARs. The properties of our approach are investigated before we develop a method of posterior simulation. We use our methods in an empirical investigation involving the Fisher effect.

15.
We quantify the impact of government spending shocks in the US. In doing so, we control for fiscal foresight, a specific limited information problem (LIP), by utilizing the narrative approach. Moreover, we surmount the generic LIP inherent in vector autoregressions (VARs) with a factor‐augmented VAR (FAVAR) approach. We find that a positive deficit‐financed defence shock raises output by more than in a VAR (e.g., peak multipliers of 2.61 vs. 2.04). Furthermore, our evidence suggests that consumption is crowded in. These results are robust to variants of controlling for fiscal foresight and reveal the crucial role of the LIP in fiscal VARs.

16.
In the context of either Bayesian or classical sensitivity analyses of over‐parametrized models for incomplete categorical data, it is well known that prior dependence of posterior inferences on nonidentifiable parameters, or the use of overly parsimonious over‐parametrized models, may lead to erroneous conclusions. Nevertheless, some authors either pay no attention to which parameters are nonidentifiable or do not appropriately account for possible prior dependence. We review the literature on this topic and consider simple examples to emphasize that, in both inferential frameworks, the subjective components can influence results in nontrivial ways, irrespective of the sample size. Specifically, we show that prior distributions commonly regarded as slightly informative or noninformative may actually be too informative for nonidentifiable parameters, and that the choice of over‐parametrized models may drastically impact the results, suggesting that a careful examination of their effects should be undertaken before drawing conclusions.
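The phenomenon admits a minimal numerical illustration (our own toy example, not from the paper): when the likelihood depends only on the sum a + b, the posterior of the difference d = a − b is driven entirely by the prior, no matter how much data accumulates.

```python
import math

def posterior_mean_diff(prior_mean_a, data_mean=1.0, n_obs=1000):
    """Grid posterior for (a, b) when the likelihood depends only on a + b.
    Priors: a ~ N(prior_mean_a, 1), b ~ N(0, 1).  The difference d = a - b
    is nonidentifiable, so its posterior mean tracks the prior mean of a
    even with n_obs observations pinning down a + b very tightly."""
    grid = [i / 5.0 for i in range(-25, 26)]
    num = den = 0.0
    for a in grid:
        for b in grid:
            log_prior = -0.5 * (a - prior_mean_a) ** 2 - 0.5 * b ** 2
            log_lik = -0.5 * n_obs * (a + b - data_mean) ** 2
            w = math.exp(log_prior + log_lik)
            num += w * (a - b)
            den += w
    return num / den

d0 = posterior_mean_diff(prior_mean_a=0.0)   # prior centered at 0
d2 = posterior_mean_diff(prior_mean_a=2.0)   # shifted prior, same data
```

Identical data, yet the posterior mean of the nonidentifiable quantity moves one-for-one with the prior: exactly the prior dependence the abstract warns can pass unnoticed under nominally "noninformative" priors.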

17.
In this paper, we derive restrictions for Granger noncausality in MS‐VAR models and show under what conditions a variable does not affect the forecast of the hidden Markov process. To assess the noncausality hypotheses, we apply Bayesian inference. The computational tools include a novel block Metropolis–Hastings sampling algorithm for the estimation of the underlying models. We analyze a system of monthly US data on money and income. The results of testing in MS‐VARs contradict those obtained with linear VARs: the money aggregate M1 helps in forecasting industrial production and in predicting the next period's state. Copyright © 2016 John Wiley & Sons, Ltd.

18.
We develop a sequential Monte Carlo (SMC) algorithm for estimating Bayesian dynamic stochastic general equilibrium (DSGE) models, wherein a particle approximation to the posterior is built iteratively through tempering the likelihood. Using two empirical illustrations, consisting of the Smets and Wouters model and a larger news shock model, we show that the SMC algorithm is better suited for multimodal and irregular posterior distributions than the widely used random walk Metropolis–Hastings algorithm. We find that a more diffuse prior for the Smets and Wouters model improves its marginal data density, and that a slight modification of the prior for the news shock model leads to drastic changes in the posterior inference about the importance of news shocks for fluctuations in hours worked. Unlike standard Markov chain Monte Carlo (MCMC) techniques, the SMC algorithm is well suited for parallel computing. Copyright © 2014 John Wiley & Sons, Ltd.

19.
This paper constructs hybrid forecasts that combine forecasts from vector autoregressive (VAR) model(s) with both short- and long-term expectations from surveys. Specifically, we use the relative entropy to tilt one-step-ahead and long-horizon VAR forecasts to match the nowcasts and long-horizon forecasts from the Survey of Professional Forecasters. We consider a variety of VAR models, ranging from simple fixed-parameter to time-varying parameters. The results across models indicate meaningful gains in multi-horizon forecast accuracy relative to model forecasts that do not incorporate long-term survey conditions. Accuracy improvements are achieved for a range of variables, including those that are not tilted directly but are affected through spillover effects from tilted variables. The accuracy gains for hybrid inflation forecasts from simple VARs are substantial, statistically significant, and competitive to time-varying VARs, univariate benchmarks, and survey forecasts. We view our proposal as an indirect approach to accommodating structural change and moving end points.
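The relative-entropy tilting step can be sketched in a few lines (our own illustration with made-up draws; `tilt_to_mean` is a hypothetical helper, not the paper's code): exponential weights w_i ∝ exp(λ·y_i) are the minimum-Kullback-Leibler reweighting of equal-weight model draws whose weighted mean hits a survey target, with λ found by bisection.

```python
import math

def tilt_to_mean(draws, target, lo=-50.0, hi=50.0, iters=200):
    """Entropic tilting: reweight forecast draws with w_i proportional to
    exp(lam * y_i), choosing lam so the weighted mean equals `target`.
    The weighted mean is increasing in lam, so bisection converges."""
    def weights(lam):
        m = max(lam * y for y in draws)           # stabilize the exponentials
        w = [math.exp(lam * y - m) for y in draws]
        s = sum(w)
        return [wi / s for wi in w]

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        w = weights(mid)
        if sum(wi * yi for wi, yi in zip(w, draws)) < target:
            lo = mid
        else:
            hi = mid
    return weights(0.5 * (lo + hi))

draws = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]            # hypothetical model draws
w = tilt_to_mean(draws, target=2.2)               # hypothetical survey nowcast
tilted_mean = sum(wi * yi for wi, yi in zip(w, draws))
```

The tilted weights then reweight every variable's draws jointly, which is how variables not tilted directly can still improve through spillovers.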

20.
We propose a new periodic autoregressive model for seasonally observed time series, where the number of seasons can potentially be very large. The main novelty is that we collect the periodic coefficients in a second‐level stochastic model. This leads to a random‐coefficient periodic autoregression with a substantial reduction in the number of parameters to be estimated. We discuss representation, parameter estimation, and inference. An illustration for monthly growth rates of US industrial production shows the merits of the new model specification.
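A toy version of the two-level structure (our own sketch with hypothetical values, not the paper's specification): season-specific AR coefficients are drawn from a common second-level distribution, so S seasons cost roughly two hyperparameters rather than S free coefficients.

```python
import random

def draw_periodic_coeffs(n_seasons, phi_bar, tau, seed=0):
    """Second level: phi_s ~ N(phi_bar, tau^2), one coefficient per season.
    Only phi_bar and tau are free; the phi_s are random effects."""
    rng = random.Random(seed)
    return [phi_bar + rng.gauss(0.0, tau) for _ in range(n_seasons)]

def simulate_periodic_ar(phis, sigma, n_years, seed=1):
    """First level: periodic AR(1), y_t = phi_{s(t)} * y_{t-1} + eps_t,
    where the season index s(t) cycles through the coefficient list."""
    rng = random.Random(seed)
    S = len(phis)
    y, prev = [], 0.0
    for t in range(n_years * S):
        prev = phis[t % S] * prev + rng.gauss(0.0, sigma)
        y.append(prev)
    return y

phis = draw_periodic_coeffs(n_seasons=12, phi_bar=0.5, tau=0.1)  # monthly case
series = simulate_periodic_ar(phis, sigma=1.0, n_years=40)
```

With many seasons (e.g., weekly or daily data), shrinking the phi_s toward a common phi_bar is what makes estimation feasible.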
