Similar Literature
20 similar documents found.
1.
Many recent papers in macroeconomics have used large vector autoregressions (VARs) involving 100 or more dependent variables. With so many parameters to estimate, Bayesian prior shrinkage is vital to achieve reasonable results. Computational concerns currently limit the range of priors used and make it difficult to add empirically important features, such as stochastic volatility, to the large VAR. In this paper, we develop variational Bayesian methods for large VARs that overcome the computational hurdle and allow for Bayesian inference in large VARs with a range of hierarchical shrinkage priors and with time-varying volatilities. We demonstrate the computational feasibility and good forecast performance of our methods in an empirical application involving a large quarterly US macroeconomic data set.
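To see why shrinkage is indispensable at this scale, a quick parameter count for an unrestricted VAR is instructive (a generic back-of-the-envelope sketch, not taken from the paper; the function name is illustrative):

```python
# Free parameters in an unrestricted VAR(p) with K variables and an intercept:
# each of the K equations has K*p lag coefficients plus one intercept,
# and the error covariance matrix adds K*(K+1)/2 free elements.
def var_param_count(K: int, p: int) -> tuple:
    coefficients = K * (K * p + 1)
    covariance = K * (K + 1) // 2
    return coefficients, covariance

print(var_param_count(K=100, p=4))  # (40100, 5050)
```

With 100 variables and four lags there are over 40,000 regression coefficients, far more than the number of quarterly observations in any macroeconomic sample, which is why hierarchical shrinkage priors and computational shortcuts such as variational Bayes are needed.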

2.
We develop a novel Bayesian doubly adaptive elastic-net Lasso (DAELasso) approach for VAR shrinkage. DAELasso achieves variable selection and coefficient shrinkage in a data-based manner. It deals constructively with explanatory variables that tend to be highly collinear by encouraging a grouping effect, and it allows different degrees of shrinkage for different coefficients. By rewriting the multivariate Laplace distribution as a scale mixture, we establish closed-form conditional posteriors that can be sampled with a Gibbs sampler. An empirical analysis shows that the forecasts produced by DAELasso and its variants are comparable to those from other popular Bayesian methods, providing further evidence that the forecast performance of large and medium-sized Bayesian VARs is relatively robust to the choice of prior and that, in practice, simple Minnesota-type priors can be more attractive than their complex and carefully designed alternatives.
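The scale-mixture device mentioned above can be illustrated with the simpler single-equation Bayesian Lasso of Park and Casella (2008), in which every conditional posterior is a standard distribution. The sketch below is that simpler sampler with a fixed penalty, not the DAELasso itself (which adds a second, ridge-type penalty and coefficient-specific, data-driven penalty parameters); names and defaults are illustrative.

```python
import numpy as np

def bayesian_lasso_gibbs(y, X, lam=1.0, n_iter=2000, seed=0):
    """Minimal single-equation Bayesian Lasso Gibbs sampler (Park-Casella form).

    The Laplace prior on each beta_j is written as a scale mixture of normals,
    beta_j | tau_j^2, sigma^2 ~ N(0, sigma^2 * tau_j^2), which gives closed-form
    conditionals for beta, sigma^2, and the latent scales tau_j^2.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta, sigma2, tau2 = np.zeros(p), 1.0, np.ones(p)
    XtX, Xty = X.T @ X, X.T @ y
    draws = np.empty((n_iter, p))
    for it in range(n_iter):
        # beta | rest ~ N(A^{-1} X'y, sigma2 * A^{-1}), with A = X'X + diag(1/tau2)
        A_inv = np.linalg.inv(XtX + np.diag(1.0 / tau2))
        beta = rng.multivariate_normal(A_inv @ Xty, sigma2 * A_inv)
        # sigma2 | rest ~ Inverse-Gamma
        resid = y - X @ beta
        shape = 0.5 * (n - 1 + p)
        rate = 0.5 * (resid @ resid + beta @ (beta / tau2))
        sigma2 = 1.0 / rng.gamma(shape, 1.0 / rate)
        # 1/tau_j^2 | rest ~ Inverse-Gaussian(sqrt(lam^2*sigma2/beta_j^2), lam^2)
        mu = np.sqrt(lam**2 * sigma2 / np.maximum(beta**2, 1e-12))
        tau2 = 1.0 / rng.wald(mu, lam**2)
        draws[it] = beta
    return draws

# Toy usage: only the first two of ten regressors matter.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
y = X[:, 0] - 0.5 * X[:, 1] + 0.3 * rng.standard_normal(200)
print(bayesian_lasso_gibbs(y, X)[1000:].mean(axis=0).round(2))
```

In a VAR setting the same blocks can be applied equation by equation; the elastic-net extension modifies the mixing distribution of the latent scales while keeping the overall Gibbs structure.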

3.
A class of global-local hierarchical shrinkage priors for estimating large Bayesian vector autoregressions (BVARs) has recently been proposed. We question whether three such priors (the Dirichlet-Laplace, the Horseshoe, and the Normal-Gamma) can systematically improve the forecast accuracy of two commonly used benchmarks, the hierarchical Minnesota prior and the stochastic search variable selection (SSVS) prior, when predicting key macroeconomic variables. Using small and large data sets, both point and density forecasts suggest that the answer is no. Instead, our results indicate that the hierarchical Minnesota prior remains a solid practical choice when forecasting macroeconomic variables. In light of existing optimality results, a possible explanation for our finding is that macroeconomic data are dense rather than sparse.
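For reference, these global-local priors share a common hierarchy in which each coefficient gets its own (local) scale multiplied by a common (global) scale. The horseshoe, for example, is usually written as

$$\beta_j \mid \lambda_j, \tau \sim N\!\left(0, \lambda_j^2 \tau^2\right), \qquad \lambda_j \sim C^{+}(0,1), \qquad \tau \sim C^{+}(0,1),$$

where $C^{+}(0,1)$ is the standard half-Cauchy distribution; the Dirichlet-Laplace and Normal-Gamma priors keep the same normal scale-mixture form but use Dirichlet-weighted Laplace scales and Gamma-distributed local variances, respectively.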

4.
We propose two data-based priors for vector error correction models. Both priors lead to highly automatic approaches that require only minimal user input. The first is a reduced-rank prior which encourages shrinkage towards a low-rank, row-sparse, and column-sparse long-run matrix. The second uses the horseshoe prior, which shrinks all elements of the long-run matrix towards zero. Two empirical investigations reveal that Bayesian vector error correction (BVEC) models equipped with our proposed priors scale well to higher dimensions and forecast well. In comparison to VARs in first differences, they are able to exploit the information in the level variables, which turns out to be relevant for improving the forecasts of some macroeconomic variables. A simulation study shows that the BVEC with data-based priors possesses good frequentist estimation properties.
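Both priors act on the long-run matrix of the vector error correction form, which for a K-dimensional system with p lags (deterministic terms omitted) reads

$$\Delta y_t = \Pi y_{t-1} + \sum_{i=1}^{p-1} \Gamma_i\, \Delta y_{t-i} + \varepsilon_t, \qquad \Pi = \alpha \beta',$$

so the reduced-rank prior shrinks towards a low-rank, row- and column-sparse $\Pi = \alpha\beta'$, while the horseshoe variant shrinks the individual elements of $\Pi$ towards zero.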

5.
In two recent articles, Sims (1988) and Sims and Uhlig (1988/1991) question the value of much of the ongoing literature on unit roots and stochastic trends. They characterize the seeds of this literature as 'sterile ideas', the application of nonstationary limit theory as 'wrongheaded and unenlightening', and the use of classical methods of inference as 'unreasonable' and 'logically unsound'. They advocate in place of classical methods an explicit Bayesian approach to inference that utilizes a flat prior on the autoregressive coefficient. DeJong and Whiteman adopt a related Bayesian approach in a group of papers (1989a,b,c) that seek to re-evaluate the empirical evidence from historical economic time series. Their results appear to be conclusive in turning around the earlier, influential conclusions of Nelson and Plosser (1982) that most aggregate economic time series have stochastic trends. So far these criticisms of unit root econometrics have gone unanswered; the assertions about the impropriety of classical methods and the superiority of flat prior Bayesian methods have been unchallenged; and the empirical re-evaluation of evidence in support of stochastic trends has been left without comment. This paper breaks that silence and offers a new perspective. We challenge the methods, the assertions, and the conclusions of these articles on the Bayesian analysis of unit roots. Our approach is also Bayesian, but we employ what are known in the statistical literature as objective ignorance priors in our analysis. These are developed in the paper to accommodate explicitly time series models in which no stationarity assumption is made. Ignorance priors are intended to represent a state of ignorance about the value of a parameter and in many models are very different from flat priors. We demonstrate that in time series models flat priors do not represent ignorance but are actually informative, precisely because they neglect generically available information about how autoregressive coefficients influence observed time series characteristics. Contrary to their apparent intent, flat priors unwittingly bias inferences towards stationary and i.i.d. alternatives; they represent ignorance only in settings such as the linear regression model. This bias helps to explain the outcome of the simulation experiments in Sims and Uhlig and some of the empirical results of DeJong and Whiteman. Under both flat priors and ignorance priors this paper derives posterior distributions for the parameters in autoregressive models with a deterministic trend and an arbitrary number of lags. Marginal posterior distributions are obtained by using the Laplace approximation for multivariate integrals along the lines suggested by the author (Phillips, 1983) in earlier work. The bias towards stationary models that arises from the use of flat priors is shown in our simulations to be substantial; we conclude that it is unacceptably large in models with a fitted deterministic trend, for which the expected posterior probability of a stochastic trend is found to be negligible even though the true data generating mechanism has a unit root. Under ignorance priors, Bayesian inference is shown to accord more closely with the results of classical methods. An interesting outcome of our simulations and our empirical work is the bimodal Bayesian posterior, which demonstrates that Bayesian confidence sets can be disjoint, just like classical confidence intervals that are based on asymptotic theory.
The paper concludes with an empirical application of our Bayesian methodology to the Nelson-Plosser series. Seven of the 14 series show evidence of stochastic trends under ignorance priors, whereas under flat priors on the coefficients all but three of the series appear trend stationary. The latter result corresponds closely with the conclusion reached by DeJong and Whiteman (1989b) (based on truncated flat priors). We argue that the DeJong-Whiteman inferences are biased towards trend stationarity through the use of flat priors on the autoregressive coefficients, and that their inferences for some of the series (especially stock prices) are fragile (i.e. not robust) not only to the prior but also to the lag length chosen in the time series specification.
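A concrete illustration of why a flat prior on an autoregressive coefficient is not innocuous is the scalar AR(1) case, $y_t = \rho y_{t-1} + \varepsilon_t$ with $\varepsilon_t \sim N(0,\sigma^2)$, $\sigma^2$ known and $y_0 = 0$. The Fisher information for $\rho$ is

$$I_T(\rho) = \frac{1}{\sigma^2}\sum_{t=1}^{T} E\!\left(y_{t-1}^{2}\right) = \sum_{t=1}^{T}\frac{1-\rho^{2(t-1)}}{1-\rho^{2}},$$

with limit $T(T-1)/2$ at $\rho = 1$, so an information-based (Jeffreys-type) ignorance prior $\pi(\rho) \propto \sqrt{I_T(\rho)}$ rises steeply as $\rho$ approaches and passes unity, whereas a flat prior gives the same weight everywhere and thus, relative to the ignorance prior, downweights the nonstationary region. This is a stylized textbook illustration of the argument made above, not the paper's multivariate construction with fitted deterministic trend and lags.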

6.
Vector autoregressions (VARs) with informative steady-state priors are standard forecasting tools in empirical macroeconomics. This study proposes (i) an adaptive hierarchical normal-gamma prior on steady states, (ii) a time-varying steady-state specification which accounts for structural breaks in the unconditional mean, and (iii) a generalization of steady-state VARs with fat-tailed and heteroskedastic error terms. Empirical analysis, based on a real-time dataset of 14 macroeconomic variables, shows that, overall, the hierarchical steady-state specifications materially improve out-of-sample forecast accuracy at horizons longer than one year, while the time-varying specifications generate superior forecasts for variables with significant changes in their unconditional mean.

7.
This paper develops a Bayesian variant of global vector autoregressive (B-GVAR) models to forecast an international set of macroeconomic and financial variables. We propose a set of hierarchical priors and compare the predictive performance of B-GVAR models in terms of point and density forecasts for one-quarter-ahead and four-quarter-ahead forecast horizons. We find that forecasts can be improved by employing a global framework and hierarchical priors which induce country-specific degrees of shrinkage on the coefficients of the GVAR model. Forecasts from various B-GVAR specifications tend to outperform forecasts from a naive univariate model, a global model without shrinkage on the parameters, and country-specific vector autoregressions.
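The global framework follows the usual GVAR construction: each country i is a small VAR augmented with trade-weighted foreign variables, schematically

$$x_{it} = a_i + \Phi_i x_{i,t-1} + \Lambda_{i0}\, x^{*}_{it} + \Lambda_{i1}\, x^{*}_{i,t-1} + \varepsilon_{it}, \qquad x^{*}_{it} = \sum_{j \neq i} w_{ij}\, x_{jt},$$

after which the country models are stacked and solved into a single global system; the hierarchical priors referred to above place country-specific degrees of shrinkage on the $\Phi_i$ and $\Lambda_i$ blocks (lag orders and deterministic terms are simplified here for illustration).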

8.
This paper is motivated by the recent interest in the use of Bayesian VARs for forecasting, even in cases where the number of dependent variables is large. In such cases factor methods have traditionally been used, but recent work using a particular prior suggests that Bayesian VAR methods can forecast better. In this paper, we consider a range of alternative priors which have been used with small VARs, discuss the issues which arise when they are used with medium and large VARs, and examine their forecast performance using a US macroeconomic dataset containing 168 variables. We find that Bayesian VARs do tend to forecast better than factor methods and provide an extensive comparison of the strengths and weaknesses of various approaches. Typically, we find that the simple Minnesota prior forecasts well in medium and large VARs, which makes this prior attractive relative to computationally more demanding alternatives. Our empirical results show the importance of using forecast metrics based on the entire predictive density, instead of relying solely on those based on point forecasts.
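The density-based metrics referred to in the final sentence are typically log predictive scores: for forecasts made at each origin t in the evaluation sample $\mathcal{T}$, the h-step predictive density is evaluated at the realized outcome and summed,

$$\mathrm{LPS} = \sum_{t \in \mathcal{T}} \log p\!\left(y_{t+h} = y^{o}_{t+h} \,\middle|\, \mathrm{Data}_t\right),$$

so that, unlike root mean squared forecast error, the ranking rewards models whose entire predictive distribution, including its spread, is well calibrated.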

9.
Forecasting and turning point predictions in a Bayesian panel VAR model
We provide methods for forecasting variables and predicting turning points in panel Bayesian VARs. We specify a flexible model, which accounts for both interdependencies in the cross section and time variations in the parameters. Posterior distributions for the parameters are obtained for hierarchical and for Minnesota-type priors. Formulas for multistep, multiunit point and average forecasts are provided. An application to the problem of forecasting the growth rate of output and of predicting turning points in the G-7 illustrates the approach. A comparison with alternative forecasting methods is also provided.
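Turning-point prediction in this framework amounts to computing the posterior predictive probability of an event from simulation output. A minimal sketch, using a deliberately simple downturn definition (two consecutive periods of negative predicted growth, which is only one of several conventions), is:

```python
import numpy as np

def downturn_probability(pred_draws):
    """Posterior predictive probability of a downturn.

    pred_draws has shape (n_draws, horizon): simulated future growth rates for
    one unit (e.g., one G-7 country). A draw signals a downturn if it contains
    two consecutive negative growth rates anywhere over the horizon.
    """
    negative = np.asarray(pred_draws) < 0.0
    two_in_a_row = negative[:, 1:] & negative[:, :-1]
    return float(two_in_a_row.any(axis=1).mean())

# Toy usage with fabricated predictive draws (5000 draws, 4-quarter horizon).
rng = np.random.default_rng(0)
print(round(downturn_probability(rng.normal(0.4, 1.0, size=(5000, 4))), 3))
```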

10.
Adding multivariate stochastic volatility of a flexible form to large vector autoregressions (VARs) involving over 100 variables has proved challenging owing to computational considerations and overparametrization concerns. The existing literature works with either homoskedastic models or smaller models with restrictive forms for the stochastic volatility. In this paper, we develop composite likelihood methods for large VARs with multivariate stochastic volatility. These involve estimating large numbers of parsimonious models and then taking a weighted average across these models. We discuss various schemes for choosing the weights. In our empirical work involving VARs of up to 196 variables, we show that composite likelihood methods forecast much better than the most popular large VAR approach, which is computationally practical in very high dimensions: the homoskedastic VAR with Minnesota prior. We also compare our methods to various popular approaches that allow for stochastic volatility using medium and small VARs involving up to 20 variables. We find our methods to forecast appreciably better than these as well.
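The "estimate many parsimonious models, then average" step can be sketched without the composite-likelihood machinery itself: each sub-model contributes a forecast and the combination is a weighted average with weights summing to one (equal weights below; the paper discusses more elaborate weighting schemes, and the names here are illustrative).

```python
import numpy as np

def combine_forecasts(forecasts, weights=None):
    """Weighted average of forecasts from many parsimonious sub-models.

    forecasts has shape (n_models, n_variables); weights has shape (n_models,)
    and is normalized to sum to one (equal weights if omitted).
    """
    forecasts = np.asarray(forecasts, dtype=float)
    if weights is None:
        weights = np.ones(forecasts.shape[0])
    weights = np.asarray(weights, dtype=float)
    return (weights / weights.sum()) @ forecasts

# Toy usage: 50 small models, each forecasting the same 3 variables.
rng = np.random.default_rng(0)
print(combine_forecasts(rng.normal(size=(50, 3))))
```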

11.
Empirical work in macroeconometrics has been mostly restricted to using vector autoregressions (VARs), even though there are strong theoretical reasons to consider general vector autoregressive moving averages (VARMAs). A number of articles in the last two decades have conjectured that this is because estimation of VARMAs is perceived to be challenging and proposed various ways to simplify it. Nevertheless, VARMAs continue to be largely dominated by VARs, particularly in terms of developing useful extensions. We address these computational challenges with a Bayesian approach. Specifically, we develop a Gibbs sampler for the basic VARMA, and demonstrate how it can be extended to models with time-varying vector moving average (VMA) coefficients and stochastic volatility. We illustrate the methodology through a macroeconomic forecasting exercise. We show that in a class of models with stochastic volatility, VARMAs produce better density forecasts than VARs, particularly for short forecast horizons.
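For concreteness, the model class in question is the VARMA(p, q),

$$y_t = \mu + \sum_{i=1}^{p} A_i\, y_{t-i} + \sum_{j=1}^{q} M_j\, \varepsilon_{t-j} + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \Sigma_t),$$

where the moving-average matrices $M_j$ are what make likelihood evaluation and identification harder than in a pure VAR; in a Gibbs sampler of the kind described above they, along with any time variation in the $M_j$ and the stochastic volatility in $\Sigma_t$, can be handled as additional sampling blocks.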

12.
In this paper, we develop methods for estimation and forecasting in large time-varying parameter vector autoregressive models (TVP-VARs). To overcome computational constraints, we draw on ideas from the dynamic model averaging literature which achieve reductions in the computational burden through the use of forgetting factors. We then extend the TVP-VAR so that its dimension can change over time. For instance, we can have a large TVP-VAR as the forecasting model at some points in time, but a smaller TVP-VAR at others. A final extension lies in the development of a new method for estimating, in a time-varying manner, the parameter(s) of the shrinkage priors commonly used with large VARs. These extensions are operationalized through the use of forgetting factor methods and are, thus, computationally simple. An empirical application involving forecasting inflation, real output and interest rates demonstrates the feasibility and usefulness of our approach.
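The forgetting-factor device replaces the simulation of the state covariance in the Kalman prediction step with a simple inflation of last period's filtered covariance. With the random-walk state equation $\beta_t = \beta_{t-1} + \eta_t$, the usual prediction $\Sigma_{t|t-1} = \Sigma_{t-1|t-1} + Q_t$ becomes

$$\Sigma_{t\mid t-1} = \frac{1}{\lambda}\,\Sigma_{t-1\mid t-1}, \qquad 0 < \lambda \le 1,$$

so no state-covariance matrix $Q_t$ needs to be estimated or simulated, observations j periods old receive an effective weight of $\lambda^{j}$, and $\lambda = 1$ recovers a constant-parameter VAR; this is the standard forgetting-factor approximation used in the dynamic model averaging literature.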

13.
Models for the 12-month-ahead US rate of inflation, measured by the chain-weighted consumer expenditure deflator, are estimated for 1974-98, and subsequent pseudo out-of-sample forecasting performance is examined. Alternative forecasting approaches for different information sets are compared with benchmark univariate autoregressive models, and substantial out-performance is demonstrated, including against Stock and Watson's unobserved components-stochastic volatility model. Three key ingredients of the out-performance are: including equilibrium-correction component terms in relative prices; introducing nonlinearities to proxy state-dependence in the inflation process; and replacing the information criterion commonly used in VARs to select lag length with a 'parsimonious longer lags' parameterization. Forecast pooling or averaging also improves forecast performance.

14.
Neutral to right priors are generalizations of Dirichlet process priors that fit in well with right-censored data. These priors are naturally induced by increasing processes with independent increments which, in turn, may be viewed as priors for the cumulative hazard function. This connection together with the Lévy representation of independent increment processes provides a convenient means of studying properties of neutral to right priors.
This article is a review of the theoretical aspects of neutral to right priors and provides a number of new results on their structural properties. Notable among the new results are characterizations of neutral to right priors in terms of the posterior and the cumulative hazard function. We also show that neutral to right priors have the following properties: consistency of the Bayes estimates implies consistency of the posterior, and posterior consistency for complete observations automatically yields posterior consistency for right-censored data.
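The connection sketched above can be compressed into one formula: a random distribution function F is neutral to the right when it can be written as

$$F(t) = 1 - \exp\{-Y(t)\},$$

with $Y$ a non-decreasing, right-continuous process with independent increments. When F is continuous, $Y(t) = -\log\{1 - F(t)\}$ is precisely the cumulative hazard, which is why such processes can equivalently be viewed as priors on the cumulative hazard function and why their Lévy representation is the natural analytical tool.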

15.
There are both theoretical and empirical reasons for believing that the parameters of macroeconomic models may vary over time. However, work with time-varying parameter models has largely involved vector autoregressions (VARs), ignoring cointegration, despite the fact that cointegration plays an important role in informing macroeconomists on a range of issues. In this paper, we develop a new time-varying parameter model which permits cointegration. We use a specification which allows the cointegrating space to evolve over time in a manner comparable to the random-walk variation used with TVP-VARs. The properties of our approach are investigated before developing a method of posterior simulation. We use our methods in an empirical investigation involving the Fisher effect.

16.
In Bayesian analysis of vector autoregressive models, and especially in forecasting applications, the Minnesota prior of Litterman is frequently used. In many cases other prior distributions provide better forecasts and are preferable from a theoretical standpoint. Several of these priors require numerical methods in order to evaluate the posterior distribution. Different ways of implementing Monte Carlo integration are considered. It is found that Gibbs sampling performs as well as, or better than, importance sampling, and that the Gibbs sampling algorithms are less adversely affected by model size. We also report on the forecasting performance of the different prior distributions.
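In its common form, the Minnesota prior centers each equation's own first lag at one (zero for all other coefficients) and uses independent normal prior variances such as

$$V\!\left(a^{(l)}_{ij}\right) = \begin{cases} \pi_1^2 / l^{2}, & i = j,\\[0.6ex] \left(\pi_1^2\,\pi_2^2 / l^{2}\right)\,\sigma_i^2/\sigma_j^2, & i \neq j, \end{cases}$$

where $a^{(l)}_{ij}$ is the coefficient on lag $l$ of variable $j$ in equation $i$, $\pi_1$ sets overall tightness, $\pi_2 < 1$ tightens lags of other variables, and the ratio of residual variances puts coefficients on comparable scales (exact parameterizations vary across implementations). Part of its appeal is that, in its original form with a fixed residual covariance matrix, the posterior is available analytically, whereas the alternatives discussed here require Monte Carlo integration.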

17.
This paper proposes LASSO estimation specific for panel vector autoregressive (PVAR) models. The penalty term allows for shrinkage for different lags, for shrinkage towards homogeneous coefficients across panel units, for penalization of lags of variables belonging to another cross-sectional unit, and for varying penalization across equations. The penalty parameters therefore build on time series and cross-sectional properties that are commonly found in PVAR models. Simulation results point towards advantages of using the proposed LASSO for PVAR models over ordinary least squares in terms of forecast accuracy. An empirical forecasting application including 20 countries supports these findings.
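Such structured penalties can be written schematically as a weighted lasso: collecting the coefficient on lag $l$ of variable $j$ in equation $i$ as $b_{ijl}$, the objective has the form

$$\min_{B} \; \sum_{i} \big\lVert y_i - X b_i \big\rVert_2^2 \;+\; \sum_{i,j,l} \lambda_{ijl}\, \lvert b_{ijl} \rvert,$$

with penalty weights $\lambda_{ijl}$ that grow with the lag $l$, are larger when variable $j$ belongs to a different cross-sectional unit than equation $i$, and are allowed to differ across equations; further terms can pull coefficients towards a common value across panel units. This is a schematic rendering of the idea rather than the paper's exact penalty.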

18.
Bayesian model selection with posterior probabilities and no subjective prior information is generally not possible because the Bayes factors are ill-defined. Using careful consideration of the parameter of interest in cointegration analysis and a re-specification of the triangular model of Phillips (Econometrica, Vol. 59, pp. 283-306, 1991), this paper presents an approach that allows for Bayesian comparison of models of cointegration with 'ignorance' priors. Using the concept of Stiefel and Grassmann manifolds, diffuse priors are specified on the dimension and direction of the cointegrating space. The approach is illustrated using a simple model of the term structure of interest rates.

19.
How do entrepreneurs identify foreign market opportunities and how do they identify foreign market(s) and customers? We draw on the concepts of effectuation, improvisation, prior knowledge and networks to study the early internationalization of new ventures operating in the Irish Shellfish sector. We argue that the internationalization process was strongly influenced by two ‘resources to hand’: the entrepreneurs’ idiosyncratic prior knowledge and their prior social and business ties. We observe an effectuation logic and extensive improvisation in the internationalization process of these new ventures.

20.
Bayesian priors are often used to restrain the otherwise highly over-parametrized vector autoregressive (VAR) models. The currently available Bayesian VAR methodology does not allow the user to specify prior beliefs about the unconditional mean, or steady state, of the system. This is unfortunate, as the steady state is something that economists usually claim to know relatively well. This paper develops easily implemented methods for analyzing both stationary and cointegrated VARs, in reduced or structural form, with an informative prior on the steady state. We document that prior information on the steady state leads to substantial gains in forecasting accuracy on Swedish macro data. A second example illustrates the use of informative steady-state priors in a cointegration model of the consumption-wealth relationship in the USA.
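The key device is to parameterize the VAR in deviations from its unconditional mean,

$$\Pi(L)\,\big(x_t - \Psi d_t\big) = \varepsilon_t, \qquad \Pi(L) = I - \Pi_1 L - \cdots - \Pi_p L^{p},$$

where $d_t$ collects the deterministic terms, so that $\Psi d_t$ is the steady state of the system and an informative prior can be placed directly on $\Psi$, the quantity economists typically have views about, rather than on the reduced-form intercept.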
