Similar Documents
 20 similar documents found (search time: 468 ms)
1.
Many recent papers in macroeconomics have used large vector autoregressions (VARs) involving 100 or more dependent variables. With so many parameters to estimate, Bayesian prior shrinkage is vital to achieve reasonable results. Computational concerns currently limit the range of priors used and make it difficult to add empirically important features, such as stochastic volatility, to the large VAR. In this paper, we develop variational Bayesian methods for large VARs that overcome the computational hurdle and allow for Bayesian inference in large VARs with a range of hierarchical shrinkage priors and with time-varying volatilities. We demonstrate the computational feasibility and good forecast performance of our methods in an empirical application involving a large quarterly US macroeconomic data set.
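To make the shrinkage idea concrete, here is a minimal sketch (not the paper's variational algorithm) of a VAR posterior mean under a simple ridge-type shrinkage prior; the data, lag length, and shrinkage strength below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for a macro panel: T observations on n variables.
T, n, p = 200, 10, 2   # sample size, number of variables, lags (all illustrative)
Y = rng.standard_normal((T, n))

# Stack p lags of every variable into the regressor matrix X.
X = np.hstack([Y[p - l - 1:T - l - 1] for l in range(p)])
Y_dep = Y[p:]

# With prior vec(B) ~ N(0, lam * I) and unit error variance, the posterior
# mean of the coefficient matrix B solves (X'X + I/lam) B = X'Y.
# Smaller lam means stronger shrinkage of all coefficients toward zero.
lam = 0.1
B_post = np.linalg.solve(X.T @ X + np.eye(X.shape[1]) / lam, X.T @ Y_dep)
print(B_post.shape)    # (n*p, n): one column of coefficients per equation
```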

2.
Large Bayesian VARs with stochastic volatility are increasingly used in empirical macroeconomics. The key to making these highly parameterized VARs useful is the use of shrinkage priors. We develop a family of priors that captures the best features of two prominent classes of shrinkage priors: adaptive hierarchical priors and Minnesota priors. Like adaptive hierarchical priors, these new priors ensure that only ‘small’ coefficients are strongly shrunk to zero, while ‘large’ coefficients remain intact. At the same time, these new priors can also incorporate many useful features of the Minnesota priors such as cross-variable shrinkage and shrinking coefficients on higher lags more aggressively. We introduce a fast posterior sampler to estimate BVARs with this family of priors—for a BVAR with 25 variables and 4 lags, obtaining 10,000 posterior draws takes about 3 min on a standard desktop computer. In a forecasting exercise, we show that these new priors outperform both adaptive hierarchical priors and Minnesota priors.
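For reference, the textbook Minnesota prior encodes cross-variable shrinkage and lag decay through prior variances of roughly this form (hyperparameterizations vary across papers, so this is the generic version rather than necessarily the one used here): for the coefficient on lag $l$ of variable $j$ in equation $i$,

$$
\operatorname{Var}(\beta_{ij,l}) =
\begin{cases}
\kappa_1 / l^2, & i = j \quad \text{(own lags)}\\
\kappa_2\,\sigma_i^2 / (l^2 \sigma_j^2), & i \neq j \quad \text{(cross-variable lags),}
\end{cases}
$$

so that higher lags (the $l^2$ in the denominator) and other variables' lags (typically $\kappa_2 < \kappa_1$) are shrunk more aggressively.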

3.
This paper is motivated by the recent interest in the use of Bayesian VARs for forecasting, even in cases where the number of dependent variables is large. In such cases factor methods have been traditionally used, but recent work using a particular prior suggests that Bayesian VAR methods can forecast better. In this paper, we consider a range of alternative priors which have been used with small VARs, discuss the issues which arise when they are used with medium and large VARs and examine their forecast performance using a US macroeconomic dataset containing 168 variables. We find that Bayesian VARs do tend to forecast better than factor methods and provide an extensive comparison of the strengths and weaknesses of various approaches. Typically, we find that the simple Minnesota prior forecasts well in medium and large VARs, which makes this prior attractive relative to computationally more demanding alternatives. Our empirical results show the importance of using forecast metrics based on the entire predictive density, instead of relying solely on those based on point forecasts. Copyright © 2011 John Wiley & Sons, Ltd.
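A density-based forecast metric of the kind the abstract advocates is the average log predictive score. A minimal sketch, assuming Gaussian predictive densities and purely illustrative numbers:

```python
import numpy as np
from scipy.stats import norm

# Average log predictive score: the mean log density that the model's
# predictive distribution assigned to the realized outcomes.
y_realized = np.array([0.3, -0.1, 0.5])   # illustrative realized values
m = np.array([0.2, 0.0, 0.4])             # predictive means
s = np.array([0.5, 0.5, 0.6])             # predictive standard deviations

log_score = norm.logpdf(y_realized, loc=m, scale=s).mean()
print(f"average log predictive score: {log_score:.3f}")  # higher is better
```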

4.
Vector autoregressions (VARs) with informative steady‐state priors are standard forecasting tools in empirical macroeconomics. This study proposes (i) an adaptive hierarchical normal‐gamma prior on steady states, (ii) a time‐varying steady‐state specification which accounts for structural breaks in the unconditional mean, and (iii) a generalization of steady‐state VARs with fat‐tailed and heteroskedastic error terms. Empirical analysis, based on a real‐time dataset of 14 macroeconomic variables, shows that, overall, the hierarchical steady‐state specifications materially improve out‐of‐sample forecasting for forecasting horizons longer than 1 year, while the time‐varying specifications generate superior forecasts for variables with significant changes in their unconditional mean.
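The steady-state VAR that such studies build on is usually written in the mean-adjusted form of Villani (2009),

$$
\Pi(L)\,(y_t - \Psi x_t) = \varepsilon_t, \qquad \Pi(L) = I - \Pi_1 L - \dots - \Pi_p L^p,
$$

where $\Psi x_t$ is the steady state (the unconditional mean when $x_t = 1$), so that a prior can be placed directly on $\Psi$; the hierarchical, time-varying, and fat-tailed extensions described in the abstract modify this parameterization.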

5.
We propose two data-based priors for vector error correction models. Both priors lead to highly automatic approaches which require only minimal user input. For the first one, we propose a reduced rank prior which encourages shrinkage towards a low-rank, row-sparse, and column-sparse long-run matrix. For the second one, we propose the use of the horseshoe prior, which shrinks all elements of the long-run matrix towards zero. Two empirical investigations reveal that Bayesian vector error correction (BVEC) models equipped with our proposed priors scale well to higher dimensions and forecast well. In comparison to VARs in first differences, they are able to exploit the information in the level variables. This turns out to be relevant for improving the forecasts of some macroeconomic variables. A simulation study shows that the BVEC with data-based priors possesses good frequentist estimation properties.
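A minimal sketch of the horseshoe prior mentioned for the second approach: draws of a coefficient vector under the standard horseshoe hierarchy (dimensions are illustrative; the paper applies this to the long-run matrix of a VEC model).

```python
import numpy as np

rng = np.random.default_rng(1)

# Horseshoe prior: beta_i | lambda_i, tau ~ N(0, tau^2 * lambda_i^2),
# with local scales lambda_i ~ C+(0,1) and global scale tau ~ C+(0,1).
# The small global scale shrinks everything toward zero, while the
# heavy-tailed local scales let a few coefficients escape the pull.
k = 20
tau = np.abs(rng.standard_cauchy())        # global shrinkage (half-Cauchy)
lam = np.abs(rng.standard_cauchy(k))       # local shrinkage, one per element
beta = rng.standard_normal(k) * tau * lam  # one prior draw of the coefficients
print(beta.round(2))
```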

6.
In DeJong and Whiteman (1991a), we concluded that 11 of the 14 macroeconomic time-series originally studied by Nelson and Plosser (1982) supported trend-stationarity. Phillips (1991) criticizes this inference, claiming that our procedure is biased against integration, and that our results are sensitive to model and prior specification. However, Phillips' alternative models and priors bias his results in favour of integration; despite these biases, Phillips' own findings indicate that the data provide the greatest relative support to trend-stationarity. This result is similar to our own (1989, 1990, 1991b) findings concerning the sensitivity of our results; the trend-stationarity inference is remarkably robust.

7.
This paper develops a Bayesian variant of global vector autoregressive (B‐GVAR) models to forecast an international set of macroeconomic and financial variables. We propose a set of hierarchical priors and compare the predictive performance of B‐GVAR models in terms of point and density forecasts for one‐quarter‐ahead and four‐quarter‐ahead forecast horizons. We find that forecasts can be improved by employing a global framework and hierarchical priors which induce country‐specific degrees of shrinkage on the coefficients of the GVAR model. Forecasts from various B‐GVAR specifications tend to outperform forecasts from a naive univariate model, a global model without shrinkage on the parameters and country‐specific vector autoregressions. Copyright © 2016 John Wiley & Sons, Ltd.

8.
We develop a novel Bayesian doubly adaptive elastic-net Lasso (DAELasso) approach for VAR shrinkage. DAELasso achieves variable selection and coefficient shrinkage in a data-based manner. It deals constructively with explanatory variables which tend to be highly collinear by encouraging the grouping effect. In addition, it also allows for different degrees of shrinkage for different coefficients. By rewriting the multivariate Laplace distribution as a scale mixture, we establish closed-form conditional posteriors from which a Gibbs sampler can draw. An empirical analysis shows that the forecast results produced by DAELasso and its variants are comparable to those from other popular Bayesian methods, which provides further evidence that the forecast performances of large and medium-sized Bayesian VARs are relatively robust to prior choices and that, in practice, simple Minnesota-type priors can be more attractive than their complex and well-designed alternatives.
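The scale-mixture trick in the abstract is, in its univariate form, the Bayesian Lasso identity of Park and Casella (2008) (the paper works with a multivariate Laplace, but the mechanism is the same):

$$
\frac{\lambda}{2} e^{-\lambda |\beta|} \;=\; \int_0^\infty \frac{1}{\sqrt{2\pi\tau}} e^{-\beta^2/(2\tau)} \cdot \frac{\lambda^2}{2} e^{-\lambda^2 \tau/2}\, d\tau,
$$

i.e. a Laplace prior on $\beta$ is a normal prior with an exponentially distributed variance $\tau$, which is what yields closed-form conditionals for Gibbs sampling.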

9.
The likelihood of the parameters in structural macroeconomic models typically has non‐identification regions over which it is constant. When sufficiently diffuse priors are used, the posterior piles up in such non‐identification regions. Use of informative priors can lead to the opposite, so both can generate spurious inference. We propose priors/posteriors on the structural parameters that are implied by priors/posteriors on the parameters of an embedding reduced‐form model. An example of such a prior is the Jeffreys prior. We use it to conduct Bayesian limited‐information inference on the new Keynesian Phillips curve with a VAR reduced form for US data. Copyright © 2014 John Wiley & Sons, Ltd.

10.
We explore a new approach to the forecasting of macroeconomic variables based on a dynamic factor state space analysis. Key economic variables are modeled jointly with principal components from a large time series panel of macroeconomic indicators using a multivariate unobserved components time series model. When the key economic variables are observed at a low frequency and the panel of macroeconomic variables is at a high frequency, we can use our approach for both nowcasting and forecasting purposes. Given a dynamic factor model as the data generation process, we provide Monte Carlo evidence of the finite-sample justification of our parsimonious and feasible approach. We also provide empirical evidence for a US macroeconomic dataset. The unbalanced panel contains quarterly and monthly variables. The forecasting accuracy is measured against a set of benchmark models. We conclude that our dynamic factor state space analysis can lead to higher levels of forecasting precision when the panel size and time series dimensions are moderate.
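As a much-simplified stand-in for the joint unobserved components model described here, the two-step "factors, then bridge regression" idea can be sketched as follows (all data simulated; variable names and dimensions are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative panel: T periods of N monthly indicators (simulated stand-in).
T, N, r = 120, 50, 3
panel = rng.standard_normal((T, N))

# Principal components as factor estimates: standardize, then SVD.
Z = (panel - panel.mean(0)) / panel.std(0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
factors = U[:, :r] * S[:r]            # first r principal components

# Bridge the factors to a key economic variable by least squares and
# read off the fitted value for the latest period as a crude nowcast.
y = rng.standard_normal(T)            # stand-in for the key variable
X = np.column_stack([np.ones(T), factors])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"nowcast: {X[-1] @ coef:.3f}")
```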

11.
We propose a natural conjugate prior for the instrumental variables regression model. The prior is a natural conjugate one since the marginal prior and posterior of the structural parameter have the same functional expressions which directly reveal the update from prior to posterior. The Jeffreys prior results from a specific setting of the prior parameters and results in a marginal posterior of the structural parameter that has an identical functional form as the sampling density of the limited information maximum likelihood estimator. We construct informative priors for the Angrist–Krueger [1991. Does compulsory school attendance affect schooling and earnings? Quarterly Journal of Economics 106, 979–1014] data and show that the marginal posterior of the return on education in the US coincides with the marginal posterior from the Southern region when we use the Jeffreys prior. This result occurs since the instruments are the strongest in the Southern region and the posterior using the Jeffreys prior, identical to maximum likelihood, focusses on the strongest available instruments. We construct informative priors for the other regions that make their posteriors of the return on education similar to that of the US and the Southern region. These priors show the amount of prior information needed to obtain comparable results for all regions.

12.
This Economic Outlook presents a forecast produced using a large econometric model of the UK, containing more than 700 variables, developed over the last 25 years by researchers at the Centre for Economic Forecasting. As macroeconomic theory is not static, new equations are regularly incorporated into the model framework to replace, or augment, older ones with the aim of improving our understanding of the UK economy. The best way to assess the impact of changes to the model is to perform a series of simulation exercises and compare the results of the system as a whole with established theoretical priors. In this paper we discuss the most recent changes to the structure of the model and describe the results from four simulation exercises.

13.
In this paper we propose an approach to both estimate and select unknown smooth functions in an additive model with potentially many functions. Each function is written as a linear combination of basis terms, with coefficients regularized by a proper linearly constrained Gaussian prior. Given any potentially rank deficient prior precision matrix, we show how to derive linear constraints so that the corresponding effect is identified in the additive model. This allows for the use of a wide range of bases and precision matrices in priors for regularization. By introducing indicator variables, each constrained Gaussian prior is augmented with a point mass at zero, thus allowing for function selection. Posterior inference is calculated using Markov chain Monte Carlo, and the smoothness of the functions results both from shrinkage through the constrained Gaussian prior and from model averaging. We show how using non-degenerate priors on the shrinkage parameters enables the application of substantially more computationally efficient sampling schemes than would otherwise be the case. We show the favourable performance of our approach when compared to two contemporary alternative Bayesian methods. To highlight the potential of our approach in high-dimensional settings we apply it to estimate two large seemingly unrelated regression models for intra-day electricity load. Both models feature a variety of different univariate and bivariate functions which require different levels of smoothing, and where component selection is meaningful. Priors for the error disturbance covariances are selected carefully and the empirical results provide a substantive contribution to the electricity load modelling literature in their own right.
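The function-selection device in the abstract (a constrained Gaussian prior augmented with a point mass at zero via an indicator) is, schematically, a spike-and-slab mixture: for coefficient block $\beta_j$ with indicator $\gamma_j \in \{0,1\}$,

$$
p(\beta_j \mid \gamma_j) \;=\; \gamma_j\, \mathcal{N}(\beta_j; 0, \Sigma_j) \;+\; (1-\gamma_j)\, \delta_0(\beta_j),
$$

so that $\gamma_j = 0$ removes the $j$th function from the model entirely, and posterior inference averages over which functions are included.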

14.
Due to weaknesses in traditional tests, a Bayesian approach is developed to investigate whether unit roots exist in macroeconomic time-series. Bayesian posterior odds comparing unit root models to stationary and trend-stationary alternatives are calculated using informative priors. Two classes of reference priors which are informative but require minimal subjective prior input are used. In this sense the Bayesian unit root tests developed here are objective. Bayesian procedures are carried out on the Nelson–Plosser and Shiller data sets as well as on generated data. The conclusion is that the failure of classical procedures to reject the unit root hypothesis is not necessarily proof that a unit root is present with high probability.
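The posterior odds computed here take the standard form, with $M_{UR}$ the unit root model and $M_{TS}$ the (trend-)stationary alternative:

$$
\frac{P(M_{UR}\mid y)}{P(M_{TS}\mid y)} \;=\; \frac{p(y\mid M_{UR})}{p(y\mid M_{TS})} \times \frac{P(M_{UR})}{P(M_{TS})},
$$

i.e. posterior odds equal the Bayes factor times the prior odds; the reference priors in the abstract serve to pin down the marginal likelihoods $p(y \mid M)$ with minimal subjective input.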

15.
Adding multivariate stochastic volatility of a flexible form to large vector autoregressions (VARs) involving over 100 variables has proved challenging owing to computational considerations and overparametrization concerns. The existing literature works with either homoskedastic models or smaller models with restrictive forms for the stochastic volatility. In this paper, we develop composite likelihood methods for large VARs with multivariate stochastic volatility. These involve estimating large numbers of parsimonious models and then taking a weighted average across these models. We discuss various schemes for choosing the weights. In our empirical work involving VARs of up to 196 variables, we show that composite likelihood methods forecast much better than the most popular large VAR approach, which is computationally practical in very high dimensions: the homoskedastic VAR with Minnesota prior. We also compare our methods to various popular approaches that allow for stochastic volatility using medium and small VARs involving up to 20 variables. We find our methods to forecast appreciably better than these as well.
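A minimal sketch of the "estimate many parsimonious models, then take a weighted average" step. The exponential-of-log-score weighting below is one simple scheme, not necessarily one of those discussed in the paper; all numbers are simulated stand-ins.

```python
import numpy as np

rng = np.random.default_rng(3)

# Point forecasts for one variable from M parsimonious sub-models.
M = 8
forecasts = rng.standard_normal(M) * 0.2 + 2.0

# Weight each sub-model by the exponential of its historical log
# predictive score (a softmax); subtract the max for numerical stability.
log_scores = rng.uniform(-1.5, -0.5, M)           # stand-in historical scores
weights = np.exp(log_scores - log_scores.max())
weights /= weights.sum()

print(f"combined forecast: {weights @ forecasts:.3f}")
```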

16.
In two recent articles, Sims (1988) and Sims and Uhlig (1988/1991) question the value of much of the ongoing literature on unit roots and stochastic trends. They characterize the seeds of this literature as ‘sterile ideas’, the application of nonstationary limit theory as ‘wrongheaded and unenlightening’, and the use of classical methods of inference as ‘unreasonable’ and ‘logically unsound’. They advocate in place of classical methods an explicit Bayesian approach to inference that utilizes a flat prior on the autoregressive coefficient. DeJong and Whiteman adopt a related Bayesian approach in a group of papers (1989a,b,c) that seek to re-evaluate the empirical evidence from historical economic time series. Their results appear to be conclusive in turning around the earlier, influential conclusions of Nelson and Plosser (1982) that most aggregate economic time series have stochastic trends. So far these criticisms of unit root econometrics have gone unanswered; the assertions about the impropriety of classical methods and the superiority of flat prior Bayesian methods have been unchallenged; and the empirical re-evaluation of evidence in support of stochastic trends has been left without comment. This paper breaks that silence and offers a new perspective. We challenge the methods, the assertions, and the conclusions of these articles on the Bayesian analysis of unit roots. Our approach is also Bayesian but we employ what are known in the statistical literature as objective ignorance priors in our analysis. These are developed in the paper to accommodate explicitly time series models in which no stationarity assumption is made. Ignorance priors are intended to represent a state of ignorance about the value of a parameter and in many models are very different from flat priors. We demonstrate that in time series models flat priors do not represent ignorance but are actually informative (sic) precisely because they neglect generically available information about how autoregressive coefficients influence observed time series characteristics. Contrary to their apparent intent, flat priors unwittingly bias inferences towards stationary and i.i.d. alternatives where they do represent ignorance, as in the linear regression model. This bias helps to explain the outcome of the simulation experiments in Sims and Uhlig and some of the empirical results of DeJong and Whiteman. Under both flat priors and ignorance priors this paper derives posterior distributions for the parameters in autoregressive models with a deterministic trend and an arbitrary number of lags. Marginal posterior distributions are obtained by using the Laplace approximation for multivariate integrals along the lines suggested by the author (Phillips, 1983) in some earlier work. The bias towards stationary models that arises from the use of flat priors is shown in our simulations to be substantial; and we conclude that it is unacceptably large in models with a fitted deterministic trend, for which the expected posterior probability of a stochastic trend is found to be negligible even though the true data generating mechanism has a unit root. Under ignorance priors, Bayesian inference is shown to accord more closely with the results of classical methods. An interesting outcome of our simulations and our empirical work is the bimodal Bayesian posterior, which demonstrates that Bayesian confidence sets can be disjoint, just like classical confidence intervals that are based on asymptotic theory. 
The paper concludes with an empirical application of our Bayesian methodology to the Nelson-Plosser series. Seven of the 14 series show evidence of stochastic trends under ignorance priors, whereas under flat priors on the coefficients all but three of the series appear trend stationary. The latter result corresponds closely with the conclusion reached by DeJong and Whiteman (1989b) (based on truncated flat priors). We argue that the DeJong-Whiteman inferences are biased towards trend stationarity through the use of flat priors on the autoregressive coefficients, and that their inferences for some of the series (especially stock prices) are fragile (i.e. not robust) not only to the prior but also to the lag length chosen in the time series specification.
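For orientation, in the stationary AR(1) model $y_t = \rho y_{t-1} + \varepsilon_t$ with $|\rho| < 1$, the Jeffreys-type ignorance prior is

$$
\pi(\rho) \;\propto\; (1-\rho^2)^{-1/2},
$$

which diverges as $\rho \to 1$ rather than being flat. Phillips' priors for the general case, with no stationarity assumption and fitted deterministic trends, are derived in the paper itself and differ in detail from this simple special case.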

17.
In this paper, we introduce a threshold stochastic volatility model with explanatory variables. A Bayesian approach is taken to estimate the parameters of the proposed model via a Markov chain Monte Carlo (MCMC) algorithm. Gibbs sampling and Metropolis–Hastings sampling methods are used for drawing the posterior samples of the parameters and the latent variables. In the simulation study, the accuracy of the MCMC algorithm, the sensitivity of the algorithm to model assumptions, and the robustness of the posterior distribution under different priors are considered. Simulation results indicate that our MCMC algorithm converges fast and that the posterior distribution is robust under different priors and model assumptions. A real data example is analyzed to explain the asymmetric behavior of stock markets.
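A minimal simulation sketch of a threshold SV process of the broad type studied here (parameter values are illustrative; the paper's model also includes explanatory variables and is estimated by MCMC rather than simulated):

```python
import numpy as np

rng = np.random.default_rng(4)

# Threshold stochastic volatility: the log-volatility intercept switches
# with the sign of the previous return, generating asymmetric volatility.
T = 500
phi, sigma_eta = 0.95, 0.2
mu = (-0.5, -1.0)   # log-vol intercept after a negative / non-negative return

y = np.zeros(T)
h = np.zeros(T)
for t in range(1, T):
    regime = int(y[t - 1] >= 0.0)                 # threshold on lagged return
    h[t] = mu[regime] * (1 - phi) + phi * h[t - 1] \
        + sigma_eta * rng.standard_normal()
    y[t] = np.exp(h[t] / 2) * rng.standard_normal()

prev_neg = y[:-1] < 0
print(f"return std after negative vs non-negative returns: "
      f"{y[1:][prev_neg].std():.2f} vs {y[1:][~prev_neg].std():.2f}")
```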

18.
In the context of predicting the term structure of interest rates, we explore the marginal predictive content of real-time macroeconomic diffusion indexes extracted from a “data rich” real-time data set, when used in dynamic Nelson–Siegel (NS) models of the variety discussed in Svensson (NBER technical report, 1994; NSS) and Diebold and Li (Journal of Econometrics, 2006, 130, 337–364; DNS). Our diffusion indexes are constructed using principal component analysis with both targeted and untargeted predictors, with targeting done using the lasso and elastic net. Our findings can be summarized as follows. First, the marginal predictive content of real-time diffusion indexes is significant for the preponderance of the individual models that we examine. The exception to this finding is the post “Great Recession” period. Second, forecast combinations that include only yield variables result in our most accurate predictions, for most sample periods and maturities. In this case, diffusion indexes do not have marginal predictive content for yields and do not seem to reflect unspanned risks. This points to the continuing usefulness of DNS and NSS models that are purely yield driven. Finally, we find that the use of fully revised macroeconomic data may have an important confounding effect upon results obtained when forecasting yields, as prior research has indicated that diffusion indexes are often useful for predicting yields when constructed using fully revised data, regardless of whether forecast combination is used, or not. Nevertheless, our findings also underscore the potential importance of using machine learning, data reduction, and shrinkage methods in contexts such as term structure modeling.
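The "targeted predictors" step (screen the panel with the lasso, then extract components from the survivors) can be sketched as follows; the data are simulated, and the penalty and dimensions are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)

# Simulated stand-in for a large predictor panel, where only the first
# five series actually matter for the target y.
T, N = 150, 80
X = rng.standard_normal((T, N))
y = X[:, :5] @ rng.standard_normal(5) + rng.standard_normal(T)

# Step 1: lasso screening keeps predictors with nonzero coefficients.
keep = np.flatnonzero(Lasso(alpha=0.1).fit(X, y).coef_ != 0)

# Step 2: principal components from the retained ("targeted") predictors.
Z = X[:, keep]
Z = (Z - Z.mean(0)) / Z.std(0)
U, S, _ = np.linalg.svd(Z, full_matrices=False)
diffusion_index = U[:, 0] * S[0]   # first targeted diffusion index
print(f"{keep.size} predictors retained")
```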

19.
In this paper, we use survey data to analyze the accuracy, unbiasedness and efficiency of professional macroeconomic forecasts. We analyze a large panel of individual forecasts that has not previously been analyzed in the literature. We provide evidence on the properties of forecasts for all G7 countries and for four different macroeconomic variables. Our results show a high degree of dispersion of forecast accuracy across forecasters. We also find that there are large differences in the performances of forecasters, not only across countries but also across different macroeconomic variables. In general, the forecasts tend to be biased in situations where the forecasters have to learn about large structural shocks or gradual changes in the trend of a variable. Furthermore, while a sizable fraction of forecasters seem to smooth their GDP forecasts significantly, this does not apply to forecasts made for other macroeconomic variables.

20.
By using a dynamic factor model, we can substantially improve the reliability of real-time output gap estimates for the U.S. economy. First, we use a factor model to extract a series for the common component in GDP from a large panel of monthly real-time macroeconomic variables. This series is immune to revisions to the extent that revisions are due to unbiased measurement errors or idiosyncratic news. Second, our model is able to handle the unbalanced arrival of the data. This yields favorable nowcasting properties and thus starting conditions for the filtering of data into a trend and deviations from a trend. Combined with the method of augmenting data with forecasts prior to filtering, this greatly reduces the end-of-sample imprecision in the gap estimate. The increased precision has economic importance for real-time policy decisions and improves real-time inflation forecasts.
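The "augment the data with forecasts prior to filtering" idea can be sketched as follows; here the padding uses a crude univariate AR(1) on growth rates and an HP filter, whereas the paper pads with forecasts from its dynamic factor model (all numbers illustrative).

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(6)

# Trending stand-in series (e.g., log GDP) and a padding horizon.
T, pad = 200, 12
y = np.cumsum(0.5 + 0.8 * rng.standard_normal(T))

# Extend the series with mean-reverting AR(1) forecasts of its growth rate.
dy = np.diff(y)
mu, rho = dy.mean(), np.corrcoef(dy[:-1], dy[1:])[0, 1]
level, growth, tail = y[-1], dy[-1], []
for _ in range(pad):
    growth = mu + rho * (growth - mu)
    level += growth
    tail.append(level)

# HP-filter the padded series, then discard the padded part of the cycle:
# padding mitigates the usual end-of-sample imprecision of the filter.
cycle, trend = hpfilter(np.concatenate([y, tail]), lamb=1600)
print(f"end-of-sample gap estimate: {cycle[:T][-1]:.3f}")
```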
