Similar Literature
20 similar documents retrieved.
1.
A complete procedure for calculating the joint predictive distribution of future observations based on the cointegrated vector autoregression is presented. The large degree of uncertainty in the choice of cointegration vectors is incorporated into the analysis via the prior distribution. This prior has the effect of weighting the predictive distributions based on the models with different cointegration vectors into an overall predictive distribution. The ideas of Litterman [Mimeo, Massachusetts Institute of Technology, 1980] are adopted for the prior on the short-run dynamics of the process, resulting in a prior which depends on only a few hyperparameters. A straightforward numerical evaluation of the predictive distribution based on Gibbs sampling is proposed. The prediction procedure is applied to a seven-variable system with a focus on forecasting Swedish inflation.

2.
This paper develops methods of Bayesian inference in a sample selection model. The main feature of this model is that the outcome variable is only partially observed. We first present a Gibbs sampling algorithm for a model in which the selection and outcome errors are normally distributed. The algorithm is then extended to analyze models that are characterized by nonnormality. Specifically, we use a Dirichlet process prior and model the distribution of the unobservables as a mixture of normal distributions with a random number of components. The posterior distribution in this model can simultaneously detect the presence of selection effects and departures from normality. Our methods are illustrated using simulated data and an extract from the RAND health insurance experiment.
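As a rough illustration of the data-augmentation step on which such a Gibbs sampler typically relies, the Python sketch below draws the latent selection variable from its truncated-normal full conditional. It is a minimal sketch of one ingredient only, assuming a probit-style selection equation with unit error variance; the function name and data are hypothetical, not taken from the paper.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(5)

def draw_latent_selection(W, gamma, selected):
    """Data-augmentation step for a normal selection equation:
    draw z_i ~ N(w_i'gamma, 1) truncated to (0, inf) if unit i was
    selected and to (-inf, 0] otherwise."""
    mu = W @ gamma
    lo = np.where(selected, -mu, -np.inf)   # truncnorm bounds are standardized
    hi = np.where(selected, np.inf, -mu)
    return truncnorm.rvs(lo, hi, loc=mu, scale=1.0, random_state=rng)

W = np.column_stack([np.ones(5), rng.normal(size=5)])  # hypothetical design
sel = np.array([True, False, True, True, False])
print(draw_latent_selection(W, np.array([0.2, 0.5]), sel))
```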

3.
In this paper we use a Monte Carlo study to investigate the finite sample properties of the Bayesian estimator obtained by the Gibbs sampler and its classical counterpart (i.e. the MLE) for a stochastic frontier model. Our Monte Carlo results show that the MSE performance of the Gibbs sampling estimates is substantially better than that of the MLE.
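The Monte Carlo comparison logic is simple to reproduce. The sketch below contrasts the MSE of a posterior-mean estimator with that of the MLE in a toy normal-mean model, standing in for the full stochastic frontier specification (which would require a proper Gibbs sampler); all model settings here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_mean(y, n_draws=500, prior_var=10.0):
    """Posterior mean of mu under a N(0, prior_var) prior with known unit
    error variance. Conjugate, so one normal draw stands in for a Gibbs step."""
    n = len(y)
    post_var = 1.0 / (n + 1.0 / prior_var)
    post_mean = post_var * y.sum()
    return rng.normal(post_mean, np.sqrt(post_var), size=n_draws).mean()

def mle(y):
    return y.mean()

mu_true, n_obs, n_rep = 0.5, 30, 1000
err_bayes, err_mle = [], []
for _ in range(n_rep):
    y = rng.normal(mu_true, 1.0, size=n_obs)
    err_bayes.append((posterior_mean(y) - mu_true) ** 2)
    err_mle.append((mle(y) - mu_true) ** 2)

print(f"MSE Bayes: {np.mean(err_bayes):.5f}  MSE MLE: {np.mean(err_mle):.5f}")
```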

4.
In this paper, Bayesian estimation of log odds ratios over R × C and 2 × 2 × K contingency tables is considered, which is practically appealing when prior information is available. Likelihood functions for log odds ratios are derived for each table structure. A prior specification strategy is proposed. Posterior inferences are drawn using Gibbs sampling and the Metropolis–Hastings algorithm. Two numerical examples are given to illustrate the proposed methods.
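For a single 2 × 2 table, a random-walk Metropolis–Hastings sampler for the log odds ratio can be written in a few lines. The sketch below places independent normal priors on the two group logits; the table counts, prior variance, and proposal scale are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2x2 table: successes and trials in two groups.
y = np.array([30, 18])
n = np.array([100, 100])

def log_post(theta):
    # theta holds the logits of the two success probabilities; N(0, 10^2) priors.
    p = 1.0 / (1.0 + np.exp(-theta))
    loglik = np.sum(y * np.log(p) + (n - y) * np.log(1 - p))
    logprior = -0.5 * np.sum(theta ** 2) / 100.0
    return loglik + logprior

theta = np.zeros(2)
draws = []
for it in range(20000):
    prop = theta + rng.normal(0, 0.2, size=2)      # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    draws.append(theta[0] - theta[1])              # log odds ratio
log_or = np.array(draws[5000:])                    # drop burn-in
print(f"posterior mean log OR: {log_or.mean():.3f}, 95% interval: "
      f"({np.quantile(log_or, 0.025):.3f}, {np.quantile(log_or, 0.975):.3f})")
```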

5.
The stochastic search variable selection proposed by George and McCulloch (J Am Stat Assoc 88:881–889, 1993) is one of the most popular variable selection methods for linear regression models. Many modifications have been proposed in the literature to improve its computational efficiency. However, most of these modifications change its original Bayesian formulation, so the comparisons are not fair. This work focuses on how to improve the computational efficiency of stochastic search variable selection while leaving its original Bayesian formulation unchanged. The improvement is achieved by developing a new Gibbs sampling scheme different from that of George and McCulloch (1993). A remarkable feature of the proposed Gibbs sampling scheme is that it samples the regression coefficients from their posterior distributions in a componentwise manner, so that the expensive computation of the inverse of the information matrix, which is involved in the algorithm of George and McCulloch (1993), can be avoided. Moreover, since the original Bayesian formulation remains unchanged, stochastic search variable selection using the proposed Gibbs sampling scheme is as efficient as that of George and McCulloch (1993) in terms of assigning large probabilities to promising models. Some numerical results support these findings.
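A minimal sketch of the componentwise idea, assuming the standard continuous spike-and-slab prior: each regression coefficient is drawn from its univariate normal full conditional using a partial residual, so no information-matrix inverse appears, and each inclusion indicator is then a Bernoulli draw. The spike/slab standard deviations, prior inclusion probability, and data below are assumed values, not those of the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Simulated regression with a few active coefficients.
n, p = 100, 8
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0, 1.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(size=n)

sigma2 = 1.0                      # error variance held fixed for the sketch
tau0, tau1, w = 0.01, 10.0, 0.5   # spike/slab sds, prior inclusion probability

beta = np.zeros(p)
gamma = np.ones(p, dtype=int)
incl = np.zeros(p)
for it in range(2000):
    for j in range(p):
        # componentwise draw of beta_j | beta_{-j}, gamma_j, y: no matrix inverse
        r = y - X @ beta + X[:, j] * beta[j]          # partial residual
        v_j = (tau1 if gamma[j] else tau0) ** 2
        post_var = 1.0 / (X[:, j] @ X[:, j] / sigma2 + 1.0 / v_j)
        post_mean = post_var * (X[:, j] @ r) / sigma2
        beta[j] = rng.normal(post_mean, np.sqrt(post_var))
        # gamma_j | beta_j is Bernoulli with odds given by the two normal densities
        a = w * norm.pdf(beta[j], 0.0, tau1)
        b = (1 - w) * norm.pdf(beta[j], 0.0, tau0)
        gamma[j] = rng.uniform() < a / (a + b)
    if it >= 1000:
        incl += gamma
print("posterior inclusion probabilities:", np.round(incl / 1000, 2))
```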

6.
This paper studies the performance of factor-based forecasts using differenced and nondifferenced data. Approximate variances of the forecasting errors from the two forecasts are derived and compared. It is reported that the forecast using nondifferenced data tends to be more accurate than that using differenced data. This paper conducts simulations to compare the root mean squared forecasting errors of the two competing forecasts. Simulation results indicate that forecasting using nondifferenced data performs better. The advantage of using nondifferenced data is more pronounced when the forecasting horizon is long and the number of factors is large. This paper applies the two competing forecasting methods to 68 I(1) monthly US macroeconomic variables across a range of forecasting horizons and sampling periods. We also provide detailed forecasting analysis of US inflation and industrial production. We find that forecasts using nondifferenced data tend to outperform those using differenced data.

7.
VAR FORECASTING USING BAYESIAN VARIABLE SELECTION
This paper develops methods for automatic selection of variables in Bayesian vector autoregressions (VARs) using the Gibbs sampler. In particular, I provide computationally efficient algorithms for stochastic variable selection in generic linear and nonlinear models, as well as models of large dimensions. The performance of the proposed variable selection method is assessed in forecasting three major macroeconomic time series of the UK economy. Data-based restrictions on VAR coefficients can help restricted models improve upon their unrestricted counterparts in forecasting, and in many cases they compare favorably to shrinkage estimators.

8.
Macroeconomic forecasting using structural factor analysis
The use of a small number of underlying factors to summarize the information from a much larger set of information variables is one of the new frontiers in forecasting. In prior work, the estimated factors have not usually had a structural interpretation and the factors have not been chosen on a theoretical basis. In this paper we propose several variants of a general structural factor forecasting model, and use these to forecast certain key macroeconomic variables. We make the choice of factors more structurally meaningful by estimating factors from subsets of information variables, where these variables can be assigned to subsets on the basis of economic theory. We compare the forecasting performance of the structural factor forecasting model with that of a univariate AR model, a standard VAR model, and some non-structural factor forecasting models. The results suggest that our structural factor forecasting model performs significantly better in forecasting real activity variables, especially at short horizons.
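A minimal sketch of the block-wise idea: one factor is extracted per theory-based subset of variables via principal components. The two blocks and the data-generating process below are invented for illustration, not taken from the paper.

```python
import numpy as np

def pc_factor(X, n_factors=1):
    """Principal-components factor estimate from a T x N panel X."""
    Xs = (X - X.mean(0)) / X.std(0)          # standardize each series
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    return Xs @ Vt[:n_factors].T             # T x n_factors factor estimates

rng = np.random.default_rng(6)
T = 200
f = rng.normal(size=T)
# two hypothetical theory-based blocks, e.g. 'real activity' vs 'prices'
X_real = np.outer(f, rng.normal(size=10)) + rng.normal(size=(T, 10))
X_price = np.outer(rng.normal(size=T), rng.normal(size=10)) + rng.normal(size=(T, 10))
f_real = pc_factor(X_real)     # one structural factor per block
f_price = pc_factor(X_price)
print(f_real.shape, f_price.shape)
```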

9.
This paper is motivated by the recent interest in the use of Bayesian VARs for forecasting, even in cases where the number of dependent variables is large. In such cases factor methods have been traditionally used, but recent work using a particular prior suggests that Bayesian VAR methods can forecast better. In this paper, we consider a range of alternative priors which have been used with small VARs, discuss the issues which arise when they are used with medium and large VARs and examine their forecast performance using a US macroeconomic dataset containing 168 variables. We find that Bayesian VARs do tend to forecast better than factor methods and provide an extensive comparison of the strengths and weaknesses of various approaches. Typically, we find that the simple Minnesota prior forecasts well in medium and large VARs, which makes this prior attractive relative to computationally more demanding alternatives. Our empirical results show the importance of using forecast metrics based on the entire predictive density, instead of relying solely on those based on point forecasts.
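For reference, the sketch below constructs the prior mean and variances of a textbook Minnesota prior, which centers each equation on a random walk and shrinks coefficients more heavily at longer lags and on other variables' lags. The formula is one common variant and the hyperparameter values are conventional defaults, not necessarily those used in the paper.

```python
import numpy as np

def minnesota_prior(n_vars, n_lags, sigma, lam=0.2, theta=0.5):
    """Minnesota-style prior moments for VAR coefficients.

    Prior mean: 1 on each variable's own first lag (random-walk centering),
    0 elsewhere. Prior sd for the coefficient on variable j at lag l in
    equation i:
        lam / l                                if i == j (own lag)
        lam * theta * sigma[i] / (l * sigma[j])  otherwise,
    where sigma holds the equations' residual standard errors.
    """
    mean = np.zeros((n_vars, n_vars * n_lags))
    var = np.zeros_like(mean)
    for i in range(n_vars):
        for l in range(1, n_lags + 1):
            for j in range(n_vars):
                col = (l - 1) * n_vars + j
                if i == j:
                    if l == 1:
                        mean[i, col] = 1.0
                    sd = lam / l
                else:
                    sd = lam * theta * sigma[i] / (l * sigma[j])
                var[i, col] = sd ** 2
    return mean, var

m, v = minnesota_prior(n_vars=3, n_lags=2, sigma=np.ones(3))
print(m)
```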

10.
In this paper we compare classical econometrics, calibration and Bayesian inference in the context of the empirical analysis of factor demands. Our application is based on a popular flexible functional form for the firm's cost function, namely Diewert's Generalized Leontief function, and uses the well-known Berndt and Wood 1947–1971 KLEM data on the US manufacturing sector. We illustrate how the Gibbs sampling methodology can easily be used to calibrate parameter values and elasticities on the basis of previous knowledge from alternative studies on the same data, but with different functional forms. We rely on a system of mixed non-informative diffuse priors for some key parameters and informative tight priors for others. Within the Gibbs sampler, we employ rejection sampling to incorporate parameter restrictions which are suggested by economic theory but in general rejected by economic data. Our results show that values of those parameters that are given non-informative priors are almost equal to the standard SUR estimates, whereas differences emerge for those parameters to which we have assigned informative priors. Moreover, discrepancies are apparent in some crucial parameter estimates obtained with or without rejection sampling.
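The rejection-sampling device is easy to state in code: within a Gibbs step, draws from a full conditional are discarded until they satisfy the theory-implied restriction. The sketch below is generic, with a hypothetical sign restriction; it is not the paper's cost-function system.

```python
import numpy as np

rng = np.random.default_rng(3)

def draw_with_restriction(post_mean, post_sd, ok, max_tries=10000):
    """Redraw from a normal full conditional until the restriction `ok` holds.

    This is the rejection step: draws violating the theory-implied
    constraint are discarded and the conditional is sampled again.
    """
    for _ in range(max_tries):
        draw = rng.normal(post_mean, post_sd)
        if ok(draw):
            return draw
    raise RuntimeError("restriction region has negligible conditional mass")

# e.g. concavity of a cost function might require a coefficient to be <= 0
b = draw_with_restriction(post_mean=-0.1, post_sd=0.3, ok=lambda x: x <= 0.0)
print(b)
```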

11.
The main goal of both Bayesian model selection and classical hypothesis testing is to make inferences with respect to the state of affairs in a population of interest. The main differences between the two approaches are the explicit use of prior information by Bayesians, and the explicit use of null distributions by the classicists. Formalizing prior information in prior distributions is often difficult. In this paper two practical approaches to specifying prior distributions (encompassing priors and training data) are presented. The computation of null distributions is relatively easy. However, as will be illustrated, a straightforward interpretation of the resulting p-values is not always easy. Bayesian model selection can be used to compute posterior probabilities for each of a number of competing models. This provides an alternative to the currently prevalent testing of hypotheses using p-values. Both approaches are compared and illustrated using case studies. Each case study fits in the framework of the normal linear model, that is, analysis of variance and multiple regression.

12.
This paper compares the mixed-data sampling (MIDAS) and mixed-frequency VAR (MF-VAR) approaches to model specification in the presence of mixed-frequency data, e.g. monthly and quarterly series. MIDAS leads to parsimonious models which are based on exponential lag polynomials for the coefficients, whereas MF-VAR does not restrict the dynamics and can therefore suffer from the curse of dimensionality. However, if the restrictions imposed by MIDAS are too stringent, the MF-VAR can perform better. Hence, it is difficult to rank MIDAS and MF-VAR a priori, and their relative rankings are better evaluated empirically. In this paper, we compare their performances in a case which is relevant for policy making, namely nowcasting and forecasting quarterly GDP growth in the euro area on a monthly basis, using a set of about 20 monthly indicators. It turns out that the two approaches are more complements than substitutes, since MIDAS tends to perform better for horizons up to four to five months, whereas MF-VAR performs better for longer horizons, up to nine months.
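The parsimony of MIDAS comes from the exponential (Almon) lag polynomial: many high-frequency lags are weighted by a function of just two parameters. A minimal sketch, with illustrative parameter values:

```python
import numpy as np

def exp_almon_weights(n_lags, theta1, theta2):
    """Exponential Almon lag polynomial used by MIDAS: weights are positive
    and sum to one, so many high-frequency lags cost only two parameters."""
    k = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

# e.g. weight 12 monthly lags of an indicator for a quarterly target
w = exp_almon_weights(12, theta1=0.1, theta2=-0.05)
x_hf = np.random.default_rng(4).normal(size=12)  # latest 12 monthly observations
x_agg = w @ x_hf    # single regressor entering the quarterly equation
print(np.round(w, 3), x_agg)
```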

13.
We employ datasets for seven developed economies and consider four classes of multivariate forecasting models in order to extend and enhance the empirical evidence in the macroeconomic forecasting literature. The evaluation considers forecasting horizons of between one quarter and two years ahead. We find that the structural model, a medium-sized DSGE model, provides accurate long-horizon US and UK inflation forecasts. We strike a balance between being comprehensive and producing clear messages by applying meta-analysis regressions to 2,976 relative accuracy comparisons that vary with the forecasting horizon, country, model class and specification, number of predictors, and evaluation period. For point and density forecasting of GDP growth and inflation, we find that models with large numbers of predictors do not outperform models with 13–14 hand-picked predictors. Factor-augmented models and equal-weighted combinations of single-predictor mixed-data sampling regressions are a better choice for dealing with large numbers of predictors than Bayesian VARs.

14.
We introduce a mixed-frequency score-driven dynamic model for multiple time series where the score contributions from high-frequency variables are transformed by means of a mixed-data sampling weighting scheme. The resulting dynamic model delivers a flexible and easy-to-implement framework for the forecasting of low-frequency time series variables through the use of timely information from high-frequency variables. We verify the in-sample and out-of-sample performance of the model in an empirical study on the forecasting of U.S. headline inflation and GDP growth. In particular, we forecast monthly headline inflation using daily oil prices and quarterly GDP growth using a measure of financial risk. The forecasting results and other findings are promising. Our proposed score-driven dynamic model with mixed-data sampling weighting outperforms competing models in terms of both point and density forecasts.

15.
This paper presents Bayesian inference procedures for the continuous-time mover–stayer model applied to labour market transition data collected in discrete time. These methods allow us to derive the probability that the discrete-time model is embeddable in the continuous-time one. Special emphasis is placed on two alternative procedures, namely the importance sampling algorithm and a new Gibbs sampling algorithm. Transition intensities, proportions of stayers and functions of these parameters are then estimated with the Gibbs sampling algorithm for individual transition data from the French Labour Force Surveys collected over the period 1986–2000.

16.
Copulas provide an attractive approach to the construction of multivariate distributions with flexible marginal distributions and different forms of dependence. Of particular importance in many areas is the possibility of forecasting the tail dependences explicitly. Most of the available approaches are only able to estimate tail dependences and correlations via nuisance parameters, and cannot be used for either interpretation or forecasting. We propose a general Bayesian approach for modeling and forecasting tail dependences and correlations as explicit functions of covariates, with the aim of improving the copula forecasting performance. The proposed covariate-dependent copula model also allows for Bayesian variable selection from among the covariates of the marginal models, as well as the copula density. The copulas that we study include the Joe–Clayton copula, the Clayton copula, the Gumbel copula and the Student's t-copula. Posterior inference is carried out using an efficient MCMC simulation method. Our approach is applied to both simulated data and the S&P 100 and S&P 600 stock indices. The forecasting performance of the proposed approach is compared with those of other modeling strategies based on log predictive scores. A value-at-risk evaluation is also performed for the model comparisons.
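As a small illustration of why copulas let tail dependence be forecast explicitly: for the Clayton copula the lower tail dependence is lambda_L = 2^(-1/theta), so modeling lambda_L as a logistic function of covariates pins down theta. The link, covariates, and coefficients below are assumptions for illustration, not the paper's exact parameterization.

```python
import numpy as np

def clayton_lower_tail_dep(theta):
    """Lower tail dependence of the Clayton copula: lambda_L = 2**(-1/theta)."""
    return 2.0 ** (-1.0 / theta)

def tail_dep_from_covariates(x, beta):
    """Map x'beta through a logistic link into (0, 1); a hypothetical
    covariate-dependent parameterization of the tail dependence."""
    return 1.0 / (1.0 + np.exp(-(x @ beta)))

x = np.array([1.0, 0.3])          # intercept plus one covariate
beta = np.array([-1.0, 2.0])      # hypothetical coefficients
lam = tail_dep_from_covariates(x, beta)
theta = -np.log(2) / np.log(lam)  # invert lambda_L = 2**(-1/theta)
print(lam, theta, clayton_lower_tail_dep(theta))  # last value recovers lam
```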

17.
This paper proposes Bayesian forecasting in a vector autoregression using a democratic prior. This prior is chosen to match the predictions of survey respondents. In particular, the unconditional mean for each series in the vector autoregression is centered around long‐horizon survey forecasts. Heavy shrinkage toward the democratic prior is found to give good real‐time predictions of a range of macroeconomic variables, as these survey projections are good at quickly capturing endpoint shifts.
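The centering idea is mechanical: for a stationary VAR y_t = c + sum_l A_l y_{t-l} + e_t, the unconditional mean is mu = (I - sum_l A_l)^(-1) c, so setting c = (I - sum_l A_l) mu_survey centers the system on the long-horizon survey forecasts. A minimal sketch of that calculation only; the paper's prior is placed on the full system, not just the intercept.

```python
import numpy as np

def democratic_intercept(A_list, survey_mean):
    """Intercept that makes the VAR's implied unconditional mean equal the
    long-horizon survey forecast: c = (I - A_1 - ... - A_p) @ mu_survey."""
    n = len(survey_mean)
    A_sum = sum(A_list, np.zeros((n, n)))
    return (np.eye(n) - A_sum) @ survey_mean

A1 = np.array([[0.5, 0.1],     # hypothetical lag-1 coefficient matrix
               [0.0, 0.8]])
print(democratic_intercept([A1], np.array([2.0, 4.0])))
```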

18.
Empirical work in macroeconometrics has been mostly restricted to using vector autoregressions (VARs), even though there are strong theoretical reasons to consider general vector autoregressive moving averages (VARMAs). A number of articles in the last two decades have conjectured that this is because estimation of VARMAs is perceived to be challenging and proposed various ways to simplify it. Nevertheless, VARMAs continue to be largely dominated by VARs, particularly in terms of developing useful extensions. We address these computational challenges with a Bayesian approach. Specifically, we develop a Gibbs sampler for the basic VARMA, and demonstrate how it can be extended to models with time‐varying vector moving average (VMA) coefficients and stochastic volatility. We illustrate the methodology through a macroeconomic forecasting exercise. We show that in a class of models with stochastic volatility, VARMAs produce better density forecasts than VARs, particularly for short forecast horizons.

19.
Following previous literature, we define randomized inverse sampling for comparing two treatments with respect to a binary response as sampling that stops when a fixed total number of successes, irrespective of the treatments, has been observed. We have obtained elsewhere the asymptotic distributions for the counting variables involved and have shown them to be equivalent to the corresponding asymptotic distributions for multinomial sampling. In this paper, we first derive the same basic results using different techniques, and we then show how they give rise to genuinely novel procedures when translated into finite sample approximations. As the main example, a novel confidence interval for the logarithm of the odds ratio of two success probabilities can be constructed in the case of comparative randomized inverse sampling. Some advantages over standard multinomial sampling in terms of coverage probabilities are visible when no adjustment for cells with zero counts is applied; otherwise, the two sampling schemes appear to be fairly equivalent. This reassures us that, under certain circumstances, inverse sampling can be safely chosen over more traditional sampling schemes.
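For context, the standard multinomial-sampling baseline is the Wald interval for the log odds ratio; the paper's novel inverse-sampling interval is benchmarked against intervals of this kind. A minimal sketch of the baseline only, with hypothetical counts:

```python
import numpy as np
from scipy.stats import norm

def log_or_wald_ci(a, b, c, d, level=0.95):
    """Classical Wald interval for log(OR) from a 2x2 table [[a, b], [c, d]].
    This is the standard multinomial-sampling baseline, not the paper's
    novel inverse-sampling interval."""
    log_or = np.log(a * d / (b * c))
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = norm.ppf(0.5 + level / 2)
    return log_or - z * se, log_or + z * se

print(log_or_wald_ci(30, 70, 18, 82))
```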

20.
In this paper, like several studies in the recent literature, we employ a Bayesian approach to estimation and inference in models with endogeneity concerns, imposing weaker prior assumptions than complete excludability. When allowing for instrument imperfection of this type, the model is only partially identified, and as a consequence standard estimates obtained from the Gibbs simulations can be unacceptably imprecise. We thus describe a substantially improved 'semi-analytic' method for calculating parameter marginal posteriors of interest that requires only the well-mixing simulations associated with the identifiable model parameters and the form of the conditional prior. Our methods are applied in an illustrative application involving the impact of body mass index on earnings.
