Similar Literature (20 results)
1.
We employ a 10-variable dynamic structural general equilibrium model to forecast the US real house price index as well as its downturn in 2006:Q2. We also examine various Bayesian and classical time-series models in our forecasting exercise to compare to the dynamic stochastic general equilibrium model, estimated using Bayesian methods. In addition to standard vector-autoregressive and Bayesian vector autoregressive models, we also include the information content of either 10 or 120 quarterly series in some models to capture the influence of fundamentals. We consider two approaches for including information from large data sets: extracting common factors (principal components) in factor-augmented vector autoregressive or Bayesian factor-augmented vector autoregressive models, and Bayesian shrinkage in a large-scale Bayesian vector autoregressive model. We compare the out-of-sample forecast performance of the alternative models, using the average root mean squared error (RMSE) of the forecasts. We find that the small-scale Bayesian-shrinkage model (10 variables) outperforms the other models, including the large-scale Bayesian-shrinkage model (120 variables). In addition, when we use simple average forecast combinations, the combination forecast using the 10 best atheoretical models produces the minimum RMSE compared to each of the individual models, followed closely by the combination forecast using the 10 atheoretical models and the DSGE model. Finally, we use each model to forecast the downturn point in 2006:Q2, using the model estimated through 2005:Q2. Only the dynamic stochastic general equilibrium model forecasts the downturn with any accuracy, suggesting that forward-looking, microfounded dynamic stochastic general equilibrium models of the housing market may prove crucial in forecasting turning points.
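The comparison machinery in this abstract (average RMSE plus a simple equal-weight forecast combination) can be sketched in a few lines. The model names and numbers below are hypothetical, not the paper's data:

```python
import math

def rmse(actual, forecast):
    """Root mean squared error over a sequence of out-of-sample forecasts."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

# Hypothetical out-of-sample house-price forecasts from three competing models.
actual = [1.0, 1.2, 1.1, 1.3]
model_forecasts = {
    "bvar_small": [1.0, 1.1, 1.1, 1.2],
    "favar":      [0.9, 1.3, 1.0, 1.4],
    "dsge":       [1.1, 1.2, 1.2, 1.3],
}

# Rank models by RMSE, as in the paper's comparison exercise.
scores = {name: rmse(actual, f) for name, f in model_forecasts.items()}
best = min(scores, key=scores.get)

# Simple (equal-weight) forecast combination across all models.
combo = [sum(fs) / len(fs) for fs in zip(*model_forecasts.values())]
combo_rmse = rmse(actual, combo)
```

In this toy example the equal-weight combination happens to beat every individual model, echoing the abstract's finding that simple average combinations perform well.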

2.
This article evaluates the appropriateness of a variety of existing forecasting techniques (17 methods) for providing accurate and statistically significant forecasts of the gold price. We report the results from the nine most competitive techniques. Special consideration is given to the ability of these techniques to provide forecasts which outperform the random walk (RW), as we noticed that certain multivariate models (which included prices of silver, platinum, palladium and rhodium, besides gold) were also unable to outperform the RW in this case. Interestingly, the results show that none of the forecasting techniques is able to outperform the RW at horizons of 1 and 9 steps ahead, and on average, the exponential smoothing model provides the best forecasts in terms of the lowest root mean squared error over the 24-month forecasting horizon. Moreover, we find that the univariate models used in this article are able to outperform the Bayesian autoregressive and Bayesian vector autoregressive models, with exponential smoothing reporting statistically significant results in comparison with the former models, and with the classical autoregressive and vector autoregressive models in most cases.
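The two benchmarks at the centre of this comparison are easy to state in code: the random walk's "no-change" forecast and simple exponential smoothing. A minimal sketch; the price series and the smoothing constant `alpha` are invented for illustration:

```python
def random_walk_forecast(history):
    """No-change forecast: the next price equals the last observed price."""
    return history[-1]

def ses_forecast(history, alpha=0.3):
    """Simple exponential smoothing; the one-step-ahead forecast is the
    final smoothed level, updated recursively through the sample."""
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

# Hypothetical monthly gold-price history (illustrative numbers only).
prices = [1200.0, 1210.0, 1195.0, 1220.0, 1230.0]

rw = random_walk_forecast(prices)
ses = ses_forecast(prices, alpha=0.3)
```

Which benchmark wins in practice is exactly the empirical question the article studies; the sketch only fixes the mechanics.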

3.
The purpose of this paper is to analyze and compare the results of applying classical and Bayesian methods to testing for a unit root in time series with a single endogenous structural break. We utilize a data set of macroeconomic time series for the Mexican economy similar to the Nelson–Plosser one. Under both approaches, we make use of innovational outlier models allowing for an unknown break in the trend function. Classical inference relies on bootstrapped critical values, in order to make inference comparable to the finite sample Bayesian one. Results from both approaches are discussed and compared.

4.
This paper tests whether housing prices in the five segments of the South African housing market, namely large-middle, medium-middle, small-middle, luxury and affordable, exhibit non-linearity, based on smooth transition autoregressive (STAR) models estimated using quarterly data from 1970:Q2 to 2009:Q3. Findings point to overwhelming evidence of non-linearity in these five segments based on in-sample evaluation of the linear and non-linear models. We next provide further support for non-linearity by comparing one- to four-quarters-ahead out-of-sample forecasts of the non-linear time-series model with those of the classical and Bayesian versions of the linear autoregressive (AR) models for each of these segments, over the out-of-sample horizon 2001:Q1 to 2009:Q3, using the in-sample period 1970:Q2 to 2000:Q4. Our results indicate that, barring the one-, two- and four-steps-ahead forecasts of the small segment, the non-linear model always outperforms the linear models. In addition, given the existence of a strong causal relationship amongst the house prices of the five segments, the multivariate versions of the linear (classical and Bayesian) and STAR (MSTAR) models were also estimated. The MSTAR model always outperformed the best-performing univariate and multivariate linear models. Thus, our results highlight the importance of accounting for non-linearity, as well as for the possible interrelationship amongst the variables under consideration, especially for forecasting.
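The defining ingredient of the STAR family is a smooth transition function that moves the model continuously between two linear regimes. A minimal sketch of a logistic STAR (LSTAR) transition; all parameter values below are hypothetical, not estimates from the paper:

```python
import math

def logistic_transition(s, gamma, c):
    """Transition function G(s; gamma, c) of an LSTAR model.
    gamma controls how abruptly the model switches regimes; c is the threshold."""
    return 1.0 / (1.0 + math.exp(-gamma * (s - c)))

def lstar_fitted(y_lag, s, phi1, phi2, gamma, c):
    """Fitted value of a toy two-regime LSTAR(1):
    y_t = phi1 * y_{t-1} + G(s_t) * phi2 * y_{t-1}."""
    g = logistic_transition(s, gamma, c)
    return phi1 * y_lag + g * phi2 * y_lag

# Far below the threshold G is near 0 (first regime); far above, near 1.
low = logistic_transition(-10.0, gamma=2.0, c=0.0)
high = logistic_transition(10.0, gamma=2.0, c=0.0)
mid = logistic_transition(0.0, gamma=2.0, c=0.0)
fit = lstar_fitted(1.0, s=2.0, phi1=0.5, phi2=0.3, gamma=2.0, c=0.0)
```

As gamma grows large, G approaches a step function and the LSTAR collapses to a sharp threshold model, which is why STAR models nest abrupt-switching behaviour as a limiting case.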

5.
A Bayesian posterior odds approach is used to distinguish between different error correlation structures in dynamic linear regression models. Recent classical results are provided with a Bayesian interpretation, and a small empirical example illustrates the approach.

6.
The production value of the integrated circuit industry has the following attributes: a short product life cycle, numerous influencing factors in the market, and rapidly changing technology. These features hamper the precision of forecasting the output of the integrated circuit industry using traditional statistical methods. The grey forecast model can overcome these difficulties, working with a small sample set and ambiguous information. This study evaluates original and Bayesian grey forecast models for the integrated circuit industry. The Bayesian method uses Markov chain Monte Carlo techniques to estimate the parameters of the grey differential equation. The predictive value of integrated circuits in Taiwan was evaluated along with the mean absolute percentage error. The parameters and efficiency of the three forecast models were compared and summary outcomes reported. The Bayesian grey model proved the most accurate of these models.
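The workhorse grey forecast model is GM(1,1); assuming that is the "original" grey model meant here, a minimal pure-Python sketch looks as follows. The data series is invented, and the least-squares fit of the whitened equation is the textbook formulation, not the paper's Bayesian variant:

```python
import math

def gm11_forecast(x0, steps=1):
    """GM(1,1) grey forecast: fit (a, b) by least squares on the whitened
    equation x0[k] + a*z1[k] = b, then extrapolate the accumulated series.
    Assumes the development coefficient a is nonzero."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]               # accumulated (AGO) series
    z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    y, m = x0[1:], n - 1
    # Solve the 2x2 normal equations for theta = (a, b), design rows (-z, 1).
    szz = sum(z * z for z in z1)
    sz = sum(z1)
    szy = sum(z * v for z, v in zip(z1, y))
    sy = sum(y)
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det    # development coefficient
    b = (szz * sy - sz * szy) / det  # grey input
    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    # Forecasts are first differences of the extrapolated accumulated series.
    return [x1_hat(n + h) - x1_hat(n + h - 1) for h in range(steps)]

# Hypothetical production values growing at roughly 10% per period.
forecasts = gm11_forecast([100.0, 110.0, 121.0, 133.1], steps=2)
```

On near-exponential data like this, GM(1,1) tracks the growth path closely, which is exactly the small-sample setting the grey model is designed for.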

7.
This paper studies whether and how US shocks impact the OECD countries in the case of a simulated crisis. Using Bayesian estimation methods, we extract constrained factors (global, country- and variable-type specific) from a sample of 153 economic and financial OECD variables spanning 1980–2008. These factors are the transmission channels through which national shocks spread to other countries, as in a pandemic. The interpretable Bayesian factors are used to estimate FAVAR models. Our main findings suggest that differences exist in the contagion effects. This implies that no generalizations can be made for OECD countries, even those of equal economic size and in the same geographic region. In addition, our results show that a large portion of the variance of domestic economic variables is explained by global factors, and that the interest rate shock appears to play an important role in the spillover mechanism from the United States to the rest of the world. More precisely, Australia, the United Kingdom and the Scandinavian countries appear to be the most sensitive to US shocks.

8.
This article uses a small set of variables – real GDP, the inflation rate and the short-term interest rate – and a rich set of models – atheoretical (time series) and theoretical (structural), linear and nonlinear, as well as classical and Bayesian models – to consider whether we could have predicted the recent downturn of the US real GDP. Comparing the performance of the models to the benchmark random-walk model by root mean-square errors, the two structural (theoretical) models, especially the nonlinear model, perform well on average across all forecast horizons in our ex post, out-of-sample forecasts, although at specific forecast horizons certain nonlinear atheoretical models perform the best. The nonlinear theoretical model also dominates in our ex ante, out-of-sample forecast of the Great Recession, suggesting that developing forward-looking, microfounded, nonlinear, dynamic stochastic general equilibrium models of the economy may prove crucial in forecasting turning points.

9.
In this study, we consider the effects of state alcohol policies on motor vehicle fatalities for children. While numerous studies have considered the effects of such policies on motor vehicle fatalities for the overall population, for teens, and for the elderly, their effects on fatalities among children in particular have not previously been studied. We use state-level cross-sectional time series data for 1982–2002. The dependent variable of interest is fatalities among child motor vehicle occupants (CMVO). Separate models are estimated for 0- to 4-yr-olds, 5- to 9-yr-olds, and 10- to 15-yr-olds, as well as for fatalities occurring during the day versus the night. We find that the number of fatalities among CMVO is strongly correlated with alcohol use measured at the state level, and that administrative license revocation policies and higher beer tax rates appear to consistently reduce such fatalities. For two of the three age groups, beer tax rates appear to reduce fatalities during the night rather than the day. However, zero tolerance and blood alcohol concentration limit laws do not seem to have any statistically significant effects on fatalities. (JEL I18, J13)

10.
The aim of this paper is to study the spatial patterns in the Spanish local tax system. The three most relevant taxes are analyzed: the property tax, the motor vehicle tax and the building activities tax, which jointly represent 80% of tax revenue at the local level in Spain. Using spatial econometrics procedures, three alternative weight specifications to define competitors are explored: contiguity, distance and a combination of economic and geographical characteristics. After carrying out an exploratory spatial analysis, the results of the estimation of spatial lag and spatial error models confirm positive spatial autocorrelation for the property tax and the building activities tax, with an order of magnitude between 0.3 and 0.5. By contrast, the motor vehicle tax does not exhibit a significant spatial pattern.
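The spatial-lag term in such models is Wy, the neighbour-average of the outcome built from a row-standardized weight matrix, which then enters the spatial autoregression y = rho*Wy + X*beta + eps. A toy sketch with a hypothetical four-municipality contiguity structure and invented tax rates:

```python
def row_standardize(W):
    """Row-standardize a binary contiguity matrix so each row sums to 1."""
    out = []
    for row in W:
        s = sum(row)
        out.append([w / s if s else 0.0 for w in row])
    return out

def spatial_lag(W, y):
    """Spatial lag Wy: for each unit, the weighted average of its neighbours' y."""
    return [sum(w * v for w, v in zip(row, y)) for row in W]

# Toy contiguity structure for four municipalities: 0-1, 1-2 and 2-3 are neighbours.
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
Ws = row_standardize(W)

# Hypothetical property-tax rates; Ws @ rates is the regressor whose
# coefficient rho measures the spatial autocorrelation reported above.
rates = [0.50, 0.60, 0.55, 0.70]
lag = spatial_lag(Ws, rates)
```

Swapping the contiguity matrix for a distance-based or economic-similarity matrix changes only the construction of `W`, which is exactly the sensitivity the paper explores with its three weight specifications.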

11.
Rangan Gupta, Applied Economics, 2013, 45(33), 4677–4697
This article considers the ability of large-scale (involving 145 fundamental variables) time-series models, estimated by dynamic factor analysis and Bayesian shrinkage, to forecast real house price growth rates of the four US census regions and the aggregate US economy. Besides the standard Minnesota prior, we also use additional priors that constrain the sum of the coefficients of the VAR models. We compare 1- to 24-months-ahead forecasts of the large-scale models over an out-of-sample horizon of 1995:01–2009:03, based on an in-sample period of 1968:02–1994:12, relative to a random walk model, a small-scale VAR model comprising just the five real house price growth rates, and a medium-scale VAR model containing 36 of the 145 fundamental variables besides the five real house price growth rates. In addition to the forecast comparison exercise across small-, medium- and large-scale models, we also look at the ability of the ‘optimal’ model (i.e. the model that produces the minimum average mean squared forecast error) for a specific region in predicting ex ante real house prices (in levels) over the period 2009:04 to 2012:02. Factor-based models (classical or Bayesian) perform best for the North East, Mid-West and West census regions and the aggregate US economy, and equally well to a small-scale VAR for the South region. The ‘optimal’ factor models also tend to predict the downward trend in the data when we conduct an ex ante forecasting exercise. Our results highlight the importance of the information content in a large number of fundamentals in predicting house prices accurately.
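The Minnesota prior mentioned above shrinks cross-variable and higher-lag VAR coefficients harder than own first lags. A sketch of the usual prior-variance formula; the hyperparameters `lam` and `theta` and the residual scales are illustrative defaults, not the paper's settings:

```python
def minnesota_prior_var(i, j, lag, sigma, lam=0.2, theta=0.5):
    """Prior variance of the lag-`lag` coefficient of variable j in equation i
    under a standard Minnesota prior: own lags get the loosest prior,
    cross lags are tightened by theta, and all variances decay as 1/lag**2.
    sigma holds residual standard deviations from univariate ARs."""
    if i == j:
        return (lam / lag) ** 2
    return (lam * theta / lag) ** 2 * (sigma[i] / sigma[j]) ** 2

# Hypothetical residual scales for a two-variable system.
sigma = [1.0, 2.0]
own_l1 = minnesota_prior_var(0, 0, 1, sigma)    # own first lag: loosest prior
cross_l1 = minnesota_prior_var(0, 1, 1, sigma)  # cross first lag: tighter
own_l4 = minnesota_prior_var(0, 0, 4, sigma)    # own fourth lag: tighter still
```

Because the prior mean is typically a random walk (unit own first lag, zero elsewhere), this shrinkage is what lets a 150-variable Bayesian VAR remain estimable at all.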

12.
A theory of cooperative choice under incomplete information is developed in which agents possess private information at the time of contracting and have agreed on a utilitarian “standard of evaluation” governing choices under complete information. The task is to extend this standard to situations of incomplete information. Our first main result generalizes Harsanyi's (J. Polit. Econ. 63 (1955) 309) classical result to situations of incomplete information, assuming that group preferences satisfy Bayesian Coherence and Interim Pareto Dominance. These axioms are mutually compatible if and only if a common prior exists. We argue that this result partly resolves the impossibility of Bayesian preference aggregation under complete information.

13.
The issue of forming a combination of estimators, which may come from quantitative predictive models or from expert opinion, is discussed, and a case is made for linear combinations. Two methods are presented: one aimed at an optimal, minimum-variance combination, and the other at utilizing linear weights with a directly meaningful probabilistic interpretation. Bayesian estimation methods are used in both cases.
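For two unbiased forecasts, the minimum-variance linear combination has a closed-form weight (the classic Bates–Granger result). A sketch with hypothetical error variances for a model forecast and an expert opinion:

```python
def min_variance_weight(var1, var2, cov12):
    """Weight on forecast 1 in the minimum-variance combination
    w*f1 + (1-w)*f2 of two unbiased forecasts (Bates-Granger)."""
    return (var2 - cov12) / (var1 + var2 - 2.0 * cov12)

def combine(f1, f2, w):
    return w * f1 + (1.0 - w) * f2

# Hypothetical: model forecast has error variance 1, expert has 4, uncorrelated.
w = min_variance_weight(var1=1.0, var2=4.0, cov12=0.0)
combined = combine(2.0, 3.0, w)

# Variance of the combined error: below the better individual variance.
combined_var = w ** 2 * 1.0 + (1 - w) ** 2 * 4.0
```

With uncorrelated errors the combination variance falls below that of either input, which is the core argument for pooling model output with expert opinion rather than picking one.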

14.
We investigate the ability of small- and medium-scale Bayesian VARs (BVARs) to produce accurate macroeconomic (output and inflation) and credit (loans and lending rate) out-of-sample forecasts during the latest Greek crisis. We implement recently proposed Bayesian shrinkage techniques based on Bayesian hierarchical modeling, and we evaluate the information content of 42 monthly macroeconomic and financial variables in terms of point and density forecasting. Alternative competing models employed in the study include Bayesian autoregressions (BARs) and time-varying parameter VARs with stochastic volatility, among others. The empirical results reveal that, overall, medium-scale BVARs enriched with economy-wide variables can considerably and consistently improve short-term inflation forecasts. The information content of financial variables, on the other hand, proves to be beneficial for the lending rate density forecasts across forecasting horizons. Both of the above-mentioned results are robust to alternative specification choices, while for the rest of the variables smaller-scale BVARs, or even univariate BARs, produce superior forecasts. Finally, we find that the popular, data-driven shrinkage methods produce, on average, inferior forecasts compared to the theoretically grounded method considered here.

15.
A variety of methods, including vector autoregression (Bayesian and non-Bayesian) and neural networks, are used to construct models of the UK economy, and their forecasting performance is compared.

16.
In this paper, we develop a methodology for forecasting key macroeconomic indicators, based on business survey data. We estimate a large set of models, using an autoregressive specification, with regressors selected from business and household survey data. Our methodology is based on the Bayesian averaging of classical estimates method. Additionally, we examine the impact of deterministic and stochastic seasonality of the business survey time series on the outcome of the forecasting process. We propose an intuitive procedure for incorporating both types of seasonality into the forecasting process. After estimating the specified models, we check the accuracy of the forecasts.

17.
Using Bayesian methods, we re-examine the empirical evidence from Ben-David, Lumsdaine, and Papell (Empir Econ 28:303–319, 2003) regarding structural breaks in the long-run growth path of real output series for a number of OECD countries. Our Bayesian framework allows the number and pattern of structural changes in trend and variance to be endogenously determined. We find little evidence of postwar growth slowdowns across countries, and smaller output volatility for most of the developed countries after the end of World War II. Our empirical findings are consistent with neoclassical growth models, which predict increasing growth over the long run. The majority of the countries we analyze have grown faster in the postwar era as opposed to the period before the first break.

18.
The paper develops a Small Open Economy New Keynesian DSGE-VAR (SOENKDSGE-VAR) model of the South African economy, characterised by incomplete pass-through of exchange rate changes, external habit formation, partial indexation of domestic prices and wages to past inflation, and staggered price and wage setting. The model is estimated using Bayesian techniques on data from 1980Q1 to 2003Q2, and then used to forecast output, inflation and the nominal short-term interest rate one- to eight-quarters-ahead over an out-of-sample horizon of 2003Q3 to 2010Q4. When the forecast performance of the SOENKDSGE-VAR model is compared with an independently estimated DSGE model, the classical VAR and six alternative BVAR models, we find that, barring the BVAR model based on the SSVS prior on both the VAR coefficients and the error covariance, the SOENKDSGE-VAR model performs competitively with, if not better than, all the other VAR models.

19.
We use available methods for testing macro models to evaluate a model of China over the period from Deng Xiaoping’s reforms up until the crisis period. Bayesian ranking methods are heavily influenced by controversial priors on the degree of price/wage rigidity. When the overall models are tested by Likelihood or Indirect Inference methods, the New Keynesian model is rejected in favour of one with a fair-sized competitive product market sector. This model behaves quite a lot more ‘flexibly’ than the New Keynesian model.

20.
We investigate model uncertainty associated with predictive regressions employed in asset-return forecasting research. We use simple combination and Bayesian model averaging (BMA) techniques to compare the performance of these forecasting approaches over short- vs. long-run horizons of S&P 500 monthly excess returns. Simple averaging involves an equally weighted average of the forecasts from alternative combinations of factors used in the predictive regressions, whereas BMA involves computing the predictive probability that each model is the true model and uses these predictive probabilities as weights in combining the forecasts from different models. From a given set of multiple factors, we evaluate all possible pricing models to the extent to which they describe the data, as dictated by the posterior model probabilities. We find that, while simple averaging compares quite favorably to forecasts derived from a random walk model with drift (using a 10-year out-of-sample iterative period), BMA outperforms simple averaging over longer forecast horizons. Moreover, we find further evidence of the latter when the predictive Bayesian model includes shorter, rather than longer, lags of the predictive factors. This study illustrates the power of BMA in suppressing model uncertainty through model as well as parameter shrinkage, especially when applied to longer predictive horizons.
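BMA weights are posterior model probabilities; a common shortcut approximates them from BIC differences, assuming equal prior model probabilities. A sketch contrasting the BMA combination with the equal-weight average; the BIC values and forecasts are hypothetical:

```python
import math

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values:
    p(M_i | data) is proportional to exp(-0.5 * (BIC_i - BIC_min)),
    assuming equal prior probabilities across models."""
    bmin = min(bics)
    raw = [math.exp(-0.5 * (b - bmin)) for b in bics]
    total = sum(raw)
    return [r / total for r in raw]

def bma_forecast(forecasts, weights):
    """Posterior-probability-weighted forecast combination."""
    return sum(w * f for w, f in zip(weights, forecasts))

# Hypothetical BICs for three predictive regressions on excess returns.
bics = [100.0, 102.0, 110.0]
w = bma_weights(bics)

fc = bma_forecast([0.5, 0.3, -0.2], w)        # BMA combination
simple = sum([0.5, 0.3, -0.2]) / 3            # equal-weight average
```

The weights decay exponentially in the BIC gap, so the worst-fitting model is effectively shrunk out of the combination, which is the "model shrinkage" the abstract credits for BMA's long-horizon performance.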


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号