Similar Literature (20 records found)
1.
This paper presents a methodology for producing a probability forecast of a turning point in the U.S. economy using Composite Leading Indicators. The methodology is based on classical statistical decision theory and uses an information-theoretic measure to produce a probability. It is flexible, accommodating as many historical data points as desired. The methodology is applied to producing probability forecasts of a downturn in the U.S. economy over the 1970–1990 period. Four probability forecasts are produced using different amounts of information, and their performance is evaluated against the actual downturn points using scores that measure accuracy, calibration, and resolution. An indirect comparison of these forecasts with Diebold and Rudebusch's sequential probability recursion is also presented. The performances of our best two models are statistically different from that of the three-consecutive-month-decline model and indistinguishable from that of the best probit model. The probit model, however, is more conservative in its predictions than our two models.
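
The scores named above can be illustrated with the Murphy decomposition of the Brier score, which splits mean squared probability error into calibration (reliability) and resolution terms. The sketch below is a generic illustration of that decomposition, not the paper's code; the forecast and outcome series are simulated placeholders.

```python
# Murphy decomposition of the Brier score: accuracy = reliability
# (miscalibration penalty) - resolution (sharpness reward) + uncertainty.
import numpy as np

def brier_decomposition(probs, outcomes, n_bins=10):
    """Return (brier, reliability, resolution, uncertainty)."""
    probs, outcomes = np.asarray(probs), np.asarray(outcomes)
    brier = np.mean((probs - outcomes) ** 2)
    base_rate = outcomes.mean()
    uncertainty = base_rate * (1 - base_rate)
    bins = np.clip((probs * n_bins).astype(int), 0, n_bins - 1)
    reliability = resolution = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            w = mask.mean()                  # share of forecasts in this bin
            p_bar = probs[mask].mean()       # mean forecast probability in bin
            o_bar = outcomes[mask].mean()    # observed frequency in bin
            reliability += w * (p_bar - o_bar) ** 2
            resolution += w * (o_bar - base_rate) ** 2
    return brier, reliability, resolution, uncertainty

# Hypothetical example: monthly downturn probabilities vs. realised downturns.
rng = np.random.default_rng(0)
p = rng.uniform(size=240)
y = (rng.uniform(size=240) < p).astype(float)    # well calibrated by design
print(brier_decomposition(p, y))
```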

2.
A comparison of the point forecasts and the probability distributions of inflation and output growth made by individual respondents to the US Survey of Professional Forecasters indicates that the two sets of forecasts are sometimes inconsistent. We evaluate a number of possible explanations, and find that not all forecasters update their histogram forecasts as new information arrives. This is supported by the finding that the point forecasts are more accurate than the histograms in terms of first-moment prediction.

3.
We propose to produce accurate point and interval forecasts of exchange rates by combining a number of well-known fundamentals-based panel models. The models are combined using weights computed in a linear mixture-of-experts framework, where the weights are determined by the log scores assigned to each model's predictive performance. As well as model uncertainty, we take potential structural breaks in the parameters of the models into consideration. In an application to quarterly data for ten currencies (including the Euro) over the period 1990Q1–2008Q4, we show that the ensemble models produce mean and interval forecasts that outperform the equal-weight benchmark and, to a lesser extent, the random-walk benchmark. The gain from combining is particularly pronounced for longer-horizon central forecasts, but much less so for interval forecasts. Probabilities of the exchange rate rising or falling calculated from the combined (ensemble) model correspond well with known events and potentially provide a useful measure of uncertainty about whether the exchange rate is likely to rise or fall.
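
A minimal sketch of the log-score weighting idea follows: each model's weight in the linear pool is proportional to the exponential of its cumulative log predictive score. The three-model setup and the data are hypothetical, and this is not the authors' implementation (which also handles structural breaks).

```python
# Weight competing models by exponentiated cumulative log predictive scores,
# then combine their forecasts as a linear pool.
import numpy as np

def log_score_weights(log_scores):
    """log_scores: (n_models, n_periods) array of log predictive densities."""
    cum = log_scores.sum(axis=1)
    cum -= cum.max()                  # stabilise the exponentials
    w = np.exp(cum)
    return w / w.sum()

def pooled_forecast(point_forecasts, weights):
    """Linear pool of the models' point forecasts."""
    return weights @ point_forecasts

# Hypothetical: three panel models scored over 20 quarters.
rng = np.random.default_rng(1)
ls = rng.normal(-1.0, 0.3, size=(3, 20))
w = log_score_weights(ls)
print(w.round(3), pooled_forecast(rng.normal(size=3), w))
```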

4.
The aims of this study are (i) to identify the main determinants of the demand for French Premiere Division football matches using all matches played during the 1997/1998 season, (ii) to estimate a team-specific probability of success, and (iii) to propose an updating process for the intra-match winning probability. The methodology is tested empirically on an out-of-sample data set comprising matches of the 1998/1999 season. The results show that football appears to be an inferior product affected by both socio-economic and football variables, and that the main football variables have only tenuous explanatory power for the final outcome of a given match.

5.
Much research has been devoted to the Delphi technique, but very little substantive work has been done on the subject of Delphi accuracy. The purpose of this effort was to test the accuracy of Delphi versus the conference method in making long-range forecasts. College students were used to form Delphi and conference groups that predicted the point spreads of college football games far in advance of play. The results substantiate the claim that Delphi outperforms conference methods in terms of accuracy for long-range forecasting.

6.
Ensemble methods can be used to construct a forecast distribution from a collection of point forecasts. They are used extensively in meteorology, but have received little direct attention in economics. In a real-time analysis of the ECB’s Survey of Professional Forecasters, we compare ensemble methods to histogram-based forecast distributions of GDP growth and inflation in the Euro Area. We find that ensembles perform very similarly to histograms, while being simpler to handle in practice. Given the wide availability of surveys that collect point forecasts but not histograms, these results suggest that ensembles deserve further investigation in economics.
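
One common way to build such an ensemble, sketched below under illustrative assumptions, is to "dress" each point forecast with a Gaussian kernel and average the resulting densities; the bandwidth and the panel of forecasts are invented, and the paper's exact specification may differ.

```python
# Turn a panel of survey point forecasts into a predictive density by
# averaging Gaussian kernels centred on each point forecast.
import numpy as np
from scipy.stats import norm

def ensemble_density(points, bandwidth):
    points = np.asarray(points)
    def pdf(x):
        # Average of one Gaussian per forecaster, evaluated at x.
        return norm.pdf(x, loc=points[:, None], scale=bandwidth).mean(axis=0)
    return pdf

# Hypothetical SPF-style panel of GDP growth point forecasts (percent).
panel = [1.8, 2.0, 2.1, 2.3, 1.5, 2.6]
pdf = ensemble_density(panel, bandwidth=0.4)
realised = 1.9
print("log score:", np.log(pdf(np.array([realised]))[0]))
```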

7.
We investigate the model uncertainty associated with predictive regressions employed in asset-return forecasting research. We use simple combination and Bayesian model averaging (BMA) techniques to compare the performance of these forecasting approaches over short- versus long-run horizons for S&P 500 monthly excess returns. Simple averaging is an equally weighted average of the forecasts from alternative combinations of factors used in the predictive regressions, whereas BMA computes the predictive probability that each model is the true model and uses these probabilities as weights in combining the forecasts from the different models. From a given set of multiple factors, we evaluate all possible pricing models according to how well they describe the data, as dictated by the posterior model probabilities. We find that, while simple averaging compares quite favorably to forecasts derived from a random walk model with drift (using a 10-year out-of-sample iterative period), BMA outperforms simple averaging at longer forecast horizons. We find further evidence of the latter when the predictive Bayesian model includes shorter, rather than longer, lags of the predictive factors. These results illustrate the power of BMA to suppress model uncertainty through model as well as parameter shrinkage, especially when applied to longer predictive horizons.
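
The sketch below illustrates the BMA mechanics with a standard BIC approximation to the posterior model probabilities; the factors, returns, and the BIC shortcut are assumptions for illustration rather than the authors' specification.

```python
# Bayesian model averaging over all factor subsets: posterior model
# probabilities approximated via BIC, then used as combination weights.
import numpy as np
from itertools import combinations
import statsmodels.api as sm

rng = np.random.default_rng(2)
T, K = 120, 3
X = rng.normal(size=(T, K))                  # candidate predictive factors
y = 0.5 * X[:, 0] + rng.normal(size=T)       # simulated excess returns

bics, preds = [], []
x_new = rng.normal(size=K)                   # next-period factor values
for k in range(1, K + 1):
    for idx in combinations(range(K), k):
        Z = sm.add_constant(X[:, idx])
        fit = sm.OLS(y, Z).fit()
        bics.append(fit.bic)
        preds.append(fit.predict(np.r_[1.0, x_new[list(idx)]]))

bics = np.array(bics)
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()                                 # approximate posterior probs
bma_forecast = w @ np.concatenate(preds)
print(w.round(3), bma_forecast)
```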

8.
This paper provides a methodology for combining forecasts based on several discrete choice models, achieved primarily by combining the one-step-ahead probability forecasts associated with each model. The paper applies well-established scoring rules for qualitative response models in the context of forecast combination: log scores, quadratic scores, and Epstein scores are used both to evaluate the forecasting accuracy of each model and to combine the probability forecasts. In addition to producing point forecasts, the methodology also assesses the effect of sampling variation. It is applied to forecasting US Federal Open Market Committee (FOMC) decisions regarding changes in the federal funds target rate. Several of the economic fundamentals influencing the FOMC's decisions are integrated, or I(1), and are modeled in a fashion similar to Hu and Phillips (J Appl Econom 19(7):851–867, 2004). The empirical results show that combining forecasted probabilities using scores generally outperforms both equal-weight combination and forecasts based on multivariate models.
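
As a rough illustration of score-based combination for a discrete outcome (e.g., an FOMC cut/hold/raise decision), the sketch below weights two hypothetical models by their average log scores, with the quadratic rule shown as an alternative. None of the data or weighting details come from the paper.

```python
# Score probability forecasts of a three-way outcome, then weight the
# models by their historical scores when combining.
import numpy as np

def log_score(p, outcome):
    return np.log(p[outcome])

def quadratic_score(p, outcome):      # alternative (Brier-type) rule
    return 2 * p[outcome] - p @ p

rng = np.random.default_rng(3)
probs = rng.dirichlet(np.ones(3), size=(2, 12))   # (model, period, class)
outcomes = rng.integers(0, 3, size=12)

avg = np.array([[log_score(probs[m, t], outcomes[t]) for t in range(12)]
                for m in range(2)]).mean(axis=1)
w = np.exp(avg - avg.max())
w /= w.sum()                                      # score-based weights
combined = w @ probs[:, -1]                       # combine latest forecasts
print(w.round(3), combined.round(3))
```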

9.
In this paper we use multi-horizon evaluation techniques to produce monthly inflation forecasts for up to twelve months ahead. The forecasts are based on individual seasonal time-series models that consider both deterministic and stochastic seasonality, and on disaggregated Consumer Price Index (CPI) data. After selecting the best forecasting model for each index, we compare the individual forecasts to forecasts produced using two methods for aggregating hierarchical time series: the bottom-up method and an optimal combination approach. Applying these techniques to 16 indices of the Mexican CPI, we find that the best forecasts for headline inflation are able to compete with those taken from surveys of experts.
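
The bottom-up method mentioned above is simple to state: forecast each component index and aggregate with basket weights. A minimal sketch follows, with an invented three-component hierarchy in place of the 16 Mexican CPI indices.

```python
# Bottom-up aggregation: weighted sum of component inflation forecasts
# yields the headline forecast at every horizon.
import numpy as np

weights = np.array([0.55, 0.30, 0.15])      # hypothetical CPI basket shares
component_fcsts = np.array([                # 12-month paths for 3 indices
    np.full(12, 0.35),
    np.full(12, 0.50),
    np.full(12, 0.20),
])

headline_bottom_up = weights @ component_fcsts  # (12,) headline path
print(headline_bottom_up.round(3))
```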

10.
This paper assesses the out-of-sample forecasting accuracy of the New Keynesian Model for Canada. We estimate a variant of the model on a series of rolling subsamples, computing out-of-sample forecasts one to eight quarters ahead at each step. We compare these forecasts with those arising from vector autoregression (VAR) models, using econometric tests of forecasting accuracy. We show that the forecasting accuracy of the New Keynesian Model compares favourably with that of the benchmarks, particularly as the forecasting horizon increases. These results suggest that the model could become a useful forecasting tool for Canadian time series.

11.
Extensive research has been devoted to the quality of analysts' earnings forecasts, and the common finding is that analysts' forecasts are not very accurate. Prior studies have tended to focus on the mean of forecasts and to measure accuracy using various summaries of forecast errors. The present study sheds new light on the accuracy of analysts' forecasts by measuring how well calibrated these forecasts are. The authors follow the tradition of calibration studies in the psychological literature and measure the degree of calibration by the hit rate. They analyze a year's worth of data from the Institutional Brokers Estimate System database, which includes over 200,000 annual earnings forecasts made by over 6,000 analysts for over 5,000 companies. Using different ways to convert analysts' point estimates of earnings into a range of values, the authors establish the bounds needed to determine the hit rates, and examine to what extent the actual earnings announced by the companies are bracketed by these intervals. These hit rates provide a more complete picture of the accuracy of the forecasts.
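
The hit-rate computation can be sketched as follows, using one of many possible conversions of point estimates into intervals (a symmetric band of plus or minus 10% of the forecast); the forecasts and actuals are invented, not I/B/E/S data.

```python
# Convert point EPS forecasts into intervals and compute the hit rate:
# the share of actuals that fall inside the implied interval.
import numpy as np

forecasts = np.array([1.20, 0.85, 2.40, 0.10, 1.75])
actuals   = np.array([1.25, 0.70, 2.35, 0.12, 1.40])

band = 0.10 * np.abs(forecasts)     # hypothetical interval half-width
hits = np.abs(actuals - forecasts) <= band
print("hit rate:", hits.mean())
```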

12.
People were shown charts of computer-created ‘time series’, and were asked for their projections of the ‘future’ direction they expected the series to take. The characteristics of the time series shown in the charts were systematically varied so as to allow inferences as to the effect of these characteristics on the respondents' forecasts. The series constructed were like sine curves with random error terms tacked on. Respondents showed a poor ability to forecast, and their forecasts had a considerable upward bias. However, they did appreciate the cyclical nature of the series, and their forecasts show that they were looking for the next turning point. Responses to any particular series were highly unstable: adding on an extra point frequently made a big difference in the forecasts that respondents made.

13.
Several recent studies in experimental economics have tried to measure beliefs of subjects engaged in strategic games with other subjects. Using data from one such study we conduct an experiment where our experienced subjects observe early rounds of strategy choices from that study and are given monetary incentives to report forecasts of choices in later rounds. We elicit beliefs using three different scoring rules: linear, logarithmic, and quadratic. We compare forecasts across the scoring rules and compare the forecasts of our trained observers to forecasts of the actual players in the original experiment. We find significant differences across scoring rules. The improper linear scoring rule produces forecasts closer to 0 and 1 than the proper rules, and these forecasts are poorly calibrated. The two proper scoring rules induce significantly different distributions of forecasts. We find that forecasts by observers under both proper scoring rules are significantly different from the forecasts of the actual players, in terms of accuracy, calibration, and the distribution of forecasts. We also find evidence for belief convergence among the observers.

14.
The Federal Open Market Committee (FOMC) of the U.S. Federal Reserve publishes the range of members' forecasts for key macroeconomic variables, but not the distribution of forecasts within this range. To evaluate these projections, previous papers compare the midpoint of the range with the realized outcome. This paper proposes an alternative approach to forecast evaluation that takes account of the interval nature of the projections. It is shown that using the conventional Mincer–Zarnowitz approach to evaluate FOMC forecasts misses important information contained in the width of the forecast interval. This additional information plays a minor role at short forecast horizons but turns out to be of sometimes crucial importance for longer-horizon forecasts. For 18-month-ahead forecasts, the variation of members' projections contains information that is more relevant for explaining future inflation than the information embodied in the midpoint. Likewise, for longer-range forecasts of real GDP growth and the unemployment rate, the width of the forecast interval comprises information over and above that given by the midpoint alone.
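
The evaluation idea can be sketched as a standard Mincer-Zarnowitz regression of outcomes on the interval midpoint, alongside an augmented version that adds the width of the forecast range; the data below are simulated stand-ins for the FOMC projections.

```python
# Mincer-Zarnowitz regression (actual on midpoint) and an augmented
# version that also includes the width of the forecast interval.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 60
mid = rng.normal(2.0, 0.5, n)              # midpoint of members' range
width = np.abs(rng.normal(0.6, 0.2, n))    # range width (disagreement)
actual = 0.2 + 0.9 * mid + 0.5 * width + rng.normal(0, 0.3, n)

mz = sm.OLS(actual, sm.add_constant(mid)).fit()
mz_aug = sm.OLS(actual, sm.add_constant(np.column_stack([mid, width]))).fit()
print(mz.params, mz_aug.params)            # MZ: unbiased if intercept 0, slope 1
```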

15.
We employ a 10-variable dynamic stochastic general equilibrium (DSGE) model, estimated using Bayesian methods, to forecast the US real house price index as well as its downturn in 2006:Q2. We also examine various Bayesian and classical time-series models in our forecasting exercise for comparison with the DSGE model. In addition to standard vector autoregressive (VAR) and Bayesian VAR models, some models include the information content of either 10 or 120 quarterly series to capture the influence of fundamentals. We consider two approaches for including information from large data sets: extracting common factors (principal components) in factor-augmented VAR or Bayesian factor-augmented VAR models, and Bayesian shrinkage in a large-scale Bayesian VAR model. We compare the out-of-sample forecast performance of the alternative models using the average root mean squared error (RMSE) of the forecasts. We find that the small-scale Bayesian-shrinkage model (10 variables) outperforms the other models, including the large-scale Bayesian-shrinkage model (120 variables). In addition, when we use simple average forecast combinations, the combination forecast based on the 10 best atheoretical models produces the minimum RMSE among the individual models, followed closely by the combination of the 10 atheoretical models and the DSGE model. Finally, we use each model, estimated through 2005:Q2, to forecast the downturn point in 2006:Q2. Only the DSGE model forecasts the downturn with any accuracy, suggesting that forward-looking, microfounded DSGE models of the housing market may prove crucial in forecasting turning points.
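
Extracting common factors by principal components, the first of the two large-data approaches above, can be sketched as an SVD of the standardised panel; the 120-series panel here is simulated, not the paper's data.

```python
# Extract the first k principal components of a large macro panel,
# as used to augment a VAR in factor-augmented models.
import numpy as np

rng = np.random.default_rng(5)
panel = rng.normal(size=(100, 120))            # T=100 quarters, N=120 series
Z = (panel - panel.mean(0)) / panel.std(0)     # standardise each series

U, s, Vt = np.linalg.svd(Z, full_matrices=False)
k = 3
factors = U[:, :k] * s[:k]                     # first k common factors (T x k)
var_shares = s[:k] ** 2 / (s ** 2).sum()       # variance explained by each
print(factors.shape, var_shares.round(3))
```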

16.
We examine the informational content of New Zealand data releases using a parametric dynamic factor model estimated on unbalanced real-time panels of quarterly data. The data are categorised into 21 different release blocks, allowing us to make 21 different factor-model forecasts each quarter. We compare three of these factor-model forecasts for real GDP growth, CPI inflation, non-tradable CPI inflation, and tradable CPI inflation with three different real-time forecasts made by the Reserve Bank of New Zealand each quarter. We find that, at some horizons, the factor model produces forecasts of similar accuracy to the Reserve Bank's forecasts. Analysing the marginal value of each of the data releases reveals the importance of the business opinion survey data (the Quarterly Survey of Business Opinion and the National Bank's Business Outlook survey) in determining how the factor-model predictions, and the uncertainty around those predictions, evolve through each quarter.

17.
For the timely detection of business-cycle turning points, we suggest using medium-sized linear systems (subset VARs with automated zero restrictions) to forecast monthly industrial production index publications one to several steps ahead, and deriving the probability of a turning point from the bootstrapped forecast density as the probability mass below (or above) a suitable threshold value. We show how this approach can be used in real time in the presence of data publication lags, and how it can capture the part of the data revision process that is systematic. Out-of-sample evaluation exercises show that the method is competitive, especially in the case of the US, while turning points are in general more difficult to forecast for Germany.
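
The turning-point probability can be sketched as follows: bootstrap forecast paths and report the mass of the forecast density below a threshold. For brevity the sketch uses a univariate AR(1) with resampled residuals in place of the paper's subset VARs, and an illustrative zero-growth threshold.

```python
# Bootstrap the h-step forecast density of IP growth; the turning-point
# probability is the share of bootstrap paths ending below the threshold.
import numpy as np

rng = np.random.default_rng(6)
y = rng.normal(0.2, 1.0, 200)                 # monthly IP growth (simulated)
phi = np.corrcoef(y[:-1], y[1:])[0, 1]        # crude AR(1) estimate
resid = y[1:] - phi * y[:-1]

B, h = 2000, 3                                # bootstrap draws, 3-step horizon
paths = np.empty(B)
for b in range(B):
    x = y[-1]
    for _ in range(h):
        x = phi * x + rng.choice(resid)       # resample historical residuals
    paths[b] = x

threshold = 0.0                               # downturn: growth below zero
print("P(turning point) =", (paths < threshold).mean())
```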

18.
This paper investigates the accuracy and heterogeneity of output growth and inflation forecasts during the current and the four preceding NBER-dated US recessions. We generate forecasts from six different models of the US economy and compare them to professional forecasts from the Federal Reserve's Greenbook and the Survey of Professional Forecasters (SPF). The model parameters and model forecasts are derived from historical data vintages so as to ensure comparability with historical forecasts by professionals. The mean model forecast comes surprisingly close to the mean SPF and Greenbook forecasts in terms of accuracy, even though the models only make use of a small number of data series. Model forecasts compare particularly well with professional forecasts at a horizon of three to four quarters and during recoveries. The extent of forecast heterogeneity is similar for model and professional forecasts but varies substantially over time; forecast heterogeneity thus constitutes a potentially important source of economic fluctuations. While the particular reasons for diversity in professional forecasts are not observable, the diversity in model forecasts can be traced to different modeling assumptions, information sets, and parameter estimates.

19.
Using realized volatility to estimate the conditional variance of financial returns, we compare volatility forecasts from linear GARCH models with those from asymmetric ones, considering horizons extending to 30 days. Forecasts are compared using three different evaluation tests. With data on an equity index and two foreign exchange returns, we show that asymmetric models provide statistically significant forecast improvements over the GARCH model for two of the datasets, and improve forecasts for all datasets by means of forecast combinations. These results extend to about 10 days into the future, beyond which the forecasts are statistically indistinguishable from each other.
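
A symmetric versus asymmetric comparison of this kind can be sketched with the Python `arch` package, where setting the asymmetry order `o=1` yields a GJR-GARCH model; the returns below are simulated, and the paper's models and evaluation tests may differ.

```python
# Fit a symmetric GARCH(1,1) and an asymmetric GJR-GARCH(1,1), then
# produce variance forecasts out to 30 days.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(7)
returns = rng.standard_t(df=8, size=1500)      # placeholder daily returns (%)

garch = arch_model(returns, vol='GARCH', p=1, o=0, q=1).fit(disp='off')
gjr   = arch_model(returns, vol='GARCH', p=1, o=1, q=1).fit(disp='off')

h = 30
f_garch = garch.forecast(horizon=h).variance.iloc[-1]
f_gjr   = gjr.forecast(horizon=h).variance.iloc[-1]
print(f_garch.tail(3), f_gjr.tail(3))          # 28- to 30-day-ahead variances
```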

20.
We compare the out-of-sample performance of monthly returns forecasts for two indices, namely the Dow Jones (DJ) and the Financial Times (FT) indices. A linear model and a nonlinear artificial neural network (ANN) model are used to generate the competing out-of-sample forecasts of monthly returns. Stationary transformations of dividends and trading volume serve as fundamental explanatory variables in the linear model and as input variables in the ANN model. The out-of-sample forecasts are compared on the basis of forecast accuracy, using the Diebold and Mariano test (J. Bus. Econ. Stat. 13 (1995) 253), and forecast encompassing, using the Clements and Hendry approach (J. Forecast. 5 (1998) 559). The results suggest that the out-of-sample ANN forecasts are significantly more accurate than the linear forecasts for both indices. Furthermore, the ANN forecasts can explain the forecast errors of the linear model for both indices, while the linear model cannot explain the forecast errors of the ANN for either index. Overall, the results indicate that the inclusion of nonlinear terms in the relation between stock returns and fundamentals is important for out-of-sample forecasting. This conclusion is consistent with the view that the relation between stock returns and fundamentals is nonlinear.
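
The Diebold-Mariano test referenced above can be sketched as a HAC-robust t-test on the loss differential between the two forecasts; the implementation below assumes squared-error loss, and the forecast errors are simulated placeholders for the ANN and linear model errors.

```python
# Diebold-Mariano test: is the mean loss differential between two
# competing forecasts zero? Uses a rectangular-kernel HAC variance.
import numpy as np
from scipy import stats

def diebold_mariano(e1, e2, h=1):
    """DM statistic and two-sided p-value for squared-error loss."""
    d = e1 ** 2 - e2 ** 2          # loss differential
    n = len(d)
    d_bar = d.mean()
    # Autocovariances up to lag h-1 for the HAC variance of d_bar.
    gamma = [np.cov(d[k:], d[:n - k])[0, 1] for k in range(h)]
    var = (gamma[0] + 2 * sum(gamma[1:])) / n
    dm = d_bar / np.sqrt(var)
    return dm, 2 * stats.norm.sf(abs(dm))

rng = np.random.default_rng(8)
e_linear = rng.normal(0, 1.2, 120)   # linear model errors (less accurate)
e_ann    = rng.normal(0, 1.0, 120)   # ANN errors
print(diebold_mariano(e_ann, e_linear))
```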
