Similar Documents
20 similar documents found.
1.
A government’s ability to forecast key economic fundamentals accurately can affect business confidence, consumer sentiment, and foreign direct investment, among other things. A government forecast based on an econometric model is replicable, whereas one that is not fully based on an econometric model is non-replicable. Governments typically provide non-replicable forecasts (or expert forecasts) of economic fundamentals, such as the inflation rate and the real GDP growth rate. In this paper, we develop a methodology for evaluating non-replicable forecasts. We argue that in order to do so, one needs to retrieve from the non-replicable forecast its replicable component, and that it is the difference in accuracy between these two that matters. An empirical example forecasting economic fundamentals for Taiwan shows the relevance of the proposed methodological approach. Our main finding is that the undocumented knowledge of the Taiwanese government reduces forecast errors substantially.
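A minimal sketch of the evaluation idea, assuming simulated stand-ins for the predictors, the actuals, and the expert forecast: project the non-replicable forecast onto model-based predictors to recover its replicable component, then compare the accuracy of the two.

```python
# Hypothetical sketch: recover the "replicable" component of an expert forecast
# by projecting it onto model-based predictors, then compare accuracies.
# All series are simulated placeholders, not the Taiwanese data from the paper.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 3))                       # stand-in econometric-model predictors
actual = X @ np.array([0.5, -0.2, 0.1]) + rng.normal(scale=0.3, size=80)
expert = actual + rng.normal(scale=0.2, size=80)   # stand-in non-replicable (expert) forecast

fit = sm.OLS(expert, sm.add_constant(X)).fit()
replicable = fit.fittedvalues                      # replicable component of the expert forecast

rmse = lambda f: np.sqrt(np.mean((actual - f) ** 2))
print(f"expert RMSE:     {rmse(expert):.3f}")
print(f"replicable RMSE: {rmse(replicable):.3f}")  # the gap measures the expert's added value
```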

2.
Macroeconomic forecasts are frequently produced, widely published, intensively discussed, and comprehensively used. The formal evaluation of such forecasts has a long research history. Recently, a new angle on the evaluation of forecasts has been addressed, and in this review we analyze some recent developments from that perspective. The literature on forecast evaluation predominantly assumes that macroeconomic forecasts are generated from econometric models. In practice, however, most macroeconomic forecasts, such as those from the IMF, World Bank, OECD, Federal Reserve Board, Federal Open Market Committee (FOMC), and the ECB, are typically based on econometric model forecasts jointly with human intuition. This seemingly inevitable combination renders most of these forecasts biased and, as such, their evaluation becomes nonstandard. In this review, we consider the evaluation of two forecasts in which: (i) the two forecasts are generated from two distinct econometric models; (ii) one forecast is generated from an econometric model and the other is obtained as a combination of a model and intuition; and (iii) the two forecasts are generated from two distinct (but unknown) combinations of different models and intuition. It is shown that alternative tools are needed to compare and evaluate the forecasts in each of these three situations. These alternative techniques are illustrated by comparing the forecasts from the (econometric) Staff of the Federal Reserve Board and the FOMC on inflation, unemployment, and real GDP growth. It is shown that the FOMC does not forecast significantly better than the Staff, and that the intuition of the FOMC does not add significantly in forecasting the actual values of the economic fundamentals. This would seem to belie the purported expertise of the FOMC.
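As a point of reference, a textbook Diebold-Mariano-style comparison of two forecast series is sketched below on simulated placeholder data; the review's point is precisely that intuition-contaminated forecasts call for nonstandard variants of such tests.

```python
# Hedged sketch of a naive Diebold-Mariano-style comparison (no HAC correction);
# the data are simulated placeholders, not the FRB Staff or FOMC figures.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
actual = rng.normal(size=100)
f_staff = actual + rng.normal(scale=0.5, size=100)   # stand-in model-based forecast
f_fomc = actual + rng.normal(scale=0.5, size=100)    # stand-in model-plus-intuition forecast

d = (actual - f_staff) ** 2 - (actual - f_fomc) ** 2  # squared-error loss differential
t_stat, p_val = stats.ttest_1samp(d, 0.0)             # H0: equal forecast accuracy
print(f"DM-style t = {t_stat:.2f}, p = {p_val:.3f}")
```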

3.
It is common practice to evaluate fixed-event forecast revisions in macroeconomics by regressing current forecast revisions on one-period lagged forecast revisions. Under weak-form (forecast) efficiency, the correlation between the current and one-period lagged revisions should be zero. The empirical findings in the literature suggest that this null hypothesis of zero correlation is rejected frequently, and the correlation can be either positive (which is widely interpreted in the literature as “smoothing”) or negative (which is widely interpreted as “over-reacting”). We propose a methodology for interpreting such non-zero correlations in a straightforward and clear manner. Our approach is based on the assumption that numerical forecasts can be decomposed into both an econometric model and random expert intuition. We show that the interpretation of the sign of the correlation between the current and one-period lagged revisions depends on the process governing intuition, and the current and lagged correlations between intuition and news (or shocks to the numerical forecasts). It follows that the estimated non-zero correlation cannot be given a direct interpretation in terms of either smoothing or over-reaction.
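The weak-form efficiency regression described above is straightforward to sketch; the revision series here is a simulated placeholder.

```python
# Minimal sketch of the weak-form efficiency test: regress current fixed-event
# forecast revisions on one-period lagged revisions. Placeholder data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
revisions = rng.normal(size=60)                  # placeholder revision series
r_t, r_lag = revisions[1:], revisions[:-1]

fit = sm.OLS(r_t, sm.add_constant(r_lag)).fit()
beta = fit.params[1]
# beta > 0 is conventionally read as "smoothing", beta < 0 as "over-reaction";
# the paper argues this direct reading is not warranted.
print(f"slope = {beta:.3f}, p-value = {fit.pvalues[1]:.3f}")
```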

4.
It has long been known that combination forecasting strategies produce superior out-of-sample forecasting performance. In the M4 forecasting competition, a very simple forecast combination strategy achieved third place on yearly time series. An analysis of the ensemble model and its component models suggests that the competitive accuracy comes from avoiding poor forecasts, rather than from beating the best individual models. Moreover, the simple ensemble model can be fitted very quickly, scales easily across additional CPU cores or a cluster of computers, and is quick and easy for users to implement. This approach might be of particular interest to users who need accurate yearly forecasts without being able to spend significant time, resources, or expertise on tuning models. Users of the R statistical programming language can access this modeling approach via the “forecastHybrid” package.
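A toy illustration of the equal-weight ensemble idea with three stand-in component forecasts (the actual M4 entry combined models from the R “forecastHybrid” package, which is not reproduced here):

```python
# Toy equal-weight ensemble: averaging component forecasts hedges against
# any single model being badly wrong. Component forecasts are made up.
import numpy as np

horizon = 6
naive = np.full(horizon, 102.0)                        # last observed value carried forward
drift = 102.0 + 1.5 * np.arange(1, horizon + 1)        # stand-in trend-following model
third = 101.0 + 0.8 * np.arange(1, horizon + 1)        # stand-in for a third model

ensemble = np.mean([naive, drift, third], axis=0)      # equal weights avoid the worst model
print(ensemble)
```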

5.
Traditional econometric models of economic contractions typically perform poorly in forecasting exercises. This criticism is also frequently levelled at professional forecast probabilities of contractions. This paper addresses the problem of incorporating the entire distribution of professional forecasts into an econometric model for forecasting contractions and expansions. A new augmented probit approach is proposed, involving the transformation of the distribution of professional forecasts into a ‘professional forecast’ prior for the economic data underlying the probit model. Since the object of interest is the relationship between the distribution of professional forecasts and the probit model’s economic-data-dependent parameters, the solution avoids criticisms levelled at the accuracy of professional-forecast-based point estimates of contractions. An application to US real GDP data shows that the model yields significant forecast improvements relative to alternative approaches.
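For orientation, a plain probit for contraction forecasting is sketched below on simulated data; the paper's augmentation with a professional-forecast prior is not reproduced here.

```python
# Baseline (unaugmented) probit for contraction probabilities on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2))                           # placeholder economic indicators
latent = 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=200)
contraction = (latent < -0.5).astype(int)               # 1 = contraction, 0 = expansion

probit = sm.Probit(contraction, sm.add_constant(X)).fit(disp=0)
print(probit.predict(sm.add_constant(X))[:5])           # fitted contraction probabilities
```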

6.
We examined automatic feature identification and graphical support in rule-based expert systems for forecasting. The rule-based expert forecasting system (RBEFS) includes predefined rules to automatically identify features of a time series and select the extrapolation method to be used. The system can also integrate managerial judgment using a graphical interface that allows a user to view alternative extrapolation methods two at a time. The use of the RBEFS led to a significant improvement in accuracy compared to equal-weight combinations of forecasts. Further improvements were achieved with the user interface. For 6-year-ahead ex ante forecasts, the rule-based expert forecasting system has a median absolute percentage error (MdAPE) 15% less than that of equally weighted combined forecasts and a 33% improvement over the random walk. The user-adjusted forecasts had an MdAPE 20% less than that of the expert system. The results of the system are also compared to those of an earlier rule-based expert system which required human judgments about some features of the time series data. The comparison of the two rule-based expert systems showed no significant differences between them.
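The accuracy metric used above, the median absolute percentage error (MdAPE), is easy to state in code; the sample numbers are illustrative.

```python
# Median absolute percentage error (MdAPE), the accuracy metric cited above.
import numpy as np

def mdape(actual, forecast):
    """Median of |actual - forecast| / |actual|, expressed in percent."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.median(np.abs((a - f) / a))

print(mdape([100, 110, 120], [95, 112, 118]))   # illustrative values
```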

7.
This paper reviews a spreadsheet-based forecasting approach which a process industry manufacturer developed and implemented to link annual corporate forecasts with its manufacturing/distribution operations. First, we consider how this forecasting system supports overall production planning and why it must be compatible with corporate forecasts. We then review the results of substantial testing of variations on the Winters three-parameter exponential smoothing model on 28 actual product family time series. In particular, we evaluate whether the use of damping parameters improves forecast accuracy. The paper concludes that a Winters four-parameter model (i.e. the standard Winters three-parameter model augmented by a fourth parameter to damp the trend) provides the most accurate forecasts of the models evaluated. Our application confirms that there are situations where the use of damped-trend parameters in short-run exponential-smoothing-based forecasting models is beneficial.
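A minimal statsmodels sketch of the "four-parameter Winters" idea: the standard three-parameter additive model plus a damping parameter on the trend. The series is simulated; the 28 product-family series from the paper are not used.

```python
# Damped-trend Holt-Winters ("four-parameter Winters") on a simulated monthly series.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(4)
t = np.arange(48)
y = 100 + 0.8 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=2, size=48)

fit = ExponentialSmoothing(
    y,
    trend="add", damped_trend=True,        # the fourth (damping) parameter
    seasonal="add", seasonal_periods=12,
).fit()
print(fit.forecast(12))                    # 12-step-ahead forecasts
```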

8.
This paper proposes two new weighting schemes that average forecasts based on different estimation windows in order to account for possible structural change. The first scheme weights the forecasts according to the values of reversed ordered CUSUM (ROC) test statistics, while the second weighting method simply assigns heavier weights to forecasts that use more recent information. Simulation results show that, when structural breaks are present, forecasts based on the first weighting scheme outperform those based on a procedure that simply uses ROC tests to choose and forecast from a single post-break estimation window. Combination forecasts based on our second weighting scheme outperform equally weighted combination forecasts. An empirical application based on a NAIRU Phillips curve model for the G7 countries illustrates these findings, and also shows that combination forecasts can outperform the random walk forecasting model.
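The second weighting scheme is simple to illustrate: weights increase with the recency of the estimation window. The windows and weights below are illustrative placeholders, not the paper's ROC-based scheme.

```python
# Recency-weighted combination: heavier weight on forecasts whose estimation
# windows use more recent information. Forecasts and weights are illustrative.
import numpy as np

# forecasts[i] comes from an estimation window starting i periods more recently
forecasts = np.array([2.4, 2.1, 1.8, 1.7])
w = np.arange(1, len(forecasts) + 1, dtype=float)   # linearly increasing with recency
w /= w.sum()                                        # normalize to sum to one
print(float(w @ forecasts))                         # the combined forecast
```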

9.
In a data-rich environment, forecasting economic variables amounts to extracting and organizing useful information from a large number of predictors. So far, the dynamic factor model and its variants have been the most successful models for such exercises. In this paper, we investigate a category of LASSO-based approaches and evaluate their predictive abilities for forecasting twenty important macroeconomic variables. These alternative models can handle hundreds of data series simultaneously, and extract useful information for forecasting. We also show, both analytically and empirically, that combining forecasts from LASSO-based models with those from dynamic factor models can reduce the mean square forecast error (MSFE) further. Our three main findings can be summarized as follows. First, for most of the variables under investigation, all of the LASSO-based models outperform dynamic factor models in the out-of-sample forecast evaluations. Second, by extracting information and formulating predictors at economically meaningful block levels, the new methods greatly enhance the interpretability of the models. Third, once forecasts from a LASSO-based approach are combined with those from a dynamic factor model by forecast combination techniques, the combined forecasts are significantly better than either dynamic factor model forecasts or the naïve random walk benchmark.
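A hedged sketch of the LASSO-plus-factor-combination idea on simulated data: fit a LASSO on many predictors, fit a principal-component (factor) regression, and average the two forecasts. The paper's block-level formulation is not reproduced.

```python
# LASSO forecast, factor (PCA) forecast, and a naive 50/50 combination.
# Data are simulated stand-ins for a data-rich macro environment.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 100))                       # 100 candidate predictors
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.5, size=120)
X_tr, X_te, y_tr, y_te = X[:100], X[100:], y[:100], y[100:]

lasso = LassoCV(cv=5).fit(X_tr, y_tr)                 # sparse predictor selection
f_lasso = lasso.predict(X_te)

factors = PCA(n_components=3).fit(X_tr)               # stand-in for a factor model
ols = LinearRegression().fit(factors.transform(X_tr), y_tr)
f_factor = ols.predict(factors.transform(X_te))

f_comb = 0.5 * f_lasso + 0.5 * f_factor               # equal-weight combination
print(np.mean((y_te - f_comb) ** 2))                  # MSFE on the hold-out sample
```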

10.
《Economic Systems》2014,38(2):194-204
Understanding how agents formulate their expectations about Fed behavior is important for market participants because they can potentially use this information to make more accurate estimates of stock and bond prices. Although it is commonly assumed that agents learn over time, there is scant empirical evidence in support of this assumption. Thus, in this paper we test if the forecast of the three month T-bill rate in the Survey of Professional Forecasters (SPF) is consistent with least squares learning when there are discrete shifts in monetary policy. We first derive the mean, variance and autocovariances of the forecast errors from a recursive least squares learning algorithm when there are breaks in the structure of the model. We then apply the Bai and Perron (1998) test for structural change to a forecasting model for the three month T-bill rate in order to identify changes in monetary policy. Having identified the policy regimes, we then estimate the implied biases in the interest rate forecasts within each regime. We find that when the forecast errors from the SPF are corrected for the biases due to shifts in policy, the forecasts are consistent with least squares learning.
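The recursive least squares learning algorithm at the core of the argument can be sketched in a few lines; data and dimensions are placeholders.

```python
# Recursive least squares (RLS) learning: agents update coefficient beliefs
# period by period from forecast errors. Simulated placeholder data.
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 2))
y = X @ np.array([1.0, -0.5]) + rng.normal(scale=0.3, size=200)

beta = np.zeros(2)
P = np.eye(2) * 1e3                      # large initial covariance = diffuse prior
for x_t, y_t in zip(X, y):
    err = y_t - x_t @ beta               # one-step-ahead forecast error
    gain = P @ x_t / (1.0 + x_t @ P @ x_t)
    beta = beta + gain * err             # belief update
    P = P - np.outer(gain, x_t @ P)      # covariance update
print(beta)                              # converges toward the true coefficients
```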

11.
This paper uses the forecast from a random walk model of inflation as a benchmark to test and compare the forecast performance of several alternative forecasts of future inflation, including the Greenbook forecast by the Fed staff, the Survey of Professional Forecasters median forecast, CPI inflation excluding food and energy, CPI weighted median inflation, and CPI trimmed mean inflation. The Greenbook forecast was found in previous literature to be a better forecast than other private-sector forecasts. Our results indicate that both the Greenbook and the Survey of Professional Forecasters median forecasts of inflation and core inflation measures may contain better information than forecasts from a random walk model. The Greenbook's superiority appears to have declined against other forecasts and core inflation measures.
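The random-walk benchmark amounts to a no-change forecast evaluated by RMSE; the series below are simulated placeholders, not the Greenbook or SPF data.

```python
# Random-walk (no-change) benchmark versus a candidate inflation forecast.
import numpy as np

rng = np.random.default_rng(7)
inflation = np.cumsum(rng.normal(scale=0.2, size=101)) + 2.0
actual = inflation[1:]
rw_forecast = inflation[:-1]                            # random walk: last value persists
candidate = actual + rng.normal(scale=0.15, size=100)   # stand-in survey/Greenbook forecast

rmse = lambda f: np.sqrt(np.mean((actual - f) ** 2))
print(rmse(rw_forecast), rmse(candidate))               # lower RMSE beats the benchmark
```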

12.
There is general agreement in many forecasting contexts that combining individual predictions leads to better final forecasts. However, the relative error reduction in a combined forecast depends upon the extent to which the component forecasts contain unique/independent information. Unfortunately, obtaining independent predictions is difficult in many situations, as these forecasts may be based on similar statistical models and/or overlapping information. The current study addresses this problem by incorporating a measure of coherence into an analytic evaluation framework so that the degree of independence between sets of forecasts can be identified easily. The framework also decomposes the performance and coherence measures in order to illustrate the underlying aspects that are responsible for error reduction. The framework is demonstrated using UK retail prices index inflation forecasts for the period 1998–2014, and implications for forecast users are discussed.
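A small simulation of the independence point: the error-variance reduction from equal-weight averaging depends on how correlated (coherent) the component forecast errors are. The error processes here are synthetic.

```python
# Error-variance reduction from averaging two forecasts, for independent
# versus highly coherent (correlated) component errors. Synthetic data.
import numpy as np

rng = np.random.default_rng(11)
e1 = rng.normal(size=500)                            # errors of forecast 1
e2_indep = rng.normal(size=500)                      # independent second error
e2_coher = 0.9 * e1 + 0.44 * rng.normal(size=500)    # highly correlated with e1

for e2 in (e2_indep, e2_coher):
    comb_err = 0.5 * (e1 + e2)                       # error of the equal-weight combination
    print(round(float(np.var(comb_err) / np.var(e1)), 2))  # ~0.5 vs ~0.95
```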

13.
"This paper presents a stochastic version of the demographic cohort-component method of forecasting future population. In this model the sizes of future age-sex groups are non-linear functions of random future vital rates. An approximation to their joint distribution can be obtained using linear approximations or simulation. A stochastic formulation points to the need for new empirical work on both the autocorrelations and the cross-correlations of the vital rates. Problems of forecasting declining mortality and fluctuating fertility are contrasted. A volatility measure for fertility is presented. The model can be used to calculate approximate prediction intervals for births using data from deterministic cohort-component forecasts. The paper compares the use of expert opinion in mortality forecasting with simple extrapolation techniques to see how useful each approach has been in the past. Data from the United States suggest that expert opinion may have caused systematic bias in the forecasts."  相似文献   

14.
Recently, Patton and Timmermann (2012) proposed a more powerful kind of forecast efficiency regression at multiple horizons, and showed that it provides evidence against the efficiency of the Fed’s Greenbook forecasts. I use their forecast efficiency evaluation to propose a method for adjusting the Greenbook forecasts. Using this method in a real-time out-of-sample forecasting exercise, I find that it provides modest improvements in the accuracy of the forecasts for the GDP deflator and CPI, but not for other variables. The improvements are statistically significant in some cases, with magnitudes of up to 18% in root mean square prediction error.
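A hedged sketch of the generic real-time adjustment idea: recursively regress outcomes on past forecasts and use the fitted mapping to adjust the current forecast. This simplification is not Patton and Timmermann's multi-horizon procedure.

```python
# Real-time bias adjustment via recursive (expanding-window) regression of
# outcomes on forecasts. Simulated placeholder data, not the Greenbook.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(12)
actual = rng.normal(2.0, 1.0, size=80)
forecast = 0.5 + 0.8 * actual + rng.normal(scale=0.4, size=80)   # biased forecasts

adjusted = []
for t in range(40, 80):                         # expanding window, real-time style
    fit = sm.OLS(actual[:t], sm.add_constant(forecast[:t])).fit()
    adjusted.append(fit.params[0] + fit.params[1] * forecast[t])

rmse = lambda f: np.sqrt(np.mean((actual[40:] - np.asarray(f)) ** 2))
print(rmse(forecast[40:]), rmse(adjusted))      # adjustment should lower RMSE here
```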

15.
This Briefing Paper is the first of a series of three. The process discussed is that of making 'constant adjustments' in forecasts, which involves modifying the results generated by the econometric model. For the first time we are publishing tables of the constant adjustments used in the current forecast. We explain in general why such adjustments are made and also explain the actual adjustments we have made for this forecast.
The second article of the series, to be published in our February 1983 edition, will describe the potential sources of error in forecasts. In particular it will describe the inevitable stochastic or random element involved in statistical attempts to quantify economic behaviour. As a completely new departure, the article will report estimates of future errors based on stochastic simulations of the LBS model and will provide statistical error bands for the main elements of the forecast.
The final article, to be published in our June 1983 edition, will contrast the measures of forecast error that we obtain from the estimation process and our stochastic simulations with the errors that we have actually made, as revealed by an examination of our forecasting 'track record'. It is hoped to draw, from this comparison, some general conclusions about the scope and limits of econometric forecasting procedures.

16.
This paper examines the theoretical and empirical properties of a supervised factor model based on combining forecasts using principal components (CFPC), in comparison with two other supervised factor models (partial least squares regression, PLS, and principal covariate regression, PCovR) and with unsupervised principal component regression (PCR). The supervision refers to training the predictors on the variable to be forecast. We compare the performance of the three supervised factor models and the unsupervised factor model in forecasting U.S. CPI inflation. The main finding is that the predictive ability of the supervised factor models is much better than that of the unsupervised factor model. The computation of the factors can be doubly supervised together with variable selection, which can further improve the forecasting performance of the supervised factor models. Among the three supervised factor models, CFPC performs best and is also the most stable. While PCovR also performs well and is stable, the performance of PLS is less stable over different out-of-sample forecasting periods. The effect of supervision becomes even larger as the forecast horizon increases. Supervision helps to reduce the number of factors and lags needed in modelling economic structure, achieving greater parsimony.
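A scikit-learn sketch contrasting unsupervised PCR with supervised PLS on simulated data; CFPC and PCovR from the paper are not implemented here.

```python
# Unsupervised PCR (factors ignore the target) versus supervised PLS
# (factors are trained on the target). Simulated placeholder data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(8)
X = rng.normal(size=(150, 40))
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=150)
X_tr, X_te, y_tr, y_te = X[:120], X[120:], y[:120], y[120:]

pca = PCA(n_components=4).fit(X_tr)                  # PCR: unsupervised factors
pcr = LinearRegression().fit(pca.transform(X_tr), y_tr)

pls = PLSRegression(n_components=4).fit(X_tr, y_tr)  # PLS: supervised factors

mse = lambda pred: float(np.mean((y_te - pred.ravel()) ** 2))
print(mse(pcr.predict(pca.transform(X_te))), mse(pls.predict(X_te)))
```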

17.
Forecast combination through dimension reduction techniques
This paper considers several methods of producing a single forecast from several individual ones. We compare “standard” but hard-to-beat combination schemes (such as the average of forecasts at each period, the consensus forecast, and OLS-based combination schemes) with more sophisticated alternatives that involve dimension reduction techniques. Specifically, we consider principal components, dynamic factor models, partial least squares and sliced inverse regression. Our source of forecasts is the Survey of Professional Forecasters, which provides forecasts for the main US macroeconomic aggregates. The forecasting results show that partial least squares, principal component regression and factor analysis have similar performance (better than the usual benchmark models), but sliced inverse regression shows extreme behavior (it performs either very well or very poorly).

18.
In recent years Statistics Netherlands has published several stochastic population forecasts. The degree of uncertainty of the future population is assessed on the basis of assumptions about the probability distribution of future fertility, mortality and migration. The assumptions on fertility are based on an analysis of historic forecasts of the total fertility rate (TFR), on time‐series models of observations of the TFR, and on expert knowledge. This latter argument‐based approach refers to the TFR distinguished by birth order. In the most recent Dutch forecast the 95% forecast interval of the total fertility rate in 2050 is assumed to range from 1.2 to 2.3 children per woman.

19.
The present study reviews the accuracy of four methods (polls, prediction markets, expert judgment, and quantitative models) for forecasting the two German federal elections in 2013 and 2017. On average across both elections, polls and prediction markets were most accurate, while experts and quantitative models were least accurate. However, the accuracy of individual forecasts did not correlate across elections. That is, the methods that were most accurate in 2013 did not perform particularly well in 2017. A combined forecast, calculated by averaging forecasts within and across methods, was more accurate than three of the four component forecasts. The results conform to prior research on US presidential elections in showing that combining is effective in generating accurate forecasts and avoiding large errors.

20.
Local and state governments depend on small area population forecasts to make important decisions concerning the development of local infrastructure and services. Despite their importance, current methods often produce highly inaccurate forecasts. Recent years have witnessed promising developments in time series forecasting using machine learning across a wide range of social and economic variables. However, limited work has been undertaken to investigate the potential application of machine learning methods in demography, particularly for small area population forecasting. In this paper we describe the development of two Long Short-Term Memory (LSTM) network architectures for small area populations. We employ the Keras Tuner to select layer unit numbers, vary the window width of input data, and apply a double training and validation regime which supports work with short time series and prioritises later sequence values for forecasts. These methods are transferable and can be applied to other data sets. Retrospective small area population forecasts for Australia were created for the periods 2006–16 and 2011–16. Model performance was evaluated against actual data and two benchmark methods (LIN/EXP and CSP-VSG). We also evaluated the impact of constraining small area population forecasts to an independent national forecast. Forecast accuracy was influenced by jump-off year, constraining, area size, and remoteness. The LIN/EXP model was the best-performing method for the 2011-based forecasts, whilst deep learning methods performed best for the 2006-based forecasts, including significant improvements in the accuracy of 10-year forecasts. However, benchmark methods were consistently more accurate for more remote areas and for those with populations below 5000.
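A minimal Keras sketch of an LSTM on a short synthetic population series; the window width and layer size are illustrative, not the Keras-Tuner-selected values from the paper.

```python
# Windowed LSTM on a short synthetic population series: turn the series into
# (window -> next value) pairs, fit, and forecast one step ahead.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(9)
series = 1000 + 5 * np.arange(30) + rng.normal(scale=10, size=30)

window = 5                                           # illustrative window width
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., None]                                     # shape: (samples, timesteps, features)

model = keras.Sequential([
    keras.Input(shape=(window, 1)),
    keras.layers.LSTM(16),                           # illustrative layer size
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, verbose=0)
print(model.predict(series[-window:].reshape(1, window, 1), verbose=0))
```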
