Similar Documents
20 similar documents retrieved.
1.
Combining exponential smoothing forecasts using Akaike weights   (Total citations: 1; self-citations: 0; citations by others: 1)
Simple forecast combinations such as medians and trimmed or winsorized means are known to improve the accuracy of point forecasts, and Akaike’s Information Criterion (AIC) has given rise to so-called Akaike weights, which have been used successfully to combine statistical models for inference and prediction in specialist fields, e.g., ecology and medicine. We examine combining exponential smoothing point and interval forecasts using weights derived from AIC, small-sample-corrected AIC and BIC on the M1 and M3 Competition datasets. Weighted forecast combinations perform better than forecasts selected using information criteria, in terms of both point forecast accuracy and prediction interval coverage. Simple combinations and weighted combinations do not consistently outperform one another, while simple combinations sometimes perform worse than single forecasts selected by information criteria. We find a tendency for a longer history to be associated with better prediction interval coverage.
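As a rough illustration of the weighting scheme described above, the sketch below converts information-criterion values into Akaike weights and applies them to point forecasts. The function name and the toy AIC and forecast values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def akaike_weights(aics):
    """Convert information-criterion values (AIC, AICc or BIC) into combination weights."""
    aics = np.asarray(aics, dtype=float)
    delta = aics - aics.min()          # differences from the best (lowest) criterion value
    w = np.exp(-0.5 * delta)           # relative likelihoods
    return w / w.sum()                 # normalise to sum to one

# Toy example: three exponential smoothing variants with hypothetical AIC values
aic_values = [102.3, 104.1, 110.8]
point_forecasts = np.array([520.0, 515.0, 540.0])   # one-step-ahead forecasts from each model

weights = akaike_weights(aic_values)
combined = weights @ point_forecasts
print(weights, combined)
```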

2.
This article provides a first analysis of the forecasts of inflation and GDP growth obtained from the Bank of England's Survey of External Forecasters, considering both the survey average forecasts published in the quarterly Inflation Report, and the individual survey responses, recently made available by the Bank. These comprise a conventional incomplete panel dataset, with an additional dimension arising from the collection of forecasts at several horizons; both point forecasts and density forecasts are collected. The inflation forecasts show good performance in tests of unbiasedness and efficiency, albeit over a relatively calm period for the UK economy, and there is considerable individual heterogeneity. For GDP growth, inaccurate real-time data and their subsequent revisions are seen to cause serious difficulties for forecast construction and evaluation, although the forecasts are again unbiased. There is evidence that some forecasters have asymmetric loss functions.

3.
We evaluate the performances of various methods for forecasting tourism data. The data used include 366 monthly series, 427 quarterly series and 518 annual series, all supplied to us by either tourism bodies or academics who had used them in previous tourism forecasting studies. The forecasting methods implemented in the competition are univariate and multivariate time series approaches, and econometric models. This forecasting competition differs from previous competitions in several ways: (i) we concentrate on tourism data only; (ii) we include approaches with explanatory variables; (iii) we evaluate the forecast interval coverage as well as the point forecast accuracy; (iv) we observe the effect of temporal aggregation on the forecasting accuracy; and (v) we consider the mean absolute scaled error as an alternative forecasting accuracy measure. We find that pure time series approaches provide more accurate forecasts for tourism data than models with explanatory variables. For seasonal data we implement three fully automated pure time series algorithms that generate accurate point forecasts, and two of these also produce forecast coverage probabilities which are satisfactorily close to the nominal rates. For annual data we find that Naïve forecasts are hard to beat.
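The mean absolute scaled error (MASE) mentioned in point (v) can be computed as in the minimal sketch below, which scales out-of-sample absolute errors by the in-sample MAE of a seasonal naïve forecast; the seasonal period and the toy data are assumptions for illustration.

```python
import numpy as np

def mase(y_train, y_test, y_pred, m=1):
    """Mean absolute scaled error: out-of-sample MAE scaled by the in-sample
    MAE of the (seasonal) naive method with period m."""
    y_train, y_test, y_pred = map(np.asarray, (y_train, y_test, y_pred))
    naive_mae = np.mean(np.abs(y_train[m:] - y_train[:-m]))
    return np.mean(np.abs(y_test - y_pred)) / naive_mae

# Toy quarterly series (m=4); the values below are illustrative only
history = np.array([112, 118, 132, 129, 121, 135, 148, 148], dtype=float)
actuals = np.array([136, 119], dtype=float)
forecasts = np.array([130, 125], dtype=float)
print(mase(history, actuals, forecasts, m=4))
```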

4.
A government’s ability to forecast key economic fundamentals accurately can affect business confidence, consumer sentiment, and foreign direct investment, among other things. A government forecast based on an econometric model is replicable, whereas one that is not fully based on an econometric model is non-replicable. Governments typically provide non-replicable forecasts (or expert forecasts) of economic fundamentals, such as the inflation rate and real GDP growth rate. In this paper, we develop a methodology for evaluating non-replicable forecasts. We argue that in order to do so, one needs to retrieve from the non-replicable forecast its replicable component, and that it is the difference in accuracy between these two that matters. An empirical example to forecast economic fundamentals for Taiwan shows the relevance of the proposed methodological approach. Our main finding is that the undocumented knowledge of the Taiwanese government reduces forecast errors substantially.

5.
Forecast combination through dimension reduction techniques   (Total citations: 2; self-citations: 0; citations by others: 2)
This paper considers several methods of producing a single forecast from several individual ones. We compare “standard” but hard-to-beat combination schemes (such as the average of forecasts at each period, i.e., the consensus forecast, and OLS-based combination schemes) with more sophisticated alternatives that involve dimension reduction techniques. Specifically, we consider principal components, dynamic factor models, partial least squares and sliced inverse regression. Our source of forecasts is the Survey of Professional Forecasters, which provides forecasts for the main US macroeconomic aggregates. The forecasting results show that partial least squares, principal component regression and factor analysis have similar performances (better than the usual benchmark models), but sliced inverse regression shows extreme behavior (it performs either very well or very poorly).

6.
A geometric interpretation is developed for so-called reconciliation methodologies used to forecast time series that adhere to known linear constraints. In particular, a general framework is established that nests many existing popular reconciliation methods within the class of projections. This interpretation facilitates the derivation of novel theoretical results. First, reconciliation via projection is guaranteed to improve forecast accuracy with respect to a class of loss functions based on a generalised distance metric. Second, the Minimum Trace (MinT) method minimises expected loss for this same class of loss functions. Third, the geometric interpretation provides a new proof that forecast reconciliation using projections results in unbiased forecasts, provided that the initial base forecasts are also unbiased. Approaches for dealing with biased base forecasts are proposed. An extensive empirical study of Australian tourism flows demonstrates the theoretical results of the paper and shows that bias correction prior to reconciliation outperforms alternatives that only bias-correct or only reconcile forecasts.
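A minimal sketch of reconciliation as a projection is given below, assuming a hypothetical two-level hierarchy (total = A + B). With an identity weight matrix the projection reduces to OLS reconciliation; replacing it with an estimate of the base-forecast error covariance gives a MinT-style projection.

```python
import numpy as np

def reconcile(base_forecasts, S, W):
    """Project base forecasts onto the coherent subspace spanned by S, using the
    generalised-least-squares projection implied by W (W = identity gives OLS
    reconciliation; W = base-forecast error covariance gives MinT)."""
    Winv = np.linalg.inv(W)
    G = np.linalg.solve(S.T @ Winv @ S, S.T @ Winv)   # (S' W^-1 S)^-1 S' W^-1
    return S @ G @ base_forecasts

# Hypothetical hierarchy: total = A + B
S = np.array([[1, 1],
              [1, 0],
              [0, 1]], dtype=float)      # rows: total, A, B; columns: bottom-level series
base = np.array([105.0, 60.0, 50.0])     # incoherent base forecasts (60 + 50 != 105)
W = np.eye(3)                            # identity weights -> OLS projection

print(reconcile(base, S, W))             # coherent forecasts: total equals the sum of the parts
```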

7.
It is common practice to evaluate fixed-event forecast revisions in macroeconomics by regressing current forecast revisions on one-period lagged forecast revisions. Under weak-form (forecast) efficiency, the correlation between the current and one-period lagged revisions should be zero. The empirical findings in the literature suggest that this null hypothesis of zero correlation is rejected frequently, and the correlation can be either positive (which is widely interpreted in the literature as “smoothing”) or negative (which is widely interpreted as “over-reacting”). We propose a methodology for interpreting such non-zero correlations in a straightforward and clear manner. Our approach is based on the assumption that numerical forecasts can be decomposed into both an econometric model and random expert intuition. We show that the interpretation of the sign of the correlation between the current and one-period lagged revisions depends on the process governing intuition, and the current and lagged correlations between intuition and news (or shocks to the numerical forecasts). It follows that the estimated non-zero correlation cannot be given a direct interpretation in terms of either smoothing or over-reaction.
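A minimal sketch of the weak-form efficiency regression described above, using statsmodels and made-up revision data; under the null hypothesis the slope on the lagged revision is zero.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical fixed-event forecast revisions for one target year
# (each element is forecast_t - forecast_{t-1} for the same target).
revisions = np.array([0.3, 0.1, -0.2, 0.2, 0.0, -0.1, 0.1, -0.3, 0.2, 0.0])

current = revisions[1:]                       # revision at t
lagged = sm.add_constant(revisions[:-1])      # revision at t-1, plus an intercept

fit = sm.OLS(current, lagged).fit()
print(fit.params)     # a slope near zero is consistent with weak-form efficiency
print(fit.pvalues)    # a significant positive slope suggests smoothing, a negative one over-reaction
```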

8.
Identifying the most appropriate time series model to achieve a good forecasting accuracy is a challenging task. We propose a novel algorithm that aims to mitigate the importance of model selection, while increasing the accuracy. Multiple time series are constructed from the original time series, using temporal aggregation. These derivative series highlight different aspects of the original data, as temporal aggregation helps in strengthening or attenuating the signals of different time series components. In each series, the appropriate exponential smoothing method is fitted and its respective time series components are forecast. Subsequently, the time series components from each aggregation level are combined, then used to construct the final forecast. This approach achieves a better estimation of the different time series components, through temporal aggregation, and reduces the importance of model selection through forecast combination. An empirical evaluation of the proposed framework demonstrates significant improvements in forecasting accuracy, especially for long-term forecasts.
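The sketch below shows only the temporal aggregation step used to build the derivative series (the fitting and recombination of exponential smoothing components are omitted); the aggregation levels and toy monthly data are assumptions.

```python
import numpy as np

def temporally_aggregate(y, k):
    """Aggregate a series into non-overlapping buckets of length k (e.g. monthly -> quarterly
    for k=3), discarding leftover observations at the start so buckets align with the end."""
    y = np.asarray(y, dtype=float)
    n_keep = (len(y) // k) * k
    trimmed = y[len(y) - n_keep:]
    return trimmed.reshape(-1, k).sum(axis=1)

# Hypothetical monthly series aggregated to several levels
monthly = np.arange(1, 25, dtype=float)          # 24 months of toy data
for k in (1, 3, 6, 12):                          # monthly, quarterly, half-yearly, yearly
    print(k, temporally_aggregate(monthly, k))
```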

9.
If ‘learning by doing’ is important for macro-forecasting, newcomers might be different from regular, established participants. Stayers may also differ from the soon-to-leave. We test these conjectures for macro-forecasters’ point predictions of output growth and inflation, and for their histogram forecasts. Histogram forecasts of inflation by both joiners and leavers are found to be less accurate, especially if we suppose that joiners take time to learn. For GDP growth, there is no evidence of differences between the groups in terms of histogram forecast accuracy, although GDP point forecasts by leavers are less accurate. These findings are predicated on forecasters being homogeneous within groups. Allowing for individual fixed effects suggests fewer differences, including leavers’ inflation histogram forecasts being no less accurate.

10.
This paper uses the forecast from a random walk model of inflation as a benchmark to test and compare the forecast performance of several alternative forecasts of future inflation, including the Greenbook forecast by the Fed staff, the Survey of Professional Forecasters median forecast, CPI inflation minus food and energy, CPI weighted median inflation, and CPI trimmed mean inflation. The Greenbook forecast was found in previous literature to be a better forecast than other private sector forecasts. Our results indicate that both the Greenbook and the Survey of Professional Forecasters median forecasts of inflation and core inflation measures may contain better information than forecasts from a random walk model. The Greenbook's superiority appears to have declined relative to other forecasts and core inflation measures.
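A minimal sketch of the benchmark comparison, assuming made-up inflation data and a generic alternative forecast: the alternative beats the random walk whenever the RMSE ratio falls below one.

```python
import numpy as np

def rmse(errors):
    return np.sqrt(np.mean(np.square(errors)))

# Hypothetical quarterly inflation data (annualised %, made up for illustration)
inflation = np.array([2.1, 2.3, 2.0, 1.8, 2.2, 2.5, 2.4, 2.6, 2.3, 2.1])
alt_forecast = np.array([2.2, 2.1, 1.9, 2.1, 2.4, 2.4, 2.5, 2.4, 2.2])   # e.g. a survey median

rw_forecast = inflation[:-1]                  # random walk: next value = current value
actuals = inflation[1:]

ratio = rmse(actuals - alt_forecast) / rmse(actuals - rw_forecast)
print(ratio)    # values below 1 mean the alternative beats the random walk benchmark
```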

11.
In a data-rich environment, forecasting economic variables amounts to extracting and organizing useful information from a large number of predictors. So far, the dynamic factor model and its variants have been the most successful models for such exercises. In this paper, we investigate a category of LASSO-based approaches and evaluate their predictive abilities for forecasting twenty important macroeconomic variables. These alternative models can handle hundreds of data series simultaneously, and extract useful information for forecasting. We also show, both analytically and empirically, that combining forecasts from LASSO-based models with those from dynamic factor models can reduce the mean square forecast error (MSFE) further. Our three main findings can be summarized as follows. First, for most of the variables under investigation, all of the LASSO-based models outperform dynamic factor models in the out-of-sample forecast evaluations. Second, by extracting information and formulating predictors at economically meaningful block levels, the new methods greatly enhance the interpretability of the models. Third, once forecasts from a LASSO-based approach are combined with those from a dynamic factor model by forecast combination techniques, the combined forecasts are significantly better than either dynamic factor model forecasts or the naïve random walk benchmark.
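A minimal sketch of a LASSO forecast against a naïve benchmark in a data-rich setting, using scikit-learn with simulated data; the block-level formulation and the combination with dynamic factor model forecasts described in the paper are not reproduced here, and the penalty value is an arbitrary assumption.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Hypothetical data-rich setting: 120 periods, 100 candidate predictors, only 3 relevant
T, p = 120, 100
X = rng.standard_normal((T, p))
y = 0.6 * X[:, 0] - 0.4 * X[:, 5] + 0.3 * X[:, 17] + 0.5 * rng.standard_normal(T)

train, test = slice(0, 100), slice(100, T)
model = Lasso(alpha=0.1).fit(X[train], y[train])      # L1 penalty selects a sparse predictor set

lasso_pred = model.predict(X[test])
naive_pred = np.full(lasso_pred.shape, y[train][-1])  # naive benchmark: repeat the last value

print("non-zero coefficients:", np.sum(model.coef_ != 0))
print("LASSO MSFE:", np.mean((y[test] - lasso_pred) ** 2))
print("naive MSFE:", np.mean((y[test] - naive_pred) ** 2))
```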

12.
In this paper, we define forecast (in)stability in terms of the variability in forecasts for a specific time period caused by updating the forecast for this time period when new observations become available, i.e., as time passes. We propose an extension to the state-of-the-art N-BEATS deep learning architecture for the univariate time series point forecasting problem. The extension allows us to optimize forecasts from both a traditional forecast accuracy perspective and a forecast stability perspective. We show that the proposed extension results in forecasts that are more stable without leading to a deterioration in forecast accuracy for the M3 and M4 data sets. Moreover, our experimental study shows that it is possible to improve both forecast accuracy and stability compared to the original N-BEATS architecture, indicating that including a forecast instability component in the loss function can be used as a regularization mechanism.
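The sketch below is not the paper's N-BEATS loss, only an illustration of the general idea: a composite objective that adds a forecast-instability penalty (squared changes between successive forecast vintages for the same targets) to ordinary squared-error accuracy, with an assumed trade-off parameter lam.

```python
import numpy as np

def accuracy_and_stability_loss(actuals, new_forecasts, old_forecasts, lam=0.5):
    """Composite loss: forecast error (MSE against actuals) plus lam times forecast
    instability (mean squared change between successive forecast vintages for the
    same target periods). lam controls the accuracy/stability trade-off."""
    accuracy = np.mean((np.asarray(actuals) - np.asarray(new_forecasts)) ** 2)
    instability = np.mean((np.asarray(new_forecasts) - np.asarray(old_forecasts)) ** 2)
    return accuracy + lam * instability

# Toy example: forecasts for the same three future periods, made at t and re-made at t+1
old = [100.0, 102.0, 104.0]
new = [101.0, 103.5, 103.0]
actual = [101.5, 102.5, 104.5]
print(accuracy_and_stability_loss(actual, new, old, lam=0.5))
```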

13.
We evaluate conditional predictive densities for US output growth and inflation using a number of commonly-used forecasting models that rely on large numbers of macroeconomic predictors. More specifically, we evaluate how well conditional predictive densities based on the commonly-used normality assumption fit actual realizations out-of-sample. Our focus on predictive densities acknowledges the possibility that, although some predictors can cause point forecasts to either improve or deteriorate, they might have the opposite effect on higher moments. We find that normality is rejected for most models in some dimension according to at least one of the tests we use. Interestingly, however, combinations of predictive densities appear to be approximated correctly by a normal density: the simple, equal average when predicting output growth, and the Bayesian model average when predicting inflation.

14.
We analyze the relationship between forecaster disagreement and macroeconomic uncertainty in the Euro area using data from the European Central Bank’s Survey of Professional Forecasters for the period 1999Q1–2018Q4 and find that disagreement is generally a poor proxy for uncertainty. However, the strength of this link varies with the dispersion statistic employed, the choice of either the point forecasts or the histogram means for calculating disagreement, the outcome variable considered and the forecast horizon. In contrast, distributional assumptions do not appear to be very influential. The relationship is weaker in subsamples before and after the outbreak of the Great Recession. Accounting for the forecasters’ entry to and exit from the survey has little impact on the results. We also show that survey-based uncertainty is associated with overall policy uncertainty, whereas forecaster disagreement is related more closely to the expected fluctuations on financial markets.
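A minimal sketch of disagreement as a cross-sectional dispersion statistic over individual point forecasts, with two common choices (standard deviation and interquartile range); the panel values are made up.

```python
import numpy as np

def disagreement(point_forecasts, stat="std"):
    """Cross-sectional dispersion of individual point forecasts for one target and horizon."""
    x = np.asarray(point_forecasts, dtype=float)
    if stat == "std":
        return x.std(ddof=1)
    if stat == "iqr":
        q75, q25 = np.percentile(x, [75, 25])
        return q75 - q25
    raise ValueError("unknown dispersion statistic")

# Hypothetical SPF-style panel: one quarter's inflation point forecasts (%)
forecasts = [1.6, 1.8, 1.7, 2.1, 1.5, 1.9, 1.8, 2.4]
print(disagreement(forecasts, "std"), disagreement(forecasts, "iqr"))
```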

15.
Expert opinion is an opinion given by an expert, and it can have significant value in forecasting key policy variables in economics and finance. Expert forecasts can either be expert opinions, or forecasts based on an econometric model. An expert forecast that is based on an econometric model is replicable, and can be defined as a replicable expert forecast (REF), whereas an expert opinion that is not based on an econometric model can be defined as a non-replicable expert forecast (Non-REF). Both REF and Non-REF may be made available by an expert regarding a policy variable of interest. In this paper, we develop a model to generate REF, and compare REF with Non-REF. A method is presented to compare REF and Non-REF using efficient estimation methods, and a direct test of expertise on expert opinion is given. The latter serves the purpose of investigating whether expert adjustment improves the model-based forecasts. Illustrations for forecasting pharmaceutical stock keeping units (SKUs), where the econometric model is a variation of the autoregressive integrated moving average (ARIMA) type, show the relevance of the new methodology proposed in the paper. In particular, experts possess significant expertise, and expert forecasts are significant in explaining actual sales.

16.
The literature suggests that the dispersion of agents’ forecasts of an event flows from heterogeneity of beliefs and models. Using a data set of fixed event point forecasts of UK GDP growth by a panel of independent forecasters published by HM Treasury, we investigate three questions concerning this dispersion: (a) Are agents’ beliefs randomly distributed, or do agents fall into groups with similar beliefs? (b) As agents revise their forecasts, what roles are played by their previous and consensus forecasts? (c) Is an agent’s private information of persistent value? We find that agents fall into four clusters: a large majority, a few pessimists, and two idiosyncratic agents. Our proposed model of forecast revisions shows that agents are influenced positively by a change in the consensus forecast and negatively by the previous distance of their forecast from the consensus. We show that the forecasts of a minority of agents significantly lead the consensus.

17.
It is commonly accepted that information is helpful if it can be exploited to improve a decision making process. In economics, decisions are often based on forecasts of the upward or downward movements of the variable of interest. We point out that directional forecasts can provide a useful framework for assessing the economic value of forecasts when loss functions (or success measures) are properly formulated to account for the realized signs and realized magnitudes of directional movements. We discuss a general approach to (directional) forecast evaluation which is based on the loss function proposed by Granger, Pesaran and Skouras. It is simple to implement and provides an economically interpretable loss/success functional framework. We show that, in addition, this loss function is more robust to outlying forecasts than traditional loss functions. As such, the measure of the directional forecast value is a readily available complement to the commonly used squared error loss criterion.
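The sketch below shows one simple sign-times-magnitude success measure in the spirit of the directional evaluation described above; it is an illustrative assumption, not necessarily the exact functional form used in the paper.

```python
import numpy as np

def directional_success(actual_changes, predicted_directions):
    """Average realised pay-off from acting on directional forecasts: each period
    contributes the realised magnitude with a positive sign when the predicted
    direction is right and a negative sign when it is wrong."""
    actual_changes = np.asarray(actual_changes, dtype=float)
    predicted_directions = np.sign(np.asarray(predicted_directions, dtype=float))
    return np.mean(predicted_directions * actual_changes)

# Toy example: predicted up/down movements versus realised changes
realised = [0.4, -0.2, 0.1, -0.5, 0.3]
predicted = [+1, -1, -1, -1, +1]                   # signs of the directional forecasts
print(directional_success(realised, predicted))    # higher is better; squared error loss ignores this
```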

18.
This paper constructs hybrid forecasts that combine forecasts from vector autoregressive (VAR) model(s) with both short- and long-term expectations from surveys. Specifically, we use the relative entropy to tilt one-step-ahead and long-horizon VAR forecasts to match the nowcasts and long-horizon forecasts from the Survey of Professional Forecasters. We consider a variety of VAR models, ranging from simple fixed-parameter to time-varying parameters. The results across models indicate meaningful gains in multi-horizon forecast accuracy relative to model forecasts that do not incorporate long-term survey conditions. Accuracy improvements are achieved for a range of variables, including those that are not tilted directly but are affected through spillover effects from tilted variables. The accuracy gains for hybrid inflation forecasts from simple VARs are substantial, statistically significant, and competitive to time-varying VARs, univariate benchmarks, and survey forecasts. We view our proposal as an indirect approach to accommodating structural change and moving end points.
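A minimal sketch of relative-entropy (exponential) tilting for a single moment condition: equal-weighted predictive draws are re-weighted so their mean matches a survey value, assuming hypothetical draws and a 2.0% target; the paper's multi-horizon, multi-variable procedure is more involved.

```python
import numpy as np
from scipy.optimize import brentq

def tilt_to_mean(draws, target_mean):
    """Exponentially tilt equal-weighted model draws so their weighted mean matches a
    survey value, which minimises relative entropy (KL divergence) to the original
    equal weights subject to that single moment condition."""
    x = np.asarray(draws, dtype=float)

    def gap(lam):
        z = lam * x
        w = np.exp(z - z.max())              # subtract the max exponent for numerical stability
        w /= w.sum()
        return w @ x - target_mean

    lam = brentq(gap, -50.0, 50.0)           # solve the one-dimensional moment condition
    z = lam * x
    w = np.exp(z - z.max())
    return w / w.sum()

# Hypothetical predictive draws for long-run inflation from a VAR, tilted to a 2.0% survey value
draws = np.random.default_rng(1).normal(1.6, 0.5, size=5000)
weights = tilt_to_mean(draws, 2.0)
print(weights @ draws)                       # ~2.0 after tilting
```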

19.
This paper proposes a three-step approach to forecasting time series of electricity consumption at different levels of household aggregation. These series are linked by hierarchical constraints—global consumption is the sum of regional consumption, for example. First, benchmark forecasts are generated for all series using generalized additive models. Second, for each series, the aggregation algorithm ML-Poly, introduced by Gaillard, Stoltz, and van Erven in 2014, finds an optimal linear combination of the benchmarks. Finally, the forecasts are projected onto a coherent subspace to ensure that the final forecasts satisfy the hierarchical constraints. By minimizing a regret criterion, we show that the aggregation and projection steps improve the root mean square error of the forecasts. Our approach is tested on household electricity consumption data; experimental results suggest that successive aggregation and projection steps improve the benchmark forecasts at different levels of household aggregation.

20.
Attention has recently been given to combinations of subjective and objective forecasts to improve forecast accuracy. This research offers an extension of this theme by comparing two methods that can be used to adjust an objective forecast. Wolfe and Flores (1990) show that forecasts can be judgmentally adjusted by analysts using a structured approach based on Saaty's analytic hierarchy process (AHP). In this study, the centroid method is introduced as a vehicle for forecast adjustment and is compared to the AHP. While the AHP allows for finer tuning in reflecting decision-maker judgement, the centroid method produces very similar results and is much simpler to use in the forecast adjustment process.
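As a hedged illustration, the sketch below uses rank-order-centroid weights, one common formulation of the centroid method, to turn a simple ranking of judgemental factors into adjustment weights; the exact variant and the adjustment mechanics in the paper may differ, and all numbers are made up.

```python
import numpy as np

def rank_order_centroid_weights(n):
    """Centroid weights for n factors ranked from most to least important:
    w_i = (1/n) * sum_{j=i}^{n} 1/j. Only a ranking is needed, unlike the AHP's
    full matrix of pairwise comparisons."""
    return np.array([sum(1.0 / j for j in range(i, n + 1)) / n for i in range(1, n + 1)])

# Hypothetical adjustment: three ranked judgemental factors applied to a model forecast
weights = rank_order_centroid_weights(3)            # ~[0.611, 0.278, 0.111]
factor_adjustments = np.array([+5.0, -2.0, +1.0])   # analyst's adjustment per factor, in forecast units
model_forecast = 100.0
adjusted = model_forecast + weights @ factor_adjustments
print(weights, adjusted)
```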
