Similar articles
20 similar articles found (search time: 453 ms)
1.
Identifying the most appropriate time series model to achieve a good forecasting accuracy is a challenging task. We propose a novel algorithm that aims to mitigate the importance of model selection, while increasing the accuracy. Multiple time series are constructed from the original time series, using temporal aggregation. These derivative series highlight different aspects of the original data, as temporal aggregation helps in strengthening or attenuating the signals of different time series components. In each series, the appropriate exponential smoothing method is fitted and its respective time series components are forecast. Subsequently, the time series components from each aggregation level are combined, then used to construct the final forecast. This approach achieves a better estimation of the different time series components, through temporal aggregation, and reduces the importance of model selection through forecast combination. An empirical evaluation of the proposed framework demonstrates significant improvements in forecasting accuracy, especially for long-term forecasts.
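The temporal aggregation step described above can be sketched as follows. This is a minimal illustration of the general idea, not the authors' exact algorithm, and the series values are hypothetical:

```python
# Non-overlapping temporal aggregation: summing buckets of size k attenuates
# noise and seasonality and strengthens the trend/level signal, which is why
# different aggregation levels expose different time series components.

def aggregate(series, k):
    """Sum non-overlapping buckets of length k; an incomplete tail bucket is dropped."""
    n = len(series) // k * k
    return [sum(series[i:i + k]) for i in range(0, n, k)]

monthly = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118]
quarterly = aggregate(monthly, 3)   # 4 quarterly observations
annual = aggregate(monthly, 12)     # 1 annual observation
```

A method such as exponential smoothing would then be fitted at each level, and the component estimates combined across levels.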

2.
We evaluate the performances of various methods for forecasting tourism data. The data used include 366 monthly series, 427 quarterly series and 518 annual series, all supplied to us by either tourism bodies or academics who had used them in previous tourism forecasting studies. The forecasting methods implemented in the competition are univariate and multivariate time series approaches, and econometric models. This forecasting competition differs from previous competitions in several ways: (i) we concentrate on tourism data only; (ii) we include approaches with explanatory variables; (iii) we evaluate the forecast interval coverage as well as the point forecast accuracy; (iv) we observe the effect of temporal aggregation on the forecasting accuracy; and (v) we consider the mean absolute scaled error as an alternative forecasting accuracy measure. We find that pure time series approaches provide more accurate forecasts for tourism data than models with explanatory variables. For seasonal data we implement three fully automated pure time series algorithms that generate accurate point forecasts, and two of these also produce forecast coverage probabilities which are satisfactorily close to the nominal rates. For annual data we find that Naïve forecasts are hard to beat.
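The mean absolute scaled error mentioned in (v) is a unit-free measure that scales out-of-sample errors by the in-sample MAE of the one-step naive forecast. A minimal sketch, using the non-seasonal scaling (seasonal variants divide by the seasonal naive MAE instead); the numbers are hypothetical:

```python
# MASE: out-of-sample MAE divided by the in-sample MAE of the naive
# ("no-change") forecast. Values below 1 beat the naive benchmark.

def mase(train, actual, forecast):
    scale = sum(abs(train[t] - train[t - 1]) for t in range(1, len(train))) / (len(train) - 1)
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual) / scale

train = [10.0, 12.0, 11.0, 13.0]
score = mase(train, actual=[14.0, 15.0], forecast=[13.0, 13.0])
```

Because the scale comes from the series itself, MASE can be averaged across series with different units, which is what makes it useful in a competition of this kind.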

3.
In this research, we propose a disaster response model combining preparedness and responsiveness strategies. The selective response depends on the level of accuracy that our forecasting models can achieve. In order to decide the right geographical space and time window of response, forecasts are prepared and assessed through a spatial–temporal aggregation framework, until we find the optimum level of aggregation. The research considers major earthquake data for the period 1985–2014. Building on the produced forecasts, we develop accordingly a disaster response model. The model is dynamic in nature, as it is updated every time a new event is added to the database. Any forecasting model can be optimized through the proposed spatial–temporal forecasting framework, so our results generalize readily to other forecasting methods and other disaster response contexts.

4.
While combining forecasts is well-known to reduce error, the question of how to best combine forecasts remains. Prior research suggests that combining is most beneficial when relying on diverse forecasts that incorporate different information. Here, I provide evidence in support of this hypothesis by analyzing data from the PollyVote project, which has published combined forecasts of the popular vote in U.S. presidential elections since 2004. Prior to the 2020 election, the PollyVote revised its original method of combining forecasts by, first, restructuring individual forecasts based on their underlying information and, second, adding naïve forecasts as a new component method. On average across the last 100 days prior to the five elections from 2004 to 2020, the revised PollyVote reduced the error of the original specification by eight percent and, with a mean absolute error (MAE) of 0.8 percentage points, was more accurate than any of its component forecasts. The results suggest that, when deciding about which forecasts to include in the combination, forecasters should be more concerned about the component forecasts’ diversity than their historical accuracy.

5.
We develop an iterative and efficient information-theoretic estimator for forecasting interval-valued data, and use our estimator to forecast the S&P 500 returns up to five days ahead using moving windows. Our forecasts are based on 13 years of data. We show that our estimator is superior to its competitors under all of the common criteria that are used to evaluate forecasts of interval data. Our approach differs from other methods that are used to forecast interval data in two major ways. First, rather than applying the more traditional methods that use only certain moments of the intervals in the estimation process, our estimator uses the complete sample information. Second, our method simultaneously selects the model (or models) and infers the model’s parameters. It is an iterative approach that imposes minimal structure and statistical assumptions.

6.
Combining exponential smoothing forecasts using Akaike weights (total citations: 1; self-citations: 0; citations by others: 1)
Simple forecast combinations such as medians and trimmed or winsorized means are known to improve the accuracy of point forecasts, and Akaike’s Information Criterion (AIC) has given rise to so-called Akaike weights, which have been used successfully to combine statistical models for inference and prediction in specialist fields, e.g., ecology and medicine. We examine combining exponential smoothing point and interval forecasts using weights derived from AIC, small-sample-corrected AIC and BIC on the M1 and M3 Competition datasets. Weighted forecast combinations perform better than forecasts selected using information criteria, in terms of both point forecast accuracy and prediction interval coverage. Simple combinations and weighted combinations do not consistently outperform one another, while simple combinations sometimes perform worse than single forecasts selected by information criteria. We find a tendency for a longer history to be associated with a better prediction interval coverage.
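The Akaike weights referred to above follow a standard formula: each model's AIC is converted into a relative likelihood exp(-Δi/2), where Δi is the model's AIC minus the minimum AIC, and the results are normalised to sum to one. A sketch with hypothetical AIC values for three exponential smoothing variants:

```python
import math

# Akaike weights: relative likelihoods exp(-delta/2), normalised to sum to 1.
# The combined point forecast is the weight-averaged component forecast.

def akaike_weights(aics):
    best = min(aics)
    rel = [math.exp(-(a - best) / 2) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Illustrative AICs and point forecasts for three candidate models.
weights = akaike_weights([100.0, 102.0, 110.0])
forecasts = [50.0, 54.0, 60.0]
combined = sum(w * f for w, f in zip(weights, forecasts))
```

The same weights can be computed from small-sample-corrected AIC or BIC values, which is the comparison the abstract describes.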

7.
This paper proposes a three-step approach to forecasting time series of electricity consumption at different levels of household aggregation. These series are linked by hierarchical constraints—global consumption is the sum of regional consumption, for example. First, benchmark forecasts are generated for all series using generalized additive models. Second, for each series, the aggregation algorithm ML-Poly, introduced by Gaillard, Stoltz, and van Erven in 2014, finds an optimal linear combination of the benchmarks. Finally, the forecasts are projected onto a coherent subspace to ensure that the final forecasts satisfy the hierarchical constraints. By minimizing a regret criterion, we show that the aggregation and projection steps improve the root mean square error of the forecasts. Our approach is tested on household electricity consumption data; experimental results suggest that successive aggregation and projection steps improve the benchmark forecasts at different levels of household aggregation.
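The projection step can be sketched with the simplest possible hierarchy (total = region A + region B) and ordinary least-squares reconciliation, a standard choice; the paper's regret-minimising setup is richer, so treat this as an assumption-laden illustration:

```python
# OLS reconciliation: project base forecasts y = [total, A, B] onto the
# coherent subspace via S (S'S)^{-1} S' y, with summing matrix
# S = [[1, 1], [1, 0], [0, 1]]. Here the 2x2 inverse is written out by hand:
# S'S = [[2, 1], [1, 2]], whose inverse is (1/3) * [[2, -1], [-1, 2]].

def reconcile(y_total, y_a, y_b):
    s1 = y_total + y_a          # first entry of S'y
    s2 = y_total + y_b          # second entry of S'y
    b_a = (2 * s1 - s2) / 3     # reconciled bottom-level forecasts
    b_b = (2 * s2 - s1) / 3
    return [b_a + b_b, b_a, b_b]

coherent = reconcile(100.0, 55.0, 48.0)  # base forecasts violate 55 + 48 = 100
```

After projection the forecasts satisfy the hierarchical constraint exactly, while staying as close as possible (in least squares) to the base forecasts.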

8.
The dynamic behavior of the term structure of interest rates is difficult to replicate with models, and even models with a proven track record of empirical performance have underperformed since the early 2000s. On the other hand, survey expectations can accurately predict yields, but they are typically not available for all maturities and/or forecast horizons. We show how survey expectations can be exploited to improve the accuracy of yield curve forecasts given by a base model. We do so by employing a flexible exponential tilting method that anchors the model forecasts to the survey expectations, and we develop a test to guide the choice of the anchoring points. The method implicitly incorporates into yield curve forecasts any information that survey participants have access to—such as information about the current state of the economy or forward‐looking information contained in monetary policy announcements—without the need to explicitly model it. We document that anchoring delivers large and significant gains in forecast accuracy relative to the class of models that are widely adopted by financial and policy institutions for forecasting the term structure of interest rates.

9.
Recent research has found that macroeconomic survey forecasts of uncertainty exhibit several deficiencies, such as horizon-dependent biases and lower levels of accuracy than simple unconditional uncertainty forecasts. We examine the inflation uncertainty forecasts from the Bank of England, the Banco Central do Brasil, the Magyar Nemzeti Bank and the Sveriges Riksbank to assess whether central banks’ uncertainty forecasts might be subject to similar problems. We find that, while most central banks’ uncertainty forecasts also tend to be underconfident at short horizons and overconfident at longer horizons, they are mostly not significantly biased. Moreover, they tend to be at least as precise as unconditional uncertainty forecasts from two different approaches.

10.
We summarize the literature on the effectiveness of combining forecasts by assessing the conditions under which combining is most valuable. Using data on the six US presidential elections from 1992 to 2012, we report the reductions in error obtained by averaging forecasts within and across four election forecasting methods: poll projections, expert judgment, quantitative models, and the Iowa Electronic Markets. Across the six elections, the resulting combined forecasts were more accurate than any individual component method, on average. The gains in accuracy from combining increased with the numbers of forecasts used, especially when these forecasts were based on different methods and different data, and in situations involving high levels of uncertainty. Such combining yielded error reductions of between 16% and 59%, compared to the average errors of the individual forecasts. This improvement is substantially greater than the 12% reduction in error that had been reported previously for combining forecasts.
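The combining principle summarised above can be shown in a few lines: averaging diverse forecasts cancels offsetting errors, so the combined error is never worse than the average component error (by the triangle inequality) and is usually well below it. The numbers here are hypothetical, not the article's election data:

```python
from statistics import mean

# Four component methods mirroring the article's categories, with a
# made-up actual outcome. Errors of opposite sign partly cancel in the average.
actual = 52.0  # hypothetical two-party vote share
components = {"polls": 54.0, "experts": 51.0, "models": 49.5, "markets": 53.0}

avg_component_error = mean(abs(f - actual) for f in components.values())
combined_error = abs(mean(components.values()) - actual)
```

Here the equal-weighted combination's error is a small fraction of the average component error, precisely because the components bracket the outcome from different directions.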

11.
In a data-rich environment, forecasting economic variables amounts to extracting and organizing useful information from a large number of predictors. So far, the dynamic factor model and its variants have been the most successful models for such exercises. In this paper, we investigate a category of LASSO-based approaches and evaluate their predictive abilities for forecasting twenty important macroeconomic variables. These alternative models can handle hundreds of data series simultaneously, and extract useful information for forecasting. We also show, both analytically and empirically, that combining forecasts from LASSO-based models with those from dynamic factor models can reduce the mean square forecast error (MSFE) further. Our three main findings can be summarized as follows. First, for most of the variables under investigation, all of the LASSO-based models outperform dynamic factor models in the out-of-sample forecast evaluations. Second, by extracting information and formulating predictors at economically meaningful block levels, the new methods greatly enhance the interpretability of the models. Third, once forecasts from a LASSO-based approach are combined with those from a dynamic factor model by forecast combination techniques, the combined forecasts are significantly better than either dynamic factor model forecasts or the naïve random walk benchmark.
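The shrinkage at the heart of these LASSO-based approaches can be illustrated in the one-predictor, standardised-data case, where the LASSO solution is simply the soft-thresholded OLS coefficient. This is a toy sketch of the mechanism (the coefficient values are hypothetical), not the paper's block-level estimator:

```python
# Soft thresholding: the univariate LASSO shrinks the OLS coefficient toward
# zero by lam, dropping it entirely when |beta_ols| <= lam. This is how LASSO
# discards uninformative predictors when hundreds of series are available.

def soft_threshold(beta_ols, lam):
    if beta_ols > lam:
        return beta_ols - lam
    if beta_ols < -lam:
        return beta_ols + lam
    return 0.0

strong = soft_threshold(0.75, 0.25)  # informative predictor, shrunk to 0.5
weak = soft_threshold(0.10, 0.25)    # weak predictor, dropped to 0.0
```

With many predictors the same operation is applied coordinate-by-coordinate inside a coordinate-descent loop, which is what makes the method scale to hundreds of series.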

12.
We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show that this new consistent estimator is guaranteed to be positive semi-definite, is robust to certain types of measurement error, and can handle non-synchronous trading. It is the first estimator to possess all three of these properties, each of which is essential for empirical work in this area. We derive the large-sample asymptotics of this estimator and assess its accuracy using a Monte Carlo study. We implement the estimator on US equity data, comparing our results to previous work based on returns measured over 5- or 10-minute intervals. We show that the new estimator is substantially more precise.

13.
There is general agreement in many forecasting contexts that combining individual predictions leads to better final forecasts. However, the relative error reduction in a combined forecast depends upon the extent to which the component forecasts contain unique/independent information. Unfortunately, obtaining independent predictions is difficult in many situations, as these forecasts may be based on similar statistical models and/or overlapping information. The current study addresses this problem by incorporating a measure of coherence into an analytic evaluation framework so that the degree of independence between sets of forecasts can be identified easily. The framework also decomposes the performance and coherence measures in order to illustrate the underlying aspects that are responsible for error reduction. The framework is demonstrated using UK retail prices index inflation forecasts for the period 1998–2014, and implications for forecast users are discussed.

14.
We propose a multivariate dynamic intensity peaks-over-threshold model to capture extremes in multivariate return processes. The random occurrence of extremes is modeled by a multivariate dynamic intensity model, while temporal clustering of their size is captured by an autoregressive multiplicative error model. Applying the model to daily returns of three major stock indexes yields strong empirical support for a temporal clustering of both the occurrence and the size of extremes. Backtesting value-at-risk and expected shortfall forecasts shows that the consideration of clustering effects and of feedback between the magnitudes and the intensity of extremes results in better forecasts of risk.
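The peaks-over-threshold ingredient can be sketched as follows; this shows only the standard first step of extracting exceedances (the paper's dynamic-intensity and multiplicative-error layers are not shown, and the return values are hypothetical):

```python
# Peaks over threshold: extreme losses are defined as exceedances of a high
# threshold. Both their timing (occurrence) and their sizes (marks) are the
# raw material that an intensity model and a size model then describe.

def exceedances(returns, threshold):
    """Return (day index, loss - threshold) for losses beyond the threshold."""
    return [(t, -r - threshold) for t, r in enumerate(returns) if -r > threshold]

daily_returns = [0.004, -0.031, 0.012, -0.058, 0.007, -0.044]
extremes = exceedances(daily_returns, threshold=0.03)  # losses on days 1, 3 and 5
```

Clustering then shows up as exceedance days bunching together in time and large exceedance sizes following one another, which is exactly what the abstract reports for the three stock indexes.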

15.
The potential of group (vs. individual) forecasting is analyzed from the perspective of the social psychology of groups. The social decision scheme theory (SDST) is summarized, and several simulations are presented to demonstrate the dependence of group aggregation accuracy upon factors such as group size, the accuracy and distribution of individual forecasts, and shared representations of the forecasting problem. Many advantages and disadvantages of group aggregation are identified and related to four generic methods of group aggregation (statistical aggregation, prediction markets, the Delphi method, and face-to-face discussion). A number of aspects of forecasting problems are identified which should govern whether or not group forecasting can be relied upon, and if so, what aggregation method should be used.

16.
We consider two criteria for evaluating election forecasts: accuracy (precision) and lead (distance from the event), specifically the trade-off between the two in poll-based forecasts. We evaluate how much “lead” still allows prediction of the election outcome. How much further back can we go, supposing we tolerate a little more error? Our analysis offers estimates of the “optimal” lead time for election forecasts, based on a dataset of over 26,000 vote intention polls from 338 elections in 44 countries between 1942 and 2014. We find that optimization of a forecast is possible, and typically occurs two to three months before the election, but can be influenced by the arrangement of political institutions. To demonstrate how our optimization guidelines perform in practice, we consider recent elections in the UK, the US, and France.

17.
How effective are different approaches for the provision of forecasting support? Forecasts may be either unaided or made with the help of statistical forecasts. In practice, the latter are often crude forecasts that do not take sporadic perturbations into account. Most research considers forecasts based on series that have been cleansed of perturbation effects. This paper considers an experiment in which people made forecasts from time series that were disturbed by promotions. In all conditions, under-forecasting occurred during promotional periods and over-forecasting during normal ones. The relative sizes of these effects depended on the proportions of periods in the data series that contained promotions. The statistical forecasts improved the forecasting accuracy, not because they reduced these biases, but because they decreased the random error (scatter). The performance improvement did not depend on whether the forecasts were based on cleansed series. Thus, the effort invested in producing cleansed time series from which to forecast may not be warranted: companies may benefit from giving their forecasters even crude statistical forecasts. In a second experiment, forecasters received optimal statistical forecasts that took the effects of promotions into account fully. This increased the accuracy because the biases were almost eliminated and the random error was reduced by 20%. Thus, the additional effort required to produce forecasts that take promotional effects into account is worthwhile.

18.
We analyze the narratives that accompany the numerical forecasts in the Bank of England’s Quarterly Inflation Reports, 1997–2018. We focus on whether the narratives contain useful information about the future course of key macro variables over and above the point predictions, in terms of whether the narratives can be used to enhance the accuracy of the numerical forecasts. We also consider whether the narratives are able to predict future changes in the numerical forecasts. We find that a measure of sentiment derived from the narratives can predict the errors in the numerical forecasts of output growth, but not of inflation. We find no evidence that past changes in sentiment predict subsequent changes in the point forecasts of output growth or of inflation, but do find that the adjustments to the numerical output growth forecasts have a systematic element.

19.
Decision makers often observe point forecasts of the same variable computed, for instance, by commercial banks, IMF and the World Bank, but the econometric models used by such institutions are frequently unknown. This paper shows how to use the information available on point forecasts to compute optimal density forecasts. Our idea builds upon the combination of point forecasts under general loss functions and unknown forecast error distributions. We use real‐time data to forecast the density of US inflation. The results indicate that the proposed method materially improves the real‐time accuracy of density forecasts vis‐à‐vis those from the (unknown) individual econometric models.

20.
How did DSGE model forecasts perform before, during and after the financial crisis, and what type of off-model information can improve the forecast accuracy? We tackle these questions by assessing the real-time forecast performance of a large DSGE model relative to statistical and judgmental benchmarks over the period from 2000 to 2013. The forecasting performances of all methods deteriorate substantially following the financial crisis. That is particularly evident for the DSGE model’s GDP forecasts, but augmenting the model with a measure of survey expectations made its GDP forecasts more accurate, which supports the idea that timely off-model information is particularly useful in times of financial distress.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)