Similar articles
Found 20 similar articles (search time: 189 ms)
1.
Probabilistic forecasts are necessary for robust decisions in the face of uncertainty. The M5 Uncertainty competition required participating teams to forecast nine quantiles for unit sales of various products at various aggregation levels and for different time horizons. This paper evaluates the forecasting performance of the quantile forecasts at different aggregation levels and at different quantile levels. We contrast this with some theoretical predictions, and discuss potential implications and promising future research directions for the practice of probabilistic forecasting.
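Quantile forecasts of the kind required in M5 are typically scored with the pinball (quantile) loss; the competition itself used a scaled variant, but the core quantity can be sketched as follows (function name and toy numbers are illustrative, not from the paper):

```python
import numpy as np

def pinball_loss(y, q_forecast, tau):
    """Average pinball (quantile) loss of forecast q_forecast at level tau."""
    y = np.asarray(y, dtype=float)
    q = np.asarray(q_forecast, dtype=float)
    diff = y - q
    # tau * diff when we under-forecast, (tau - 1) * diff when we over-forecast.
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

# Toy check: at the 0.5 quantile the pinball loss equals half the MAE.
y = np.array([10.0, 12.0, 8.0])
q = np.array([9.0, 12.0, 11.0])
loss_median = pinball_loss(y, q, 0.5)
```

Averaging this loss over the nine quantile levels and over series gives a simple overall score in the spirit of the competition's evaluation.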

2.
When forecasting time series in a hierarchical configuration, it is necessary to ensure that the forecasts reconcile at all levels. The 2017 Global Energy Forecasting Competition (GEFCom2017) focused on addressing this topic. Quantile forecasts for eight zones and two aggregated zones in New England were required for every hour of a future month. This paper presents a new methodology for forecasting quantiles in a hierarchy which outperforms a commonly-used benchmark model. A simulation-based approach was used to generate demand forecasts. Adjustments were made to each of the demand simulations to ensure that all zonal forecasts reconciled appropriately, and a weighted reconciliation approach was implemented to ensure that the bottom-level zonal forecasts summed correctly to the aggregated zonal forecasts. We show that reconciling in this manner improves the forecast accuracy.  A discussion of the results and modelling performances is presented, and brief reviews of hierarchical time series forecasting and gradient boosting are also included.
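The coherence requirement — bottom-level zonal forecasts summing exactly to the aggregate — can be forced in its simplest form by proportional scaling. This is only a minimal sketch of the idea (the paper's weighted reconciliation scheme is more elaborate, and these zone values are invented):

```python
import numpy as np

def reconcile_proportional(bottom, total):
    """Scale bottom-level forecasts so they sum exactly to the aggregate forecast."""
    bottom = np.asarray(bottom, dtype=float)
    return bottom * (total / bottom.sum())

zones = np.array([120.0, 80.0, 100.0])   # hypothetical zonal demand forecasts (sum to 300)
agg_forecast = 330.0                     # independently produced aggregate forecast
coherent = reconcile_proportional(zones, agg_forecast)
```

Applied to each demand simulation separately, such an adjustment keeps every simulated path coherent before quantiles are extracted.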

3.
Forecast reconciliation is a post-forecasting process aimed at improving the quality of the base forecasts for a system of hierarchical/grouped time series. Cross-sectional and temporal hierarchies have been considered in the literature, but generally these two features have not been fully considered together. The paper presents two new results by adopting a notation that simultaneously deals with both forecast reconciliation dimensions: (i) the closed-form expression of the optimal (in the least squares sense) point forecasts fulfilling both contemporaneous and temporal constraints; (ii) an iterative procedure that produces cross-temporally reconciled forecasts by alternating forecast reconciliation along a single dimension (either cross-sectional or temporal) at each iteration step. The feasibility of the proposed procedures, along with first evaluations of their performance as compared to the best-performing ‘single-dimension’ (either cross-sectional or temporal) forecast reconciliation procedures, is studied through a forecasting experiment on the 95 quarterly time series of the Australian Gross Domestic Product from the Income and Expenditure sides. For this dataset, the new procedures, in addition to providing fully coherent forecasts in both cross-sectional and temporal dimensions, improve the forecast accuracy of the state-of-the-art point forecast reconciliation techniques.
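The least-squares reconciliation underlying such closed-form results can be illustrated on a toy cross-sectional hierarchy with one total and two bottom series. Identity weights (OLS reconciliation) stand in for the paper's more general covariance weighting, and all numbers are invented:

```python
import numpy as np

# Summing matrix for a two-series hierarchy: rows = [Total, A, B], bottom = [A, B].
S = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
base = np.array([100.0, 55.0, 40.0])    # incoherent base forecasts (55 + 40 != 100)

W_inv = np.eye(3)                       # identity weights -> OLS reconciliation
# Closed form: y_tilde = S (S' W^-1 S)^-1 S' W^-1 y_hat
G = np.linalg.solve(S.T @ W_inv @ S, S.T @ W_inv)
reconciled = S @ (G @ base)
```

After reconciliation the total equals the sum of the bottom series by construction; cross-temporal reconciliation applies the same projection logic with constraints in both dimensions.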

4.
Identifying the most appropriate time series model to achieve a good forecasting accuracy is a challenging task. We propose a novel algorithm that aims to mitigate the importance of model selection, while increasing the accuracy. Multiple time series are constructed from the original time series, using temporal aggregation. These derivative series highlight different aspects of the original data, as temporal aggregation helps in strengthening or attenuating the signals of different time series components. In each series, the appropriate exponential smoothing method is fitted and its respective time series components are forecast. Subsequently, the time series components from each aggregation level are combined, then used to construct the final forecast. This approach achieves a better estimation of the different time series components, through temporal aggregation, and reduces the importance of model selection through forecast combination. An empirical evaluation of the proposed framework demonstrates significant improvements in forecasting accuracy, especially for long-term forecasts.
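The derivative series used by such multiple-aggregation approaches are built by non-overlapping temporal aggregation of the original data. A minimal sketch (the function name and the choice of mean aggregation are ours; sums are equally common):

```python
import numpy as np

def temporally_aggregate(y, k):
    """Aggregate a series into non-overlapping buckets of length k (bucket means),
    dropping leading observations so the length divides evenly."""
    y = np.asarray(y, dtype=float)
    n = (len(y) // k) * k
    return y[len(y) - n:].reshape(-1, k).mean(axis=1)

monthly = np.arange(1.0, 25.0)            # 24 hypothetical monthly observations
quarterly = temporally_aggregate(monthly, 3)
annual = temporally_aggregate(monthly, 12)
```

Higher aggregation levels smooth away seasonality and noise, exposing trend and level, which is why components estimated at different levels carry complementary information.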

5.
In a data-rich environment, forecasting economic variables amounts to extracting and organizing useful information from a large number of predictors. So far, the dynamic factor model and its variants have been the most successful models for such exercises. In this paper, we investigate a category of LASSO-based approaches and evaluate their predictive abilities for forecasting twenty important macroeconomic variables. These alternative models can handle hundreds of data series simultaneously, and extract useful information for forecasting. We also show, both analytically and empirically, that combining forecasts from LASSO-based models with those from dynamic factor models can reduce the mean square forecast error (MSFE) further. Our three main findings can be summarized as follows. First, for most of the variables under investigation, all of the LASSO-based models outperform dynamic factor models in the out-of-sample forecast evaluations. Second, by extracting information and formulating predictors at economically meaningful block levels, the new methods greatly enhance the interpretability of the models. Third, once forecasts from a LASSO-based approach are combined with those from a dynamic factor model by forecast combination techniques, the combined forecasts are significantly better than either dynamic factor model forecasts or the naïve random walk benchmark.
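The MSFE reduction from combining two imperfect forecasts has a simple mechanical core: when the two error streams are not perfectly correlated, the equal-weight average has lower error variance than either input. A stylized simulation (the two "models" here are just stand-ins with independent errors, not the paper's LASSO or factor models):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000
truth = rng.normal(size=T)

# Two unbiased forecasts with independent errors (stand-ins for a LASSO-based
# model and a dynamic factor model).
f_lasso = truth + rng.normal(scale=1.0, size=T)
f_dfm = truth + rng.normal(scale=1.0, size=T)
f_comb = 0.5 * (f_lasso + f_dfm)          # equal-weight forecast combination

msfe = lambda f: np.mean((f - truth) ** 2)
msfe_lasso, msfe_dfm, msfe_comb = msfe(f_lasso), msfe(f_dfm), msfe(f_comb)
```

With independent unit-variance errors, the combined MSFE is close to half of each individual MSFE; in practice the gain shrinks as the two models' errors become more correlated.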

6.
We propose the construction of copulas through the inversion of nonlinear state space models. These copulas allow for new time series models that have the same serial dependence structure as a state space model, but with an arbitrary marginal distribution, and flexible density forecasts. We examine the time series properties of the copulas, outline serial dependence measures, and estimate the models using likelihood-based methods. Copulas constructed from three example state space models are considered: a stochastic volatility model with an unobserved component, a Markov switching autoregression, and a Gaussian linear unobserved component model. We show that all three inversion copulas with flexible margins improve the fit and density forecasts of quarterly U.S. broad inflation and electricity inflation.

7.
Policymakers need to know whether prediction is possible and, if so, whether any proposed forecasting method will provide forecasts that are substantially more accurate than those from the relevant benchmark method. An inspection of global temperature data suggests that temperature is subject to irregular variations on all relevant time scales, and that variations during the late 1900s were not unusual. In such a situation, a “no change” extrapolation is an appropriate benchmark forecasting method. We used the UK Met Office Hadley Centre’s annual average thermometer data from 1850 through 2007 to examine the performance of the benchmark method. The accuracy of forecasts from the benchmark is such that even perfect forecasts would be unlikely to help policymakers. For example, mean absolute errors for the 20- and 50-year horizons were 0.18 °C and 0.24 °C respectively. We nevertheless demonstrate the use of benchmarking with the example of the Intergovernmental Panel on Climate Change’s 1992 linear projection of long-term warming at a rate of 0.03 °C per year. The small sample of errors from ex ante projections at 0.03 °C per year for 1992 through 2008 was practically indistinguishable from the benchmark errors. Validation for long-term forecasting, however, requires a much longer horizon. Again using the IPCC warming rate for our demonstration, we projected the rate successively over a period analogous to that envisaged in their scenario of exponential CO2 growth: the years 1851 to 1975. The errors from the projections were more than seven times greater than the errors from the benchmark method. Relative errors were larger for longer forecast horizons. Our validation exercise illustrates the importance of determining whether it is possible to obtain forecasts that are more useful than those from a simple benchmark before making expensive policy decisions.
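The “no change” benchmark at horizon h simply predicts each observation by the value h steps earlier, and its MAE can be computed in a few lines. A sketch with invented anomaly values (not the HadCRUT data used in the paper):

```python
import numpy as np

def naive_mae(series, horizon):
    """Mean absolute error of the 'no change' forecast at a given horizon:
    each value is predicted by the observation `horizon` steps earlier."""
    series = np.asarray(series, dtype=float)
    errors = series[horizon:] - series[:-horizon]
    return np.mean(np.abs(errors))

# Hypothetical annual temperature anomalies, purely for illustration.
temps = np.array([0.0, 0.1, -0.1, 0.2, 0.1, 0.3, 0.2, 0.4])
mae_h1 = naive_mae(temps, 1)
mae_h3 = naive_mae(temps, 3)
```

Running this across all admissible origins at horizons of 20 and 50 years is exactly the kind of exercise that produced the benchmark errors quoted above.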

8.
Hierarchical forecasting with intermittent time series is a challenge in both research and empirical studies. Extensive research focuses on improving the accuracy of each hierarchy, especially the intermittent time series at bottom levels. Then, hierarchical reconciliation can be used to improve the overall performance further. In this paper, we present a hierarchical-forecasting-with-alignment approach that treats the bottom-level forecasts as mutable to ensure higher forecasting accuracy on the upper levels of the hierarchy. We employ a pure deep learning forecasting approach, N-BEATS, for continuous time series at the top levels, and a widely used tree-based algorithm, LightGBM, for intermittent time series at the bottom level. The hierarchical-forecasting-with-alignment approach is a simple yet effective variant of the bottom-up method, accounting for biases that are difficult to observe at the bottom level. It allows suboptimal forecasts at the lower level to retain a higher overall performance. The approach in this empirical study was developed by the first author during the M5 Accuracy competition, where it ranked second. The method is also business-oriented and can be used to facilitate strategic business planning.

9.
Recent electricity price forecasting studies have shown that decomposing a series of spot prices into a long-term trend-seasonal and a stochastic component, modeling them independently and then combining their forecasts, can yield more accurate point predictions than an approach in which the same regression or neural network model is calibrated to the prices themselves. Here, considering two novel extensions of this concept to probabilistic forecasting, we find that (i) efficiently calibrated non-linear autoregressive with exogenous variables (NARX) networks can outperform their autoregressive counterparts, even without combining forecasts from many runs, and that (ii) in terms of accuracy it is better to construct probabilistic forecasts directly from point predictions. However, if speed is a critical issue, running quantile regression on combined point forecasts (i.e., committee machines) may be an option worth considering. Finally, we confirm an earlier observation that averaging probabilities outperforms averaging quantiles when combining predictive distributions in electricity price forecasting.
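The distinction between averaging probabilities and averaging quantiles is worth making concrete. With two toy normal predictive distributions (invented parameters, not the paper's models), quantile averaging takes the mean of the individual quantiles, while probability averaging inverts the mixture CDF:

```python
from statistics import NormalDist

d1 = NormalDist(mu=40.0, sigma=5.0)    # two hypothetical predictive distributions
d2 = NormalDist(mu=60.0, sigma=5.0)    # for an electricity spot price
tau = 0.25

# (a) Quantile averaging (Vincentization): average the individual quantiles.
q_avg_quantiles = 0.5 * (d1.inv_cdf(tau) + d2.inv_cdf(tau))

# (b) Probability averaging: invert the mixture CDF 0.5*F1 + 0.5*F2 by bisection.
def mixture_quantile(tau, lo=-1e3, hi=1e3, iters=200):
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if 0.5 * (d1.cdf(mid) + d2.cdf(mid)) < tau:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

q_avg_probs = mixture_quantile(tau)
```

When the component distributions disagree, the two schemes give visibly different quantiles (here the probability-averaged 25% quantile sits near the lower mode, while the quantile average lands between the modes), which is why the choice matters for combined density forecasts.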

10.
A crucial challenge for telecommunications companies is how to forecast changes in demand for specific products over the next 6 to 18 months—the length of a typical short-range capacity-planning and capital-budgeting planning horizon. The problem is especially acute when only short histories of product sales are available. This paper presents a new two-level approach to forecasting demand from short-term data. The lower of the two levels consists of adaptive system-identification algorithms borrowed from signal processing, especially Hidden Markov Model (HMM) methods [Hidden Markov Models: Estimation and Control (1995) Springer Verlag]. Although they have primarily been used in engineering applications such as automated speech recognition and seismic data processing, HMM techniques also appear to be very promising for predicting probabilities of individual customer behaviors from relatively short samples of recent product-purchasing histories. The upper level of our approach applies a classification tree algorithm to combine information from the lower-level forecasting algorithms. In contrast to other forecast-combination algorithms, such as weighted averaging or Bayesian aggregation formulas, the classification tree approach exploits high-order interactions among error patterns from different predictive systems. It creates a hybrid forecasting algorithm that outperforms any of the individual algorithms on which it is based. This tree-based approach to hybridizing forecasts provides a new, general way to combine and improve individual forecasts, whether or not they are based on HMM algorithms. The paper concludes with the results of validation tests. These show the power of HMM methods to forecast what individual customers are likely to do next. They also show the gain from classification tree post-processing of the predictions from lower-level forecasts. In essence, these techniques enhance the limited techniques available for new product forecasting.

11.
Local and state governments depend on small area population forecasts to make important decisions concerning the development of local infrastructure and services. Despite their importance, current methods often produce highly inaccurate forecasts. Recent years have witnessed promising developments in time series forecasting using Machine Learning across a wide range of social and economic variables. However, limited work has been undertaken to investigate the potential application of Machine Learning methods in demography, particularly for small area population forecasting. In this paper we describe the development of two Long Short-Term Memory network architectures for small area populations. We employ the Keras Tuner to select layer unit numbers, vary the window width of input data, and apply a double training and validation regime which supports work with short time series and prioritises later sequence values for forecasts. These methods are transferable and can be applied to other data sets. Retrospective small area population forecasts for Australia were created for the periods 2006–16 and 2011–16. Model performance was evaluated against actual data and two benchmark methods (LIN/EXP and CSP-VSG). We also evaluated the impact of constraining small area population forecasts to an independent national forecast. Forecast accuracy was influenced by jump-off year, constraining, area size, and remoteness. The LIN/EXP model was the best performing method for the 2011-based forecasts whilst deep learning methods performed best for the 2006-based forecasts, including significant improvements in the accuracy of 10 year forecasts. However, benchmark methods were consistently more accurate for more remote areas and for those with populations below 5000.

12.
This paper uses the forecast from a random walk model of inflation as a benchmark to test and compare the forecast performance of several alternatives of future inflation, including the Greenbook forecast by the Fed staff, the Survey of Professional Forecasters median forecast, CPI inflation minus food and energy, CPI weighted median inflation, and CPI trimmed mean inflation. The Greenbook forecast was found in previous literature to be a better forecast than other private sector forecasts. Our results indicate that both the Greenbook and the Survey of Professional Forecasters median forecasts of inflation and core inflation measures may contain better information than forecasts from a random walk model. The Greenbook's superiority appears to have declined against other forecasts and core inflation measures.

13.
Combining forecasts from multiple temporal aggregation levels exploits information differences and mitigates model uncertainty, while reconciliation ensures a unified prediction that supports aligned decisions at different horizons. It can be challenging to estimate the full cross-covariance matrix for a temporal hierarchy, which can easily be of very large dimension, yet it is difficult to know a priori which part of the error structure is most important. To address these issues, we propose to use eigendecomposition for dimensionality reduction when reconciling forecasts to extract as much information as possible from the error structure given the data available. We evaluate the proposed estimator in a simulation study and demonstrate its usefulness through applications to short-term electricity load and financial volatility forecasting. We find that accuracy can be improved uniformly across all aggregation levels, as the estimator achieves state-of-the-art accuracy while being applicable to hierarchies of all sizes.
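One simple way eigendecomposition can regularize a noisy error covariance is to keep the leading eigencomponents and flatten the remaining spectrum to its average, preserving the total variance. This is a generic sketch of that idea, not the paper's exact estimator, and the data are simulated:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 40, 12                     # few error observations relative to dimension
errors = rng.normal(size=(n, m)) @ np.diag(np.linspace(0.5, 2.0, m))
S = np.cov(errors, rowvar=False)  # noisy full sample covariance

def eigen_truncate(S, k):
    """Keep the top-k eigencomponents of S and replace the rest of the
    spectrum by its average, preserving the trace (total variance)."""
    vals, vecs = np.linalg.eigh(S)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    tail_mean = vals[k:].mean()
    new_vals = np.concatenate([vals[:k], np.full(len(vals) - k, tail_mean)])
    return (vecs * new_vals) @ vecs.T     # V diag(new_vals) V'

S_k = eigen_truncate(S, k=3)
```

The truncated matrix stays symmetric and positive definite, so it remains usable as the weight matrix in a least-squares reconciliation step, while far fewer effective parameters are estimated from the data.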

14.
There are two potential directions of forecast combination: combining for adaptation and combining for improvement. The former direction targets the performance of the best forecaster, while the latter attempts to combine forecasts to improve on the best forecaster. It is often useful to infer which goal is more appropriate so that a suitable combination method may be used. This paper proposes an AI-AFTER approach that can not only determine the appropriate goal of forecast combination but also intelligently combine the forecasts to automatically achieve the proper goal. As a result of this approach, the combined forecasts from AI-AFTER perform well universally in both adaptation and improvement scenarios. The proposed forecasting approach is implemented in our R package AIafter, which is available at https://github.com/weiqian1/AIafter.

15.
The efficient flow of goods and services involves addressing multilevel forecast questions, and careful consideration when aggregating or disaggregating hierarchical estimates. Assessing all possible aggregation alternatives helps to determine the statistically most accurate way of consolidating multilevel forecasts. However, doing so in a multilevel and multiproduct supply chain may prove to be a very computationally intensive and time-consuming task. In this paper, we present a new, two-level oblique linear discriminant tree model, which identifies the optimal hierarchical forecast technique for a given hierarchical database in a very time-efficient manner. We induced our model from a real-world dataset, and it separates all historical time series into the four aggregation mechanisms considered. The separation process is a function of both the positive and negative correlation groups' variances at the lowest level of the hierarchical datasets. Our primary contributions are: (1) establishing a clear-cut relationship between the correlation metrics at the lowest level of the hierarchy and the optimal aggregation mechanism for a product/service hierarchy, and (2) developing an analytical model for personalized forecast aggregation decisions, based on characteristics of a hierarchical dataset.

16.
This paper proposes a simple procedure for obtaining monthly assessments of short-run perspectives for quarterly world GDP and trade. It combines high-frequency information from emerging and advanced countries so as to explain quarterly national accounts variables through bridge models. The union of all bridge equations leads to our world bridge model (WBM). The WBM approach of this paper is new for two reasons: its equations combine traditional short-run bridging with theoretical level-relationships, and it is the first time that forecasts of world GDP and trade have been computed for both advanced and emerging countries on the basis of a real-time database of approximately 7000 time series. Although the performances of the equations that are searched automatically should be taken as a lower bound, our results show that the forecasting ability of the WBM is superior to the benchmark. Finally, our results confirm that the use of revised data leads to models’ forecasting performances being overstated significantly.

17.
Interest in density forecasts (as opposed to solely modeling the conditional mean) arises from the possibility of dynamics in higher moments of a time series, as well as in forecasting the probability of future events in some applications. By combining the idea of Markov bootstrapping with that of kernel density estimation, this paper presents a simple non-parametric method for estimating out-of-sample multi-step density forecasts. The paper also considers a host of evaluation tests for examining the dynamic misspecification of estimated density forecasts by targeting autocorrelation, heteroskedasticity and neglected non-linearity. These tests are useful, as a rejection of the tests gives insight into ways to improve a particular forecasting model. In an extensive Monte Carlo analysis involving a range of commonly used linear and non-linear time series processes, the non-parametric method is shown to work reasonably well across the simulated models for a suitable choice of the bandwidth (smoothing parameter). Furthermore, an application of the method to the U.S. Industrial Production series provides multi-step density forecasts that show no sign of dynamic misspecification.

18.
There has been much controversy over the use of the Experience Curve for forecasting purposes. The Experience Curve model has been criticised both on theoretical grounds and because of the practical problems of using it. An alternative model of experience effects due to Towill has certain attractions from the standpoint of theory. However, a rather deeper question is whether experience-curve-type models produce superior forecasts to those derived using extrapolative techniques. This paper examines these questions in the context of three time series taken from the electricity supply industry, viz: average thermal efficiency; works costs; and price of electricity. The two latter series require price deflation. Both the implied GDP consumption deflator and a wholesale price index for fuel and electricity were used for this purpose. It is argued that because of the absence of substitutes and of the effects of competition, along with the high quality of data available on the electricity supply industry, these three series provide a favourable test of the experience curve approach to forecasting. The two experience curves performed on the whole markedly worse than the simpler extrapolative methods on the two financial series examined. For the average thermal efficiency series the Towill model and the Experience Curve model marginally outperformed the extrapolative methods. Overall, there was little support for using either the Experience Curve or Towill models. These are obviously more difficult to use than simple univariate models and do not provide significantly better forecasts. Moreover, the Towill model gave rise to considerable estimation and specification problems with the data used here.
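The classical Experience Curve posits unit cost falling as a power of cumulative output, C(x) = a·x^(-b), which is linear in logs and can be fitted by least squares. A sketch on invented data following an exact 80% curve (i.e., costs fall 20% per doubling of cumulative output):

```python
import numpy as np

# Hypothetical unit-cost data generated from an exact 80% experience curve.
cum_output = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
unit_cost = 100.0 * cum_output ** np.log2(0.8)

# Fit log(cost) = intercept + slope * log(output) by least squares;
# for C = a * x^(-b) the slope equals -b.
slope, intercept = np.polyfit(np.log(cum_output), np.log(unit_cost), 1)
learning_rate = 1.0 - 2.0 ** slope     # fraction of cost saved per doubling
```

With real data the fit is noisy and the deflator choice changes the financial series substantially, which is part of why the abstract finds the experience-curve forecasts unreliable relative to simple extrapolation.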

19.
Forecasting economic and financial variables with global VARs
This paper considers the problem of forecasting economic and financial variables across a large number of countries in the global economy. To this end a global vector autoregressive (GVAR) model, previously estimated by Dees, di Mauro, Pesaran, and Smith (2007) and Dees, Holly, Pesaran, and Smith (2007) over the period 1979Q1–2003Q4, is used to generate out-of-sample forecasts one and four quarters ahead for real output, inflation, real equity prices, exchange rates and interest rates over the period 2004Q1–2005Q4. Forecasts are obtained for 134 variables from 26 regions, which are made up of 33 countries and cover about 90% of world output. The forecasts are compared to typical benchmarks: univariate autoregressive and random walk models. Building on the forecast combination literature, the effects of model and estimation uncertainty on forecast outcomes are examined by pooling forecasts obtained from different GVAR models estimated over alternative sample periods. Given the size of the modelling problem and the heterogeneity of the economies considered (industrialised, emerging, and less developed countries), as well as the very real likelihood of possibly multiple structural breaks, averaging forecasts across both models and windows makes a significant difference. Indeed, the double-averaged GVAR forecasts perform better than the benchmark competitors, especially for output, inflation and real equity prices.

20.
Large databases mapping commodity flows measured in various units such as currency, tons or caloric values are the backbone of many recent environmental-economic studies. Their construction typically requires combining large amounts of partial information in a series of successive steps. These include the estimation of unobserved flows, transformations between units, handling aggregation and re-classification and, finally, reconciling estimates with mass, financial and/or energy balances. This paper proposes a maximum entropy model that allows for the simultaneous estimation of unobserved commodity flows as well as corresponding prices such that data constraints in various units of measurement, levels of aggregation and possibly mismatching classifications are simultaneously satisfied. Its capability is assessed through a Monte Carlo analysis and its performance compared with a simple stepwise approach. Our results suggest that the simultaneous approach performs significantly better in a vast majority of cases.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号