Similar Literature (20 results)
1.
This paper proposes a three-step approach to forecasting time series of electricity consumption at different levels of household aggregation. These series are linked by hierarchical constraints—global consumption is the sum of regional consumption, for example. First, benchmark forecasts are generated for all series using generalized additive models. Second, for each series, the aggregation algorithm ML-Poly, introduced by Gaillard, Stoltz, and van Erven in 2014, finds an optimal linear combination of the benchmarks. Finally, the forecasts are projected onto a coherent subspace to ensure that the final forecasts satisfy the hierarchical constraints. By minimizing a regret criterion, we show that the aggregation and projection steps improve the root mean square error of the forecasts. Our approach is tested on household electricity consumption data; experimental results suggest that successive aggregation and projection steps improve the benchmark forecasts at different levels of household aggregation.
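A minimal numpy sketch of the final projection step: incoherent base forecasts are mapped onto the coherent subspace spanned by the summing matrix. The two-region toy hierarchy and the orthogonal projection are illustrative assumptions, not the paper's exact construction (which combines this step with ML-Poly aggregation).

```python
import numpy as np

# Toy hierarchy: total = region A + region B, encoded in a summing matrix
S = np.array([[1.0, 1.0],   # total
              [1.0, 0.0],   # region A
              [0.0, 1.0]])  # region B

# Incoherent base forecasts for [total, A, B] (illustrative numbers)
y_hat = np.array([10.0, 6.5, 4.0])  # note: 6.5 + 4.0 != 10.0

# Orthogonal projection onto the coherent subspace span(S)
P = S @ np.linalg.inv(S.T @ S) @ S.T
y_tilde = P @ y_hat

print(y_tilde)  # coherent: y_tilde[0] == y_tilde[1] + y_tilde[2]
```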

2.
Recent electricity price forecasting studies have shown that decomposing a series of spot prices into a long-term trend-seasonal and a stochastic component, modeling them independently and then combining their forecasts, can yield more accurate point predictions than an approach in which the same regression or neural network model is calibrated to the prices themselves. Here, considering two novel extensions of this concept to probabilistic forecasting, we find that (i) efficiently calibrated non-linear autoregressive with exogenous variables (NARX) networks can outperform their autoregressive counterparts, even without combining forecasts from many runs, and that (ii) in terms of accuracy it is better to construct probabilistic forecasts directly from point predictions. However, if speed is a critical issue, running quantile regression on combined point forecasts (i.e., committee machines) may be an option worth considering. Finally, we confirm an earlier observation that averaging probabilities outperforms averaging quantiles when combining predictive distributions in electricity price forecasting.
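The closing observation contrasts two ways of combining predictive distributions: averaging probabilities (mixing CDFs) versus averaging quantiles (Vincentization). A small sketch of the mechanics, with two made-up Gaussian predictive distributions standing in for real price forecasts:

```python
import numpy as np
from scipy import stats

grid = np.linspace(-10, 10, 2001)
# Two hypothetical predictive distributions for the same price
f1, f2 = stats.norm(0, 1), stats.norm(3, 2)

# (a) Averaging probabilities: mix the CDFs, then invert at level 0.9
cdf_mix = 0.5 * f1.cdf(grid) + 0.5 * f2.cdf(grid)
q_prob = grid[np.searchsorted(cdf_mix, 0.9)]

# (b) Averaging quantiles (Vincentization): average the two 90% quantiles
q_quant = 0.5 * f1.ppf(0.9) + 0.5 * f2.ppf(0.9)

print(q_prob, q_quant)  # the two combination rules generally differ
```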

3.
Identifying the most appropriate time series model to achieve a good forecasting accuracy is a challenging task. We propose a novel algorithm that aims to mitigate the importance of model selection, while increasing the accuracy. Multiple time series are constructed from the original time series, using temporal aggregation. These derivative series highlight different aspects of the original data, as temporal aggregation helps in strengthening or attenuating the signals of different time series components. In each series, the appropriate exponential smoothing method is fitted and its respective time series components are forecast. Subsequently, the time series components from each aggregation level are combined, then used to construct the final forecast. This approach achieves a better estimation of the different time series components, through temporal aggregation, and reduces the importance of model selection through forecast combination. An empirical evaluation of the proposed framework demonstrates significant improvements in forecasting accuracy, especially for long-term forecasts.
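A minimal sketch of the temporal aggregation step: the series is summed into non-overlapping buckets at several levels, a simple method is fitted at each level, and the per-level forecasts are combined. Simple exponential smoothing and equal weights are placeholder choices; the framework described above fits a full exponential smoothing method per level and combines forecast components, not just levels.

```python
import numpy as np

def aggregate(y, k):
    """Non-overlapping temporal aggregation with bucket size k."""
    n = (len(y) // k) * k
    return y[len(y) - n:].reshape(-1, k).sum(axis=1)

def ses(y, alpha=0.3):
    """Simple exponential smoothing; returns the one-step-ahead level."""
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

rng = np.random.default_rng(0)
y = 100 + np.cumsum(rng.normal(size=240))  # hypothetical monthly series

# Forecast at monthly, quarterly, and yearly aggregation levels, express
# each forecast back at the monthly rate, and combine with equal weights.
forecasts = [ses(aggregate(y, k)) / k for k in (1, 3, 12)]
print(np.mean(forecasts))
```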

4.
Accurate solar forecasts are necessary to improve the integration of solar renewables into the energy grid. In recent years, numerous methods have been developed for predicting the solar irradiance or the output of solar renewables. By definition, a forecast is uncertain; thus, the models developed predict the mean and the associated uncertainty. Comparisons are therefore necessary and useful for assessing the skill and accuracy of these new methods in the field of solar energy.

The aim of this paper is to present a comparison of various models that provide probabilistic forecasts of the solar irradiance within a very strict framework. We focus on intraday forecasts, with lead times ranging from 1 to 6 hours. The models selected use only endogenous inputs for generating the forecasts; in other words, the only inputs of the models are the past solar irradiance data. In this context, the most common way of generating the forecasts is to combine point forecasting methods with probabilistic approaches in order to provide prediction intervals for the solar irradiance forecasts. For this task, we selected from the literature three point forecasting models (recursive autoregressive and moving average (ARMA), coupled autoregressive and dynamical system (CARDS), and neural network (NN)) and seven methods for assessing the distribution of their errors (linear model in quantile regression (LMQR), weighted quantile regression (WQR), quantile regression neural network (QRNN), recursive generalized autoregressive conditional heteroskedasticity (GARCHrls), sieve bootstrap (SB), quantile regression forest (QRF), and gradient boosting decision trees (GBDT)), leading to a comparison of 20 combinations of models.

None of the model combinations clearly outperforms the others; nevertheless, some trends emerge from the comparison. First, the use of the clear sky index ensures the accuracy of the forecasts. This derived parameter permits time series with missing data to be deseasonalized, and is also a good explanatory variable for the distribution of the forecasting errors. Second, regardless of the point forecasting method used, linear models in quantile regression, weighted quantile regression, and gradient boosting decision trees are able to forecast the prediction intervals accurately.
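The clear sky index mentioned in the conclusions is the ratio of measured irradiance to the output of a clear-sky model at the same instant, which strips out the deterministic diurnal and seasonal cycle. A minimal sketch, with made-up irradiance values standing in for a physical clear-sky model:

```python
import numpy as np

ghi       = np.array([0.0, 120.0, 430.0, 610.0, np.nan, 180.0])  # measured W/m^2
ghi_clear = np.array([0.0, 150.0, 500.0, 700.0, 650.0, 250.0])   # clear-sky model

# Clear sky index: deseasonalized series. Missing observations (NaN) pass
# through, and night-time points (clear-sky ~ 0) are left undefined.
with np.errstate(invalid="ignore", divide="ignore"):
    kt = np.where(ghi_clear > 0, ghi / ghi_clear, np.nan)
print(kt)
```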

5.
Agricultural price forecasting has been progressively abandoned by researchers ever since the development of large-scale agricultural futures markets. However, as with many other agricultural goods, there is no futures market for wine. This paper draws on the agricultural price forecasting literature to develop a forecasting model for bulk wine prices. The price data include annual and monthly series for various wine types produced in the Bordeaux region. The predictors include several leading economic indicators of supply and demand shifts; the stock levels and quantities produced are found to have the highest predictive power. The preferred annual and monthly forecasting models outperform naive random walk forecasts by 27.1% and 3.4% respectively; their mean absolute percentage errors are 2.7% and 3.4% respectively. A simple trading strategy based on the monthly forecasts is estimated to increase profits by 3.3% relative to a blind strategy that consists of always selling at the spot price.
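The reported gains are standard relative-error arithmetic. A small sketch of how a MAPE and a percentage improvement over a naive random walk are computed (the price vectors are invented placeholders):

```python
import numpy as np

actual = np.array([102.0, 98.0, 105.0, 110.0])   # realized prices
model  = np.array([100.0, 99.0, 103.0, 108.0])   # model forecasts
rw     = np.array([ 99.0, 102.0, 98.0, 105.0])   # naive: last observed price

mape = lambda f: np.mean(np.abs((actual - f) / actual)) * 100
improvement = (mape(rw) - mape(model)) / mape(rw) * 100
print(f"model MAPE {mape(model):.1f}%, {improvement:.1f}% better than random walk")
```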

6.
A crucial challenge for telecommunications companies is how to forecast changes in demand for specific products over the next 6 to 18 months, the length of a typical short-range capacity-planning and capital-budgeting horizon. The problem is especially acute when only short histories of product sales are available. This paper presents a new two-level approach to forecasting demand from short-term data. The lower of the two levels consists of adaptive system-identification algorithms borrowed from signal processing, especially Hidden Markov Model (HMM) methods [Hidden Markov Models: Estimation and Control (1995) Springer Verlag]. Although they have primarily been used in engineering applications such as automated speech recognition and seismic data processing, HMM techniques also appear to be very promising for predicting probabilities of individual customer behaviors from relatively short samples of recent product-purchasing histories. The upper level of our approach applies a classification tree algorithm to combine information from the lower-level forecasting algorithms. In contrast to other forecast-combination algorithms, such as weighted averaging or Bayesian aggregation formulas, the classification tree approach exploits high-order interactions among error patterns from different predictive systems. It creates a hybrid forecasting algorithm that outperforms any of the individual algorithms on which it is based. This tree-based approach to hybridizing forecasts provides a new, general way to combine and improve individual forecasts, whether or not they are based on HMM algorithms. The paper concludes with the results of validation tests, which show the power of HMM methods to forecast what individual customers are likely to do next, as well as the gain from classification-tree post-processing of the lower-level predictions. In essence, these techniques extend the limited set of techniques available for new product forecasting.

7.
This paper presents our 13th-place solution to the M5 Forecasting - Uncertainty challenge and compares it against GoodsForecast's second-place solution. The challenge asks participants to estimate the median and eight other quantiles of the sales of various products at Walmart. Both solutions handle the median and the other quantiles separately. Our solution hybridizes LightGBM and DeepAR in various ways for median and quantile estimation, depending on the aggregation level of the sales. GoodsForecast's solution likewise takes a hybrid approach, using LightGBM for point estimation and a histogram algorithm for quantile estimation. In this paper, the differences between the two solutions and their results are highlighted. Although our solution placed only 13th under the competition metric, it achieves the lowest average rank in the multiple comparisons with the best (MCB) test, which implies the most accurate forecasts for the majority of the series. It also performs better than most teams at the product-store aggregation level, which comprises 30,490 series (71.2% of the total).
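Quantile forecasts in M5-style competitions are scored with the pinball (quantile) loss, which both solutions above optimize in different ways. A minimal implementation with illustrative numbers:

```python
import numpy as np

def pinball(y, q_hat, tau):
    """Pinball loss for quantile level tau; lower is better."""
    diff = y - q_hat
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

y   = np.array([3.0, 0.0, 7.0, 2.0])   # actual unit sales (made up)
q90 = np.array([5.0, 1.0, 9.0, 4.0])   # forecast 90th percentiles
print(pinball(y, q90, 0.9))
```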

8.
In a data-rich environment, forecasting economic variables amounts to extracting and organizing useful information from a large number of predictors. So far, the dynamic factor model and its variants have been the most successful models for such exercises. In this paper, we investigate a category of LASSO-based approaches and evaluate their predictive abilities for forecasting twenty important macroeconomic variables. These alternative models can handle hundreds of data series simultaneously and extract useful information for forecasting. We also show, both analytically and empirically, that combining forecasts from LASSO-based models with those from dynamic factor models can reduce the mean square forecast error (MSFE) further. Our three main findings can be summarized as follows. First, for most of the variables under investigation, all of the LASSO-based models outperform dynamic factor models in the out-of-sample forecast evaluations. Second, by extracting information and formulating predictors at economically meaningful block levels, the new methods greatly enhance the interpretability of the models. Third, once forecasts from a LASSO-based approach are combined with those from a dynamic factor model by forecast combination techniques, the combined forecasts are significantly better than either the dynamic factor model forecasts or the naïve random walk benchmark.
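A minimal sketch of the kind of LASSO-based direct forecast evaluated above: regress the target on a large panel of predictors and let the l1 penalty select a sparse subset. The synthetic data and the fixed penalty are illustrative; the paper's block-level formulation is more elaborate.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
T, N = 200, 100                        # 200 periods, 100 candidate predictors
X = rng.normal(size=(T, N))
beta = np.zeros(N); beta[:5] = 1.0     # only 5 predictors truly matter
y = X @ beta + rng.normal(size=T)

model = Lasso(alpha=0.1).fit(X[:-1], y[1:])   # one-step-ahead direct forecast
print("selected predictors:", np.flatnonzero(model.coef_))
print("forecast:", model.predict(X[[-1]])[0])
```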

9.
Recently, Patton and Timmermann (2012) proposed a more powerful kind of forecast efficiency regression at multiple horizons, and showed that it provides evidence against the efficiency of the Fed's Greenbook forecasts. I use their forecast efficiency evaluation to propose a method for adjusting the Greenbook forecasts. Using this method in a real-time out-of-sample forecasting exercise, I find that it provides modest improvements in the accuracy of the forecasts for the GDP deflator and CPI, but not for other variables. The improvements are statistically significant in some cases, with magnitudes of up to 18% in root mean square prediction error.
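Patton and Timmermann's multi-horizon test generalizes the classic single-horizon efficiency regression, which regresses outcomes on forecasts and checks whether the intercept and slope are 0 and 1. A basic single-horizon sketch with synthetic data (not the paper's exact specification):

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.normal(2.0, 1.0, size=120)            # forecasts
y = 0.5 + 0.8 * f + rng.normal(0, 0.5, 120)   # outcomes (inefficient forecast)

X = np.column_stack([np.ones_like(f), f])
(alpha, beta), *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"alpha={alpha:.2f}, beta={beta:.2f}  (efficiency requires 0 and 1)")
# A fitted (alpha, beta) far from (0, 1) suggests the forecast can be
# improved by the linear adjustment y_hat = alpha + beta * f.
```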

10.
In many different contexts, decision-making is improved by the availability of probabilistic predictions. The accuracy of probabilistic forecasting methods can be compared using scoring functions, with further insight provided by calibration tests, which evaluate the consistency of predictions with the observations. Our main focus in this paper is interval forecasts and their evaluation. Such forecasts are usually bounded by two quantile forecasts; however, a limitation of quantiles is that they convey no information regarding the size of potential exceedances. By contrast, the location of an expectile is dictated by the whole distribution. This prompts us to propose expectile-bounded intervals, for which we provide an interpretation, a consistent scoring function, and a calibration test. Before doing this, we reflect on the evaluation of forecasts of quantile-bounded intervals and expectiles, and suggest extensions of previously proposed calibration tests in order to guard against strategic forecasting. We illustrate the ideas using day-ahead electricity price forecasting.
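An expectile generalizes the mean in the way a quantile generalizes the median: the tau-expectile minimizes an asymmetrically weighted squared loss, so its location reflects the whole distribution, including the size of exceedances. A minimal iterative solver, applied to an arbitrary skewed sample:

```python
import numpy as np

def expectile(y, tau, tol=1e-10):
    """tau-expectile via iteratively reweighted means."""
    e = np.mean(y)
    while True:
        w = np.where(y > e, tau, 1 - tau)   # asymmetric weights
        e_new = np.sum(w * y) / np.sum(w)
        if abs(e_new - e) < tol:
            return e_new
        e = e_new

y = np.random.default_rng(3).exponential(size=1000)
print(expectile(y, 0.05), expectile(y, 0.95))  # lower/upper interval bounds
```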

11.
This paper proposes two new weighting schemes that average forecasts based on different estimation windows in order to account for possible structural change. The first scheme weights the forecasts according to the values of reversed ordered CUSUM (ROC) test statistics, while the second weighting method simply assigns heavier weights to forecasts that use more recent information. Simulation results show that, when structural breaks are present, forecasts based on the first weighting scheme outperform those based on a procedure that simply uses ROC tests to choose and forecast from a single post-break estimation window. Combination forecasts based on our second weighting scheme outperform equally weighted combination forecasts. An empirical application based on a NAIRU Phillips curve model for the G7 countries illustrates these findings, and also shows that combination forecasts can outperform the random walk forecasting model.
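The second weighting scheme, heavier weights for forecasts based on more recent information, can be sketched directly. Below, each candidate forecast is a sample mean over a different trailing estimation window, and shorter (more recent) windows receive larger weights; the inverse-length weighting is an illustrative choice, not the paper's formula.

```python
import numpy as np

rng = np.random.default_rng(4)
# Series with a structural break in the mean at t = 100
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 50)])

# Candidate forecasts: sample means over different trailing windows
windows = [150, 100, 50, 25, 10]
forecasts = np.array([y[-w:].mean() for w in windows])

# Weight forecasts that use more recent information more heavily
weights = 1.0 / np.array(windows, dtype=float)
weights /= weights.sum()
print("combined forecast:", weights @ forecasts)
```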

12.
Quantiles as optimal point forecasts
Loss functions play a central role in the theory and practice of forecasting. If the loss function is quadratic, the mean of the predictive distribution is the unique optimal point predictor. If the loss is symmetric piecewise linear, any median is an optimal point forecast. Quantiles arise as optimal point forecasts under a general class of economically relevant loss functions, which nests the asymmetric piecewise linear loss, and which we refer to as generalized piecewise linear (GPL). The level of the quantile depends on a generic asymmetry parameter which reflects the possibly distinct costs of underprediction and overprediction. Conversely, a loss function for which quantiles are optimal point forecasts is necessarily GPL. We review characterizations of this type in the work of Thomson, Saerens and Komunjer, and relate to proper scoring rules, incentive-compatible compensation schemes and quantile regression. In the empirical part of the paper, the relevance of decision theoretic guidance in the transition from a predictive distribution to a point forecast is illustrated using the Bank of England’s density forecasts of United Kingdom inflation rates, and probabilistic predictions of wind energy resources in the Pacific Northwest.
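The central claim, that quantiles minimize expected loss under (generalized) piecewise linear losses, is easy to verify numerically for the asymmetric piecewise linear (pinball) case. A small check on a skewed sample (the exponential distribution and the 0.8 level are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
y = rng.exponential(size=100_000)
tau = 0.8

def pinball_risk(point):
    """Empirical expected pinball loss of issuing `point` as the forecast."""
    diff = y - point
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

candidates = np.linspace(0.1, 4.0, 400)
best = candidates[np.argmin([pinball_risk(c) for c in candidates])]
print(best, np.quantile(y, tau))  # the risk minimizer sits at the 0.8-quantile
```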

13.
It has long been known that combination forecasting strategies produce superior out-of-sample forecasting performance. In the M4 forecasting competition, a very simple forecast combination strategy achieved third place on the yearly time series. An analysis of the ensemble model and its component models suggests that its competitive accuracy comes from avoiding poor forecasts, rather than from beating the best individual models. Moreover, the simple ensemble model can be fitted very quickly, can easily scale horizontally with additional CPU cores or a cluster of computers, and can be implemented by users quickly and easily. This approach may be of particular interest to users who need accurate yearly forecasts without being able to spend significant time, resources, or expertise on tuning models. Users of the R statistical programming language can access this modeling approach via the "forecastHybrid" package.

14.
We consider whether survey density forecasts (such as the inflation and output growth histograms of the US Survey of Professional Forecasters) are superior to unconditional density forecasts. The unconditional forecasts assume that the average level of uncertainty experienced in the past will continue to prevail in the future, whereas the SPF projections ought to be adapted to the current conditions and the outlook at each forecast origin. The SPF forecasts might be expected to outperform the unconditional densities at the shortest horizons, but it transpires that this is not the case for the aggregate forecasts of either variable, or for the majority of the individual respondents when forecasting inflation.

15.
We develop a Bayesian median autoregressive (BayesMAR) model for time series forecasting. The proposed method utilizes time-varying quantile regression at the median, favorably inheriting the robustness of median regression in contrast to the widely used mean-based methods. Motivated by a working Laplace likelihood approach in Bayesian quantile regression, BayesMAR adopts a parametric model bearing the same structure as autoregressive models by altering the Gaussian error to Laplace, leading to a simple, robust, and interpretable modeling strategy for time series forecasting. We estimate model parameters by Markov chain Monte Carlo. Bayesian model averaging is used to account for model uncertainty, including the uncertainty in the autoregressive order, in addition to a Bayesian model selection approach. The proposed methods are illustrated using simulations and real data applications. An application to U.S. macroeconomic data forecasting shows that BayesMAR leads to favorable and often superior predictive performance compared to the selected mean-based alternatives under various loss functions that encompass both point and probabilistic forecasts. The proposed methods are generic and can be used to complement a rich class of methods that build on autoregressive models.
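The working Laplace likelihood keeps the AR(p) mean structure but scores residuals with a Laplace density, whose log-likelihood is, up to constants, a negatively scaled sum of absolute errors, hence the link to median regression. A sketch of that likelihood evaluation (the MCMC estimation, priors, and model averaging described above are omitted):

```python
import numpy as np

def laplace_ar_loglik(y, phi0, phi, scale):
    """Log-likelihood of an AR(p) with i.i.d. Laplace(0, scale) errors."""
    p = len(phi)
    # One-step AR(p) predictions for t = p .. T-1
    preds = phi0 + sum(phi[i] * y[p - 1 - i:len(y) - 1 - i] for i in range(p))
    resid = y[p:] - preds
    n = len(resid)
    return -n * np.log(2 * scale) - np.abs(resid).sum() / scale

rng = np.random.default_rng(6)
y = np.zeros(300)
for t in range(1, 300):   # simulate an AR(1) with heavy-tailed errors
    y[t] = 0.7 * y[t - 1] + rng.laplace(scale=0.5)

print(laplace_ar_loglik(y, 0.0, [0.7], 0.5))
```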

16.
We examined automatic feature identification and graphical support in rule-based expert systems for forecasting. The rule-based expert forecasting system (RBEFS) includes predefined rules that automatically identify features of a time series and select the extrapolation method to be used. The system can also integrate managerial judgment through a graphical interface that allows a user to view alternative extrapolation methods two at a time. The use of the RBEFS led to a significant improvement in accuracy compared to equal-weight combinations of forecasts, and further improvements were achieved with the user interface. For 6-year-ahead ex ante forecasts, the rule-based expert forecasting system has a median absolute percentage error (MdAPE) 15% less than that of equally weighted combined forecasts and a 33% improvement over the random walk. The user-adjusted forecasts had a MdAPE 20% less than that of the expert system. The results of the system are also compared to those of an earlier rule-based expert system that required human judgments about some features of the time series data; the comparison showed no significant differences between the two systems.

17.
Combining forecasts from multiple temporal aggregation levels exploits information differences and mitigates model uncertainty, while reconciliation ensures a unified prediction that supports aligned decisions at different horizons. It can be challenging to estimate the full cross-covariance matrix for a temporal hierarchy, which can easily be of very large dimension, yet it is difficult to know a priori which part of the error structure is most important. To address these issues, we propose to use eigendecomposition for dimensionality reduction when reconciling forecasts to extract as much information as possible from the error structure given the data available. We evaluate the proposed estimator in a simulation study and demonstrate its usefulness through applications to short-term electricity load and financial volatility forecasting. We find that accuracy can be improved uniformly across all aggregation levels, as the estimator achieves state-of-the-art accuracy while being applicable to hierarchies of all sizes.
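A minimal sketch of the dimensionality-reduction idea: eigendecompose the sample error covariance, keep the leading eigen-directions, and rebuild a well-conditioned estimate that a trace-minimization (MinT-style) reconciliation could use. The rank cut-off and the diagonal fill-in below are illustrative choices, not the estimator proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 40, 60                      # 40-dim error vector, only 60 observations
E = rng.normal(size=(n, m)) @ np.diag(np.linspace(2.0, 0.1, m))
S = np.cov(E, rowvar=False)        # noisy, badly conditioned sample covariance

# Keep the k leading eigen-directions; replace the rest by their average
k = 5
vals, vecs = np.linalg.eigh(S)     # eigenvalues in ascending order
lead_vals, lead_vecs = vals[-k:], vecs[:, -k:]
noise_level = vals[:-k].mean()
S_hat = (lead_vecs * (lead_vals - noise_level)) @ lead_vecs.T \
        + noise_level * np.eye(m)

print(np.linalg.cond(S), np.linalg.cond(S_hat))  # typically far better conditioned
```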

18.
This paper examines the accuracy of various methods of forecasting long-term earnings growth for firms in the electric utility industry. In addition to a number of extrapolative techniques, Value Line analyst forecasts are also evaluated. Value Line analyst forecasts for a five-year time horizon are found to be superior to many of the extrapolative models. Among the extrapolative models examined, implied growth and historical book value per share growth rate models performed best. These results provide strong support for using Value Line growth forecasts in cost of capital estimates for electric utilities in the context of utility rate cases. Value Line forecast errors could be explained by changes in dividend payout ratios, the firm's regulatory environment, and bond rating changes.

19.
This article investigates the evidence of time-variation and asymmetry in the persistence of US inflation. We compare the out-of-sample performance of different forecasting models and find that quantile forecasts from an Auto-Regressive (AR) model with level-dependent volatility are at least as accurate as the forecasts of the Quantile Auto-Regressive model, in particular for the core inflation measures. Our results indicate that the persistence of core inflation has been relatively constant and high, but it declined for the headline inflation measures. We also find that the asymmetric persistence of inflation shocks can be mostly attributed to the positive relation between inflation level and its volatility.

20.
Parametric quantile regression is a useful tool for obtaining probabilistic energy forecasts. Nonetheless, traditional quantile regressions may be complicated to obtain with complex data mining techniques (e.g., artificial neural networks), since they must be trained with a non-differentiable cost function. This article presents a method that uses a new nearest neighbors quantile filter to obtain quantile regressions independently of the data mining technique utilized, and without the non-differentiable cost function. The method is validated using the dataset from the 2014 Global Energy Forecasting Competition. The results show that it is able to solve the competition's task with accuracy similar to that of the competition's winner and in a similar timeframe, but requiring a much less powerful computer. This property may be relevant for online forecasting services that must compute probabilistic forecasts quickly on less powerful machines.
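One plausible reading of the nearest neighbors quantile idea, sketched below: replace each training target by an empirical quantile of the targets of its k nearest neighbors, yielding a regression target that any standard learner with a differentiable (e.g., squared-error) cost can then fit. This is a hedged illustration under assumed details, not the paper's exact filter.

```python
import numpy as np

def nn_quantile_targets(X, y, k=20, tau=0.9):
    """Replace each target by the tau-quantile of its k nearest neighbors."""
    # Pairwise squared distances (fine for small data sets)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    idx = np.argsort(d2, axis=1)[:, :k]      # k nearest (including self)
    return np.quantile(y[idx], tau, axis=1)

rng = np.random.default_rng(8)
X = rng.uniform(0, 1, size=(500, 2))             # e.g. weather features
y = 100 * X[:, 0] + rng.gamma(2.0, 10.0, 500)    # skewed "load" target

y_q90 = nn_quantile_targets(X, y, k=25, tau=0.9)
# y_q90 can now be fitted with any squared-error model (e.g. a neural net)
print(y_q90[:5])
```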
