Similar Documents
20 similar documents found (search time: 31 ms).
1.
In the context of smart grids and load balancing, daily peak load forecasting has become a critical activity for stakeholders in the energy industry. An understanding of peak magnitude and timing is paramount for the implementation of smart grid strategies such as peak shaving. The modelling approach proposed in this paper leverages high-resolution and low-resolution information to forecast daily peak demand size and timing. The resulting multi-resolution modelling framework can be adapted to different model classes. The key contributions of this paper are (a) a general and formal introduction to the multi-resolution modelling approach, (b) a discussion of modelling approaches at different resolutions implemented via generalised additive models and neural networks, and (c) experimental results on real data from the UK electricity market. The results confirm that the predictive performance of the proposed modelling approach is competitive with that of low- and high-resolution alternatives.
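As a rough illustration of the low-resolution targets this abstract refers to, the sketch below (not the authors' code) derives daily peak magnitude and timing from a simulated half-hourly demand series with pandas; the series, the 30-minute resolution and the period-of-day encoding are assumptions.

```python
import numpy as np
import pandas as pd

# Simulated half-hourly demand for two weeks (a stand-in for UK grid data).
rng = np.random.default_rng(0)
idx = pd.date_range("2024-01-01", periods=48 * 14, freq="30min")
hours = idx.hour.to_numpy() + idx.minute.to_numpy() / 60
demand = 30 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, len(idx))
load = pd.Series(demand, index=idx, name="load")

# Low-resolution targets: daily peak magnitude and the half-hour period at which it occurs.
daily = load.groupby(load.index.date)
peak_size = daily.max()                                                  # daily peak demand
peak_time = daily.idxmax().map(lambda t: t.hour * 2 + t.minute // 30)    # period-of-day index 0..47

print(pd.DataFrame({"peak_size": peak_size, "peak_period": peak_time}).head())
```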

2.
This paper proposes a three-step approach to forecasting time series of electricity consumption at different levels of household aggregation. These series are linked by hierarchical constraints—global consumption is the sum of regional consumption, for example. First, benchmark forecasts are generated for all series using generalized additive models. Second, for each series, the aggregation algorithm ML-Poly, introduced by Gaillard, Stoltz, and van Erven in 2014, finds an optimal linear combination of the benchmarks. Finally, the forecasts are projected onto a coherent subspace to ensure that the final forecasts satisfy the hierarchical constraints. By minimizing a regret criterion, we show that the aggregation and projection steps improve the root mean square error of the forecasts. Our approach is tested on household electricity consumption data; experimental results suggest that successive aggregation and projection steps improve the benchmark forecasts at different levels of household aggregation.
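The final projection step can be illustrated with a small sketch. It is a simplification on assumed data (a toy two-region hierarchy) and does not reproduce the GAM benchmarks or the ML-Poly aggregation: incoherent forecasts are orthogonally projected onto the subspace in which the total equals the sum of the regions.

```python
import numpy as np

# Summing matrix S for a toy hierarchy: total = region A + region B.
# Rows: [total, A, B]; columns: bottom-level series [A, B].
S = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

# Incoherent base forecasts (e.g., outputs of the aggregation step): 4 + 5 != 10.
y_hat = np.array([10.0, 4.0, 5.0])

# Orthogonal projection onto the coherent subspace spanned by the columns of S.
P = S @ np.linalg.inv(S.T @ S) @ S.T
y_tilde = P @ y_hat

print(y_tilde)                          # coherent forecasts
print(y_tilde[0], y_tilde[1] + y_tilde[2])   # first entry equals the sum of the other two
```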

3.
Generalised additive models (GAMs) are widely used in data analysis. In applications of the GAM, the link function involved is usually assumed to be a commonly used one without justification. Motivated by a real data example with binary response where the commonly used link function does not work, we propose a generalised additive model with an unknown link function (GAMUL) for various types of data, including binary, continuous and ordinal. The proposed estimators are proved to be consistent and asymptotically normal. Semiparametric efficiency of the estimators is demonstrated in terms of their linear functionals. In addition, an iterative algorithm, in which all estimators can be expressed explicitly as linear functions of Y, is proposed to overcome the computational hurdle for GAM-type models. Extensive simulation studies conducted in this paper show that the proposed estimation procedure works very well. The proposed GAMUL is finally used to analyse a real dataset about loan repayment in China, which leads to some interesting findings.

4.
Electric load forecasting is a crucial part of business operations in the energy industry. Various load forecasting methods and techniques have been proposed and tested. With growing concerns about cybersecurity and malicious data manipulation, an emerging topic is the development of robust load forecasting models. In this paper, we propose a robust support vector regression (SVR) model to forecast electricity demand under data integrity attacks. We first introduce a weight function to calculate the relative importance of each observation in the load history. We then construct a weighted quadratic surface SVR model. Some theoretical properties of the proposed model are derived. Extensive computational experiments are conducted on publicly available data from the Global Energy Forecasting Competition 2012 and ISO New England. To imitate data integrity attacks, we deliberately increase or decrease the historical load data. Finally, the computational results demonstrate the better accuracy of the proposed robust model over other recently proposed robust models in the load forecasting literature.
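The sketch below is only loosely inspired by this abstract: it uses scikit-learn's standard epsilon-SVR with per-observation sample weights rather than the weighted quadratic surface SVR proposed in the paper, and the residual-based weight function is an ad hoc assumption.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Toy load history: temperature -> demand, with a few corrupted (attacked) observations.
temp = rng.uniform(0, 35, 200)
load = 50 + 0.08 * (temp - 20) ** 2 + rng.normal(0, 2, 200)
attacked = rng.choice(200, size=10, replace=False)
load[attacked] *= rng.uniform(1.5, 2.0, size=10)          # simulated data integrity attack

X = temp.reshape(-1, 1)

# Preliminary fit, then down-weight observations with large residuals (ad hoc weight function).
prelim = SVR(kernel="rbf", C=10.0, epsilon=0.5).fit(X, load)
resid = np.abs(load - prelim.predict(X))
weights = 1.0 / (1.0 + (resid / (np.median(resid) + 1e-9)) ** 2)

# Refit with the weights so that suspicious observations contribute less.
robust = SVR(kernel="rbf", C=10.0, epsilon=0.5)
robust.fit(X, load, sample_weight=weights)
print(robust.predict(np.array([[10.0], [25.0], [35.0]])))
```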

5.
This paper suggests a novel inhomogeneous Markov switching approach for the probabilistic forecasting of industrial companies’ electricity loads, for which the load switches at random times between production and standby regimes. The model that we propose describes the transitions between the regimes using a hidden Markov chain with time-varying transition probabilities that depend on calendar variables. We model the demand during the production regime using an autoregressive moving-average (ARMA) process with seasonal patterns, whereas we use a much simpler model for the standby regime in order to reduce the complexity. The maximum likelihood estimation of the parameters is implemented using a differential evolution algorithm. Using the continuous ranked probability score (CRPS) to evaluate the goodness-of-fit of our model for probabilistic forecasting, it is shown that this model often outperforms classical additive time series models, as well as homogeneous Markov switching models. We also propose a simple procedure for classifying load profiles into those with and without regime-switching behaviors.
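A minimal sketch of time-varying transition probabilities driven by calendar covariates follows, assuming a logistic link and illustrative coefficients; none of the numbers come from the paper, and the ARMA demand models are omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Calendar covariates for one week of hourly periods: weekend flag and hour of day.
hours = np.arange(24 * 7)
hour_of_day = hours % 24
is_weekend = (hours // 24) >= 5
night = (hour_of_day < 6) | (hour_of_day > 21)
working_hours = (hour_of_day >= 7) & (hour_of_day <= 18)

# Time-varying transition probabilities of a two-state chain (production vs standby):
# switching to standby is more likely at night and at weekends (illustrative coefficients).
p_prod_to_standby = sigmoid(-3.0 + 2.5 * is_weekend + 1.5 * night)
p_standby_to_prod = sigmoid(-1.0 + 2.0 * working_hours - 2.0 * is_weekend)

# Simulate the hidden regime path: 0 = production, 1 = standby.
rng = np.random.default_rng(2)
state = np.zeros(len(hours), dtype=int)
for t in range(1, len(hours)):
    # Probability of being in standby at time t, given the previous regime.
    p_standby = p_prod_to_standby[t] if state[t - 1] == 0 else 1 - p_standby_to_prod[t]
    state[t] = rng.random() < p_standby

print(state[:48])
```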

6.
Ten models of consumer demand, including the indirect translog model, the linear expenditure system and the Rotterdam model, are applied to Swedish data and compared, using fit and predictive performance as criteria. Demand models tend to be superior to naive models, but nonadditive models are not clearly better than additive ones. Each model is estimated at two levels of commodity aggregation. The results show that the estimated structures depend on the level of aggregation.

7.
The performance of six classes of models in forecasting different types of economic series is evaluated in an extensive pseudo out-of-sample exercise. One of these forecasting models, regularized data-rich model averaging (RDRMA), is new in the literature. The findings can be summarized in four points. First, RDRMA is difficult to beat in general and generates the best forecasts for real variables. This performance is attributed to the combination of regularization and model averaging, and it confirms that a smart handling of large data sets can lead to substantial improvements over univariate approaches. Second, the ARMA(1,1) model emerges as the best to forecast inflation changes in the short run, while RDRMA dominates at longer horizons. Third, the returns on the S&P 500 index are predictable by RDRMA at short horizons. Finally, the forecast accuracy and the optimal structure of the forecasting equations are quite unstable over time.

8.
For many companies, automatic forecasting has come to be an essential part of business analytics applications. The large amounts of data available, the short life-cycle of the analysis and the acceleration of business operations make traditional manual data analysis unfeasible in such environments. In this paper, an automatic forecasting support system that comprises several methods and models is developed in a general state space framework built in the SSpace toolbox written for Matlab. Some of the models included are well-known, such as exponential smoothing and ARIMA, but we also propose a new model family that has been used only very rarely in this context, namely unobserved components models. Additional novelties include the use of unobserved components models in an automatic identification environment and the comparison of their forecasting performances with those of exponential smoothing and ARIMA models estimated using different software packages. The new system is tested empirically on a daily dataset of all of the products sold by a franchise chain in Spain (166 products over a period of 517 days). The system works well in practice and the proposed automatic unobserved components models compare very favorably with other methods and other well-known software packages in forecasting terms.

9.
We present a hierarchical architecture based on recurrent neural networks for predicting disaggregated inflation components of the Consumer Price Index (CPI). While the majority of existing research is focused on predicting headline inflation, many economic and financial institutions are interested in its partial disaggregated components. To this end, we developed the novel Hierarchical Recurrent Neural Network (HRNN) model, which utilizes information from higher levels in the CPI hierarchy to improve predictions at the more volatile lower levels. Based on a large dataset from the US CPI-U index, our evaluations indicate that the HRNN model significantly outperforms a vast array of well-known inflation prediction baselines. Our methodology and results provide additional forecasting measures and possibilities to policymakers and market makers on sectoral and component-specific price changes.

10.
We construct a real-time dataset (FRED-SD) with vintage data for the U.S. states that can be used to forecast both state-level and national-level variables. Our dataset includes approximately 28 variables per state, including labor-market, production, and housing variables. We conduct two sets of real-time forecasting exercises. The first forecasts state-level labor-market variables using five different models and different levels of industrially disaggregated data. The second forecasts a national-level variable exploiting the cross-section of state data. The state-forecasting experiments suggest that large models with industrially disaggregated data tend to have higher predictive ability for industrially diversified states. For national-level data, we find that forecasting and aggregating state-level data can outperform a random walk but not an autoregression. We compare these real-time data experiments with forecasting experiments using final-vintage data and find very different results. Because these final-vintage results are obtained with revised data that would not have been available at the time the forecasts would have been made, we conclude that the use of real-time data is essential for drawing proper conclusions about state-level forecasting models.

11.
Combining forecasts from multiple temporal aggregation levels exploits information differences and mitigates model uncertainty, while reconciliation ensures a unified prediction that supports aligned decisions at different horizons. It can be challenging to estimate the full cross-covariance matrix for a temporal hierarchy, which can easily be of very large dimension, yet it is difficult to know a priori which part of the error structure is most important. To address these issues, we propose to use eigendecomposition for dimensionality reduction when reconciling forecasts to extract as much information as possible from the error structure given the data available. We evaluate the proposed estimator in a simulation study and demonstrate its usefulness through applications to short-term electricity load and financial volatility forecasting. We find that accuracy can be improved uniformly across all aggregation levels, as the estimator achieves state-of-the-art accuracy while being applicable to hierarchies of all sizes.
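A hedged sketch of the eigendecomposition idea follows: the error covariance of a toy temporal hierarchy is truncated to its leading eigenvector (the discarded directions are replaced by their average eigenvalue) before a MinT-style reconciliation. The hierarchy, the number of retained components and the shrinkage rule are assumptions, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy temporal hierarchy: two half-yearly values and their annual total.
# Rows of S: [annual, H1, H2]; columns: bottom level [H1, H2].
S = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

# Simulated in-sample base-forecast errors for the three series.
E = rng.normal(size=(200, 3)) @ np.array([[1.0, 0.5, 0.2],
                                          [0.0, 1.0, 0.3],
                                          [0.0, 0.0, 1.0]])
W_full = np.cov(E, rowvar=False)

# Dimensionality reduction: keep the leading k eigenvectors of the error covariance
# and replace the discarded directions by their average eigenvalue.
k = 1
vals, vecs = np.linalg.eigh(W_full)
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]
tail = vals[k:].mean()
V = vecs[:, :k]
W_reduced = (V * vals[:k]) @ V.T + tail * (np.eye(3) - V @ V.T)

# MinT-style reconciliation with the reduced covariance.
y_hat = np.array([20.5, 9.0, 10.0])
Winv = np.linalg.inv(W_reduced)
y_tilde = S @ np.linalg.inv(S.T @ Winv @ S) @ S.T @ Winv @ y_hat
print(y_tilde)   # coherent: the first entry equals the sum of the other two
```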

12.
As the internet’s footprint continues to expand, cybersecurity is becoming a major concern for both governments and the private sector. One such cybersecurity issue relates to data integrity attacks. This paper focuses on the power industry, where the forecasting processes rely heavily on the quality of the data. Data integrity attacks are expected to harm the performance of forecasting systems, which will have a major impact on both the financial bottom line of power companies and the resilience of power grids. This paper reveals the effect of data integrity attacks on the accuracy of four representative load forecasting models (multiple linear regression, support vector regression, artificial neural networks, and fuzzy interaction regression). We begin by simulating data integrity attacks through the random injection of multipliers that follow a normal or uniform distribution into the load series. Then, the four aforementioned load forecasting models are used to generate one-year-ahead ex post point forecasts in order to compare their forecast errors. The results show that the support vector regression model is the most robust, followed closely by the multiple linear regression model, while the fuzzy interaction regression model is the least robust of the four. Nevertheless, all four models fail to provide satisfying forecasts when the scale of the data integrity attacks becomes large. This presents a serious challenge to both load forecasters and the broader forecasting community: the generation of accurate forecasts under data integrity attacks. We construct our case study using publicly available data from the Global Energy Forecasting Competition 2012. At the end, we also offer an overview of potential research topics for future studies.
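The attack simulation described above is simple to reproduce in spirit. The sketch below injects random multipliers, drawn from a normal or uniform distribution, into a fraction of a toy load series; the attack fraction and multiplier scales are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def attack_load(load, frac=0.1, mode="uniform", scale=0.2, seed=0):
    """Return a copy of `load` with a random fraction of observations multiplied by random factors."""
    rng = np.random.default_rng(seed)
    corrupted = load.astype(float).copy()
    n_attacked = int(frac * len(load))
    idx = rng.choice(len(load), size=n_attacked, replace=False)
    if mode == "uniform":
        multipliers = rng.uniform(1 - scale, 1 + scale, size=n_attacked)
    else:  # "normal"
        multipliers = rng.normal(1.0, scale, size=n_attacked)
    corrupted[idx] *= multipliers
    return corrupted

# Toy hourly load series for one year.
clean = 100 + 20 * np.sin(np.arange(24 * 365) * 2 * np.pi / 24)
attacked = attack_load(clean, frac=0.05, mode="normal", scale=0.3)
print("largest injected deviation:", np.abs(attacked - clean).max())
```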

13.
We explore a new approach to the forecasting of macroeconomic variables based on a dynamic factor state space analysis. Key economic variables are modeled jointly with principal components from a large time series panel of macroeconomic indicators using a multivariate unobserved components time series model. When the key economic variables are observed at a low frequency and the panel of macroeconomic variables is at a high frequency, we can use our approach for both nowcasting and forecasting purposes. Given a dynamic factor model as the data generation process, we provide Monte Carlo evidence of the finite-sample justification of our parsimonious and feasible approach. We also provide empirical evidence for a US macroeconomic dataset. The unbalanced panel contains quarterly and monthly variables. The forecasting accuracy is measured against a set of benchmark models. We conclude that our dynamic factor state space analysis can lead to higher levels of forecasting precision when the panel size and time series dimensions are moderate.
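As a rough, heavily simplified illustration of the factor idea (a principal-components regression on simulated data rather than the multivariate unobserved components state space model used in the paper), a quarterly target could be nowcast from a monthly panel as follows.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)

# Toy monthly panel of 50 indicators driven by 2 latent factors, over 240 months.
T, N, K = 240, 50, 2
factors = rng.normal(size=(T, K)).cumsum(axis=0) * 0.1
panel = factors @ rng.normal(size=(K, N)) + rng.normal(scale=0.5, size=(T, N))

# Quarterly target (e.g., growth) observed every third month, driven by the same factors.
quarter_end = np.arange(2, T, 3)
y = factors[quarter_end] @ np.array([1.0, -0.5]) + rng.normal(scale=0.2, size=len(quarter_end))

# Step 1: extract principal components from the standardised high-frequency panel.
Z = (panel - panel.mean(0)) / panel.std(0)
pcs = PCA(n_components=K).fit_transform(Z)

# Step 2: regress the low-frequency target on quarter-end values of the components.
reg = LinearRegression().fit(pcs[quarter_end[:-1]], y[:-1])
nowcast = reg.predict(pcs[quarter_end[-1:]])
print("nowcast:", nowcast[0], "actual:", y[-1])
```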

14.
We evaluate the performance of various methods for forecasting tourism data. The data used include 366 monthly series, 427 quarterly series and 518 annual series, all supplied to us by either tourism bodies or academics who had used them in previous tourism forecasting studies. The forecasting methods implemented in the competition include univariate and multivariate time series approaches, as well as econometric models. This forecasting competition differs from previous competitions in several ways: (i) we concentrate on tourism data only; (ii) we include approaches with explanatory variables; (iii) we evaluate forecast interval coverage as well as point forecast accuracy; (iv) we observe the effect of temporal aggregation on forecasting accuracy; and (v) we consider the mean absolute scaled error as an alternative forecasting accuracy measure. We find that pure time series approaches provide more accurate forecasts for tourism data than models with explanatory variables. For seasonal data we implement three fully automated pure time series algorithms that generate accurate point forecasts, and two of these also produce forecast coverage probabilities that are satisfactorily close to the nominal rates. For annual data we find that Naïve forecasts are hard to beat.
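The mean absolute scaled error mentioned in point (v) can be computed as in the short sketch below; the seasonal period m = 12 for monthly data and the toy series are assumptions.

```python
import numpy as np

def mase(y_train, y_test, y_pred, m=12):
    """Mean absolute scaled error: MAE of the forecast divided by the in-sample MAE
    of the seasonal naive forecast with period m (m=1 gives the ordinary naive benchmark)."""
    naive_mae = np.mean(np.abs(y_train[m:] - y_train[:-m]))
    return np.mean(np.abs(y_test - y_pred)) / naive_mae

# Toy monthly series: five years of training data, one year held out.
rng = np.random.default_rng(5)
t = np.arange(72)
series = 100 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, 72)
train, test = series[:60], series[60:]

seasonal_naive = train[-12:]          # forecast the next year with last year's values
print("MASE:", mase(train, test, seasonal_naive))
```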

15.
General dynamic factor models have demonstrated their capacity to circumvent the curse of dimensionality in the analysis of high-dimensional time series and have been successfully applied in many economic and financial contexts. As second-order models, however, they are sensitive to the presence of outliers—an issue that has not been analyzed so far in the general case of dynamic factors with possibly infinite-dimensional factor spaces (Forni et al. 2000, 2015, 2017). In this paper, we consider this robustness issue and study the impact of additive outliers on the identification, estimation, and forecasting performance of general dynamic factor models. Based on our findings, we propose robust versions of the identification, estimation, and forecasting procedures. The finite-sample performance of our methods is evaluated via Monte Carlo experiments, and the methods are successfully applied to a classical data set of 115 US macroeconomic and financial time series.

16.
Identifying the most appropriate time series model to achieve a good forecasting accuracy is a challenging task. We propose a novel algorithm that aims to mitigate the importance of model selection, while increasing the accuracy. Multiple time series are constructed from the original time series, using temporal aggregation. These derivative series highlight different aspects of the original data, as temporal aggregation helps in strengthening or attenuating the signals of different time series components. In each series, the appropriate exponential smoothing method is fitted and its respective time series components are forecast. Subsequently, the time series components from each aggregation level are combined, then used to construct the final forecast. This approach achieves a better estimation of the different time series components, through temporal aggregation, and reduces the importance of model selection through forecast combination. An empirical evaluation of the proposed framework demonstrates significant improvements in forecasting accuracy, especially for long-term forecasts.
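A hedged sketch of the aggregate-forecast-combine idea follows. It uses non-overlapping temporal aggregation and simple exponential smoothing at each level, then averages the level forecasts on a common scale; the full framework forecasts and recombines the individual time series components, which this simplification skips.

```python
import numpy as np

def ses(y, alpha=0.3):
    """Simple exponential smoothing; returns the one-step-ahead forecast."""
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

def aggregate(y, k):
    """Non-overlapping temporal aggregation into buckets of size k (drops a partial bucket)."""
    n = (len(y) // k) * k
    return y[:n].reshape(-1, k).sum(axis=1)

rng = np.random.default_rng(6)
monthly = 50 + rng.normal(0, 5, 120).cumsum() * 0.05 + rng.normal(0, 2, 120)   # toy monthly series

# Forecast at several aggregation levels, express each on the monthly scale, then combine.
levels = [1, 2, 3, 4, 6, 12]
per_month = [ses(aggregate(monthly, k)) / k for k in levels]
combined = np.mean(per_month)

print("per-level forecasts (monthly scale):", np.round(per_month, 2))
print("combined forecast:", round(combined, 2))
```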

17.
This article introduces the winning method of the M5 Accuracy competition. The method simply averages the results of multiple base forecasting models constructed via partial pooling of multi-level data. All base forecasting models, which adopt either direct or recursive multi-step forecasting, are trained with the machine learning technique LightGBM on three different levels of data pools. In the competition, this simple averaging of multiple direct and recursive forecasting models, called DRFAM, exploited the complementary effects between direct and recursive multi-step forecasting of multi-level product sales to improve both accuracy and robustness.
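The direct/recursive averaging idea can be sketched as below. The sketch substitutes scikit-learn's GradientBoostingRegressor for LightGBM, uses a single lag-based feature set and toy data, and therefore illustrates only the averaging strategy, not the competition pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
y = 20 + 5 * np.sin(np.arange(300) * 2 * np.pi / 7) + rng.normal(0, 1, 300)   # toy daily sales
H, n_lags = 7, 14

def make_xy(series, horizon):
    """Lag features predicting the value `horizon` steps ahead."""
    X, t = [], []
    for i in range(n_lags, len(series) - horizon + 1):
        X.append(series[i - n_lags:i])
        t.append(series[i + horizon - 1])
    return np.array(X), np.array(t)

# Recursive strategy: a one-step model applied repeatedly, feeding its predictions back in.
X1, t1 = make_xy(y, 1)
one_step = GradientBoostingRegressor().fit(X1, t1)
history = list(y[-n_lags:])
recursive = []
for _ in range(H):
    pred = one_step.predict(np.array(history[-n_lags:]).reshape(1, -1))[0]
    recursive.append(pred)
    history.append(pred)

# Direct strategy: a separate model trained for each horizon.
direct = []
for h in range(1, H + 1):
    Xh, th = make_xy(y, h)
    model_h = GradientBoostingRegressor().fit(Xh, th)
    direct.append(model_h.predict(y[-n_lags:].reshape(1, -1))[0])

# Simple average of the two strategies.
combined = (np.array(recursive) + np.array(direct)) / 2
print(np.round(combined, 2))
```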

18.
Many models have been studied for forecasting the peak electric load, but studies focusing on forecasting peak electric load days for a billing period are scarce. This focus is highly relevant to consumers, as their electricity costs are determined based not only on total consumption, but also on the peak load required during a period. Forecasting these peak days accurately allows demand response actions to be planned and executed efficiently in order to mitigate these peaks and their associated costs. We propose a hybrid model based on ARIMA, logistic regression and artificial neural network models. This hybrid model evaluates the individual results of these statistical and machine learning models in order to forecast whether a given day will be a peak load day for the billing period. The proposed model predicted 70% (40/57) of actual peak load days accurately and revealed potential savings of approximately US$80,000 for an American university during a one-year testing period.
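A toy sketch of the classification target follows: a logistic regression scoring whether a day will be the peak-load day of its billing period. The features, the 30-day billing period and the simulated data are assumptions, and the actual hybrid additionally combines ARIMA and neural network outputs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)

# Toy daily data over 24 monthly billing periods: forecast temperature and weekday flag.
n_days = 24 * 30
temp = 15 + 12 * np.sin(2 * np.pi * np.arange(n_days) / 365) + rng.normal(0, 3, n_days)
weekday = (np.arange(n_days) % 7) < 5
load = 100 + 2.5 * np.maximum(temp - 18, 0) + 10 * weekday + rng.normal(0, 5, n_days)

# Label: is this day the peak-load day of its 30-day billing period?
periods = np.arange(n_days) // 30
is_peak = np.zeros(n_days, dtype=int)
for p in np.unique(periods):
    days = np.where(periods == p)[0]
    is_peak[days[np.argmax(load[days])]] = 1

X = np.column_stack([temp, weekday])
clf = LogisticRegression().fit(X[:600], is_peak[:600])    # train on the first 20 periods
probs = clf.predict_proba(X[600:])[:, 1]                  # score the remaining days
print("predicted peak-day probabilities (first week):", np.round(probs[:7], 3))
```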

19.
The Global Energy Forecasting Competition 2017 (GEFCom2017) attracted more than 300 students and professionals from over 30 countries for solving hierarchical probabilistic load forecasting problems. Of the series of global energy forecasting competitions that have been held, GEFCom2017 is the most challenging one to date: the first one to have a qualifying match, the first one to use hierarchical data with more than two levels, the first one to allow the usage of external data sources, the first one to ask for real-time ex-ante forecasts, and the longest one. This paper introduces the qualifying and final matches of GEFCom2017, summarizes the top-ranked methods, publishes the data used in the competition, and presents several reflections on the competition series and a vision for future energy forecasting competitions.

20.
The recent housing market boom and bust in the United States illustrates that real estate returns are characterized by short-term positive serial correlation and long-term mean reversion to fundamental values. We develop an econometric model that includes these two components, but with weights that vary dynamically through time depending on recent forecasting performances. The smooth transition weighting mechanism can assign more weight to positive serial correlation in boom times, and more weight to reversal to fundamental values during downturns. We estimate the model with US national house price index data. In-sample, the switching mechanism significantly improves the fit of the model. In an out-of-sample forecasting assessment the model performs better than competing benchmark models.
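A hedged sketch of the smooth-transition combination is given below: a momentum forecast and a mean-reversion forecast are weighted by a logistic function of their recent relative performance. The fundamental-value proxy, the reversion speed and the transition parameter are illustrative assumptions, not the estimated model.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy quarterly log house price index with a boom and bust around a slow-moving fundamental.
T = 120
fundamental = 0.005 * np.arange(T)
cycle = np.concatenate([np.zeros(40), 0.004 * np.arange(40), 0.16 - 0.006 * np.arange(40)])
prices = fundamental + cycle + rng.normal(0, 0.01, T)

gamma, window, kappa = 5.0, 8, 0.25
forecasts = np.full(T, np.nan)
for t in range(window + 2, T - 1):
    momentum = prices[t] + (prices[t] - prices[t - 1])              # extrapolate the recent change
    reversion = prices[t] + kappa * (fundamental[t] - prices[t])    # pull toward fundamentals

    # Recent one-step-ahead errors of each component over the last `window` quarters.
    actual = prices[t - window + 1:t + 1]
    mom_pred = 2 * prices[t - window:t] - prices[t - window - 1:t - 1]
    rev_pred = prices[t - window:t] + kappa * (fundamental[t - window:t] - prices[t - window:t])
    mom_err = np.mean(np.abs(actual - mom_pred))
    rev_err = np.mean(np.abs(actual - rev_pred))

    # Smooth transition weight: more weight on momentum when it has recently erred less.
    weight = 1.0 / (1.0 + np.exp(-gamma * (rev_err - mom_err)))
    forecasts[t + 1] = weight * momentum + (1 - weight) * reversion

print("last combined forecast:", round(forecasts[-1], 4), "actual:", round(prices[-1], 4))
```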
