Similar Articles
 20 similar articles found (search time: 250 ms)
1.
The M5 competition follows the previous four M competitions, whose purpose is to learn from empirical evidence how to improve forecasting performance and advance the theory and practice of forecasting. M5 focused on a retail sales forecasting application with the objective of producing the most accurate point forecasts for 42,840 time series representing the hierarchical unit sales of the world's largest retail company, Walmart, as well as providing the most accurate estimates of the uncertainty of these forecasts. Hence, the competition consisted of two parallel challenges, namely the Accuracy and Uncertainty forecasting competitions. M5 extended the results of the previous M competitions by: (a) significantly expanding the number of participating methods, especially those in the category of machine learning; (b) evaluating the performance of the uncertainty distribution along with point forecast accuracy; (c) including exogenous/explanatory variables in addition to the time series data; (d) using grouped, correlated time series; and (e) focusing on series that display intermittency. This paper describes the background, organization, and implementation of the competition, and it presents the data used and their characteristics. It thus serves as introductory material to the results of the two forecasting challenges, facilitating their understanding.

2.
The M4 competition is the continuation of three previous competitions, started more than 45 years ago, whose purpose was to learn how to improve forecasting accuracy and how such learning can be applied to advance the theory and practice of forecasting. The purpose of M4 was to replicate the results of the previous competitions and extend them in three directions: first, significantly increasing the number of series; second, including Machine Learning (ML) forecasting methods; and third, evaluating both point forecasts and prediction intervals. The five major findings of the M4 Competition are: 1. Of the 17 most accurate methods, 12 were “combinations” of mostly statistical approaches. 2. The biggest surprise was a “hybrid” approach that utilized both statistical and ML features; this method’s average sMAPE was close to 10% more accurate than the combination benchmark used to compare the submitted methods. 3. The second most accurate method was a combination of seven statistical methods and one ML one, with the weights for the averaging calculated by an ML algorithm trained to minimize the forecasting error. 4. The two most accurate methods were also remarkably successful at specifying the 95% prediction intervals correctly. 5. The six pure ML methods performed poorly, with none of them being more accurate than the combination benchmark and only one being more accurate than Naïve2. This paper presents some initial results of M4, its major findings, and a logical conclusion. Finally, it outlines what the authors consider to be the way forward for the field of forecasting.

3.
The M4 Competition: 100,000 time series and 61 forecasting methods
The M4 Competition follows on from the three previous M competitions, the purpose of which was to learn from empirical evidence both how to improve the forecasting accuracy and how such learning could be used to advance the theory and practice of forecasting. The aim of M4 was to replicate and extend the three previous competitions by: (a) significantly increasing the number of series, (b) expanding the number of forecasting methods, and (c) including prediction intervals in the evaluation process as well as point forecasts. This paper covers all aspects of M4 in detail, including its organization and running, the presentation of its results, the top-performing methods overall and by categories, its major findings and their implications, and the computational requirements of the various methods. Finally, it summarizes its main conclusions and states the expectation that its series will become a testing ground for the evaluation of new methods and the improvement of the practice of forecasting, while also suggesting some ways forward for the field.

4.
Deep neural networks and gradient boosted tree models have swept across the field of machine learning over the past decade, producing across-the-board advances in performance. The ability of these methods to capture feature interactions and nonlinearities makes them exceptionally powerful and, at the same time, prone to overfitting, leakage, and a lack of generalization in domains with target non-stationarity and collinearity, such as time-series forecasting. We offer guidance to address these difficulties and provide a framework that maximizes the chances of predictions that generalize well and deliver state-of-the-art performance. The techniques we offer for cross-validation, augmentation, and parameter tuning have been used to win several major time-series forecasting competitions—including the M5 Forecasting Uncertainty competition and the Kaggle COVID19 Forecasting series—and, with the proper theoretical grounding, constitute the current best practices in time-series forecasting.

5.
Forecasting competitions are now so widespread that it is often forgotten how controversial they were when first held, and how influential they have been over the years. I briefly review the history of forecasting competitions, and discuss what we have learned about their design and implementation, and what they can tell us about forecasting. I also provide a few suggestions for potential future competitions, and for research about forecasting based on competitions.

6.
The M5 competition uncertainty track aims for probabilistic forecasting of sales of thousands of Walmart retail goods. We show that the M5 competition data face strong overdispersion and sporadic demand, especially zero demand. We discuss modeling issues concerning adequate probabilistic forecasting of such count data processes. Unfortunately, the majority of popular prediction methods used in the M5 competition (e.g. lightgbm and xgboost GBMs) fail to address the data characteristics, due to the considered objective functions. Distributional forecasting provides a suitable modeling approach to overcome those problems. The GAMLSS framework allows for flexible probabilistic forecasting using low-dimensional distributions. We illustrate how the GAMLSS approach can be applied to M5 competition data by modeling the location and scale parameters of various distributions, e.g. the negative binomial distribution. Finally, we discuss software packages for distributional modeling and their drawbacks, like the R package gamlss with its package extensions, and (deep) distributional forecasting libraries such as TensorFlow Probability.
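The distributional idea in this entry can be illustrated with a minimal sketch: fit a negative binomial to overdispersed count data and read off predictive quantiles. This is not the authors' GAMLSS implementation; the method-of-moments fit and the simulated sales series are assumptions made for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated intermittent daily sales: overdispersed counts with many zeros.
sales = rng.negative_binomial(n=0.8, p=0.4, size=500)

# Method-of-moments fit of a negative binomial (location mu, dispersion r).
mu, var = sales.mean(), sales.var()
# NB parameterization: var = mu + mu^2 / r  =>  r = mu^2 / (var - mu)
r = mu**2 / max(var - mu, 1e-8)
p = r / (r + mu)

# Predictive quantiles, in the spirit of the M5 uncertainty track's levels.
quantile_levels = [0.005, 0.25, 0.5, 0.75, 0.995]
forecast = {q: stats.nbinom.ppf(q, r, p) for q in quantile_levels}
```

A pinball-loss metric over such quantiles is what a GBM with a squared-error objective cannot target directly, which is the mismatch the abstract points to.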

7.
The M5 accuracy competition presented a large-scale hierarchical forecasting problem in a realistic grocery retail setting in order to evaluate an extended range of forecasting methods, particularly those adopting machine learning. The top-ranking solutions adopted a global bottom-up approach, meaning that global forecasting methods are used to generate bottom-level forecasts in the hierarchy and a bottom-up strategy is then used to obtain coherent forecasts for the aggregate levels. However, whether the observed superior performance of the global bottom-up approach is robust over various test periods or merely an accidental result is an important question for retail forecasting researchers and practitioners. We conduct experiments to explore the robustness of the global bottom-up approach, and comment on the efforts made by the top-ranking teams to improve the core approach. We find that the top-ranking global bottom-up approaches lack robustness across time periods in the M5 data. This inconsistent performance makes the M5 final rankings somewhat of a lottery. For future forecasting competitions, we suggest the use of multiple rolling test sets to evaluate forecasting performance, in order to reward robustly performing methods, a much-needed characteristic in any application.

8.
We review the results of six forecasting competitions based on the online data science platform Kaggle, which have been largely overlooked by the forecasting community. In contrast to the M competitions, the competitions reviewed in this study feature daily and weekly time series with exogenous variables, business hierarchy information, or both. Furthermore, the Kaggle data sets all exhibit higher entropy than those of the M3 and M4 competitions, and they are intermittent. In this review, we confirm the conclusion of the M4 competition that ensemble models using cross-learning tend to outperform local time series models, and that gradient boosted decision trees and neural networks are strong forecasting methods. Moreover, we present insights regarding the use of external information and validation strategies, and discuss the impact of data characteristics on the choice between statistical and machine learning methods. Based on these insights, we construct nine ex-ante hypotheses about the outcome of the M5 competition to allow empirical validation of our findings.

9.
Forecasters typically evaluate the performance of new forecasting methods by exploiting data from past forecasting competitions. Over the years, numerous studies have based their conclusions on such datasets, with underperforming methods being unlikely to receive any further attention. However, it has been reported that these datasets might not be indicative, as they display many limitations. Since forecasting research is driven to some extent by data from forecasting competitions, it becomes vital to determine whether they are indeed representative of reality or whether forecasters tend to over-fit their methods on a random sample of series. This paper treats the M4 data as representative of the real world and compares its properties with those of past datasets commonly used in the literature as benchmarks, in order to provide evidence on that question. The results show that many popular benchmarks of the past may indeed deviate from reality, and ways forward are discussed in response.

10.
Combination methods have performed well in time series forecast competitions. This study proposes a simple but general methodology for combining time series forecast methods. Weights are calculated using a cross-validation scheme that assigns greater weights to methods with more accurate in-sample predictions. The methodology was used to combine forecasts from the Theta, exponential smoothing, and ARIMA models, and placed fifth in the M4 Competition for both point and interval forecasting.
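The weighting scheme described here can be sketched as follows: hold out several validation windows, score each method, and weight by inverse error. The toy forecasters, the mean-absolute-error loss, and the fold layout are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def cv_weights(series, methods, n_folds=3, horizon=4):
    """Weight each method by inverse mean absolute cross-validation error
    (a simple stand-in for the paper's scheme; the loss is an assumption)."""
    errors = np.zeros(len(methods))
    for fold in range(n_folds):
        split = len(series) - (n_folds - fold) * horizon
        train, test = series[:split], series[split:split + horizon]
        for i, m in enumerate(methods):
            errors[i] += np.mean(np.abs(test - m(train, horizon)))
    inv = 1.0 / (errors + 1e-8)
    return inv / inv.sum()

# Toy forecasters standing in for the Theta / ETS / ARIMA components.
mean_fc  = lambda y, h: np.full(h, y.mean())
naive_fc = lambda y, h: np.full(h, y[-1])
drift_fc = lambda y, h: y[-1] + (y[-1] - y[0]) / (len(y) - 1) * np.arange(1, h + 1)

y = np.sin(np.arange(60) / 5.0) + np.arange(60) * 0.1
w = cv_weights(y, [mean_fc, naive_fc, drift_fc])
combined = sum(wi * m(y, 4) for wi, m in zip(w, [mean_fc, naive_fc, drift_fc]))
```

On this trending series the drift forecaster earns the largest weight, illustrating how the scheme adapts the combination to the data.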

11.
This discussion reflects on the results of the M4 forecasting competition, and in particular the impact of machine learning (ML) methods. Unlike the M3, which included only one ML method (an automatic artificial neural network that performed poorly), M4's 49 participants included eight that used either pure ML approaches or ML in conjunction with statistical methods. The six pure (or combinations of pure) ML methods again fared poorly, with all of them falling below the Comb benchmark that combined three simple time series methods. However, utilizing ML either in combination with statistical methods (and for selecting weightings) or in a hybrid model with exponential smoothing not only exceeded the benchmark but performed at the top. While these promising results by no means prove ML to be a panacea, they do challenge the notion that complex methods do not add value to the forecasting process.

12.
The M5 Forecasting Competition, the fifth in the series of forecasting competitions organized by Professor Spyros Makridakis and the Makridakis Open Forecasting Center at the University of Nicosia, was an extremely successful event. This competition focused on both the accuracy and uncertainty of forecasts and leveraged actual historical sales data provided by Walmart. This has made the M5 a unique competition that closely parallels the difficulties and challenges associated with industrial applications of forecasting. As with its precursor the M4, many interesting ideas came out of the M5 competition, and these will continue to push forecasting in new directions. In this article we discuss four topics concerning the practitioner's view of the competition and the application of its results to the actual problems we face. First, we examine the data provided and how they relate to common difficulties practitioners must overcome. Second, we review the relevance of the accuracy and uncertainty metrics associated with the competition. Third, we discuss the leading solutions and their implications for forecasting at a company like Walmart. We then close with thoughts about a future M6 competition and further enhancements that could be explored.

13.
We participated in the M4 competition for time series forecasting and here describe our methods for forecasting daily time series. We used an ensemble of five statistical forecasting methods and a method that we refer to as the correlator. Our retrospective analysis using the ground truth values published by the M4 organisers after the competition demonstrates that the correlator was responsible for most of our gains over the naïve constant forecasting method. We identify data leakage as one reason for its success, due partly to test data selected from different time intervals, and partly to quality issues with the original time series. We suggest that future forecasting competitions should provide actual dates for the time series so that some of these leakages could be avoided by participants.

14.
One of the most significant differences of M5 from previous forecasting competitions is that it was held on Kaggle, an online platform for data scientists and machine learning practitioners. Kaggle provides a gathering place, or virtual community, for web users who are interested in the M5 competition. Users can share code, models, features, and loss functions through online notebooks and discussion forums. Here, we study the social influence of this virtual community on user behavior in the M5 competition. We first examine the content of the M5 virtual community through topic modeling and trend analysis. We then perform social media analysis to identify the potential relationship network of the virtual community. We study the roles and characteristics of some key participants who promoted the diffusion of information within the M5 virtual community. Overall, this study provides in-depth insights into the mechanism of the virtual community's influence on participants and has potential implications for future online competitions.

15.
We present a simple quantile regression-based forecasting method that was applied in the probabilistic load forecasting framework of the Global Energy Forecasting Competition 2017 (GEFCom2017). The hourly load data are log-transformed and split into a long-term trend component and a remainder term. The key forecasting element is the quantile regression approach for the remainder term, which takes into account both weekly and annual seasonalities, as well as their interactions. Temperature information is used only to stabilize the forecast of the long-term trend component, and information on public holidays is ignored. Nevertheless, the forecasting method placed second in the open data track and fourth in the defined data track, which is remarkable given the simplicity of the model. The method also consistently outperforms the Vanilla benchmark.
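The core idea, quantile forecasts driven by seasonal structure, can be sketched with weekly seasonality alone: for a pure dummy (hour-of-week) design, the pinball-loss minimizer is simply the per-group empirical quantile. The synthetic load series is an assumption; the actual method also models annual seasonality, interactions, and the trend component.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic hourly load with a weekly cycle plus skewed noise (stand-in data).
hours = np.arange(24 * 7 * 52)
load = 100 + 20 * np.sin(2 * np.pi * hours / (24 * 7)) + rng.gamma(2, 3, hours.size)

# Quantile "regression" on hour-of-week dummies: the fitted value for each
# hour-of-week at level tau is that group's empirical tau-quantile.
how = hours % (24 * 7)
taus = [0.1, 0.5, 0.9]
q_forecast = {t: np.array([np.quantile(load[how == k], t) for k in range(24 * 7)])
              for t in taus}
```

Each `q_forecast[tau]` is a 168-hour profile; stacking the levels gives the predictive distribution for any future hour of the week.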

16.
This paper introduces a novel meta-learning algorithm for time series forecast model performance prediction. We model the forecast error as a function of time series features calculated from historical time series with an efficient Bayesian multivariate surface regression approach. The minimum predicted forecast error is then used to identify an individual model or a combination of models to produce the final forecasts. It is well known that the performance of most meta-learning models depends on the representativeness of the reference dataset used for training. In such circumstances, we augment the reference dataset with a feature-based time series simulation approach, namely GRATIS, to generate a rich and representative time series collection. The proposed framework is tested using the M4 competition data and is compared against commonly used forecasting approaches. Our approach provides comparable performance to other model selection and combination approaches but at a lower computational cost and a higher degree of interpretability, which is important for supporting decisions. We also provide useful insights regarding which forecasting models are expected to work better for particular types of time series, the intrinsic mechanisms of the meta-learners, and how the forecasting performance is affected by various factors.
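The selection step can be sketched as follows, with ordinary least squares standing in for the paper's Bayesian multivariate surface regression; the two features and the toy reference dataset are illustrative assumptions.

```python
import numpy as np

def ts_features(y):
    # Simple features: first-lag autocorrelation and linear-trend strength.
    y = (y - y.mean()) / (y.std() + 1e-9)
    acf1 = np.corrcoef(y[:-1], y[1:])[0, 1]
    trend = abs(np.corrcoef(np.arange(len(y)), y)[0, 1])
    return np.array([1.0, acf1, trend])

# Toy "reference dataset": features of training series plus the holdout error
# each of two candidate methods achieved on them (values are illustrative).
rng = np.random.default_rng(2)
X = np.vstack([ts_features(rng.normal(size=100).cumsum()) for _ in range(200)])
errors = np.column_stack([X @ np.array([1.0, 0.5, -0.8]) + rng.normal(0, .1, 200),
                          X @ np.array([1.2, -0.3, 0.4]) + rng.normal(0, .1, 200)])

# Meta-learner: one error surface per method mapping features -> expected error.
coefs = [np.linalg.lstsq(X, errors[:, m], rcond=None)[0] for m in range(2)]

def select_method(y):
    """Pick the method with the minimum predicted forecast error."""
    f = ts_features(y)
    return int(np.argmin([f @ c for c in coefs]))
```

The paper's GRATIS augmentation would enter here by enlarging the rows of `X` with simulated series before fitting the surfaces.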

17.
This paper considers two problems in interpreting forecasting competition error statistics. The first concerns the importance of linking the error measure (loss function) used in evaluating a forecasting model with the loss function used in estimating the model. It is argued that, because of the variety of uses to which any single forecast may be put, such matching is impractical. Moreover, there is little evidence that matching would have any impact on comparative forecast performance, however measured. As a consequence, the results of forecasting competitions are not affected by this problem. The second problem concerns interpreting performance when it is evaluated through mean square error (MSE). The authors show that in the Makridakis Competition, good MSE performance is due solely to performance on a small number of the 1001 series, and arises because of the effects of scale. They conclude that comparisons of forecasting accuracy based on MSE are subject to major problems of interpretation.

18.
Forecasting competitions have been a major driver not only of improvements in the performance of forecasting methods, but also of the development of new forecasting approaches. However, despite the tremendous value and impact of these competitions, they suffer from the limitation that performance is measured only in terms of forecast accuracy and bias, ignoring utility metrics. Using the monthly industry series of the M3 competition, we empirically explore the inventory performance of various widely used forecasting techniques, including exponential smoothing, ARIMA models, the Theta method, and approaches based on multiple temporal aggregation. We employ a rolling simulation approach and analyse the results for the order-up-to policy under various lead times. We find that methods based on combinations result in superior inventory performance, while the Naïve, Holt, and Holt-Winters methods perform poorly.
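An order-up-to simulation of the kind used in this study can be sketched as follows; the lead time, safety factor, and cost accounting are illustrative assumptions rather than the paper's exact rolling setup.

```python
import numpy as np

def order_up_to(demand, forecasts, lead_time=2, safety=1.64):
    """Simulate a periodic-review order-up-to policy driven by the forecasts
    (a simplified sketch; the service-level target is an assumption)."""
    on_hand, pipeline = forecasts[0] * (lead_time + 1), []
    holding = backlog = 0.0
    sigma = np.std(demand - forecasts)  # crude forecast-error spread
    for t, d in enumerate(demand):
        # Receive the order placed lead_time periods ago.
        if len(pipeline) == lead_time:
            on_hand += pipeline.pop(0)
        # Order up to forecast demand over lead time + review period, plus
        # a safety stock proportional to the forecast-error spread.
        S = (lead_time + 1) * forecasts[t] + safety * sigma * np.sqrt(lead_time + 1)
        pipeline.append(max(S - on_hand - sum(pipeline), 0.0))
        on_hand -= d
        holding += max(on_hand, 0.0)    # holding cost proxy
        backlog += max(-on_hand, 0.0)   # shortage cost proxy
    return holding, backlog

rng = np.random.default_rng(4)
demand = rng.poisson(10, 60).astype(float)
holding, backlog = order_up_to(demand, np.full(60, 10.0))
```

Running this for each forecasting method, as in the paper's rolling evaluation, turns forecast accuracy into the utility metrics (holding versus backlog) that the abstract argues competitions ignore.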

19.
Classifying forecasting methods as being either of a “machine learning” or “statistical” nature has become commonplace in parts of the forecasting literature and community, as exemplified by the M4 competition and the conclusions drawn by its organizers. We argue that this distinction does not stem from fundamental differences in the methods assigned to either class. Instead, the distinction is probably of a tribal nature, which limits the insights into the appropriateness and effectiveness of different forecasting methods. We provide alternative characteristics of forecasting methods which, in our view, allow meaningful conclusions to be drawn. Further, we discuss areas of forecasting that could benefit most from cross-pollination between the ML and statistics communities.

20.
This commentary introduces a correlation analysis of the top-10 ranked forecasting methods that participated in the M4 forecasting competition. The “M” competitions attempt to promote and advance research in the field of forecasting by inviting both industry and academia to submit forecasting algorithms for evaluation over a large corpus of real-world datasets. After performing the initial analysis to derive the errors of each method, we investigate the pairwise correlations among them in order to understand the extent to which they produce errors in similar ways. Based on our results, we conclude that there is indeed a certain degree of correlation among the top-10 ranked methods, largely because many of them consist of combinations of well-known statistical and machine learning techniques. This strongly shapes the results of the correlation analysis and leads to similar forecasting error patterns.
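The kind of analysis described, pairwise correlation of per-series errors, can be sketched as follows; the simulated error columns (three sharing a common component, mimicking combination-based entries) are an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
# Illustrative per-series absolute errors for four hypothetical methods on
# 1,000 series; methods 0-2 share a common component, method 3 is independent.
common = rng.gamma(2.0, 1.0, 1000)
errs = np.column_stack([common + rng.gamma(2, 0.2, 1000) for _ in range(3)]
                       + [rng.gamma(2, 1.0, 1000)])

# Pairwise correlation matrix of the methods' error series.
corr = np.corrcoef(errs, rowvar=False)
```

High off-diagonal entries among the first three columns reproduce, in miniature, the commentary's finding that methods built from overlapping components err on the same series.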
