Similar literature
1.
Local and state governments depend on small area population forecasts to make important decisions concerning the development of local infrastructure and services. Despite their importance, current methods often produce highly inaccurate forecasts. Recent years have witnessed promising developments in time series forecasting using Machine Learning across a wide range of social and economic variables. However, limited work has been undertaken to investigate the potential application of Machine Learning methods in demography, particularly for small area population forecasting. In this paper we describe the development of two Long Short-Term Memory (LSTM) network architectures for small area populations. We employ the Keras Tuner to select layer unit numbers, vary the window width of input data, and apply a double training and validation regime which supports work with short time series and prioritises later sequence values for forecasts. These methods are transferable and can be applied to other data sets. Retrospective small area population forecasts for Australia were created for the periods 2006–16 and 2011–16. Model performance was evaluated against actual data and two benchmark methods (LIN/EXP and CSP-VSG). We also evaluated the impact of constraining small area population forecasts to an independent national forecast. Forecast accuracy was influenced by jump-off year, constraining, area size, and remoteness. The LIN/EXP model was the best performing method for the 2011-based forecasts, whilst deep learning methods performed best for the 2006-based forecasts, including significant improvements in the accuracy of 10-year forecasts. However, benchmark methods were consistently more accurate for more remote areas and for those with populations below 5,000.
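
As a rough illustration of the kind of setup the abstract describes (an LSTM forecaster with layer units selected by the Keras Tuner and a sliding input window), the sketch below fits a single-layer LSTM to a toy windowed series. It is not the authors' architecture: the window width, toy data, tuner settings and all identifiers are assumptions, and the paper's double training/validation regime is not reproduced.

```python
# Hypothetical sketch only: a single-layer LSTM forecaster with the unit count
# chosen by the Keras Tuner, roughly in the spirit of the setup described in
# the abstract above.
import numpy as np
import tensorflow as tf
import keras_tuner as kt

WINDOW = 5  # assumed input window width (the paper varies this)

def make_windows(series, window=WINDOW):
    """Slice a 1-D population series into (input window, next value) pairs."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., None], np.array(y)

def build_model(hp):
    units = hp.Int("units", min_value=16, max_value=128, step=16)
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(WINDOW, 1)),
        tf.keras.layers.LSTM(units),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Toy data standing in for a short small-area population series.
series = np.linspace(1000, 1500, 30) + np.random.default_rng(0).normal(0, 10, 30)
X, y = make_windows(series)

tuner = kt.RandomSearch(build_model, objective="val_loss", max_trials=5,
                        directory="kt_demo", overwrite=True)
tuner.search(X, y, validation_split=0.2, epochs=50, verbose=0)
best = tuner.get_best_models(1)[0]
one_step_ahead = best.predict(X[-1:], verbose=0)  # next-period population
```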

2.
The M4 Competition: 100,000 time series and 61 forecasting methods
The M4 Competition follows on from the three previous M competitions, the purpose of which was to learn from empirical evidence both how to improve the forecasting accuracy and how such learning could be used to advance the theory and practice of forecasting. The aim of M4 was to replicate and extend the three previous competitions by: (a) significantly increasing the number of series, (b) expanding the number of forecasting methods, and (c) including prediction intervals in the evaluation process as well as point forecasts. This paper covers all aspects of M4 in detail, including its organization and running, the presentation of its results, the top-performing methods overall and by categories, its major findings and their implications, and the computational requirements of the various methods. Finally, it summarizes its main conclusions and states the expectation that its series will become a testing ground for the evaluation of new methods and the improvement of the practice of forecasting, while also suggesting some ways forward for the field.

3.
Forecast Pro forecasted the weekly series in the M4 competition more accurately than all other entrants. Our approach was to follow the same forecasting process that we recommend to our users. This approach involves determining the Key Performance Metric (KPI), establishing baseline forecasts using our automated expert selection algorithm, reviewing those baseline forecasts and customizing forecasts where needed. This article explores why this approach worked well for weekly data, discusses the applicability of the M4 competition to business forecasting and proposes some potential improvements for future competitions to make them more relevant to business forecasting.

4.
We compare a number of methods that have been proposed in the literature for obtaining h-step ahead minimum mean square error forecasts for self-exciting threshold autoregressive (SETAR) models. These forecasts are compared to those from an AR model. The comparison of forecasting methods is made using Monte Carlo simulation. The Monte Carlo method of calculating SETAR forecasts is generally at least as good as that of the other methods we consider. An exception is when the disturbances in the SETAR model come from a highly asymmetric distribution, in which case a bootstrap method is preferred. An empirical application calculates multi-period forecasts from a SETAR model of US gross national product using a number of the forecasting methods. We find that whether there are improvements in forecast performance relative to a linear AR model depends on the historical epoch we select, and whether forecasts are evaluated conditional on the regime the process was in at the time the forecast was made.
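
For readers unfamiliar with the Monte Carlo approach being compared here, the following minimal sketch shows the basic recipe for a two-regime SETAR model: simulate many future paths through the threshold recursion and average their endpoints to approximate the h-step minimum mean square error forecast. The coefficients, threshold and Gaussian disturbances are invented for illustration; a bootstrap variant would resample residuals instead.

```python
# Minimal sketch of Monte Carlo multi-step forecasting for a hypothetical
# two-regime SETAR model; not the paper's models or settings.
import numpy as np

rng = np.random.default_rng(0)

THRESHOLD = 0.0
LOW = (0.2, 0.7)    # (intercept, AR coefficient) when y_{t-1} <= threshold
HIGH = (-0.1, 0.4)  # (intercept, AR coefficient) otherwise

def monte_carlo_forecast(y_last, h, n_paths=10_000, sigma=1.0):
    """h-step-ahead MMSE forecast: average the endpoints of simulated paths."""
    paths = np.full(n_paths, float(y_last))
    for _ in range(h):
        eps = rng.normal(0.0, sigma, n_paths)  # swap in residual draws for a bootstrap variant
        low = LOW[0] + LOW[1] * paths
        high = HIGH[0] + HIGH[1] * paths
        paths = np.where(paths <= THRESHOLD, low, high) + eps
    return paths.mean()

print(monte_carlo_forecast(y_last=0.5, h=4))
```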

5.
The M4 competition is the continuation of three previous competitions, started more than 45 years ago, whose purpose was to learn how to improve forecasting accuracy and how such learning can be applied to advance the theory and practice of forecasting. The purpose of M4 was to replicate the results of the previous ones and extend them in three directions: first, significantly increase the number of series; second, include Machine Learning (ML) forecasting methods; and third, evaluate both point forecasts and prediction intervals. The five major findings of the M4 Competition are: 1. Out of the 17 most accurate methods, 12 were “combinations” of mostly statistical approaches. 2. The biggest surprise was a “hybrid” approach that utilized both statistical and ML features. This method’s average sMAPE was close to 10% more accurate than the combination benchmark used to compare the submitted methods. 3. The second most accurate method was a combination of seven statistical methods and one ML one, with the weights for the averaging being calculated by a ML algorithm that was trained to minimize the forecasting error. 4. The two most accurate methods also achieved an amazing success in specifying the 95% prediction intervals correctly. 5. The six pure ML methods performed poorly, with none of them being more accurate than the combination benchmark and only one being more accurate than Naïve2. This paper presents some initial results of M4, its major findings and a logical conclusion. Finally, it outlines what the authors consider to be the way forward for the field of forecasting.

6.
Combination methods have performed well in time series forecast competitions. This study proposes a simple but general methodology for combining time series forecast methods. Weights are calculated using a cross-validation scheme that assigns greater weights to methods with more accurate in-sample predictions. The methodology was used to combine forecasts from the Theta, exponential smoothing, and ARIMA models, and placed fifth in the M4 Competition for both point and interval forecasting.
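
A hedged sketch of the general idea, not the authors' exact scheme: compute rolling-origin cross-validation errors for each candidate method and turn them into combination weights, here by normalised inverse error. The two toy "methods" stand in for the Theta, exponential smoothing and ARIMA models mentioned in the abstract.

```python
# Illustrative cross-validation weighting for forecast combination.
import numpy as np

def cv_weights(series, methods, n_folds=3, horizon=4):
    """Rolling-origin cross-validation errors turned into combination weights."""
    errors = np.zeros(len(methods))
    for fold in range(n_folds):
        cut = len(series) - (n_folds - fold) * horizon
        train, test = series[:cut], series[cut:cut + horizon]
        for j, method in enumerate(methods):
            errors[j] += np.mean(np.abs(method(train, horizon) - test))
    inv = 1.0 / (errors + 1e-9)
    return inv / inv.sum()

# Two toy "methods" standing in for Theta / exponential smoothing / ARIMA.
naive = lambda y, h: np.repeat(y[-1], h)
drift = lambda y, h: y[-1] + (y[-1] - y[0]) / (len(y) - 1) * np.arange(1, h + 1)

y = np.cumsum(np.random.default_rng(1).normal(0.2, 1.0, 60))
w = cv_weights(y, [naive, drift])
combined_forecast = w[0] * naive(y, 4) + w[1] * drift(y, 4)
```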

7.
This paper studies the performance of factor-based forecasts using differenced and nondifferenced data. Approximate variances of forecasting errors from the two forecasts are derived and compared. It is reported that the forecast using nondifferenced data tends to be more accurate than that using differenced data. This paper conducts simulations to compare the root mean squared forecasting errors of the two competing forecasts. Simulation results indicate that forecasting using nondifferenced data performs better. The advantage of using nondifferenced data is more pronounced when the forecasting horizon is long and the number of factors is large. This paper applies the two competing forecasting methods to 68 I(1) monthly US macroeconomic variables across a range of forecasting horizons and sampling periods. We also provide detailed forecasting analysis on US inflation and industrial production. We find that forecasts using nondifferenced data tend to outperform those using differenced data.
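
As a simple illustration of a factor-based h-step forecast of the kind compared in this paper, the sketch below extracts principal-component factors from a predictor panel and regresses the h-step-ahead target on the current factors. The toy data, the number of factors and the horizon are placeholders, not the paper's setup.

```python
# Illustrative factor-based h-step forecast on a made-up I(1) panel.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 50)).cumsum(axis=0)   # toy panel of I(1) predictors
y = 0.5 * X[:, 0] + rng.normal(size=200)        # toy target driven by one column
h = 6                                           # forecast horizon

def factor_forecast(X, y, h, r=3):
    """Regress y_{t+h} on r principal-component factors extracted from X_t."""
    F = PCA(n_components=r).fit_transform(X)
    reg = LinearRegression().fit(F[:-h], y[h:])
    return reg.predict(F[-1:])[0]

levels_fc = factor_forecast(X, y, h)            # forecast built from levels
# A differenced variant applies the same recipe to (np.diff(X, axis=0),
# np.diff(y)) and cumulates the forecast changes back onto the last level.
```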

8.
The M5 accuracy competition has presented a large-scale hierarchical forecasting problem in a realistic grocery retail setting in order to evaluate an extended range of forecasting methods, particularly those adopting machine learning. The top-ranking solutions adopted a global bottom-up approach, by which is meant using global forecasting methods to generate bottom-level forecasts in the hierarchy and then using a bottom-up strategy to obtain coherent forecasts for aggregate levels. However, whether the observed superior performance of the global bottom-up approach is robust over various test periods or only an accidental result is an important question for retail forecasting researchers and practitioners. We conduct experiments to explore the robustness of the global bottom-up approach, and comment on the efforts made by the top-ranking teams to improve the core approach. We find that the top-ranking global bottom-up approaches lack robustness across time periods in the M5 data. This inconsistent performance makes the M5 final rankings somewhat of a lottery. In future forecasting competitions, we suggest the use of multiple rolling test sets to evaluate forecasting performance in order to reward robustly performing forecasting methods, a much-needed characteristic in any application.
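
The "global bottom-up" recipe discussed here reduces to forecasting only the bottom-level series and summing them through the aggregation structure. A minimal sketch follows, with a made-up two-store hierarchy and naive bottom-level forecasts standing in for the winners' global machine learning models.

```python
# Bottom-up aggregation through a toy hierarchy:
# total = store A + store B, and each store sums two items.
import numpy as np

S = np.array([
    [1, 1, 1, 1],                # total
    [1, 1, 0, 0],                # store A
    [0, 0, 1, 1],                # store B
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],                # the four bottom-level items
])

bottom_history = np.random.default_rng(3).poisson(5, size=(4, 100))
bottom_forecasts = bottom_history.mean(axis=1)   # stand-in for a global ML forecaster
coherent = S @ bottom_forecasts                  # coherent forecasts for all 7 nodes
```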

9.
The M5 competition follows the previous four M competitions, whose purpose is to learn from empirical evidence how to improve forecasting performance and advance the theory and practice of forecasting. M5 focused on a retail sales forecasting application with the objective to produce the most accurate point forecasts for 42,840 time series that represent the hierarchical unit sales of the largest retail company in the world, Walmart, as well as to provide the most accurate estimates of the uncertainty of these forecasts. Hence, the competition consisted of two parallel challenges, namely the Accuracy and Uncertainty forecasting competitions. M5 extended the results of the previous M competitions by: (a) significantly expanding the number of participating methods, especially those in the category of machine learning; (b) evaluating the performance of the uncertainty distribution along with point forecast accuracy; (c) including exogenous/explanatory variables in addition to the time series data; (d) using grouped, correlated time series; and (e) focusing on series that display intermittency. This paper describes the background, organization, and implementations of the competition, and it presents the data used and their characteristics. Consequently, it serves as introductory material to the results of the two forecasting challenges to facilitate their understanding.

10.
Probabilistic forecasting, i.e., estimating a time series’ future probability distribution given its past, is a key enabler for optimizing business processes. In retail businesses, for example, probabilistic demand forecasts are crucial for having the right inventory available at the right time and in the right place. This paper proposes DeepAR, a methodology for producing accurate probabilistic forecasts, based on training an autoregressive recurrent neural network model on a large number of related time series. We demonstrate how the application of deep learning techniques to forecasting can overcome many of the challenges that are faced by widely-used classical approaches to the problem. By means of extensive empirical evaluations on several real-world forecasting datasets, we show that our methodology produces more accurate forecasts than other state-of-the-art methods, while requiring minimal manual work.
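
The sketch below is not DeepAR itself, only a bare-bones numpy illustration of what a probabilistic forecast is: fit a simple autoregressive model, simulate many future sample paths, and read off predictive quantiles at each horizon. DeepAR replaces the AR(1) used here with an autoregressive recurrent network trained jointly on many related series; the toy data and all settings are assumptions.

```python
# Minimal probabilistic forecast via simulation from a least-squares AR(1).
import numpy as np

rng = np.random.default_rng(5)
y = np.cumsum(rng.normal(0.1, 1.0, 200))          # toy demand-like series

# Least-squares AR(1): y_t = c + phi * y_{t-1} + eps.
A = np.column_stack([np.ones(len(y) - 1), y[:-1]])
c, phi = np.linalg.lstsq(A, y[1:], rcond=None)[0]
sigma = np.std(y[1:] - A @ np.array([c, phi]))

h, n_paths = 12, 5000
paths = np.full(n_paths, y[-1])
quantiles = []                                    # per horizon: 10th/50th/90th percentiles
for _ in range(h):
    paths = c + phi * paths + rng.normal(0, sigma, n_paths)
    quantiles.append(np.percentile(paths, [10, 50, 90]))
```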

11.
Several researchers (Armstrong, 2001; Clemen, 1989; Makridakis and Winkler, 1983) have shown empirically that combination-based forecasting methods are very effective in real-world settings. This paper discusses a combination-based forecasting approach that was used successfully in the M4 competition. The proposed approach was evaluated on a set of 100K time series across multiple domain areas with varied frequencies. The point forecasts submitted finished fourth based on the overall weighted average (OWA) error measure and second based on the symmetric mean absolute percent error (sMAPE).
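
For reference, the two error measures named here can be computed as in the sketch below, following the commonly stated M4 conventions (sMAPE with a factor of 200, MASE scaled by the in-sample seasonal naive error, and OWA as the average of the two measures expressed relative to the Naive2 benchmark); treat the exact conventions as an assumption and check the competition guide before reuse.

```python
# Sketch of sMAPE, MASE and OWA as commonly defined for M4.
import numpy as np

def smape(actual, forecast):
    return 200.0 * np.mean(np.abs(forecast - actual) /
                           (np.abs(actual) + np.abs(forecast)))

def mase(actual, forecast, insample, m=1):
    # Scale by the in-sample seasonal naive error (m = seasonal period).
    scale = np.mean(np.abs(insample[m:] - insample[:-m]))
    return np.mean(np.abs(forecast - actual)) / scale

def owa(actual, forecast, naive2, insample, m=1):
    # Average of sMAPE and MASE, each relative to the Naive2 benchmark.
    return 0.5 * (smape(actual, forecast) / smape(actual, naive2)
                  + mase(actual, forecast, insample, m)
                  / mase(actual, naive2, insample, m))
```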

12.
In this paper, we define forecast (in)stability in terms of the variability in forecasts for a specific time period caused by updating the forecast for this time period when new observations become available, i.e., as time passes. We propose an extension to the state-of-the-art N-BEATS deep learning architecture for the univariate time series point forecasting problem. The extension allows us to optimize forecasts from both a traditional forecast accuracy perspective and a forecast stability perspective. We show that the proposed extension results in forecasts that are more stable without leading to a deterioration in forecast accuracy for the M3 and M4 data sets. Moreover, our experimental study shows that it is possible to improve both forecast accuracy and stability compared to the original N-BEATS architecture, indicating that including a forecast instability component in the loss function can be used as a regularization mechanism.
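
A hedged sketch of the core idea of optimising accuracy and stability jointly: add to the usual accuracy loss a penalty on the difference between the current forecast and the forecast previously issued for the same target period. The squared-error terms and the weight lam are placeholders, not the paper's exact loss.

```python
# Illustrative composite loss: accuracy plus a forecast-instability penalty.
import numpy as np

def accuracy_plus_stability_loss(actual, forecast_now, forecast_prev_origin, lam=0.5):
    accuracy = np.mean((actual - forecast_now) ** 2)
    instability = np.mean((forecast_now - forecast_prev_origin) ** 2)
    return accuracy + lam * instability
```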

13.
This paper analyses the real-time forecasting performance of the New Keynesian DSGE model of Galí, Smets and Wouters (2012), estimated on euro area data. It investigates the extent to which the inclusion of forecasts of inflation, GDP growth and unemployment by professional forecasters improves the forecasting performance. We consider two approaches for conditioning on such information. Under the “noise” approach, the mean professional forecasts are assumed to be noisy indicators of the rational expectations forecasts implied by the DSGE model. Under the “news” approach, it is assumed that the forecasts reveal the presence of expected future structural shocks in line with those estimated in the past. The forecasts of the DSGE model are compared with those from a Bayesian VAR model, an AR(1) model, a sample mean and a random walk.

14.
We assess the marginal predictive content of a large international dataset for forecasting GDP in New Zealand, an archetypal small open economy. We apply “data-rich” factor and shrinkage methods to efficiently handle hundreds of predictor series from many countries. The methods covered are principal components, targeted predictors, weighted principal components, partial least squares, elastic net and ridge regression. We find that exploiting a large international dataset can improve forecasts relative to data-rich approaches based on a large national dataset only, and also relative to more traditional approaches based on small datasets. This is in spite of New Zealand’s business and consumer confidence and expectations data capturing a substantial proportion of the predictive information in the international data. The largest forecasting accuracy gains from including international predictors are at longer forecast horizons. The forecasting performance achievable with the data-rich methods differs widely, with shrinkage methods and partial least squares performing best in handling the international data.
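
As one concrete example of the "data-rich" shrinkage devices listed above, the sketch below runs a ridge regression of h-step-ahead GDP growth on a large standardised predictor panel. The data shapes, the horizon and the penalty are invented for illustration and are not the paper's specification.

```python
# Illustrative ridge-regression forecast from a wide predictor panel.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X = rng.normal(size=(120, 300))                     # 120 quarters, 300 predictors
gdp = 0.1 * X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=120)
h = 4                                               # four-quarter-ahead forecast

Z = StandardScaler().fit_transform(X)
model = Ridge(alpha=10.0).fit(Z[:-h], gdp[h:])      # regress y_{t+h} on X_t
forecast = model.predict(Z[-1:])[0]
```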

15.
The purpose of this study is to investigate the efficacy of combining forecasting models in order to improve earnings per share forecasts. The utility industry is used because regulation causes the accounting procedures of the firms to be more homogeneous than in other industries. Three types of forecasting models which use historical data are compared to the forecasts of the Value Line Investment Survey. It is found that the predictions of the analysts of Value Line are more accurate than the predictions of the models which use only historical data. However, the study also shows that forecasts of earnings per share can be improved by combining the predictions of Value Line with the predictions of other models. Specifically, the forecast error is smallest when the Value Line forecast is combined with the forecast of the Brown-Rozeff ARIMA model.

16.
The scientific method consists of making hypotheses or predictions and then carrying out experiments to test them once the actual results have become available, in order to learn from both successes and mistakes. This approach was followed in the M4 competition with positive results and has been repeated in the M5, with its organizers submitting their ten predictions/hypotheses about its expected results five days before its launch. The present paper presents these predictions/hypotheses and evaluates their realization according to the actual findings of the competition. The results indicate that well-established practices, like combining forecasts, exploiting explanatory variables, and capturing seasonality and special days, remain critical for enhancing forecasting performance, re-confirming also that relatively new approaches, like cross-learning algorithms and machine learning methods, display great potential. Yet, we show that simple, local statistical methods may still be competitive for forecasting high granularity data and estimating the tails of the uncertainty distribution, thus motivating future research in the field of retail sales forecasting.

17.
"The use of the Box-Jenkins approach for forecasting the population of the United States up to the year 2080 is discussed. It is shown that the Box-Jenkins approach is equivalent to a simple trend model when making long-range predictions for the United States. An investigation of forecasting accuracy indicates that the Box-Jenkins method produces population forecasts that are at least as reliable as those done with more traditional demographic methods."  相似文献   

18.
19.
In this work, we propose a novel framework for density forecast combination by constructing time-varying weights based on time-varying features. Our framework estimates weights in the forecast combination via Bayesian log predictive scores, in which the optimal forecast combination is determined by time series features from historical information. In particular, we use an automatic Bayesian variable selection method to identify the importance of different features. As a result, our approach has better interpretability compared to other black-box forecasting combination schemes. We apply our framework to stock market data and M3 competition data. Based on our structure, a simple maximum-a-posteriori scheme outperforms benchmark methods, and Bayesian variable selection can further enhance the accuracy for both point forecasts and density forecasts.
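
As a crude stand-in for the framework described here (it omits the time-varying features and the Bayesian variable selection entirely), the sketch below weights candidate Gaussian density forecasters by the exponential of their cumulative log predictive scores, so the weights evolve over time as evidence accumulates.

```python
# Simplified score-based time-varying combination weights; not the paper's method.
import numpy as np
from scipy.stats import norm

def time_varying_weights(y, means, sds):
    """y: (T,) observations; means, sds: (M, T) per-model predictive densities."""
    log_scores = norm.logpdf(y, loc=means, scale=sds)        # (M, T)
    cum = np.cumsum(log_scores, axis=1)
    w = np.exp(cum - cum.max(axis=0, keepdims=True))
    return w / w.sum(axis=0, keepdims=True)                  # one weight column per t

y = np.array([1.0, 1.2, 0.9, 1.1])
means = np.array([[1.0, 1.0, 1.0, 1.0],
                  [0.0, 0.0, 0.0, 0.0]])
weights = time_varying_weights(y, means, np.ones_like(means))
```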

20.
Computer-based demand forecasting systems have been widely adopted in supply chain companies, but little research has studied how these systems are actually used in the forecasting process. We report the findings of a case study of demand forecasting in a pharmaceutical company over a 15-year period. At the start of the study, managers believed that they were making extensive use of their forecasting system that was marketed based on the accuracy of its advanced statistical methods. Yet most forecasts were obtained using the system’s facility for judgmentally overriding the automatic statistical forecasts. Carrying out the judgmental interventions involved considerable management effort as part of a sales & operations planning (S&OP) process, yet these often only served to reduce forecast accuracy. This study uses observations of the forecasting process, interviews with participants and data on the accuracy of forecasts to investigate why the managers continued to use non-normative forecasting practices for many years despite the potential economic benefits that could be achieved through change. The reasons for the longevity of these practices are examined both from the perspective of the individual forecaster and the organization as a whole.
