Similar Documents
20 similar documents found.
1.
The ‘M4’ forecasting competition results were featured recently in a special issue of the International Journal of Forecasting and included projections for demographic time series. We investigated whether the best M4 methods could improve the accuracy of small area population forecasts, which generally suffer from much higher forecast errors than forecasts for regions with larger populations. This study applied the top ten M4 forecasting methods to produce 5- and 10-year forecasts of small area total populations using historical datasets from Australia and New Zealand. Forecasts were compared against the actual population numbers and against forecasts from two simple benchmark models. The M4 methods were found to perform relatively well compared to our benchmarks. In light of these findings, we discuss possible future directions for small area population forecasting research.

2.
The M4 competition is the continuation of three previous competitions, started more than 45 years ago, whose purpose was to learn how to improve forecasting accuracy and how such learning can be applied to advance the theory and practice of forecasting. The purpose of M4 was to replicate the results of the previous competitions and extend them in three directions: first, significantly increasing the number of series; second, including machine learning (ML) forecasting methods; and third, evaluating both point forecasts and prediction intervals. The five major findings of the M4 competition are: 1. Of the 17 most accurate methods, 12 were “combinations” of mostly statistical approaches. 2. The biggest surprise was a “hybrid” approach that utilized both statistical and ML features. This method’s average sMAPE was close to 10% more accurate than the combination benchmark used to compare the submitted methods. 3. The second most accurate method was a combination of seven statistical methods and one ML one, with the weights for the averaging being calculated by an ML algorithm trained to minimize the forecast error. 4. The two most accurate methods also achieved remarkable success in specifying the 95% prediction intervals correctly. 5. The six pure ML methods performed poorly, with none of them being more accurate than the combination benchmark and only one being more accurate than Naïve2. This paper presents some initial results of M4, its major findings, and a logical conclusion. Finally, it outlines what the authors consider to be the way forward for the field of forecasting.
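For context, a minimal sketch of the sMAPE accuracy measure used in the M competitions, evaluated against a plain naive benchmark (M4's Naïve2 additionally deseasonalizes the series first); all series values below are hypothetical:

```python
import numpy as np

def smape(actual, forecast):
    """Symmetric MAPE as used in the M competitions (in percent)."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(2.0 * np.abs(f - a) / (np.abs(a) + np.abs(f)))

# Naive benchmark: repeat the last observed value over the horizon.
# (Naive2 would apply the same rule to a seasonally adjusted series.)
history = np.array([112.0, 118.0, 132.0, 129.0, 121.0, 135.0])
horizon = 3
naive_forecast = np.repeat(history[-1], horizon)

actuals = np.array([148.0, 136.0, 119.0])  # hypothetical out-of-sample values
print(f"sMAPE of naive benchmark: {smape(actuals, naive_forecast):.2f}%")
```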

3.
A survey of models used for forecasting exchange rates and inflation reveals that factor-based and time-varying parameter or state space models generate superior forecasts relative to all other models. The survey also finds that models based on the Taylor rule and portfolio balance theory have moderate predictive power for forecasting exchange rates. The evidence on the Bayesian Model Averaging approach reveals limited predictive power for forecasting exchange rates, but strong support for forecasting inflation. Overall, the evidence overwhelmingly points to the context of the forecasts, the relevance of the historical data, data transformation, choice of benchmark, selected time horizons, sample period, and forecast evaluation methods as the crucial elements in selecting forecasting models for exchange rates and inflation.

4.
We test the predictive accuracy of forecasts of the number of COVID-19 fatalities produced by several forecasting teams and collected by the United States Centers for Disease Control and Prevention for the epidemic in the United States. We find three main results. First, at the short horizon (1 week ahead) no forecasting team outperforms a simple time-series benchmark. Second, at longer horizons (3 and 4 weeks ahead) forecasters are more successful and sometimes outperform the benchmark. Third, one of the best performing forecasts is the Ensemble forecast, which combines all available predictions using uniform weights. In view of these results, collecting a wide range of forecasts and combining them in an ensemble forecast may be a better approach for health authorities than relying on a small number of individual forecasts.
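The uniform-weight ensemble described above is just the cross-team average; a minimal sketch with hypothetical team forecasts:

```python
import numpy as np

# Stack point forecasts from several teams: shape (n_teams, horizon).
team_forecasts = np.array([
    [1200.0, 1350.0, 1500.0, 1650.0],   # team A, weeks 1-4 ahead
    [1100.0, 1280.0, 1420.0, 1580.0],   # team B
    [1300.0, 1400.0, 1550.0, 1700.0],   # team C
])

# Uniform-weight ensemble: the simple average across teams.
ensemble = team_forecasts.mean(axis=0)
print(ensemble)
```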

5.
"The use of the Box-Jenkins approach for forecasting the population of the United States up to the year 2080 is discussed. It is shown that the Box-Jenkins approach is equivalent to a simple trend model when making long-range predictions for the United States. An investigation of forecasting accuracy indicates that the Box-Jenkins method produces population forecasts that are at least as reliable as those done with more traditional demographic methods."  相似文献   

6.
Policymakers need to know whether prediction is possible and, if so, whether any proposed forecasting method will provide forecasts that are substantially more accurate than those from the relevant benchmark method. An inspection of global temperature data suggests that temperature is subject to irregular variations on all relevant time scales, and that variations during the late 1900s were not unusual. In such a situation, a “no change” extrapolation is an appropriate benchmark forecasting method. We used the UK Met Office Hadley Centre’s annual average thermometer data from 1850 through 2007 to examine the performance of the benchmark method. The accuracy of forecasts from the benchmark is such that even perfect forecasts would be unlikely to help policymakers. For example, mean absolute errors for the 20- and 50-year horizons were 0.18°C and 0.24°C, respectively. We nevertheless demonstrate the use of benchmarking with the example of the Intergovernmental Panel on Climate Change’s 1992 linear projection of long-term warming at a rate of 0.03°C per year. The small sample of errors from ex ante projections at 0.03°C per year for 1992 through 2008 was practically indistinguishable from the benchmark errors. Validation for long-term forecasting, however, requires a much longer horizon. Again using the IPCC warming rate for our demonstration, we projected the rate successively over a period analogous to that envisaged in their scenario of exponential CO2 growth (the years 1851 to 1975). The errors from the projections were more than seven times greater than the errors from the benchmark method. Relative errors were larger for longer forecast horizons. Our validation exercise illustrates the importance of determining whether it is possible to obtain forecasts that are more useful than those from a simple benchmark before making expensive policy decisions.
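A sketch of the “no change” benchmark evaluation described above, using a synthetic random-walk series as a stand-in for the Hadley Centre data (the MAE values it prints are therefore illustrative, not the paper's):

```python
import numpy as np

def no_change_errors(series, horizon):
    """Absolute errors of the 'no change' benchmark at a fixed horizon:
    the forecast for year t+h is simply the value observed in year t."""
    series = np.asarray(series, float)
    return np.abs(series[horizon:] - series[:-horizon])

# Synthetic annual temperature anomalies (deg C), standing in for 1850-2007.
rng = np.random.default_rng(0)
anomalies = np.cumsum(rng.normal(0.0, 0.1, size=158))

mae_20 = no_change_errors(anomalies, 20).mean()
mae_50 = no_change_errors(anomalies, 50).mean()
print(f"MAE at 20-year horizon: {mae_20:.2f} deg C, 50-year: {mae_50:.2f} deg C")
```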

7.
We assess the marginal predictive content of a large international dataset for forecasting GDP in New Zealand, an archetypal small open economy. We apply “data-rich” factor and shrinkage methods to efficiently handle hundreds of predictor series from many countries. The methods covered are principal components, targeted predictors, weighted principal components, partial least squares, elastic net and ridge regression. We find that exploiting a large international dataset can improve forecasts relative to data-rich approaches based on a large national dataset only, and also relative to more traditional approaches based on small datasets. This is in spite of New Zealand’s business and consumer confidence and expectations data capturing a substantial proportion of the predictive information in the international data. The largest forecasting accuracy gains from including international predictors are at longer forecast horizons. The forecasting performance achievable with the data-rich methods differs widely, with shrinkage methods and partial least squares performing best in handling the international data.
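A compressed sketch of two of the data-rich approaches named above, principal components and ridge regression, on a simulated predictor panel (sample sizes and tuning values are arbitrary assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_obs, n_predictors = 120, 300          # e.g. 30 years of quarterly data, hundreds of series
X = rng.normal(size=(n_obs, n_predictors))
y = X[:, :5].sum(axis=1) + rng.normal(size=n_obs)   # GDP growth stand-in

# Factor route: compress the predictor panel into a few principal components.
factors = PCA(n_components=8).fit_transform(X)
pc_model = Ridge(alpha=1.0).fit(factors[:-1], y[1:])    # one-step-ahead regression

# Shrinkage route: ridge regression directly on all predictors.
ridge_model = Ridge(alpha=50.0).fit(X[:-1], y[1:])

print(pc_model.predict(factors[-1:]), ridge_model.predict(X[-1:]))
```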

8.
We analyze periodic and seasonal cointegration models for bivariate quarterly observed time series in an empirical forecasting study. We include both single-equation and multiple-equation methods for those two classes of models. A VAR model in first differences, with and without cointegration restrictions, and a VAR model in annual differences are also included in the analysis, where they serve as benchmark models. Our empirical results indicate that the VAR model in first differences without cointegration is best for one-step-ahead forecasts. For longer forecast horizons, however, the VAR model in annual differences is better. When comparing periodic versus seasonal cointegration models, we find that the seasonal cointegration models tend to yield better forecasts. Finally, there is no clear indication that multiple-equation methods improve on single-equation methods.

9.
We develop a new dynamic multivariate model for the analysis and forecasting of football match results in national league competitions. The proposed dynamic model is based on the score of the predictive observation mass function for a high-dimensional panel of weekly match results. Our main interest is in forecasting whether the match result is a win, a loss, or a draw for each team. The dynamic model for delivering such forecasts can be based on three different dependent variables: the pairwise count of the number of goals, the difference between the numbers of goals, or the category of the match result (win, loss, draw). The different dependent variables require different distributional assumptions, and different dynamic model specifications can be considered for generating the forecasts. We investigate empirically which dependent variable and which dynamic model specification yield the best forecasting results. We validate the precision of the resulting forecasts, and their success in a betting simulation, in an extensive forecasting study of match results from six large European football competitions. We conclude that the dynamic model for pairwise counts delivers the most precise forecasts, while the dynamic model for the difference between counts is most successful for betting, but that both outperform benchmark and other competing models.
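The distributional core of a pairwise-count model can be illustrated with independent Poisson goal counts; the paper's score-driven dynamics, which update the expected-goal rates over time, are omitted in this static sketch (the rates below are hypothetical):

```python
import numpy as np
from scipy.stats import poisson

def match_outcome_probs(lam_home, lam_away, max_goals=10):
    """Win/draw/loss probabilities from independent Poisson goal counts.
    lam_home, lam_away: expected goals for the home and away team."""
    home = poisson.pmf(np.arange(max_goals + 1), lam_home)
    away = poisson.pmf(np.arange(max_goals + 1), lam_away)
    joint = np.outer(home, away)          # P(home scores i, away scores j)
    p_win  = np.tril(joint, -1).sum()     # i > j: home win
    p_draw = np.trace(joint)              # i == j: draw
    p_loss = np.triu(joint, 1).sum()      # i < j: home loss
    return p_win, p_draw, p_loss

print(match_outcome_probs(1.6, 1.1))   # e.g. home team expected to score 1.6 goals
```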

10.
In a data-rich environment, forecasting economic variables amounts to extracting and organizing useful information from a large number of predictors. So far, the dynamic factor model and its variants have been the most successful models for such exercises. In this paper, we investigate a category of LASSO-based approaches and evaluate their predictive abilities for forecasting twenty important macroeconomic variables. These alternative models can handle hundreds of data series simultaneously and extract useful information for forecasting. We also show, both analytically and empirically, that combining forecasts from LASSO-based models with those from dynamic factor models can reduce the mean square forecast error (MSFE) further. Our three main findings can be summarized as follows. First, for most of the variables under investigation, all of the LASSO-based models outperform dynamic factor models in the out-of-sample forecast evaluations. Second, by extracting information and formulating predictors at economically meaningful block levels, the new methods greatly enhance the interpretability of the models. Third, once forecasts from a LASSO-based approach are combined with those from a dynamic factor model by forecast combination techniques, the combined forecasts are significantly better than either the dynamic factor model forecasts or the naïve random walk benchmark.
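A minimal sketch of the comparison and combination described above, with a static principal-components regression standing in for a dynamic factor model and equal combination weights (all data simulated):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(2)
T, N = 200, 150                              # monthly observations, candidate predictors
X = rng.normal(size=(T, N))
y = 0.5 * X[:, 0] + 0.3 * X[:, 10] + rng.normal(scale=0.5, size=T)

X_past, y_next, x_today = X[:-1], y[1:], X[-1:]   # one-step-ahead setup

# LASSO: sparse selection over the full predictor panel.
lasso_fc = Lasso(alpha=0.05).fit(X_past, y_next).predict(x_today)

# Factor-model stand-in: regress the target on a few principal components.
pca = PCA(n_components=5).fit(X_past)
factor_fc = LinearRegression().fit(pca.transform(X_past), y_next).predict(pca.transform(x_today))

# Equal-weight forecast combination of the two approaches.
combined_fc = 0.5 * (lasso_fc + factor_fc)
print(lasso_fc, factor_fc, combined_fc)
```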

11.
This paper evaluates the forecasting performances of several small open-economy DSGE models relative to a closed-economy benchmark using a long span of data for Australia, Canada and the United Kingdom. We find that opening the model economy usually does not improve the quality of point and density forecasts for key domestic variables, and can even cause it to deteriorate. We show that this result can be attributed largely to an increase in the forecast error due to the more sophisticated structure of the extended setup, which is not compensated for by a better model specification. This claim is based on a Monte Carlo experiment in which an open-economy model fails to beat its closed-economy benchmark consistently even if the former is the true data generating process.

12.
This paper proposes a three-step approach to forecasting time series of electricity consumption at different levels of household aggregation. These series are linked by hierarchical constraints: global consumption is the sum of regional consumption, for example. First, benchmark forecasts are generated for all series using generalized additive models. Second, for each series, the aggregation algorithm ML-Poly, introduced by Gaillard, Stoltz, and van Erven in 2014, finds an optimal linear combination of the benchmarks. Finally, the forecasts are projected onto a coherent subspace to ensure that the final forecasts satisfy the hierarchical constraints. By minimizing a regret criterion, we show that the aggregation and projection steps improve the root mean square error of the forecasts. Our approach is tested on household electricity consumption data; experimental results suggest that successive aggregation and projection steps improve the benchmark forecasts at different levels of household aggregation.
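The final projection step can be illustrated with an unweighted orthogonal projection onto the coherent subspace; the paper minimizes a regret criterion and may weight the projection differently, so this is only a sketch:

```python
import numpy as np

# Two regions plus their total: the hierarchy is encoded by the summing
# matrix S, mapping bottom-level series to all series [total, region1, region2].
S = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

base = np.array([105.0, 40.0, 58.0])   # incoherent base forecasts (40 + 58 != 105)

# Orthogonal projection onto the coherent subspace spanned by the columns of S.
P = S @ np.linalg.inv(S.T @ S) @ S.T
coherent = P @ base
print(coherent, coherent[1] + coherent[2] - coherent[0])   # constraint now holds (≈ 0)
```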

13.
In this work, we propose a novel framework for density forecast combination by constructing time-varying weights based on time-varying features. Our framework estimates the weights in the forecast combination via Bayesian log predictive scores, in which the optimal forecast combination is determined by time series features from historical information. In particular, we use an automatic Bayesian variable selection method to identify the importance of different features. As a result, our approach has better interpretability than other black-box forecast combination schemes. We apply our framework to stock market data and M3 competition data. Within our framework, a simple maximum-a-posteriori scheme outperforms benchmark methods, and Bayesian variable selection can further enhance accuracy for both point forecasts and density forecasts.
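The backbone of log-score-based combination can be sketched as weights proportional to exponentiated cumulative log predictive scores; the paper's time-varying, feature-driven weights are not reproduced here, and the Gaussian densities below are hypothetical stand-ins for fitted models:

```python
import numpy as np
from scipy.stats import norm

# Two candidate density forecasts evaluated on past realizations.
actuals = np.array([0.2, -0.1, 0.4, 0.0])
log_scores = np.vstack([
    norm.logpdf(actuals, loc=0.1, scale=0.5),    # model 1's predictive density
    norm.logpdf(actuals, loc=0.0, scale=1.0),    # model 2's predictive density
])

# Weights proportional to exponentiated cumulative log predictive scores.
cum = log_scores.sum(axis=1)
weights = np.exp(cum - cum.max())
weights /= weights.sum()
print(weights)   # the model with the better historical density fit gets more weight
```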

14.
This paper examines the use of sparse methods to forecast the real (in the chain-linked volume sense) expenditure components of US and EU GDP in the short run, before national statistical institutions officially release the data. We estimate current-quarter nowcasts, along with one- and two-quarter forecasts, by bridging quarterly data with available monthly information announced with a much smaller delay. We solve the high-dimensionality problem of the monthly datasets by assuming sparse structures of leading indicators capable of adequately explaining the dynamics of the analyzed data. For variable selection and estimation of the forecasts, we use LASSO together with its recent modifications. We propose an adjustment that combines LASSO with principal component analysis to improve the forecasting performance. We evaluate the forecasting performance in pseudo-real-time experiments for gross fixed capital formation, private consumption, imports, and exports over the 2005–2019 sample, against benchmark ARMA and factor models. The main results suggest that sparse methods can outperform the benchmarks and identify reasonable subsets of explanatory variables. The proposed combination of LASSO and principal components further improves the forecast accuracy.
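One plausible reading of the LASSO-plus-principal-components adjustment, sketched on simulated bridged data; the paper's exact ordering and tuning of the two steps may differ:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(3)
n_q, n_m = 60, 200                                # quarters, monthly indicator series
monthly = rng.normal(size=(n_q * 3, n_m))
X_q = monthly.reshape(n_q, 3, n_m).mean(axis=1)   # bridge: quarterly averages of monthly data
gdp_growth = X_q[:, :4].sum(axis=1) + rng.normal(scale=0.5, size=n_q)

# Step 1: LASSO screens the indicator set down to a sparse subset.
# (The strong simulated signal guarantees a non-empty selection here.)
sel = Lasso(alpha=0.05).fit(X_q[:-1], gdp_growth[:-1])
keep = np.flatnonzero(sel.coef_)

# Step 2: principal components of the selected indicators feed a nowcast regression.
pca = PCA(n_components=min(3, len(keep))).fit(X_q[:-1][:, keep])
reg = LinearRegression().fit(pca.transform(X_q[:-1][:, keep]), gdp_growth[:-1])
print(reg.predict(pca.transform(X_q[-1:, keep])))   # current-quarter nowcast
```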

15.
In this paper, we assess the possibility of producing unbiased forecasts for fiscal variables in the Euro area by comparing a set of procedures that rely on different information sets and econometric techniques. In particular, we consider autoregressive moving average models, vector autoregressions, small-scale semistructural models at the national and Euro area level, institutional forecasts (Organization for Economic Co-operation and Development), and pooling. Our small-scale models are characterized by the joint modelling of fiscal and monetary policy using simple rules, combined with equations for the evolution of all the relevant fundamentals for the Maastricht Treaty and the Stability and Growth Pact. We rank models on the basis of their forecasting performance, using the mean square and mean absolute error criteria at different horizons. Overall, simple time-series methods and pooling work well and are able to deliver unbiased forecasts, or slightly upward-biased forecasts for the debt–GDP dynamics. This result is mostly due to the short sample available, the robustness of simple methods to structural breaks, and the difficulty of modelling the joint behaviour of several variables in a period of substantial institutional and economic change. A bootstrap experiment highlights that, even when the data are generated using the estimated small-scale multi-country model, simple time-series models can produce more accurate forecasts, because of their parsimonious specification.

16.
Exchange rate forecasting is hard, and the seminal result of Meese and Rogoff [Meese, R., Rogoff, K., 1983. Empirical exchange rate models of the seventies: Do they fit out of sample? Journal of International Economics 14, 3–24], that the exchange rate is well approximated by a driftless random walk, at least for prediction purposes, still stands despite much effort at constructing other forecasting models. However, in several other macro and financial forecasting applications, researchers in recent years have considered methods that effectively combine the information in a large number of time series. In this paper, I apply one such method for pooling forecasts from several different models, Bayesian Model Averaging, to the problem of pseudo out-of-sample exchange rate prediction. For most currency–horizon pairs, the Bayesian Model Averaging forecasts, using a sufficiently high degree of shrinkage, give slightly smaller out-of-sample mean square prediction errors than the random walk benchmark. The forecasts generated by this model averaging methodology are, however, very close to, but not identical to, those from the random walk forecast.
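A crude sketch of shrinkage-based model averaging for exchange-rate changes: ridge regression stands in for a shrinkage prior and a BIC-style approximation stands in for the marginal likelihood, so this illustrates only the mechanics, not Wright's actual estimator (predictor names and data are hypothetical):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
T = 150
# Hypothetical predictors from different exchange-rate models.
predictors = {
    "interest_rate_diff": rng.normal(size=T),
    "money_growth":       rng.normal(size=T),
    "output_gap":         rng.normal(size=T),
}
fx_change = rng.normal(scale=0.02, size=T)   # returns of a ~driftless random walk

forecasts, log_evidence = [], []
for name, x in predictors.items():
    X, y = x[:-1].reshape(-1, 1), fx_change[1:]
    model = Ridge(alpha=10.0).fit(X, y)      # heavy shrinkage toward a zero coefficient
    resid = y - model.predict(X)
    n = len(y)
    # Gaussian log-likelihood with a BIC-style penalty, standing in for the marginal likelihood.
    log_evidence.append(-0.5 * n * np.log(resid.var()) - 0.5 * np.log(n))
    forecasts.append(model.predict(x[-1:].reshape(1, 1))[0])

le = np.array(log_evidence)
weights = np.exp(le - le.max())
weights /= weights.sum()
print(weights, float(np.dot(weights, forecasts)))   # stays near the zero-change forecast
```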

17.
This paper studies the performance of factor-based forecasts using differenced and nondifferenced data. Approximate variances of the forecasting errors from the two forecasts are derived and compared. The forecast using nondifferenced data tends to be more accurate than that using differenced data. This paper conducts simulations to compare the root mean squared forecasting errors of the two competing forecasts. Simulation results indicate that forecasting using nondifferenced data performs better. The advantage of using nondifferenced data is more pronounced when the forecasting horizon is long and the number of factors is large. This paper applies the two competing forecasting methods to 68 I(1) monthly US macroeconomic variables across a range of forecasting horizons and sampling periods. We also provide detailed forecasting analysis of US inflation and industrial production. We find that forecasts using nondifferenced data tend to outperform those using differenced data.

18.
The accuracy of population forecasts depends in part upon the method chosen for forecasting the vital rates of fertility, mortality, and migration. Handling the stochastic propagation of error calculations precisely in demographic forecasting is hard, and this paper discusses this obstacle in stochastic cohort-component population forecasts. The uncertainty of forecasts is due to uncertain estimates of the jump-off population and to errors in the forecasts of the vital rates. Empirically based estimates of each error source are presented and propagated through a simplified analytical model of population growth that allows assessment of the role of each component in the total error. Numerical estimates are based on the errors of an actual vector ARIMA forecast of the US female population; these results broadly agree with those of the analytical model. This work shows the uncertainty in the fertility forecasts to be so much higher than that in the other sources that the latter can be ignored in the propagation of error calculations for those cohorts born after the jump-off year of the forecast. A methodology is therefore presented which greatly simplifies the propagation of error calculations. It is noted, however, that the uncertainty of the jump-off population, migration, and mortality must still be considered in the propagation of error for those alive at the jump-off time of the forecast.
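The propagation-of-error logic can be sketched by Monte Carlo simulation through a simplified growth model; every uncertainty magnitude below is an illustrative assumption, chosen so that fertility dominates as the abstract reports:

```python
import numpy as np

rng = np.random.default_rng(5)
n_sims, horizon = 10_000, 10           # simulated paths, annual steps
jumpoff_pop = 1_000_000

# Assumed (illustrative) uncertainty in each source of error:
jumpoff_err = rng.normal(1.0, 0.005, n_sims)                  # 0.5% jump-off error
fert_growth = rng.normal(0.010, 0.004, (n_sims, horizon))     # fertility-driven growth
mort_growth = rng.normal(-0.002, 0.0005, (n_sims, horizon))   # mortality component
migr_growth = rng.normal(0.003, 0.001, (n_sims, horizon))     # net migration

# Propagate all sources jointly through compound growth.
total_growth = (1 + fert_growth + mort_growth + migr_growth).prod(axis=1)
final_pop = jumpoff_pop * jumpoff_err * total_growth

print(f"median: {np.median(final_pop):,.0f}")
print(f"80% interval: {np.percentile(final_pop, [10, 90]).round(0)}")
```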

19.
When forecasting time series in a hierarchical configuration, it is necessary to ensure that the forecasts reconcile at all levels. The 2017 Global Energy Forecasting Competition (GEFCom2017) focused on addressing this topic. Quantile forecasts for eight zones and two aggregated zones in New England were required for every hour of a future month. This paper presents a new methodology for forecasting quantiles in a hierarchy which outperforms a commonly used benchmark model. A simulation-based approach was used to generate demand forecasts. Adjustments were made to each of the demand simulations to ensure that all zonal forecasts reconciled appropriately, and a weighted reconciliation approach was implemented to ensure that the bottom-level zonal forecasts summed correctly to the aggregated zonal forecasts. We show that reconciling in this manner improves the forecast accuracy. A discussion of the results and modelling performance is presented, and brief reviews of hierarchical time series forecasting and gradient boosting are also included.
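A sketch of the simulation-based route to coherent quantiles: if the aggregate is computed as the zonal sum within each simulation, the extracted quantiles reconcile by construction (the paper additionally applies a weighted reconciliation between levels, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(6)
n_sims, n_zones = 5000, 8

# Simulated hourly demand per zone, standing in for the model's demand simulations.
zone_sims = rng.gamma(shape=50.0, scale=20.0, size=(n_sims, n_zones))

# Bottom-up coherence: within each simulation the aggregate is the zonal sum,
# so quantiles taken over simulations are automatically reconciled.
total_sims = zone_sims.sum(axis=1)

quantile_levels = np.arange(0.1, 1.0, 0.1)
zone_quantiles = np.quantile(zone_sims, quantile_levels, axis=0)
total_quantiles = np.quantile(total_sims, quantile_levels)
print(zone_quantiles[4].round(1))    # zonal medians
print(total_quantiles.round(1))      # coherent aggregate quantiles
```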
