Forecasting with gradient boosted trees: augmentation, tuning, and cross-validation strategies: Winning solution to the M5 Uncertainty competition
Abstract: Deep neural networks and gradient boosted tree models have swept across the field of machine learning over the past decade, producing across-the-board advances in performance. The ability of these methods to capture feature interactions and nonlinearities makes them exceptionally powerful and, at the same time, prone to overfitting, leakage, and poor generalization in domains with target non-stationarity and collinearity, such as time-series forecasting. We offer guidance to address these difficulties and provide a framework that maximizes the chances of predictions that generalize well and deliver state-of-the-art performance. The techniques we offer for cross-validation, augmentation, and parameter tuning have been used to win several major time-series forecasting competitions, including the M5 Forecasting Uncertainty competition and the Kaggle COVID-19 Forecasting series, and, with the proper theoretical grounding, constitute the current best practices in time-series forecasting.
Keywords: Gradient boosted trees · Neural networks · Purged k-fold cross-validation · Feature engineering · Forecasting competitions · M competitions · Uncertainty · Probabilistic forecasts · Time series · Machine learning · Retail sales forecasting · Time-series forecasting
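The keywords mention purged k-fold cross-validation, the leakage-avoidance device the abstract alludes to: when validating on time-ordered data, training samples whose forecast horizons overlap the validation fold are dropped ("purged") so the split cannot leak label information. The sketch below illustrates the idea only; the function name, the fold layout, and the `purge_gap` size are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def purged_kfold_splits(n_samples, n_splits=5, purge_gap=7):
    """Yield (train_idx, val_idx) pairs for time-ordered samples.

    Contiguous folds are used in temporal order, and any training
    sample within `purge_gap` positions of the validation fold is
    removed, so targets computed over overlapping horizons cannot
    leak across the split boundary. Illustrative sketch only.
    """
    indices = np.arange(n_samples)
    fold_sizes = np.full(n_splits, n_samples // n_splits)
    fold_sizes[: n_samples % n_splits] += 1  # spread the remainder
    start = 0
    for size in fold_sizes:
        stop = start + size
        val_idx = indices[start:stop]
        # keep only training samples strictly outside the purge window
        keep = (indices < start - purge_gap) | (indices >= stop + purge_gap)
        yield indices[keep], val_idx
        start = stop
```

In use, each `(train_idx, val_idx)` pair would index the feature matrix for one fit of the boosted-tree model; widening `purge_gap` to at least the forecast horizon is what prevents the horizon-overlap leakage the abstract warns about.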
This article is indexed in ScienceDirect and other databases.