Similar Documents
20 similar documents found (search time: 625 ms)
1.
The well-developed ETS (ExponenTial Smoothing, or Error, Trend, Seasonality) method incorporates a family of exponential smoothing models in state space representation and is widely used for automatic forecasting. The existing ETS method uses information criteria for model selection by choosing an optimal model with the smallest information criterion among all models fitted to a given time series. The ETS method under such a model selection scheme suffers from computational complexity when applied to large-scale time series data. To tackle this issue, we propose an efficient approach to ETS model selection by training classifiers on simulated data to predict appropriate model component forms for a given time series. We provide a simulation study to show the model selection ability of the proposed approach on simulated data. We evaluate our approach on the widely used M4 forecasting competition dataset in terms of both point forecasts and prediction intervals. To demonstrate the practical value of our method, we showcase the performance improvements from our approach on a monthly hospital dataset.
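The baseline selection scheme this abstract describes, fitting every candidate ETS form and keeping the one with the smallest information criterion, can be sketched in a few lines. The sketch below is illustrative only: it is restricted to two additive component forms (ANN and AAN) over a coarse smoothing-parameter grid, and all function names are our own, not the paper's.

```python
import math

def ets_ann(y, alpha):
    """ETS(A,N,N), simple exponential smoothing; one-step-ahead forecasts."""
    level, preds = y[0], []
    for obs in y[1:]:
        preds.append(level)
        level = alpha * obs + (1 - alpha) * level
    return preds

def ets_aan(y, alpha, beta):
    """ETS(A,A,N), Holt's linear trend; one-step-ahead forecasts."""
    level, trend, preds = y[0], y[1] - y[0], []
    for obs in y[1:]:
        preds.append(level + trend)
        new_level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return preds

def aic(y, preds, n_params):
    """Gaussian-likelihood AIC up to an additive constant."""
    n = len(preds)
    sse = sum((a - f) ** 2 for a, f in zip(y[1:], preds))
    return n * math.log(max(sse, 1e-12) / n) + 2 * n_params

def select_ets(y):
    """Return the component form ('ANN' or 'AAN') with the smallest AIC."""
    grid = [i / 10 for i in range(1, 10)]
    cands = [(aic(y, ets_ann(y, a), 2), "ANN") for a in grid]
    cands += [(aic(y, ets_aan(y, a, b), 4), "AAN") for a in grid for b in grid]
    return min(cands)[1]
```

On a trending series this loop selects the trended form; the classifier approach in the abstract replaces exactly this fit-every-candidate loop with a single prediction from time-series features, which is where the computational saving comes from.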

2.
This paper studies the predictability of cryptocurrency time series. We compare several alternative univariate and multivariate models for point and density forecasting of four of the most capitalized series: Bitcoin, Litecoin, Ripple and Ethereum. We apply a set of crypto-predictors and rely on dynamic model averaging to combine a large set of univariate dynamic linear models and several multivariate vector autoregressive models with different forms of time variation. We find statistically significant improvements in point forecasting when using combinations of univariate models, and in density forecasting when relying on the selection of multivariate models. Both schemes deliver sizable directional predictability.
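The dynamic-model-averaging combination referenced in this abstract rests on a simple weight recursion: discount past model weights with a forgetting factor, then multiply by each model's one-step predictive likelihood. A minimal sketch, assuming Gaussian predictive densities with fixed standard deviations and toy inputs (none of this is the paper's implementation):

```python
import math

def dma_weights(y, preds, sigmas, alpha=0.99):
    """Dynamic model averaging weight recursion.

    y      : observed series
    preds  : preds[i][t] is model i's one-step-ahead forecast of y[t]
    sigmas : predictive standard deviation assumed for each model
    alpha  : forgetting factor damping the influence of old evidence
    """
    k = len(preds)
    w = [1.0 / k] * k
    for t, obs in enumerate(y):
        # Prediction step: flatten past weights toward uniform.
        w = [wi ** alpha for wi in w]
        s = sum(w)
        w = [wi / s for wi in w]
        # Update step: reweight by each model's Gaussian predictive density.
        lik = [math.exp(-0.5 * ((obs - preds[i][t]) / sigmas[i]) ** 2) / sigmas[i]
               for i in range(k)]
        w = [wi * li for wi, li in zip(w, lik)]
        s = sum(w)
        w = [wi / s for wi in w]
    return w
```

With alpha close to 1 the scheme behaves like standard Bayesian model averaging; lower values let the weights adapt faster when the best-performing model changes over time.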

3.
The relative performances of forecasting models change over time. This empirical observation raises two questions. First, is the relative performance itself predictable? Second, if so, can it be exploited in order to improve the forecast accuracy? We address these questions by evaluating the predictive abilities of a wide range of economic variables for two key US macroeconomic aggregates, namely industrial production and inflation, relative to simple benchmarks. We find that business cycle indicators, financial conditions, uncertainty and measures of past relative performances are generally useful for explaining the models’ relative forecasting performances. In addition, we conduct a pseudo-real-time forecasting exercise, where we use the information about the conditional performance for model selection and model averaging. The newly proposed strategies deliver sizable improvements over competitive benchmark models and commonly-used combination schemes. The gains are larger when model selection and averaging are based on both financial conditions and past performances measured at the forecast origin date.

4.
Global forecasting models (GFMs) that are trained across a set of multiple time series have shown superior results in many forecasting competitions and real-world applications compared with univariate forecasting approaches. One aspect of the popularity of statistical forecasting models such as ETS and ARIMA is their relative simplicity and interpretability (in terms of relevant lags, trend, seasonality, and other attributes), while GFMs typically lack interpretability, especially relating to particular time series. This reduces the trust and confidence of stakeholders when making decisions based on the forecasts without being able to understand the predictions. To mitigate this problem, we propose a novel local model-agnostic interpretability approach to explain the forecasts from GFMs. We train simpler univariate surrogate models that are considered interpretable (e.g., ETS) on the predictions of the GFM, either on samples within a neighbourhood that we obtain through bootstrapping, or straightforwardly on the one-step-ahead global black-box model forecasts of the time series that needs to be explained. Afterwards, we evaluate the explanations for the forecasts of the global models both qualitatively and quantitatively, in terms of accuracy, fidelity, stability, and comprehensibility, and are able to show the benefits of our approach.

5.
This paper introduces a novel meta-learning algorithm for time series forecast model performance prediction. We model the forecast error as a function of time series features calculated from historical time series with an efficient Bayesian multivariate surface regression approach. The minimum predicted forecast error is then used to identify an individual model or a combination of models to produce the final forecasts. It is well known that the performance of most meta-learning models depends on the representativeness of the reference dataset used for training. In such circumstances, we augment the reference dataset with a feature-based time series simulation approach, namely GRATIS, to generate a rich and representative time series collection. The proposed framework is tested using the M4 competition data and is compared against commonly used forecasting approaches. Our approach provides comparable performance to other model selection and combination approaches but at a lower computational cost and a higher degree of interpretability, which is important for supporting decisions. We also provide useful insights regarding which forecasting models are expected to work better for particular types of time series, the intrinsic mechanisms of the meta-learners, and how the forecasting performance is affected by various factors.

6.
In this work, we propose a novel framework for density forecast combination by constructing time-varying weights based on time-varying features. Our framework estimates weights in the forecast combination via Bayesian log predictive scores, in which the optimal forecast combination is determined by time series features from historical information. In particular, we use an automatic Bayesian variable selection method to identify the importance of different features. To this end, our approach has better interpretability compared to other black-box forecasting combination schemes. We apply our framework to stock market data and M3 competition data. Based on our structure, a simple maximum-a-posteriori scheme outperforms benchmark methods, and Bayesian variable selection can further enhance the accuracy for both point forecasts and density forecasts.

7.
We explore a new approach to the forecasting of macroeconomic variables based on a dynamic factor state space analysis. Key economic variables are modeled jointly with principal components from a large time series panel of macroeconomic indicators using a multivariate unobserved components time series model. When the key economic variables are observed at a low frequency and the panel of macroeconomic variables is at a high frequency, we can use our approach for both nowcasting and forecasting purposes. Given a dynamic factor model as the data generation process, we provide Monte Carlo evidence of the finite-sample justification of our parsimonious and feasible approach. We also provide empirical evidence for a US macroeconomic dataset. The unbalanced panel contains quarterly and monthly variables. The forecasting accuracy is measured against a set of benchmark models. We conclude that our dynamic factor state space analysis can lead to higher levels of forecasting precision when the panel size and time series dimensions are moderate.

8.
We develop an iterative and efficient information-theoretic estimator for forecasting interval-valued data, and use our estimator to forecast the S&P 500 returns up to five days ahead using moving windows. Our forecasts are based on 13 years of data. We show that our estimator is superior to its competitors under all of the common criteria that are used to evaluate forecasts of interval data. Our approach differs from other methods that are used to forecast interval data in two major ways. First, rather than applying the more traditional methods that use only certain moments of the intervals in the estimation process, our estimator uses the complete sample information. Second, our method simultaneously selects the model (or models) and infers the model’s parameters. It is an iterative approach that imposes minimal structure and statistical assumptions.

9.
Many businesses and industries nowadays require accurate forecasts for weekly time series. However, the forecasting literature does not currently provide easy-to-use, automatic, reproducible and accurate approaches dedicated to this task. We propose a forecasting method in this domain to fill this gap, leveraging state-of-the-art forecasting techniques, such as forecast combination, meta-learning, and global modelling. We consider different meta-learning architectures, algorithms, and base model pools. Based on all considered model variants, we propose to use a stacking approach with lasso regression which optimally combines the forecasts of four base models: a global Recurrent Neural Network (RNN) model, Theta, Trigonometric Box–Cox ARMA Trend Seasonal (TBATS), and Dynamic Harmonic Regression ARIMA (DHR-ARIMA), as it shows the overall best performance across seven experimental weekly datasets on four evaluation metrics. Our proposed method also consistently outperforms a set of benchmarks and state-of-the-art weekly forecasting models by a considerable margin with statistical significance. Our method can produce the most accurate forecasts, in terms of mean sMAPE, for the M4 weekly dataset among all benchmarks and all original competition participants.
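The stacking step in this abstract amounts to regressing the target on the base models' forecasts with an L1 penalty, so weak base models are shrunk to exactly zero. A hedged sketch of that combiner via coordinate-descent lasso, with made-up toy forecast columns standing in for the four base models (the paper's actual feature construction and penalty choice are not shown):

```python
def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent lasso: minimize 0.5*||y - X*beta||^2 + lam*||beta||_1.

    X : list of rows, one column per base model's forecasts
    y : realized values the stacked forecast should match
    """
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    col_ss = [sum(X[i][j] ** 2 for i in range(n)) for j in range(p)]
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding column j's current contribution.
            resid = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                     for i in range(n)]
            rho = sum(X[i][j] * resid[i] for i in range(n))
            # Soft-thresholding update.
            if rho < -lam:
                beta[j] = (rho + lam) / col_ss[j]
            elif rho > lam:
                beta[j] = (rho - lam) / col_ss[j]
            else:
                beta[j] = 0.0
    return beta
```

A base model whose forecasts carry no signal for the target receives a weight of exactly zero, which is the sparsity property motivating lasso over plain least-squares stacking.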

10.
The increasing penetration of intermittent renewable energy in power systems brings operational challenges. One way of addressing these challenges is to enhance the predictability of renewables through accurate forecasting. Convolutional Neural Networks (Convnets) provide a successful technique for processing space-structured multi-dimensional data. In our work, we propose the U-Convolutional model to predict hourly wind speeds for a single location using spatio-temporal data with multiple explanatory variables as an input. The U-Convolutional model is composed of a U-Net part, which synthesizes input information, and a Convnet part, which maps the synthesized data into a single-site wind prediction. We compare our approach with advanced Convnets, a fully connected neural network, and univariate models. We use time series from the Climate Forecast System Reanalysis as datasets and select temperature and u- and v-components of wind as explanatory variables. The proposed models are evaluated at multiple locations (totaling 181 target series) and multiple forecasting horizons. The results indicate that our proposal is promising for spatio-temporal wind speed prediction, with competitive performance on both time horizons for all datasets.

11.
Analysis, model selection and forecasting in univariate time series models can be routinely carried out for models in which the model order is relatively small. Under an ARMA assumption, classical estimation, model selection and forecasting can be routinely implemented with the Box–Jenkins time domain representation. However, this approach becomes at best prohibitive and at worst impossible when the model order is high. In particular, the standard assumption of stationarity imposes constraints on the parameter space that are increasingly complex. One solution within the pure AR domain is the latent root factorization in which the characteristic polynomial of the AR model is factorized in the complex domain, and where inference questions of interest and their solution are expressed in terms of the implied (reciprocal) complex roots; by allowing for unit roots, this factorization can identify any sustained periodic components. In this paper, as an alternative to identifying periodic behaviour, we concentrate on frequency domain inference and parameterize the spectrum in terms of the reciprocal roots, and, in addition, incorporate Gegenbauer components. We discuss a Bayesian solution to the various inference problems associated with model selection involving a Markov chain Monte Carlo (MCMC) analysis. One key development presented is a new approach to forecasting that utilizes a Metropolis step to obtain predictions in the time domain even though inference is being carried out in the frequency domain. This approach provides a more complete Bayesian solution to forecasting for ARMA models than the traditional approach that truncates the infinite AR representation, and extends naturally to Gegenbauer ARMA and fractionally differenced models.

12.
We develop a Bayesian median autoregressive (BayesMAR) model for time series forecasting. The proposed method utilizes time-varying quantile regression at the median, favorably inheriting the robustness of median regression in contrast to the widely used mean-based methods. Motivated by a working Laplace likelihood approach in Bayesian quantile regression, BayesMAR adopts a parametric model bearing the same structure as autoregressive models by altering the Gaussian error to Laplace, leading to a simple, robust, and interpretable modeling strategy for time series forecasting. We estimate model parameters by Markov chain Monte Carlo. Bayesian model averaging is used to account for model uncertainty, including the uncertainty in the autoregressive order, in addition to a Bayesian model selection approach. The proposed methods are illustrated using simulations and real data applications. An application to U.S. macroeconomic data forecasting shows that BayesMAR leads to favorable and often superior predictive performance compared to the selected mean-based alternatives under various loss functions that encompass both point and probabilistic forecasts. The proposed methods are generic and can be used to complement a rich class of methods that build on autoregressive models.
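The core idea behind the Laplace-error autoregression in this abstract is that maximizing a Laplace likelihood is equivalent to minimizing the sum of absolute one-step errors, i.e. median regression. A toy frequentist sketch of that equivalence for an intercept-free AR(1), using a plain grid search instead of the paper's MCMC (all names and the grid are our own):

```python
def mar1_fit(y, grid_steps=2001):
    """Median AR(1): choose the coefficient phi in [-1, 1] minimizing the
    sum of absolute one-step errors, which is the maximum-likelihood
    estimate under a Laplace error distribution."""
    best = (float("inf"), 0.0)
    for g in range(grid_steps):
        phi = -1.0 + 2.0 * g / (grid_steps - 1)
        loss = sum(abs(y[t] - phi * y[t - 1]) for t in range(1, len(y)))
        best = min(best, (loss, phi))
    return best[1]
```

Because the absolute-error loss grows only linearly in the residual, a single large outlier in the most recent observation barely moves the estimate, which is the robustness property the abstract contrasts with mean-based (squared-error) fitting.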

13.
This paper provides consistent information criteria for the selection of forecasting models that use a subset of both the idiosyncratic and common factor components of a big dataset. This hybrid model approach has been explored by recent empirical studies to relax the strictness of pure factor-augmented model approximations, but no formal model selection procedures have been developed. The main difference to previous factor-augmented model selection procedures is that we must account for estimation error in the idiosyncratic component as well as the factors. Our main contribution is to show the conditions required for selection consistency of a class of information criteria that reflect this additional source of estimation error. We show that existing factor-augmented model selection criteria are inconsistent in circumstances where N is of larger order than , where N and T are the cross-section and time series dimensions of the dataset respectively, and that the standard Bayesian information criterion is inconsistent regardless of the relationship between N and T. We therefore propose a new set of information criteria that guarantee selection consistency in the presence of estimated idiosyncratic components. The properties of these new criteria are explored through a Monte Carlo simulation study. The paper concludes with an empirical application to long-horizon exchange rate forecasting using a recently proposed model with country-specific idiosyncratic components from a panel of global exchange rates.

14.
Retailers supply a wide range of stock keeping units (SKUs), which may differ for example in terms of demand quantity, demand frequency, demand regularity, and demand variation. Given this diversity in demand patterns, it is unlikely that any single model for demand forecasting can yield the highest forecasting accuracy across all SKUs. To save costs through improved forecasting, there is thus a need to match any given demand pattern to its most appropriate prediction model. To this end, we propose an automated model selection framework for retail demand forecasting. Specifically, we consider model selection as a classification problem, where classes correspond to the different models available for forecasting. We first build labeled training data based on the models’ performances in previous demand periods with similar demand characteristics. For future data, we then automatically select the most promising model via classification based on the labeled training data. The performance is measured by economic profitability, taking into account asymmetric shortage and inventory costs. In an exploratory case study using data from an e-grocery retailer, we compare our approach to established benchmarks. We find promising results, but also that no single approach clearly outperforms its competitors, underlining the need for case-specific solutions.

15.
Interest in the use of “big data” when it comes to forecasting macroeconomic time series such as private consumption or unemployment has increased; however, applications to the forecasting of GDP remain rather rare. This paper incorporates Google search data into a bridge equation model, a version of which usually belongs to the suite of forecasting models at central banks. We show how such big data information can be integrated, with an emphasis on the appeal of the underlying model in this respect. As the decision as to which Google search terms should be added to which equation is crucial, both for the forecasting performance itself and for the economic consistency of the implied relationships, we compare different (ad-hoc, factor and shrinkage) approaches in terms of their pseudo-real-time out-of-sample forecast performances for GDP, various GDP components and monthly activity indicators. We find that sizeable gains can indeed be obtained by using Google search data, where the best-performing Google variable selection approach varies according to the target variable. Thus, assigning the selection methods flexibly to the targets leads to the most robust outcomes overall in all layers of the system.

16.
We use a dynamic modeling and selection approach for studying the informational content of various macroeconomic, monetary, and demographic fundamentals for forecasting house-price growth in the six largest countries of the European Monetary Union. The approach accounts for model uncertainty and model instability. We find superior performance compared to various alternative forecasting models. Plots of cumulative forecast errors visualize the superior performance of our approach, particularly after the recent financial crisis.

17.
In this paper we introduce a calibration procedure for validating agent-based models. Starting from the well-known financial model of Brock and Hommes (1998), we show how an appropriate calibration enables the model to describe price time series. We formulate the calibration problem as a nonlinear constrained optimization that can be solved numerically via a gradient-based method. The calibration results show that the simplest version of the Brock and Hommes model, with two trader types, fundamentalists and trend-followers, nicely replicates the price series of four different market indices: the S&P 500, the Euro Stoxx 50, the Nikkei 225 and the CSI 300. We show how the parameter values of the calibrated model are important in interpreting the trader behavior in the different markets investigated. These parameters are then used for price forecasting. To further improve the forecasting, we modify our calibration approach by increasing the trader information set. Finally, we show how this new approach improves the model's ability to predict market prices.

18.
In this paper, we evaluate the role of a set of variables as leading indicators for Euro-area inflation and GDP growth. Our leading indicators are taken from the variables in the European Central Bank's (ECB) Euro-area-wide model database, plus a set of similar variables for the US. We compare the forecasting performance of each indicator ex post with that of purely autoregressive models. We also analyse three different approaches to combining the information from several indicators. First, ex post, we discuss the use as indicators of the estimated factors from a dynamic factor model for all the indicators. Secondly, within an ex ante framework, an automated model selection procedure is applied to models with a large set of indicators. No future information is used, future values of the regressors are forecast, and the choice of the indicators is based on their past forecasting records. Finally, we consider the forecasting performance of groups of indicators and factors and methods of pooling the ex ante single-indicator or factor-based forecasts. Some sensitivity analyses are also undertaken for different forecasting horizons and weighting schemes of forecasts to assess the robustness of the results.

19.
Human dynamics and sociophysics build on statistical models that can shed light on and add to our understanding of social phenomena. We propose a generative model based on a stochastic differential equation that enables us to model the opinion polls leading up to the 2017 and 2019 UK general elections and to make predictions relating to the actual results of the elections. After a brief analysis of the time series of the poll results, we provide empirical evidence that the gamma distribution, which is often used in financial modelling, fits the marginal distribution of this time series. We demonstrate that the proposed poll-based forecasting model may improve upon predictions based solely on polls. The method uses the Euler–Maruyama method to simulate the time series, measuring the prediction error with the mean absolute error and the root mean square error, and as such could be used as part of a toolkit for forecasting elections.
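The Euler–Maruyama scheme and the two error measures named in this abstract are standard and easy to sketch. The snippet below simulates a generic mean-reverting SDE, dX = theta*(mu - X) dt + sigma dW, as a stand-in for the paper's poll model, whose actual drift and diffusion specification is not reproduced here; the parameter names are our own.

```python
import math
import random

def euler_maruyama(x0, mu, sigma, theta, dt, n, rng):
    """Simulate dX = theta*(mu - X) dt + sigma dW on n steps of size dt,
    discretizing the Brownian increment as a N(0, dt) draw per step."""
    path = [x0]
    for _ in range(n):
        x = path[-1]
        dw = rng.gauss(0.0, math.sqrt(dt))
        path.append(x + theta * (mu - x) * dt + sigma * dw)
    return path

def mae(actual, forecast):
    """Mean absolute error."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    """Root mean square error."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast))
                     / len(actual))
```

Averaging many simulated paths gives a point forecast to score with `mae` and `rmse`; with sigma set to 0 the scheme reduces to a deterministic relaxation toward mu, which is a convenient sanity check.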

20.

Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号