Similar Literature
20 similar documents retrieved.
1.
We propose a new way of selecting among model forms in automated exponential smoothing routines, thereby enhancing their predictive power. The procedure, referred to here as treating, operates by selectively subsetting the ensemble of competing models based on information from their prediction intervals. In the same spirit, we set forth a pruning strategy to improve the accuracy of both point forecasts and prediction intervals in forecast combination methods. The proposed approaches are applied to automated exponential smoothing routines and Bagging algorithms, respectively, to demonstrate their potential. An empirical experiment is conducted on a wide range of series from the M-Competitions. The results show that the proposed approaches are simple and require little additional computational cost, yet substantially improve forecasting accuracy for both point forecasts and prediction intervals, outperforming important benchmarks and recently developed forecast combination methods.

2.
This paper reviews a spreadsheet-based forecasting approach which a process industry manufacturer developed and implemented to link annual corporate forecasts with its manufacturing/distribution operations. First, we consider how this forecasting system supports overall production planning and why it must be compatible with corporate forecasts. We then review the results of substantial testing of variations on the Winters three-parameter exponential smoothing model on 28 actual product family time series. In particular, we evaluate whether the use of damping parameters improves forecast accuracy. The paper concludes that a Winters four-parameter model (i.e. the standard Winters three-parameter model augmented by a fourth parameter to damp the trend) provides the most accurate forecasts of the models evaluated. Our application confirms that there are situations where the use of damped trend parameters in short-run exponential smoothing based forecasting models is beneficial.
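A minimal sketch of the three- versus four-parameter (damped-trend) Winters comparison, assuming a recent statsmodels version where the damping option is called `damped_trend` and using a synthetic monthly series in place of the paper's product-family data:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly demand series with trend and seasonality, standing in for the
# paper's (unavailable) product-family data.
rng = np.random.default_rng(0)
idx = pd.date_range("2015-01-31", periods=84, freq="M")
y = pd.Series(100 + 0.5 * np.arange(84)
              + 10 * np.sin(2 * np.pi * np.arange(84) / 12)
              + rng.normal(0, 3, 84), index=idx)

# Standard Winters three-parameter model: additive trend and additive seasonality.
three_param = ExponentialSmoothing(y, trend="add", seasonal="add",
                                   seasonal_periods=12).fit()

# "Four-parameter" variant: the same model with an extra damping parameter on the trend.
four_param = ExponentialSmoothing(y, trend="add", damped_trend=True,
                                  seasonal="add", seasonal_periods=12).fit()

print(three_param.forecast(12))
print(four_param.forecast(12))   # damped-trend forecasts flatten out at longer horizons
```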

3.
This paper proposes a new method for combining forecasts based on complete subset regressions. For a given set of potential predictor variables, we combine forecasts from all possible linear regression models that keep the number of predictors fixed. We explore how the choice of model complexity, as measured by the number of included predictor variables, can be used to trade off the bias and variance of the forecast errors, generating a setup akin to the efficient frontier known from modern portfolio theory. In an application to predictability of stock returns, we find that combinations of subset regressions can produce more accurate forecasts than conventional approaches based on equal-weighted forecasts (which fail to account for the dimensionality of the underlying models), combinations of univariate forecasts, or forecasts generated by methods such as bagging, ridge regression or Bayesian Model Averaging.
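A small numpy sketch of the complete subset regression idea under illustrative data: for a fixed subset size k, one-step forecasts from every k-predictor OLS model are averaged.

```python
import numpy as np
from itertools import combinations

def complete_subset_forecast(X, y, x_new, k):
    """Average the forecasts of all OLS models that use exactly k of the p predictors.

    X: (T, p) predictor matrix, y: (T,) target, x_new: (p,) predictors at the forecast origin.
    """
    T, p = X.shape
    forecasts = []
    for subset in combinations(range(p), k):
        cols = list(subset)
        Xs = np.column_stack([np.ones(T), X[:, cols]])       # intercept + chosen predictors
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)        # OLS fit for this subset
        forecasts.append(np.r_[1.0, x_new[cols]] @ beta)     # forecast for the new observation
    return np.mean(forecasts)

# Toy example: p = 6 candidate predictors, all subsets of size k = 2.
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 6))
y = 0.4 * X[:, 0] - 0.2 * X[:, 3] + rng.normal(scale=0.5, size=120)
print(complete_subset_forecast(X[:-1], y[1:], X[-1], k=2))   # predict y one step ahead
```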

4.
Combining exponential smoothing forecasts using Akaike weights
Simple forecast combinations such as medians and trimmed or winsorized means are known to improve the accuracy of point forecasts, and Akaike’s Information Criterion (AIC) has given rise to so-called Akaike weights, which have been used successfully to combine statistical models for inference and prediction in specialist fields, e.g., ecology and medicine. We examine combining exponential smoothing point and interval forecasts using weights derived from the AIC, small-sample-corrected AIC and BIC on the M1 and M3 Competition datasets. Weighted forecast combinations perform better than forecasts selected using information criteria, in terms of both point forecast accuracy and prediction interval coverage. Simple combinations and weighted combinations do not consistently outperform one another, while simple combinations sometimes perform worse than single forecasts selected by information criteria. We find that a longer history tends to be associated with better prediction interval coverage.
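For reference, a minimal sketch of the standard Akaike-weight construction, w_i proportional to exp(-Delta_i/2) with Delta_i the difference to the best (lowest) information criterion value; the candidate AIC values below are illustrative only.

```python
import numpy as np

def akaike_weights(ic_values):
    """Convert information-criterion values into combination weights.

    Works for AIC, small-sample-corrected AIC or BIC alike:
    w_i is proportional to exp(-0.5 * (IC_i - min IC)).
    """
    ic = np.asarray(ic_values, dtype=float)
    delta = ic - ic.min()            # differences to the best candidate model
    w = np.exp(-0.5 * delta)
    return w / w.sum()               # normalise so the weights sum to one

# Illustrative AIC values for, say, SES, Holt and damped-trend candidates.
print(akaike_weights([341.2, 338.7, 339.5]))
```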

5.
Combination methods have performed well in time series forecast competitions. This study proposes a simple but general methodology for combining time series forecast methods. Weights are calculated using a cross-validation scheme that assigns greater weights to methods with more accurate in-sample predictions. The methodology was used to combine forecasts from the Theta, exponential smoothing, and ARIMA models, and placed fifth in the M4 Competition for both point and interval forecasting.
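A rough sketch of one plausible form of such a weighting scheme (the exact rule used in the competition entry may differ): weights inversely proportional to each method's cross-validation error, applied to placeholder Theta/ETS/ARIMA forecasts.

```python
import numpy as np

def inverse_error_weights(cv_errors):
    """Turn per-method cross-validation errors (e.g. rolling-origin MAEs) into
    combination weights: lower in-sample error -> higher weight."""
    inv = 1.0 / np.asarray(cv_errors, dtype=float)
    return inv / inv.sum()

# Illustrative rolling-origin MAEs for Theta, ETS and ARIMA (placeholder numbers).
weights = inverse_error_weights([2.1, 1.8, 2.5])

# Placeholder point forecasts from the three methods for a three-step horizon.
forecasts = np.array([[103.0, 105.0, 106.5],    # Theta
                      [102.5, 104.2, 105.8],    # ETS
                      [104.1, 106.0, 107.3]])   # ARIMA
combined = weights @ forecasts                   # weighted combination per horizon
print(weights, combined)
```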

6.
Exponential smoothing with a damped multiplicative trend
Multiplicative trend exponential smoothing has received very little attention in the literature. It involves modelling the local slope by smoothing successive ratios of the local level, and this leads to a forecast function that is the product of level and growth rate. By contrast, the popular Holt method uses an additive trend formulation. It has been argued that more real series have multiplicative trends than additive ones. However, even if this is true, it seems likely that the more conservative forecast function of the Holt method will be more robust when applied in an automated way to a large batch of series with different types of trend. In view of the improvements in accuracy seen when damping the Holt method, in this paper we investigate a new damped multiplicative trend approach. An empirical study, using the monthly time series from the M3-Competition, gave encouraging results for the new approach at a range of forecast horizons, when compared to the established exponential smoothing methods.
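One standard way of writing the damped multiplicative trend recursions (a common formulation; the paper's exact notation may differ) uses level l_t, multiplicative growth rate b_t, and damping parameter 0 < phi < 1:

```latex
\begin{aligned}
\ell_t &= \alpha\, y_t + (1-\alpha)\,\ell_{t-1}\, b_{t-1}^{\phi},\\
b_t &= \beta\,\frac{\ell_t}{\ell_{t-1}} + (1-\beta)\, b_{t-1}^{\phi},\\
\hat{y}_{t+h\mid t} &= \ell_t\, b_t^{\,\phi+\phi^{2}+\cdots+\phi^{h}}.
\end{aligned}
```

With phi = 1 this reduces to the undamped multiplicative trend method, while smaller phi values flatten the forecast function toward the conservative behaviour discussed above.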

7.
Forecast combination is a well-established and well-tested approach for improving forecasting accuracy. One beneficial strategy is to use constituent forecasts that carry diverse information. In this paper we consider achieving this diversity by using different temporal aggregations. For example, we could create a yearly time series from a monthly time series, produce forecasts for both, and then combine the forecasts. These forecasts each track the dynamics of a different time scale, and therefore add diverse types of information. A comparison of several forecast combination methods, performed in the context of this setup, shows that this is indeed a beneficial strategy that generally provides better forecasting performance than the individual forecasts being combined. As a case study, we consider the problem of forecasting monthly tourism numbers for inbound tourism to Egypt, covering 33 individual source countries as well as the aggregate. The novel combination strategy again delivers a generally improved forecasting accuracy.
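A simplified sketch of the diversity-through-aggregation idea on synthetic data, using simple exponential smoothing at both frequencies; spreading the annual forecast evenly over the months before combining is only one possible way to align the two scales and is an assumption of this sketch, not the paper's procedure.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

# Synthetic monthly series (stand-in for, e.g., monthly tourist arrivals).
rng = np.random.default_rng(2)
idx = pd.date_range("2010-01-31", periods=120, freq="M")
monthly = pd.Series(200 + 0.3 * np.arange(120) + rng.normal(0, 5, 120), index=idx)

# Temporal aggregation: build the yearly series from the monthly one.
yearly = monthly.resample("A").sum()

# Forecast each frequency separately (SES here purely for illustration).
monthly_fc = SimpleExpSmoothing(monthly).fit().forecast(12)        # next 12 months
yearly_fc = SimpleExpSmoothing(yearly).fit().forecast(1).iloc[0]   # next year's total

# Align scales: spread the annual forecast evenly across 12 months,
# then combine the two monthly-level forecasts with equal weights.
combined = 0.5 * monthly_fc.values + 0.5 * (yearly_fc / 12.0)
print(combined)
```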

8.
This research investigates the cumulative multi-period forecast accuracy of a diverse set of potential forecasting models for basin water quality management. The models are characterized by their short-term (memory by delay or memory by feedback) and long-term (linear or nonlinear) memory structures. The experiments are conducted as a series of forecast cycles, with a rolling origin and a constant fit size. The models are recalibrated with each cycle, and out-of-sample forecasts are generated for a five-period forecast horizon. The results confirm that the JENN and GMNN neural network models are generally more accurate than competitors for cumulative multi-period basin water quality prediction. For example, the JENN and GMNN models reduce the cumulative five-period forecast errors by as much as 50%, relative to exponential smoothing and ARIMA models. These findings are significant in view of the increasing social and economic consequences of basin water quality management, and have the potential for extension to other scientific, medical, and business applications where multi-period predictions of nonlinear time series are critical.
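A generic sketch of the rolling-origin evaluation protocol described (constant fit window, recalibration each cycle, cumulative five-period-ahead errors); the forecasting model here is a placeholder naive method, not the JENN/GMNN networks used in the paper.

```python
import numpy as np

def rolling_origin_cumulative_errors(y, fit_size, horizon=5, forecaster=None):
    """Roll the forecast origin forward with a constant-size fit window,
    refit each cycle, and record the cumulative h-period forecast error."""
    if forecaster is None:
        # Placeholder model: forecast every future period with the last observed value.
        forecaster = lambda history, h: np.repeat(history[-1], h)
    errors = []
    for origin in range(fit_size, len(y) - horizon + 1):
        history = y[origin - fit_size:origin]          # constant fit size
        fc = forecaster(history, horizon)              # recalibrate and forecast
        actual = y[origin:origin + horizon]
        errors.append(np.sum(actual) - np.sum(fc))     # cumulative multi-period error
    return np.array(errors)

rng = np.random.default_rng(3)
y = np.cumsum(rng.normal(0.1, 1.0, 200)) + 50          # synthetic water-quality-like series
print(np.mean(np.abs(rolling_origin_cumulative_errors(y, fit_size=100))))
```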

9.
Identifying the most appropriate time series model to achieve a good forecasting accuracy is a challenging task. We propose a novel algorithm that aims to mitigate the importance of model selection, while increasing the accuracy. Multiple time series are constructed from the original time series, using temporal aggregation. These derivative series highlight different aspects of the original data, as temporal aggregation helps in strengthening or attenuating the signals of different time series components. In each series, the appropriate exponential smoothing method is fitted and its respective time series components are forecast. Subsequently, the time series components from each aggregation level are combined, then used to construct the final forecast. This approach achieves a better estimation of the different time series components, through temporal aggregation, and reduces the importance of model selection through forecast combination. An empirical evaluation of the proposed framework demonstrates significant improvements in forecasting accuracy, especially for long-term forecasts.

10.
To improve the predictability of crude oil futures market returns, this paper proposes a new combination approach based on principal component analysis (PCA). The PCA combination approach combines individual forecasts given by all PCA subset regression models, which use all potential predictor subsets to construct PCA indexes. The proposed method can not only guard against over-fitting by employing the PCA technique but also reduce forecast variance through extensive forecast combination, thus benefiting from both the combination of information and the combination of forecasts. The PCA combination approach shows impressive out-of-sample forecasting performance, outperforming a benchmark model and many related competing models. Furthermore, from an asset allocation perspective, a mean–variance investor can realize sizeable utility gains by using the PCA combination forecasts instead of the competing forecasts.
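A rough scikit-learn sketch of the PCA-subset-regression combination idea on toy data: for each predictor subset, the predictors are compressed into a first-principal-component index, a regression forecast is produced, and the forecasts are averaged. The data, subset size, and one-component choice are illustrative assumptions, not the paper's specification.

```python
import numpy as np
from itertools import combinations
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def pca_combination_forecast(X, y, x_new, subset_size=2):
    """For every predictor subset of the given size, build a first-principal-component
    index, regress y on it, forecast, and average the forecasts across subsets."""
    forecasts = []
    for subset in combinations(range(X.shape[1]), subset_size):
        cols = list(subset)
        pca = PCA(n_components=1).fit(X[:, cols])
        factor = pca.transform(X[:, cols])                    # in-sample PCA index
        model = LinearRegression().fit(factor, y)
        factor_new = pca.transform(x_new[cols].reshape(1, -1))
        forecasts.append(model.predict(factor_new)[0])
    return np.mean(forecasts)

# Toy data standing in for crude-oil-return predictors (not the paper's dataset).
rng = np.random.default_rng(4)
X = rng.normal(size=(240, 5))
y = 0.3 * X[:, 1] + 0.2 * X[:, 4] + rng.normal(scale=1.0, size=240)
print(pca_combination_forecast(X[:-1], y[1:], X[-1]))
```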

11.
Volatility forecasts aim to measure future risk, and they are key inputs for financial analysis. In this study, we forecast the realized variance, as an observable measure of volatility, for several major international stock market indices, accounting for the different predictive information present in jump, continuous, and option-implied variance components. We allow for volatility spillovers across stock markets by using a multivariate modeling approach, and we use heterogeneous autoregressive (HAR)-type models to obtain the forecasts. Based on an out-of-sample forecast study, we show that: (i) including option-implied variances in the HAR model substantially improves the forecast accuracy, (ii) lasso-based lag selection methods do not outperform the parsimonious day-week-month lag structure of the HAR model, and (iii) cross-market spillover effects embedded in the multivariate HAR model have long-term forecasting power.
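For orientation, the baseline day-week-month HAR-RV regression uses the daily realized variance plus its weekly and monthly averages as predictors. A minimal numpy version on synthetic data (without the jump, option-implied, or multivariate spillover terms the paper adds) might look like this.

```python
import numpy as np

def fit_har(rv):
    """Fit a plain HAR-RV model: RV_{t+1} ~ const + RV_t + 5-day avg + 22-day avg."""
    rv = np.asarray(rv, dtype=float)
    t = np.arange(21, len(rv) - 1)                            # indices with 22 days of history
    daily = rv[t]
    weekly = np.array([rv[i - 4:i + 1].mean() for i in t])    # 5-day (weekly) average
    monthly = np.array([rv[i - 21:i + 1].mean() for i in t])  # 22-day (monthly) average
    X = np.column_stack([np.ones(len(t)), daily, weekly, monthly])
    beta, *_ = np.linalg.lstsq(X, rv[t + 1], rcond=None)      # OLS on next-day realized variance
    return beta

rng = np.random.default_rng(5)
rv = np.abs(rng.normal(1.0, 0.3, 1000))                       # synthetic realized-variance series
print(fit_har(rv))                                            # [const, beta_d, beta_w, beta_m]
```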

12.
This paper develops a Bayesian vector autoregressive (BVAR) model for the leader of the Portuguese car market to forecast its market share. The model includes five marketing decision variables. The Bayesian prior is selected on the basis of the accuracy of the out-of-sample forecasts. The out-of-sample accuracy of the BVAR forecasts is compared with that of forecasts from an unrestricted VAR model and of benchmark forecasts produced by three univariate models; we find that the BVAR models generally produce more accurate forecasts. Additionally, competitive dynamics are revealed through variance decompositions and impulse response analyses.

13.
The object of this paper is to produce distributional forecasts of asset price volatility and its associated risk premia using a non-linear state space approach. Option and spot market information on the latent variance process is captured by using dual ‘model-free’ variance measures to define a bivariate observation equation in the state space model. The premium for variance diffusive risk is defined as linear in the latent variance (in the usual fashion) whilst the premium for variance jump risk is specified as a conditionally deterministic dynamic process, driven by a function of past measurements. The inferential approach adopted is Bayesian, implemented via a Markov chain Monte Carlo algorithm that caters for the multiple sources of non-linearity in the model and for the bivariate measure. The method is applied to spot and option price data on the S&P500 index from 1999 to 2008, with conclusions drawn about investors’ required compensation for variance risk during the recent financial turmoil. The accuracy of the probabilistic forecasts of the observable variance measures is demonstrated, and compared with that of forecasts yielded by alternative methods. To illustrate the benefits of the approach, it is used to produce forecasts of prices of derivatives on volatility itself. In addition, the posterior distribution is augmented by information on daily returns to produce value at risk predictions. Linking the variance risk premia to the risk aversion parameter in a representative agent model, probabilistic forecasts of (approximate) relative risk aversion are also produced.

14.
It is a common practice to complement a forecasting method such as simple exponential smoothing with a monitoring scheme to detect those situations where forecasts have failed to adapt to structural change. It will be suggested in this paper that the equations for simple exponential smoothing can be augmented by a common monitoring statistic to provide a method that automatically adapts to structural change without human intervention. The resulting method, which turns out to be a restricted form of damped trend corrected exponential smoothing, is compared with related methods on the annual data from the M3 competition. It is shown to be better than simple exponential smoothing and more consistent than traditional damped trend exponential smoothing.

15.
Many methods have been suggested for seeking an efficient combination of forecasts which minimises the forecast error variance. Through analysis, simulation and case studies, this paper seeks to develop insights into the statistical circumstances which influence the relative accuracy of six of these methods. The six methods chosen have all been advocated in various publications and consist of ‘equal weighting’ (i.e., the pooled average), ‘optimal’ (i.e., error variance minimising), ‘optimal with independence assumption’ (i.e., error variance minimising assuming zero correlation between individual forecast errors) and three variations on the formulation of a Bayesian combination based upon posterior probabilities. The statistical circumstances reflected varying conditions of relative forecast errors, error correlations and outliers.
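As a reference point for the ‘optimal’ (error-variance-minimising) method in that list: if Sigma is the covariance matrix of the individual forecast errors and iota a vector of ones, the weights that minimise the combined error variance subject to summing to one are

```latex
w^{*} = \frac{\Sigma^{-1}\iota}{\iota'\,\Sigma^{-1}\iota},
\qquad
\operatorname{Var}(e_c) = w^{*\prime}\,\Sigma\, w^{*} = \frac{1}{\iota'\,\Sigma^{-1}\iota}.
```

Setting the off-diagonal elements of Sigma to zero gives the ‘optimal with independence assumption’ weights, which are proportional to the inverse of each method's error variance, while ‘equal weighting’ ignores Sigma entirely.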

16.
Short-term forecasting of crime
The major question investigated is whether it is possible to accurately forecast selected crimes 1 month ahead in small areas, such as police precincts. In a case study of Pittsburgh, PA, we contrast the forecast accuracy of univariate time series models with naïve methods commonly used by police. A major result, expected for the small-scale data of this problem, is that average crime count by precinct is the major determinant of forecast accuracy. A fixed-effects regression model of absolute percent forecast error shows that such counts need to be on the order of 30 or more to achieve accuracy of 20% absolute forecast error or less. A second major result is that practically any model-based forecasting approach is vastly more accurate than current police practices. Holt exponential smoothing with monthly seasonality estimated using city-wide data is the most accurate forecast model for precinct-level crime series.

17.
It has been documented that investments in Research and Development (R&D) are associated with increased errors and inaccuracy in earnings forecasts made by financial analysts. These deficiencies have generally been attributed to information complexity and the uncertainty of the future benefits of R&D. This paper examines whether the capitalization of development costs can reduce analyst uncertainty about the future economic outcome of R&D investments, provide outsiders with a better matching of future R&D-related revenues and costs, and therefore promote accuracy in analyst forecasts. UK data are used because accounting rules in the United Kingdom permitted firms to conditionally capitalize development costs even before the introduction of the International Financial Reporting Standards. The choice to expense R&D rather than conditionally capitalize development costs is found to relate positively to signed analyst forecast errors. This finding is robust to controlling for the influence of other factors that may affect errors, as well as for the influence of R&D investments on forecast errors. The decision to capitalize versus expense is not observed to have a significant influence on analyst forecast revisions. The findings are interpreted as evidence that the choice to capitalize as opposed to expense may help to reduce deficiencies in analyst forecasts and hence is informative for users of financial statements. Increased informativeness is expected to have repercussions for the effectiveness with which analysts produce earnings forecasts and, as a result, for market efficiency.

18.
We evaluate the performances of various methods for forecasting tourism data. The data used include 366 monthly series, 427 quarterly series and 518 annual series, all supplied to us by either tourism bodies or academics who had used them in previous tourism forecasting studies. The forecasting methods implemented in the competition are univariate and multivariate time series approaches, and econometric models. This forecasting competition differs from previous competitions in several ways: (i) we concentrate on tourism data only; (ii) we include approaches with explanatory variables; (iii) we evaluate the forecast interval coverage as well as the point forecast accuracy; (iv) we observe the effect of temporal aggregation on the forecasting accuracy; and (v) we consider the mean absolute scaled error as an alternative forecasting accuracy measure. We find that pure time series approaches provide more accurate forecasts for tourism data than models with explanatory variables. For seasonal data we implement three fully automated pure time series algorithms that generate accurate point forecasts, and two of these also produce forecast coverage probabilities which are satisfactorily close to the nominal rates. For annual data we find that Naïve forecasts are hard to beat.
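The mean absolute scaled error used as the alternative accuracy measure is usually defined as the out-of-sample mean absolute error scaled by the in-sample mean absolute error of the seasonal naïve forecast; with forecast horizon h, in-sample length n and seasonal period m (m = 1 for annual data), a standard definition is

```latex
\mathrm{MASE} =
\frac{\dfrac{1}{h}\sum_{j=1}^{h}\bigl|y_{n+j} - \hat{y}_{n+j}\bigr|}
     {\dfrac{1}{\,n-m\,}\sum_{t=m+1}^{n}\bigl|y_{t} - y_{t-m}\bigr|}.
```

Values below one indicate forecasts that beat the in-sample seasonal naïve benchmark on average.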

19.
蒋惠园  张安顺 《物流技术》2020,(2):44-47,140
To reduce the error and improve the accuracy of port container throughput forecasts, this paper proposes a combined forecasting model based on the elasticity coefficient method, the grey model, and triple exponential smoothing, and applies it to forecast the container throughput of Wuhan Port in future target years. The results show that, compared with any single forecasting method, the combined model reduces error and improves accuracy, yielding more satisfactory forecasts.
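A hedged sketch of two of the three component methods on illustrative data: a classic GM(1,1) grey model and an exponential smoothing forecast (an additive-trend statsmodels fit stands in for the paper's triple exponential smoothing), combined with equal weights. The throughput figures, the weights, and the omitted elasticity-coefficient forecast are all placeholders rather than the Wuhan Port results.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def gm11_forecast(x0, steps):
    """Classic GM(1,1) grey model: fit on the cumulative series, forecast, difference back."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)
    z = 0.5 * (x1[1:] + x1[:-1])                      # background (mean-generated) values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # development coefficient and grey input
    k = np.arange(1, len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a # fitted cumulative series
    x0_hat = np.diff(np.r_[x0[0], x1_hat])            # back to the original scale
    return x0_hat[-steps:]

# Illustrative annual container throughput (10^4 TEU); not the Wuhan Port figures.
throughput = np.array([55, 63, 72, 78, 85, 96, 104, 115, 121, 134], dtype=float)

gm = gm11_forecast(throughput, steps=3)
es = ExponentialSmoothing(pd.Series(throughput), trend="add").fit().forecast(3).values

# Equal weights purely for illustration; the paper derives its own combination weights.
print(0.5 * gm + 0.5 * es)
```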

20.
This paper presents empirical evidence on how judgmental adjustments affect the accuracy of macroeconomic density forecasts. Judgment is defined as the difference between professional forecasters’ densities and the forecast densities from statistical models. Using entropic tilting, we evaluate whether judgments about the mean, variance and skew improve the accuracy of density forecasts for UK output growth and inflation. We find that not all judgmental adjustments help. Judgments about point forecasts tend to improve density forecast accuracy at short horizons and at times of heightened macroeconomic uncertainty. Judgments about the variance hinder accuracy at short horizons, but can improve tail risk forecasts at longer horizons. Judgments about skew generally detract from accuracy, with gains seen only for longer-horizon output growth forecasts, when statistical models took longer to learn that downside risks had receded with the end of the Great Recession. Overall, density forecasts from statistical models prove hard to beat.
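For context, entropic tilting re-weights equally weighted draws x_1,...,x_N from the model's predictive density so that chosen moments match the judgmental targets g-bar, while staying as close as possible, in Kullback-Leibler divergence, to the original density. Under the standard exponential-tilting solution (stated here as a general result, not the paper's specific implementation), the new weights are

```latex
\pi_i^{*} = \frac{\exp\{\gamma^{*\prime} g(x_i)\}}{\sum_{j=1}^{N}\exp\{\gamma^{*\prime} g(x_j)\}},
\qquad
\gamma^{*} = \arg\min_{\gamma}\ \frac{1}{N}\sum_{i=1}^{N}\exp\{\gamma'\,(g(x_i)-\bar{g})\},
```

where g(.) collects the moments being adjusted (mean, variance or skew in the paper's application).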
