Similar Articles (20 results)
1.
Exponential smoothing with a damped multiplicative trend
Multiplicative trend exponential smoothing has received very little attention in the literature. It involves modelling the local slope by smoothing successive ratios of the local level, and this leads to a forecast function that is the product of the level and the growth rate. By contrast, the popular Holt method uses an additive trend formulation. It has been argued that more real series have multiplicative trends than additive ones. However, even if this is true, it seems likely that the more conservative forecast function of the Holt method will be more robust when applied in an automated way to a large batch of series with different types of trend. In view of the improvements in accuracy seen from damping the Holt method, in this paper we investigate a new damped multiplicative trend approach. An empirical study, using the monthly time series from the M3-Competition, gave encouraging results for the new approach at a range of forecast horizons, when compared to the established exponential smoothing methods.
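The abstract describes a forecast function that multiplies the level by the growth rate raised to a damped sum of powers. The sketch below illustrates that idea; the function name, initialization, and parameter values are assumptions for illustration, not the authors' exact specification:

```python
def damped_mult_trend_forecast(y, alpha=0.3, beta=0.1, phi=0.9, h=4):
    """Damped multiplicative-trend exponential smoothing (sketch).

    The growth component is a ratio (1.0 = no growth), and the h-step
    forecast raises it to the damped exponent phi + phi^2 + ... + phi^h.
    """
    level, growth = y[0], 1.0  # assumed simple initialization
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * prev_level * growth ** phi
        growth = beta * (level / prev_level) + (1 - beta) * growth ** phi
    # Forecasts for horizons 1..h: level times growth to the damped sum.
    return [level * growth ** sum(phi ** i for i in range(1, k + 1))
            for k in range(1, h + 1)]
```

On a flat series the growth ratio stays at 1 and every horizon's forecast equals the level, which is the conservative behaviour damping is meant to encourage.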

2.
This paper reviews a spreadsheet-based forecasting approach which a process industry manufacturer developed and implemented to link annual corporate forecasts with its manufacturing/distribution operations. First, we consider how this forecasting system supports overall production planning and why it must be compatible with corporate forecasts. We then review the results of substantial testing of variations on the Winters three-parameter exponential smoothing model on 28 actual product family time series. In particular, we evaluate whether the use of damping parameters improves forecast accuracy. The paper concludes that a Winters four-parameter model (i.e. the standard Winters three-parameter model augmented by a fourth parameter to damp the trend) provides the most accurate forecasts of the models evaluated. Our application confirms that there are situations where the use of damped trend parameters in short-run exponential smoothing based forecasting models is beneficial.

3.
This article proposes a new technique for estimating trend and multiplicative seasonality in time series data. The technique is computationally quite straightforward and gives better forecasts (in a sense described below) than other commonly used methods. Like many other methods, the one presented here is basically a decomposition technique, that is, it attempts to isolate and estimate the several subcomponents in the time series. It draws primarily on regression analysis for its power and has some of the computational advantages of exponential smoothing. In particular, old estimates of base, trend, and seasonality may be smoothed with new data as they occur. The basic technique was developed originally as a way to generate initial parameter values for a Winters exponential smoothing model [4], but it proved to be a useful forecasting method in itself.

The objective in all decomposition methods is to separate somehow the effects of trend and seasonality in the data, so that the two may be estimated independently. When seasonality is modeled with an additive form (Datum = Base + Trend + Seasonal Factor), techniques such as regression analysis with dummy variables or ratio-to-moving-average techniques accomplish this task well. It is more common, however, to model seasonality as a multiplicative form (as in the Winters model, for example, where Datum = [Base + Trend] * Seasonal Factor). In this case, it can be shown that neither of the techniques above achieves a proper separation of the trend and seasonal effects, and in some instances they may give highly misleading results. The technique described in this article attempts to deal properly with multiplicative seasonality, while remaining computationally tractable.

The technique is built on a set of simple regression models, one for each period in the seasonal cycle. These models are used to estimate individual seasonal effects and are then pooled to estimate the base and trend. As new data occur, they are smoothed into the least-squares formulas with computations that are quite similar to those used in ordinary exponential smoothing. Thus, the full least-squares computations are done only once, when the forecasting process is first initiated. Although the technique is demonstrated here under the assumption that trend is linear, the trend may, in fact, assume any form for which curve-fitting tools are available (exponential, polynomial, etc.).

The method has proved to be easy to program and execute, and computational experience has been quite favorable. It is faster than the ratio-to-moving-average (RTMA) method or regression with dummy variables (which requires a multiple regression routine), and it is competitive with, although a bit slower than, ordinary triple exponential smoothing.

4.
It is common practice to evaluate fixed-event forecast revisions in macroeconomics by regressing current forecast revisions on one-period lagged forecast revisions. Under weak-form (forecast) efficiency, the correlation between the current and one-period lagged revisions should be zero. The empirical findings in the literature suggest that this null hypothesis of zero correlation is rejected frequently, and the correlation can be either positive (which is widely interpreted in the literature as “smoothing”) or negative (which is widely interpreted as “over-reacting”). We propose a methodology for interpreting such non-zero correlations in a straightforward and clear manner. Our approach is based on the assumption that numerical forecasts can be decomposed into both an econometric model and random expert intuition. We show that the interpretation of the sign of the correlation between the current and one-period lagged revisions depends on the process governing intuition, and the current and lagged correlations between intuition and news (or shocks to the numerical forecasts). It follows that the estimated non-zero correlation cannot be given a direct interpretation in terms of either smoothing or over-reaction.

5.
Combining exponential smoothing forecasts using Akaike weights
Simple forecast combinations such as medians and trimmed or winsorized means are known to improve the accuracy of point forecasts, and Akaike’s Information Criterion (AIC) has given rise to so-called Akaike weights, which have been used successfully to combine statistical models for inference and prediction in specialist fields, e.g., ecology and medicine. We examine combining exponential smoothing point and interval forecasts using weights derived from AIC, small-sample-corrected AIC and BIC on the M1 and M3 Competition datasets. Weighted forecast combinations perform better than forecasts selected using information criteria, in terms of both point forecast accuracy and prediction interval coverage. Simple combinations and weighted combinations do not consistently outperform one another, while simple combinations sometimes perform worse than single forecasts selected by information criteria. We find a tendency for a longer history to be associated with better prediction interval coverage.
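Akaike weights themselves follow a standard formula: each model's AIC is differenced against the smallest AIC, converted to a relative likelihood exp(-Δ/2), and normalized so the weights sum to one. A minimal sketch (the function name is illustrative):

```python
import math

def akaike_weights(aics):
    """Convert a list of per-model AIC values into combination weights."""
    best = min(aics)
    # Relative likelihood of each model versus the best one.
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]
```

The combined forecast is then the weight-averaged sum of the component models' forecasts; the same weights can be applied per horizon.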

6.
This paper presents the winning submission of the M4 forecasting competition. The submission utilizes a dynamic computational graph neural network system that enables a standard exponential smoothing model to be mixed with advanced long short-term memory (LSTM) networks in a common framework. The result is a hybrid and hierarchical forecasting method.

7.
Exponential smoothing is commonly used in automatic forecasting systems. However, when only a small amount of historical data is relevant to future demands, the ad hoc startup methods used in exponential smoothing produce unexpected results. With large data sets, an exponentially smoothed average implicitly weights the data in a declining manner, similar to discounting. This pattern is important in that it minimizes a measure of forecast error. However, restarting with limited data distorts the weighting pattern. A new technique, termed the declining alpha method, is presented and shown to preserve the exponential weight pattern. The key is a formula that changes the smoothing constant each period. Examples are given to illustrate the method and contrast it to other startup techniques.
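The abstract does not state the declining-alpha formula itself, but one formula that preserves the normalized exponential weight pattern on a short history uses a period-dependent smoothing constant a_t = alpha / (1 - (1 - alpha)^t), analogous to a bias correction. The sketch below illustrates the idea under that assumption:

```python
def declining_alpha_average(y, alpha=0.2):
    """Smoothed average whose constant shrinks each period so that the
    weights on a short history keep the normalized exponential pattern.
    A sketch in the spirit of the declining-alpha idea, not necessarily
    the paper's exact formula.
    """
    avg = y[0]
    for t, obs in enumerate(y[1:], start=2):
        # Period-dependent smoothing constant; tends to alpha as t grows.
        a_t = alpha / (1 - (1 - alpha) ** t)
        avg = a_t * obs + (1 - a_t) * avg
    return avg
```

For two observations with alpha = 0.2, the result weights them in proportion to 1 and 0.8 (normalized), exactly the truncated exponential pattern; with a long history, a_t converges to the ordinary fixed alpha.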

8.
曾中文 《物流科技》2007,30(8):44-46
This paper analyzes the components and basic structure of time-series methods for logistics demand forecasting, and implements computer programs to build three time-series demand forecasting models: simple exponential smoothing, adaptive exponential smoothing, and Holt exponential smoothing, providing support for inventory control at distribution centers.

9.
Research on port logistics demand forecasting and development models for the Guangxi Beibu Gulf Port
As a combined port, the Guangxi Beibu Gulf Port faces steadily growing port logistics demand. To track its development, this paper builds a port logistics demand forecasting model using triple exponential smoothing. Based on port throughput over the ten years 2000-2009, throughput for 2010-2012 is forecast, and policy suggestions for the sound development of port logistics are offered.
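The triple exponential smoothing referred to here is commonly implemented as Brown's triple smoothing with a quadratic forecast function; the sketch below follows that textbook formulation (the initialization and parameter value are illustrative assumptions, not the paper's):

```python
def triple_exp_smoothing_forecast(y, alpha=0.3, m=1):
    """Brown's triple (cubic) exponential smoothing, m-step forecast."""
    s1 = s2 = s3 = y[0]  # assumed flat initialization of all three series
    for obs in y:
        s1 = alpha * obs + (1 - alpha) * s1  # singly smoothed
        s2 = alpha * s1 + (1 - alpha) * s2   # doubly smoothed
        s3 = alpha * s2 + (1 - alpha) * s3   # triply smoothed
    # Quadratic forecast coefficients (standard Brown formulas).
    a = 3 * s1 - 3 * s2 + s3
    b = (alpha / (2 * (1 - alpha) ** 2)) * (
        (6 - 5 * alpha) * s1 - (10 - 8 * alpha) * s2 + (4 - 3 * alpha) * s3)
    c = (alpha ** 2 / (1 - alpha) ** 2) * (s1 - 2 * s2 + s3)
    return a + b * m + 0.5 * c * m ** 2
```

On a constant series the trend and curvature terms vanish and the forecast equals the level; on a long linear series the method reproduces the linear continuation once the initialization transient has died out.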

10.
Forecasting the real estate market is an important part of studying the Chinese economy. Most existing methods impose strict requirements on the input variables and involve complex parameter estimation. To obtain better prediction results, a modified Holt's exponential smoothing (MHES) method is proposed to predict housing prices from historical data. Unlike traditional exponential smoothing models, MHES places different weights on historical data, and the smoothing parameters depend on the sample size. The proposed MHES also incorporates the whale optimization algorithm (WOA) to obtain the optimal parameters. Housing price data from Kunming, Changchun, Xuzhou and Handan were used to test the performance of the model. The results for the four cities indicate that the proposed method has smaller prediction errors and shorter computation times than traditional models. Therefore, WOA-MHES can be applied efficiently to housing price forecasting and can be a reliable tool for market investors and policy makers.

11.
In this paper, sequential procedures are proposed for jointly monitoring all elements of the lag-0 covariance matrix of a multivariate time series. All control charts are based on exponential smoothing, and the Mahalanobis distance is used as the measure of distance between the target values and the actual values. We distinguish between residual control schemes and modified control schemes. Several properties of these charts are proved under the assumption that the target process is a stationary Gaussian process. Within an extensive Monte Carlo study, all procedures are compared with each other, using the average run length as the measure of a control chart's performance. An empirical example on Eastern European stock markets illustrates how the autocovariance and cross-covariance structure of financial assets can be monitored by these methods.

12.
This paper constructs hybrid forecasts that combine forecasts from vector autoregressive (VAR) model(s) with both short- and long-term expectations from surveys. Specifically, we use relative entropy to tilt one-step-ahead and long-horizon VAR forecasts to match the nowcasts and long-horizon forecasts from the Survey of Professional Forecasters. We consider a variety of VAR models, ranging from simple fixed-parameter models to time-varying-parameter models. The results across models indicate meaningful gains in multi-horizon forecast accuracy relative to model forecasts that do not incorporate long-term survey conditions. Accuracy improvements are achieved for a range of variables, including those that are not tilted directly but are affected through spillover effects from tilted variables. The accuracy gains for hybrid inflation forecasts from simple VARs are substantial, statistically significant, and competitive with time-varying VARs, univariate benchmarks, and survey forecasts. We view our proposal as an indirect approach to accommodating structural change and moving end points.

13.
Identifying the most appropriate time series model to achieve a good forecasting accuracy is a challenging task. We propose a novel algorithm that aims to mitigate the importance of model selection, while increasing the accuracy. Multiple time series are constructed from the original time series, using temporal aggregation. These derivative series highlight different aspects of the original data, as temporal aggregation helps in strengthening or attenuating the signals of different time series components. In each series, the appropriate exponential smoothing method is fitted and its respective time series components are forecast. Subsequently, the time series components from each aggregation level are combined, then used to construct the final forecast. This approach achieves a better estimation of the different time series components, through temporal aggregation, and reduces the importance of model selection through forecast combination. An empirical evaluation of the proposed framework demonstrates significant improvements in forecasting accuracy, especially for long-term forecasts.
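The first step of such a framework, non-overlapping temporal aggregation, can be illustrated with a short sketch (averaging is one common aggregation choice; the function name is illustrative):

```python
def temporally_aggregate(y, level):
    """Non-overlapping temporal aggregation: average consecutive blocks
    of `level` observations; a trailing incomplete block is dropped."""
    n = len(y) // level * level
    return [sum(y[i:i + level]) / level for i in range(0, n, level)]
```

Aggregating, say, monthly data at levels 3, 6 and 12 yields quarterly, semi-annual and annual views of the same series; high-frequency components such as seasonality are attenuated at higher levels, while the trend signal is strengthened, which is what lets each level's fitted components contribute something different to the combined forecast.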

14.
This paper considers the problem of forecasting under continuous and discrete structural breaks and proposes weighting observations to obtain optimal forecasts in the MSFE sense. We derive optimal weights for one-step-ahead forecasts. Under continuous breaks, our approach largely recovers exponential smoothing weights. Under discrete breaks, we provide analytical expressions for optimal weights in models with a single regressor, and asymptotically valid weights for models with more than one regressor. It is shown that in these cases the optimal weight is the same across observations within a given regime and differs only across regimes. In practice, where information on structural breaks is uncertain, a forecasting procedure based on robust optimal weights is proposed. The relative performance of our proposed approach is investigated using Monte Carlo experiments and an empirical application to forecasting real GDP using the yield curve across nine industrial economies.

15.
It has long been known that combination forecasting strategies produce superior out-of-sample forecasting performances. In the M4 forecasting competition, a very simple forecast combination strategy achieved third place on yearly time series. An analysis of the ensemble model and its component models suggests that the competitive accuracy comes from avoiding poor forecasts, rather than from beating the best individual models. Moreover, the simple ensemble model can be fitted very quickly, can easily scale horizontally with additional CPU cores or a cluster of computers, and can be implemented by users very quickly and easily. This approach might be of particular interest to users who need accurate yearly forecasts without being able to spend significant time, resources, or expertise on tuning models. Users of the R statistical programming language can access this modeling approach using the “forecastHybrid” package.
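A horizon-by-horizon median across component models, similar in spirit to the simple combination described (the actual forecastHybrid package offers several combination schemes; this is only an illustrative sketch), can be written as:

```python
import statistics

def median_combination(forecasts):
    """Combine component-model forecast paths horizon by horizon
    using the median. `forecasts` is a list of per-model lists, each
    holding one forecast per horizon."""
    return [statistics.median(vals) for vals in zip(*forecasts)]
```

The median is what makes the combination robust: a single badly wrong component model cannot drag the combined forecast far, which matches the observation that the ensemble wins by avoiding poor forecasts.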

16.
Interest in density forecasts (as opposed to solely modeling the conditional mean) arises from the possibility of dynamics in higher moments of a time series, as well as in forecasting the probability of future events in some applications. By combining the idea of Markov bootstrapping with that of kernel density estimation, this paper presents a simple non-parametric method for estimating out-of-sample multi-step density forecasts. The paper also considers a host of evaluation tests for examining the dynamic misspecification of estimated density forecasts by targeting autocorrelation, heteroskedasticity and neglected non-linearity. These tests are useful, as a rejection of the tests gives insight into ways to improve a particular forecasting model. In an extensive Monte Carlo analysis involving a range of commonly used linear and non-linear time series processes, the non-parametric method is shown to work reasonably well across the simulated models for a suitable choice of the bandwidth (smoothing parameter). Furthermore, an application of the method to the U.S. Industrial Production series provides multi-step density forecasts that show no sign of dynamic misspecification.

17.
In Winters’ seasonal exponential smoothing methods, a time series is decomposed into level, trend and seasonal components that change over time. The seasonal factors are initialized so that their average is 0 in the additive version or 1 in the multiplicative version. Usually, only one seasonal factor is updated each period, and the average of the seasonal factors is then no longer 0 or 1; the ‘seasonal factors’ no longer fit the usual meaning of seasonal factors. We provide an equivalent reformulation of previous equations for renormalizing the components in the additive version. This form of the renormalization equations is then adapted into new renormalization formulas for the multiplicative Winters’ method. For both the standard and renormalized equations we make a minor change to the seasonal equation. Predictions from our renormalized smoothed values are the same as those from the original smoothed values. The formulas can be applied every period, or only when required; however, we recommend renormalization every period. We show that in the multiplicative version the level and trend should be adjusted along with the seasonal component.
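The renormalization idea can be illustrated for both versions. This is a simplified sketch, not the paper's exact equations: in particular, the multiplicative version below simply absorbs the scale factor into the level and trend, which is one way to honour the point that both must be adjusted along with the seasonal component:

```python
def renormalize_additive(level, seasonals):
    """Shift additive seasonal factors to average zero; the level
    absorbs the shift, so predictions are unchanged."""
    mean_s = sum(seasonals) / len(seasonals)
    return level + mean_s, [s - mean_s for s in seasonals]

def renormalize_multiplicative(level, trend, seasonals):
    """Scale multiplicative seasonal factors to average one; the level
    and (additive) trend absorb the scale, so predictions are unchanged."""
    mean_s = sum(seasonals) / len(seasonals)
    return level * mean_s, trend * mean_s, [s / mean_s for s in seasonals]
```

In both cases the product or sum that enters the forecast is invariant: level + seasonal is unchanged in the additive version, and level * seasonal is unchanged in the multiplicative one.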

18.
This paper describes the approach that we implemented for producing the point forecasts and prediction intervals for our M4-competition submission. The proposed simple combination of univariate models (SCUM) is a median combination of the point forecasts and prediction intervals of four models, namely exponential smoothing, complex exponential smoothing, automatic autoregressive integrated moving average and dynamic optimised theta. Our submission performed very well in the M4-competition, ranking 6th for both the point forecasts (by a small margin relative to the 2nd-placed submission) and the prediction intervals, and 2nd and 3rd for the point forecasts of the weekly and quarterly data, respectively.

19.
We use a broad-range set of inflation models and pseudo out-of-sample forecasts to assess their predictive ability among 14 emerging market economies (EMEs) at different horizons (1–12 quarters ahead) with quarterly data over the period 1980Q1-2016Q4. We find, in general, that a simple arithmetic average of the current and three previous observations (the RW-AO model) consistently outperforms its standard competitors—based on the root mean squared prediction error (RMSPE) and on the accuracy in predicting the direction of change. These include conventional models based on domestic factors, existing open-economy Phillips curve-based specifications, factor-augmented models, and time-varying parameter models. Often, the RMSPE and directional accuracy gains of the RW-AO model are shown to be statistically significant. Our results are robust to forecast combinations, intercept corrections, alternative transformations of the target variable, different lag structures, and additional tests of (conditional) predictability. We argue that the RW-AO model is successful among EMEs because it is a straightforward method to downweight later data, which is a useful strategy when there are unknown structural breaks and model misspecification.
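The RW-AO benchmark itself is easy to state: the forecast is the arithmetic average of the current observation and the three previous ones (the function name is illustrative):

```python
def rw_ao_forecast(y):
    """RW-AO benchmark: forecast equals the mean of the last four
    observations (or of all observations, if fewer are available)."""
    recent = y[-4:]
    return sum(recent) / len(recent)
```

Despite its simplicity, this is exactly the kind of rule that copes with unknown structural breaks: it reacts to recent data without being dominated by the single latest observation, as a pure random walk would be.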

20.
Some relations between the smoothness of statistical functionals and the locally uniform validity of Efron's bootstrap for the estimators generated by those functionals are studied. It is shown that Fréchet differentiability easily implies locally uniform bootstrap validity. The important example of the empirical median indicates that stronger regularity conditions are indeed necessary to make the bootstrap a very reliable tool. It is also shown that a smoothing method based on Huber's M-estimation, applied to the median, may lead to an improved simple bootstrap there.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号