Similar documents
Found 20 similar documents (search time: 140 ms)
1.
Forecasting monthly and quarterly time series using STL decomposition
This paper is a re-examination of the benefits and limitations of decomposition and combination techniques in the area of forecasting, and also a contribution to the field, offering a new forecasting method. The new method is based on the disaggregation of time series components through the STL decomposition procedure, the extrapolation of linear combinations of the disaggregated sub-series, and the reaggregation of the extrapolations to obtain estimates for the global series. When the forecasting method is applied to data from the NN3 and M1 Competition series, the results suggest that it can perform well relative to four other standard statistical techniques from the literature, namely the ARIMA, Theta, Holt-Winters’ and Holt’s Damped Trend methods. The relative advantages of the new method are then investigated further relative to a simple combination of the four statistical methods and a Classical Decomposition forecasting method. The strength of the method lies in its ability to predict long lead times with relatively high levels of accuracy, and to perform consistently well for a wide range of time series, irrespective of the characteristics, underlying structure and level of noise of the data.
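The decompose-extrapolate-reaggregate idea can be sketched as follows. STL itself is loess-based and iterative; here a much simpler classical additive decomposition (least-squares trend plus seasonal means) stands in for it, and the function name and interface are illustrative assumptions, not the paper's implementation:

```python
def decompose_forecast(series, period, horizon):
    """Decompose-extrapolate-reaggregate sketch.

    A classical additive decomposition (linear trend + seasonal means)
    stands in here for STL, which is loess-based and more robust.
    """
    n = len(series)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(series) / n
    # Least-squares linear trend.
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, series))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    # Seasonal component: mean detrended value at each seasonal position.
    detrended = [y - (intercept + slope * x) for x, y in zip(xs, series)]
    seasonal = [sum(detrended[s::period]) / len(detrended[s::period])
                for s in range(period)]
    # Reaggregate extrapolated components into forecasts for the global series.
    return [intercept + slope * (n + h) + seasonal[(n + h) % period]
            for h in range(horizon)]
```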

2.
This paper evaluates the performances of prediction intervals generated from alternative time series models, in the context of tourism forecasting. The forecasting methods considered include the autoregressive (AR) model, the AR model using the bias-corrected bootstrap, seasonal ARIMA models, innovations state space models for exponential smoothing, and Harvey’s structural time series models. We use thirteen monthly time series for the number of tourist arrivals to Hong Kong and Australia. The mean coverage rates and widths of the alternative prediction intervals are evaluated in an empirical setting. It is found that all models produce satisfactory prediction intervals, except for the autoregressive model. In particular, those based on the bias-corrected bootstrap perform best in general, providing tight intervals with accurate coverage rates, especially when the forecast horizon is long.
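The two evaluation criteria used here, the mean coverage rate and the mean width of the intervals, are straightforward to compute; a minimal sketch (function and argument names are assumptions):

```python
def interval_scores(lowers, uppers, actuals):
    """Empirical mean coverage rate and mean width of prediction intervals."""
    n = len(actuals)
    # Fraction of actual values falling inside their interval.
    coverage = sum(lo <= y <= up
                   for lo, up, y in zip(lowers, uppers, actuals)) / n
    # Average interval width; tighter is better at a given coverage.
    mean_width = sum(up - lo for lo, up in zip(lowers, uppers)) / n
    return coverage, mean_width
```

A good interval method scores a coverage close to its nominal level (e.g. 0.95) with the smallest possible mean width.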

3.
A new method for forecasting the trend of time series, based on a mixture of MLP experts, is presented. In this paper, three neural network combining methods and an Adaptive Network-Based Fuzzy Inference System (ANFIS) are applied to trend forecasting in the Tehran stock exchange. There are two experiments in this study. In experiment I, the time series data are the Kharg petrochemical company’s daily closing prices on the Tehran stock exchange. In this case study, which considers different schemes for forecasting the trend of the time series, the recognition rates are 75.97%, 77.13% and 81.64% for stacked generalization, modified stacked generalization and ANFIS, respectively. Using the mixture of MLP experts (ME) scheme, the recognition rate increases markedly to 86.35%. A gain and loss analysis is also used, showing the relative forecasting success of the ME method with and without rejection criteria, compared to a simple buy-and-hold approach. In experiment II, the time series data are the daily closing prices of 37 companies on the Tehran stock exchange. This experiment is conducted to verify the results of experiment I and to show the efficiency of the ME method compared to stacked generalization, modified stacked generalization and ANFIS.

4.
In this study we analyze existing and improved methods for forecasting incoming calls to telemarketing centers for the purposes of planning and budgeting. We analyze the use of additive and multiplicative versions of Holt–Winters (HW) exponentially weighted moving average models and compare them with Box–Jenkins (ARIMA) modeling with intervention analysis. We determine the forecasting accuracy of HW and ARIMA models for samples of telemarketing data. Although there is much evidence in recent literature that “simple models” such as Holt–Winters perform as well as or better than more complex models, we find that ARIMA models with intervention analysis perform better for the time series studied.
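As a reminder of the additive Holt–Winters recursions this comparison rests on, here is a minimal sketch with a deliberately simplistic initialization; the interface and parameter names are assumptions, not the authors' implementation:

```python
def holt_winters_additive(y, period, alpha, beta, gamma, horizon):
    """Additive Holt-Winters recursions (simplistic initialization sketch)."""
    # Initialize level and trend from the first two seasonal cycles.
    level = sum(y[:period]) / period
    trend = (sum(y[period:2 * period]) - sum(y[:period])) / period ** 2
    season = [v - level for v in y[:period]]
    for t, obs in enumerate(y):
        s = season[t % period]
        prev_level = level
        # Smooth level, trend, and the seasonal index in turn.
        level = alpha * (obs - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % period] = gamma * (obs - level) + (1 - gamma) * s
    return [level + (h + 1) * trend + season[(len(y) + h) % period]
            for h in range(horizon)]
```

The multiplicative version replaces the additive seasonal terms with ratios (`obs / s` and `level * s`), which suits series whose seasonal swings grow with the level, as call volumes often do.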

5.
This brief note describes two of the forecasting methods used in the M3 Competition, Robust Trend and ARARMA. The origins of these methods are very different. Robust Trend was introduced to model the special features of some telecommunications time series. It was subsequently found to be competitive with Holt’s linear model for the more varied set of time series used in the M1 Competition. The ARARMA methodology was proposed by Parzen as a general time series modelling procedure, and can be thought of as an alternative to the ARIMA methodology of Box and Jenkins. This method was used in the M1 Competition and achieved the lowest mean absolute percentage error for longer forecasting horizons. These methods will be described in more detail and some comments on their use in the M3 Competition conclude this note.

6.
Histogram time series (HTS) and interval time series (ITS) are examples of symbolic data sets. Though there have been methodological developments in a cross-sectional environment, they have been scarce in a time series setting. Arroyo, González-Rivera, and Maté (2011) analyze various forecasting methods for HTS and ITS, adapting smoothing filters and nonparametric algorithms such as the k-NN. Though these methods are very flexible, they may not be the true underlying data generating process (DGP). We present the first step in the search for a DGP by focusing on the autocorrelation functions (ACFs) of HTS and ITS. We analyze the ACF of the daily histogram of 5-minute intradaily returns on the S&P 500 index in 2007 and 2008. There are clusters of high/low activity that generate a strong, positive, and persistent autocorrelation, pointing towards some autoregressive process for HTS. Though smoothing and k-NN may not be the true DGPs, we find that they are very good approximations because they are able to capture almost all of the original autocorrelation. However, there seems to be some structure left in the data that will require new modelling techniques. As a byproduct, we also analyze the [90,100%] quantile interval. By using all of the information contained in the histogram, we find that there are advantages in the estimation and prediction of a specific interval.
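For histogram-valued data the ACF must be built from distances between histograms, but the scalar sample ACF conveys the underlying idea; a minimal sketch (names are assumptions):

```python
def sample_acf(x, max_lag):
    """Sample autocorrelation function of a scalar series at lags 1..max_lag."""
    n = len(x)
    mean = sum(x) / n
    # Lag-0 autocovariance, used to normalize.
    c0 = sum((v - mean) ** 2 for v in x) / n
    return [sum((x[t] - mean) * (x[t + k] - mean)
                for t in range(n - k)) / (n * c0)
            for k in range(1, max_lag + 1)]
```

A strong, slowly decaying positive ACF of the kind reported for the intradaily histograms is the classic signature of an autoregressive, persistent process.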

7.
We approximate probabilistic forecasts for interval-valued time series using several alternative approaches. After fitting a possibly non-Gaussian bivariate vector autoregression (VAR) model to the center/log-range system, we transform prediction regions (analytical and bootstrap) for this system into regions for center/range and upper/lower bounds systems. Monte Carlo simulations show that bootstrap methods are preferred according to several new metrics. For daily S&P 500 low/high returns, we build joint conditional prediction regions of the return level and volatility. We illustrate the usefulness of obtaining bootstrap forecast regions for low/high returns by developing a trading strategy and showing its profitability when compared to using point forecasts.

8.
We assess the marginal predictive content of a large international dataset for forecasting GDP in New Zealand, an archetypal small open economy. We apply “data-rich” factor and shrinkage methods to efficiently handle hundreds of predictor series from many countries. The methods covered are principal components, targeted predictors, weighted principal components, partial least squares, elastic net and ridge regression. We find that exploiting a large international dataset can improve forecasts relative to data-rich approaches based on a large national dataset only, and also relative to more traditional approaches based on small datasets. This is in spite of New Zealand’s business and consumer confidence and expectations data capturing a substantial proportion of the predictive information in the international data. The largest forecasting accuracy gains from including international predictors are at longer forecast horizons. The forecasting performance achievable with the data-rich methods differs widely, with shrinkage methods and partial least squares performing best in handling the international data.

9.
This paper presents the Bayesian analysis of a general multivariate exponential smoothing model that allows us to forecast time series jointly, subject to correlated random disturbances. The general multivariate model, which can be formulated as a seemingly unrelated regression model, includes the previously studied homogeneous multivariate Holt-Winters’ model as a special case when all of the univariate series share a common structure. MCMC simulation techniques are required in order to approach the non-analytically tractable posterior distribution of the model parameters. The predictive distribution is then estimated using Monte Carlo integration. A Bayesian model selection criterion is introduced into the forecasting scheme for selecting the most adequate multivariate model for describing the behaviour of the time series under study. The forecasting performance of this procedure is tested using some real examples.

10.
We propose a new way of selecting among model forms in automated exponential smoothing routines, consequently enhancing their predictive power. The procedure, referred to here as treating, operates by selectively subsetting the ensemble of competing models based on information from their prediction intervals. By the same token, we set forth a pruning strategy to improve the accuracy of both point forecasts and prediction intervals in forecast combination methods. The proposed approaches are respectively applied to automated exponential smoothing routines and Bagging algorithms, to demonstrate their potential. An empirical experiment is conducted on a wide range of series from the M-Competitions. The results attest that the proposed approaches are simple, without requiring much additional computational cost, but capable of substantially improving forecasting accuracy for both point forecasts and prediction intervals, outperforming important benchmarks and recently developed forecast combination methods.

11.
The goal of statistical scale space analysis is to extract scale-dependent features from noisy data. The data could be, for example, an observed time series or a digital image, in which case features in different temporal or spatial scales, respectively, would be sought. Since the 1990s, a number of statistical approaches to scale space analysis have been developed, most of them using smoothing to capture scales in the data, but other interpretations of scale have also been proposed. We review the various statistical scale space methods proposed and mention some of their applications.

12.
We develop an iterative and efficient information-theoretic estimator for forecasting interval-valued data, and use our estimator to forecast the S&P 500 returns up to five days ahead using moving windows. Our forecasts are based on 13 years of data. We show that our estimator is superior to its competitors under all of the common criteria that are used to evaluate forecasts of interval data. Our approach differs from other methods that are used to forecast interval data in two major ways. First, rather than applying the more traditional methods that use only certain moments of the intervals in the estimation process, our estimator uses the complete sample information. Second, our method simultaneously selects the model (or models) and infers the model’s parameters. It is an iterative approach that imposes minimal structure and statistical assumptions.

13.
Forecasting competitions have been a major driver not only of improvements in forecasting methods’ performances, but also of the development of new forecasting approaches. However, despite the tremendous value and impact of these competitions, they do suffer from the limitation that performances are measured only in terms of the forecast accuracy and bias, ignoring utility metrics. Using the monthly industry series of the M3 competition, we empirically explore the inventory performances of various widely used forecasting techniques, including exponential smoothing, ARIMA models, the Theta method, and approaches based on multiple temporal aggregation. We employ a rolling simulation approach and analyse the results for the order-up-to policy under various lead times. We find that the methods that are based on combinations result in superior inventory performances, while the Naïve, Holt, and Holt-Winters methods perform poorly.
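A rolling order-up-to simulation of the kind described can be sketched as follows; the interface, initialization, and cost accounting are illustrative assumptions rather than the authors' protocol:

```python
def order_up_to_sim(demand, forecast, lead_time, safety_stock):
    """Rolling order-up-to simulation; returns average on-hand stock and
    average backlog per period (interface and initialization are assumed).
    forecast[t] is the one-step-ahead demand forecast available at period t."""
    # Start with stock at the initial order-up-to level.
    on_hand = sum(forecast[:lead_time + 1]) + safety_stock
    pipeline = [0.0] * lead_time          # orders placed but not yet delivered
    holding = backlog = 0.0
    for t, d in enumerate(demand):
        if pipeline:
            on_hand += pipeline.pop(0)    # receive the oldest outstanding order
        on_hand -= d
        holding += max(on_hand, 0.0)
        backlog += max(-on_hand, 0.0)
        # Order up to S_t: forecast demand over lead time plus review period,
        # plus the safety stock.
        target = forecast[t] * (lead_time + 1) + safety_stock
        position = on_hand + sum(pipeline)
        pipeline.append(max(target - position, 0.0))
    return holding / len(demand), backlog / len(demand)
```

Running such a simulation for each forecasting method, rather than scoring accuracy alone, is what exposes the utility differences the abstract refers to.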

14.
This paper presents and evaluates alternative methods for multi-step forecasting using univariate and multivariate functional coefficient autoregressive (FCAR) models. The methods include a simple “plug-in” approach, a bootstrap-based approach, and a multi-stage smoothing approach, where the functional coefficients are updated at each step to incorporate information from the time series captured in the previous predictions. The three methods are applied to a series of U.S. GNP and unemployment data to compare performance in practice. We find that the bootstrap-based approach outperforms the other two methods for nonlinear prediction, and that little forecast accuracy is sacrificed using any of the methods if the underlying process is actually linear.
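The "plug-in" idea can be illustrated with a fixed-coefficient AR(1) rather than an FCAR model: each prediction is fed back in as if it were an observation. In an FCAR model the coefficient would additionally be re-evaluated at each step; this simplified sketch (names are assumptions) shows only the recursion:

```python
def plug_in_multistep(last_value, phi, steps):
    """'Plug-in' multi-step forecasts for an AR(1) with coefficient phi:
    each one-step prediction is substituted for the unobserved value."""
    preds, current = [], last_value
    for _ in range(steps):
        current = phi * current   # plug the previous prediction back in
        preds.append(current)
    return preds
```

The bootstrap-based alternative instead simulates many future paths with resampled residuals and averages them, which is why it can do better when the process is nonlinear.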

15.
Probabilistic forecasting, i.e., estimating a time series’ future probability distribution given its past, is a key enabler for optimizing business processes. In retail businesses, for example, probabilistic demand forecasts are crucial for having the right inventory available at the right time and in the right place. This paper proposes DeepAR, a methodology for producing accurate probabilistic forecasts, based on training an autoregressive recurrent neural network model on a large number of related time series. We demonstrate how the application of deep learning techniques to forecasting can overcome many of the challenges that are faced by widely-used classical approaches to the problem. By means of extensive empirical evaluations on several real-world forecasting datasets, we show that our methodology produces more accurate forecasts than other state-of-the-art methods, while requiring minimal manual work.

16.
Combination methods have performed well in time series forecast competitions. This study proposes a simple but general methodology for combining time series forecast methods. Weights are calculated using a cross-validation scheme that assigns greater weights to methods with more accurate in-sample predictions. The methodology was used to combine forecasts from the Theta, exponential smoothing, and ARIMA models, and placed fifth in the M4 Competition for both point and interval forecasting.
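One common way to turn cross-validated errors into combination weights is to make each weight inversely proportional to the method's error; this is a sketch of that idea, not necessarily the exact scheme used in the M4 entry (function names are assumptions):

```python
def combination_weights(cv_errors):
    """Weights inversely proportional to each method's cross-validated error,
    so more accurate in-sample predictors receive larger weights."""
    inverse = [1.0 / e for e in cv_errors]
    total = sum(inverse)
    return [w / total for w in inverse]

def combine_forecasts(forecasts, weights):
    """Weighted average of the methods' forecast paths, horizon by horizon."""
    horizons = len(forecasts[0])
    return [sum(w * f[h] for w, f in zip(weights, forecasts))
            for h in range(horizons)]
```

With Theta, exponential smoothing, and ARIMA as the pool, the same weights can be applied to interval bounds as well as point forecasts.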

17.
This discussion reflects on the results of the M4 forecasting competition, and in particular, the impact of machine learning (ML) methods. Unlike the M3, which included only one ML method (an automatic artificial neural network that performed poorly), M4’s 49 participants included eight that used either pure ML approaches, or ML in conjunction with statistical methods. The six pure (or combinations of pure) ML methods again fared poorly, with all of them falling below the Comb benchmark that combined three simple time series methods. However, utilizing ML either in combination with statistical methods (and for selecting weightings) or in a hybrid model with exponential smoothing not only exceeded the benchmark, but performed at the top. While these promising results by no means prove ML to be a panacea, they do challenge the notion that complex methods do not add value to the forecasting process.

18.
This paper is concerned with the forecasting of probability density functions. Density functions are nonnegative and have a constrained integral, and thus do not constitute a vector space. The implementation of established functional time series forecasting methods for such nonlinear data is therefore problematic. Two new methods are developed and compared to two existing methods. The comparison is based on the densities derived from cross-sectional and intraday returns. For such data, one of our new approaches is shown to dominate the existing methods, while the other is comparable to one of the existing approaches.
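One standard way around the nonnegativity and unit-integral constraints is to map each discretized density through a centred log-ratio transform, forecast in the resulting unconstrained space, and map back. This is an illustration of the constraint problem, not necessarily the construction the paper develops (names are assumptions):

```python
import math

def clr(density):
    """Centred log-ratio transform of a discretized density: maps the
    probability simplex into an unconstrained space where standard
    functional time series methods can operate."""
    logs = [math.log(p) for p in density]
    mean_log = sum(logs) / len(logs)
    return [l - mean_log for l in logs]

def inverse_clr(values):
    """Map forecasts back to a valid (nonnegative, unit-sum) density."""
    exps = [math.exp(v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]
```

Forecasting the transformed coordinates and applying `inverse_clr` guarantees that every forecast is itself a proper density.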

19.
We assess the performances of alternative procedures for forecasting the daily volatility of the euro’s bilateral exchange rates using 15-minute data. We use realized volatility and traditional time series volatility models. Our results indicate that using high-frequency data and considering their long memory dimension enhances the performance of volatility forecasts significantly. We find that the intraday FIGARCH model and the ARFIMA model outperform other traditional models for all exchange rate series.
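The realized volatility measure referred to is computed from the intraday returns of each day; a minimal sketch:

```python
def realized_volatility(intraday_returns):
    """Daily realized volatility: the square root of the sum of squared
    intraday (e.g. 15-minute) returns over the day."""
    return sum(r * r for r in intraday_returns) ** 0.5
```

The resulting daily series is then modelled directly, e.g. with an ARFIMA model to capture its long memory.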

20.
W.F. Sheppard has been much overlooked in the history of statistics, although his work made significant contributions. He developed a polynomial smoothing method and corrections of moment estimates for grouped data, as well as extensive normal probability tables that have been widely used since the early 20th century. Sheppard presented his smoothing method for actuaries in a series of publications during the early 20th century. Population data contain irregularities, and some adjustment or smoothing of the data is often necessary. Simple techniques, such as Spencer's summation formulae involving arithmetic operations and moving averages, were commonly practised by actuaries to smooth out equally spaced data. Sheppard's method, however, is a polynomial smoothing method based on central differences. We will show how Sheppard's smoothing method was a significant milestone in the development of smoothing techniques and a precursor to local polynomial regression.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号