Similar literature
20 matching records found
1.
A commonly used defining property of long memory time series is the power law decay of the autocovariance function. Some alternative methods of deriving this property are considered, working from the alternate definition in terms of a fractional pole in the spectrum at the origin. The methods considered involve the use of (i) Fourier transforms of generalized functions, (ii) asymptotic expansions of Fourier integrals with singularities, (iii) direct evaluation using hypergeometric function algebra, and (iv) conversion to a simple gamma integral. The paper is largely pedagogical but some novel methods and results involving complete asymptotic series representations are presented. The formulae are useful in many ways, including the calculation of long run variation matrices for multivariate time series with long memory and the econometric estimation of such models.
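Method (iv) leads to a closed form that is easy to check numerically. The sketch below uses the standard ARFIMA(0, d, 0) autocovariance gamma(k) = Gamma(1-2d)Gamma(k+d) / (Gamma(d)Gamma(1-d)Gamma(k+1-d)) (unit innovation variance) and verifies the power-law decay gamma(k) ~ C * k^(2d-1); the particular value of d is an arbitrary illustration.

```python
from math import exp, lgamma

def acvf_arfima(k: int, d: float) -> float:
    """Autocovariance at lag k of ARFIMA(0, d, 0) with unit innovation
    variance: Gamma(1-2d)Gamma(k+d) / (Gamma(d)Gamma(1-d)Gamma(k+1-d)),
    evaluated in log space to avoid overflow at large lags."""
    return exp(lgamma(1 - 2 * d) + lgamma(k + d)
               - lgamma(d) - lgamma(1 - d) - lgamma(k + 1 - d))

d = 0.3
# Power-law decay: gamma(k) / k**(2d-1) should flatten out at the constant
# C = Gamma(1-2d) / (Gamma(d) Gamma(1-d)) as the lag k grows.
C = exp(lgamma(1 - 2 * d) - lgamma(d) - lgamma(1 - d))
ratios = [acvf_arfima(k, d) / k ** (2 * d - 1) for k in (10, 100, 10000)]
```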

2.
We consider forecasting the term structure of interest rates with the assumption that factors driving the yield curve are stationary around a slowly time‐varying mean or ‘shifting endpoint’. The shifting endpoints are captured using either (i) time series methods (exponential smoothing) or (ii) long‐range survey forecasts of either interest rates or inflation and output growth, or (iii) exponentially smoothed realizations of these macro variables. Allowing for shifting endpoints in yield curve factors provides substantial and significant gains in out‐of‐sample predictive accuracy, relative to stationary and random walk benchmarks. Forecast improvements are largest for long‐maturity interest rates and for long‐horizon forecasts. Copyright © 2013 John Wiley & Sons, Ltd.
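A minimal sketch of variant (i): track the shifting endpoint with exponential smoothing and forecast as mean reversion toward it. The smoothing weight and AR coefficient below are illustrative assumptions, not the paper's estimated values.

```python
def shifting_endpoint_forecast(y, h, alpha=0.05, rho=0.9):
    """h-step forecast y_{T+h|T} = tau_T + rho**h * (y_T - tau_T), where
    tau_t = alpha*y_t + (1-alpha)*tau_{t-1} is the exponentially smoothed
    'shifting endpoint' of the series."""
    tau = y[0]
    for obs in y[1:]:
        tau = alpha * obs + (1 - alpha) * tau
    return tau + rho ** h * (y[-1] - tau)
```

Unlike a random walk forecast (flat at the last observation) or a stationary AR (reverting to a fixed mean), long-horizon forecasts here converge to the current endpoint tau_T, which is where the reported long-horizon gains come from.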

3.
Forecasting monthly and quarterly time series using STL decomposition
This paper is a re-examination of the benefits and limitations of decomposition and combination techniques in the area of forecasting, and also a contribution to the field, offering a new forecasting method. The new method is based on the disaggregation of time series components through the STL decomposition procedure, the extrapolation of linear combinations of the disaggregated sub-series, and the reaggregation of the extrapolations to obtain estimates for the global series. Applying the forecasting method to data from the NN3 and M1 Competition series, the results suggest that it can perform well relative to four other standard statistical techniques from the literature, namely the ARIMA, Theta, Holt-Winters’ and Holt’s Damped Trend methods. The relative advantages of the new method are then investigated further relative to a simple combination of the four statistical methods and a Classical Decomposition forecasting method. The strength of the method lies in its ability to predict long lead times with relatively high levels of accuracy, and to perform consistently well for a wide range of time series, irrespective of the characteristics, underlying structure and level of noise of the data.
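The decompose-extrapolate-reaggregate idea can be sketched with a toy stand-in: seasonal means in place of STL's loess-based seasonal component, and a least-squares line as the sole extrapolator. The paper extrapolates linear combinations of the STL sub-series; everything below is a simplification for illustration.

```python
def decompose_and_forecast(y, period, h):
    """Toy decompose/extrapolate/reaggregate forecaster:
    1. seasonal component = centred within-period means (STL stand-in),
    2. linear trend fitted by OLS to the deseasonalised series,
    3. forecast = extrapolated trend + repeating seasonal component."""
    n = len(y)
    raw = [sum(y[t] for t in range(p, n, period)) / len(range(p, n, period))
           for p in range(period)]
    m = sum(raw) / period
    seas = [r - m for r in raw]                        # centred seasonal
    de = [y[t] - seas[t % period] for t in range(n)]   # deseasonalised
    tm = (n - 1) / 2
    dm = sum(de) / n
    b = sum((t - tm) * (de[t] - dm) for t in range(n)) \
        / sum((t - tm) ** 2 for t in range(n))
    a = dm - b * tm
    return [a + b * (n + k) + seas[(n + k) % period] for k in range(h)]
```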

4.
Control groups can provide counterfactual evidence for assessing the impact of an event or policy change on a target variable. We argue that fitting a multivariate time series model offers potential gains over a direct comparison between the target and a weighted average of controls. More importantly, it highlights the assumptions underlying methods such as difference in differences and synthetic control, suggesting ways to test these assumptions. Gains from simple and transparent time series models are analysed using examples from the literature, including the California smoking law of 1989 and German reunification. We argue that selecting controls using a time series strategy is preferable to existing data‐driven regression methods.

5.
张宏哲 (Zhang Hongzhe), 《价值工程》 (Value Engineering), 2014, (20): 320-321
By considering two cases that differ in step size and in the initial time index, this paper uses the results computed from Formula 1 to conjecture a rule governing the straight-line trend prediction method and its parameter values, proves the rule mathematically, verifies it with worked examples, and proposes Formulas 2 and 3 on that basis. The paper concludes that, under the straight-line trend prediction method, the forecast value depends neither on the first value of the time index nor on the step size between time points: as long as the steps are equal, the forecasts are identical, and the forecast values form an arithmetic sequence.
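The claimed invariance is easy to verify numerically: an OLS straight-line trend fitted on any equally spaced time grid produces the same forecasts, and those forecasts form an arithmetic sequence. The data below are arbitrary illustrative numbers.

```python
def trend_forecasts(times, values, n_ahead):
    """Fit values = a + b*time by OLS, then extrapolate n_ahead further
    points on the same equally spaced grid."""
    n = len(times)
    tm = sum(times) / n
    vm = sum(values) / n
    b = sum((t - tm) * (v - vm) for t, v in zip(times, values)) \
        / sum((t - tm) ** 2 for t in times)
    a = vm - b * tm
    step = times[1] - times[0]
    return [a + b * (times[-1] + step * (k + 1)) for k in range(n_ahead)]

obs = [3.1, 4.0, 5.2, 5.9, 7.1]
f_a = trend_forecasts([1, 2, 3, 4, 5], obs, 3)        # origin 1, step 1
f_b = trend_forecasts([10, 30, 50, 70, 90], obs, 3)   # origin 10, step 20
```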

6.
This paper reports the results of the NN3 competition, which is a replication of the M3 competition with an extension of the competition towards neural network (NN) and computational intelligence (CI) methods, in order to assess what progress has been made in the 10 years since the M3 competition. Two masked subsets of the M3 monthly industry data, containing 111 and 11 empirical time series respectively, were chosen, controlling for multiple data conditions of time series length (short/long), data patterns (seasonal/non-seasonal) and forecasting horizons (short/medium/long). The relative forecasting accuracy was assessed using the metrics from the M3, together with later extensions of scaled measures, and non-parametric statistical tests. The NN3 competition attracted 59 submissions from NN, CI and statistics, making it the largest CI competition on time series data. Its main findings include: (a) only one NN outperformed the damped trend using the sMAPE, but more contenders outperformed the AutomatANN of the M3; (b) ensembles of CI approaches performed very well, better than combinations of statistical methods; (c) a novel, complex statistical method outperformed all statistical and CI benchmarks; and (d) for the most difficult subset of short and seasonal series, a methodology employing echo state neural networks outperformed all others. The NN3 results highlight the ability of NN to handle complex data, including short and seasonal time series, beyond prior expectations, and thus identify multiple avenues for future research.

7.
Using methods based on wavelets and aggregate series, long memory in the absolute daily returns, squared daily returns, and log squared daily returns of the S&P 500 Index is investigated. First, we estimate the long memory parameter in each series using a method based on the discrete wavelet transform. For each series, the variance method and the absolute value method based on aggregate series are then employed to investigate long memory. Our findings suggest that these methods provide evidence of long memory in the volatility of the S&P 500 Index. Our esteemed colleague, Robert DiSario, passed away on December 31, 2005.
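The aggregate-series variance method mentioned above is short enough to sketch: the variance of block means of size m scales like m^(2H-2), so the slope of log variance against log block size estimates 2H-2 (slope near -1, i.e. H near 1/2, indicates no long memory). The white-noise input below is only a sanity check, not the S&P 500 data.

```python
import math
import random

def aggregated_variance_slope(y, block_sizes):
    """Slope of log Var(block means) against log(block size).
    Under long memory with Hurst exponent H the slope is 2H - 2."""
    pts = []
    for m in block_sizes:
        means = [sum(y[i:i + m]) / m for i in range(0, len(y) - m + 1, m)]
        mu = sum(means) / len(means)
        var = sum((x - mu) ** 2 for x in means) / (len(means) - 1)
        pts.append((math.log(m), math.log(var)))
    lx = sum(p[0] for p in pts) / len(pts)
    ly = sum(p[1] for p in pts) / len(pts)
    return sum((x - lx) * (v - ly) for x, v in pts) \
        / sum((x - lx) ** 2 for x, _ in pts)

rng = random.Random(0)
noise = [rng.gauss(0.0, 1.0) for _ in range(20000)]
slope = aggregated_variance_slope(noise, [5, 10, 20, 40])  # expect about -1
```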

8.
Probabilistic time series forecasting is crucial in many application domains, such as retail, e-commerce, finance, and biology. With the increasing availability of large volumes of data, a number of neural architectures have been proposed for this problem. In particular, Transformer-based methods achieve state-of-the-art performance on real-world benchmarks. However, these methods require a large number of parameters to be learned, which imposes high memory requirements on the computational resources for training such models. To address this problem, we introduce a novel bidirectional temporal convolutional network that requires an order of magnitude fewer parameters than a common Transformer-based approach. Our model combines two temporal convolutional networks: the first network encodes future covariates of the time series, whereas the second network encodes past observations and covariates. We jointly estimate the parameters of an output distribution via these two networks. Experiments on four real-world datasets show that our method performs on par with four state-of-the-art probabilistic forecasting methods, including a Transformer-based approach and WaveNet, on two point metrics (sMAPE and NRMSE) as well as on a set of range metrics (quantile loss percentiles) in the majority of cases. We also demonstrate that our method requires significantly fewer parameters than Transformer-based methods, so the model can be trained faster with significantly lower memory requirements, which in turn reduces the infrastructure cost of deploying these models.

9.
Multi-horizon forecasting often contains a complex mix of inputs – including static (i.e. time-invariant) covariates, known future inputs, and other exogenous time series that are only observed in the past – without any prior information on how they interact with the target. Several deep learning methods have been proposed, but they are typically ‘black-box’ models that do not shed light on how they use the full range of inputs present in practical scenarios. In this paper, we introduce the Temporal Fusion Transformer (TFT) – a novel attention-based architecture that combines high-performance multi-horizon forecasting with interpretable insights into temporal dynamics. To learn temporal relationships at different scales, TFT uses recurrent layers for local processing and interpretable self-attention layers for long-term dependencies. TFT utilizes specialized components to select relevant features and a series of gating layers to suppress unnecessary components, enabling high performance in a wide range of scenarios. On a variety of real-world datasets, we demonstrate significant performance improvements over existing benchmarks, and highlight three practical interpretability use cases of TFT.

10.
Journal of Econometrics, 2002, 106(2): 243-269
This paper considers methods of deriving sufficient conditions for the central limit theorem and functional central limit theorem to hold in a broad class of time series processes, including nonlinear processes and semiparametric linear processes. The common thread linking these results is the concept of near-epoch dependence on a mixing process, since powerful limit results are available under this limited-dependence property. The particular case of near-epoch dependence on an independent process provides a convenient framework for dealing with a range of nonlinear cases, including the bilinear, GARCH, and threshold autoregressive models. It is shown in particular that even SETAR processes with a unit root regime have short memory, under the right conditions. A simulation approach is also demonstrated, applicable to cases that are analytically intractable. A new FCLT is given for semiparametric linear processes, where the forcing processes are of the NED-on-mixing type, under conditions that are evidently close to necessary.

11.
Recently, there has been considerable work on stochastic time-varying coefficient models as vehicles for modelling structural change in the macroeconomy with a focus on the estimation of the unobserved paths of random coefficient processes. The dominant estimation methods, in this context, are based on various filters, such as the Kalman filter, that are applicable when the models are cast in state space representations. This paper introduces a new class of autoregressive bounded processes that decompose a time series into a persistent random attractor, a time-varying autoregressive component, and martingale difference errors. The paper examines, rigorously, alternative kernel based, nonparametric estimation approaches for such models and derives their basic properties. These estimators have long been studied in the context of deterministic structural change, but their use in the presence of stochastic time variation is novel. The proposed inference methods have desirable properties such as consistency and asymptotic normality and allow a tractable studentization. In extensive Monte Carlo and empirical studies, we find that the methods exhibit very good small sample properties and can shed light on important empirical issues such as the evolution of inflation persistence and the purchasing power parity (PPP) hypothesis.

12.
Stochastic demographic forecasting
"This paper describes a particular approach to stochastic population forecasting, which is implemented for the U.S.A. through 2065. Statistical time series methods are combined with demographic models to produce plausible long run forecasts of vital rates, with probability distributions. The resulting mortality forecasts imply gains in future life expectancy that are roughly twice as large as those forecast by the Office of the Social Security Actuary....Resulting stochastic forecasts of the elderly population, elderly dependency ratios, and payroll tax rates for health, education and pensions are presented."

13.
We evaluate the performance of several volatility models in estimating one-day-ahead Value-at-Risk (VaR) of seven stock market indices using a number of distributional assumptions. Because all return series exhibit volatility clustering and long-range memory, we examine GARCH-type models, including fractionally integrated models, under normal, Student-t and skewed Student-t distributions. Consistent with the idea that the accuracy of VaR estimates is sensitive to the adequacy of the volatility model used, we find that the AR(1)-FIAPARCH(1,d,1) model, under a skewed Student-t distribution, outperforms all the models that we have considered, including widely used ones such as GARCH(1,1) or HYGARCH(1,d,1). The superior performance of the skewed Student-t FIAPARCH model holds for all stock market indices, and for both long and short trading positions. Our findings can be explained by the fact that the skewed Student-t FIAPARCH model jointly accounts for the salient features of financial time series: fat tails, asymmetry, volatility clustering and long memory. In the same vein, because it fails to account for most of these stylized facts, the RiskMetrics model provides the least accurate VaR estimation. Our results corroborate the calls for the use of more realistic assumptions in financial modeling.
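For orientation, a sketch of the simplest member of the family considered: a GARCH(1,1) variance recursion combined with a normal quantile for one-day-ahead VaR. The paper's preferred skewed-Student FIAPARCH adds fractional integration, asymmetry and fat tails on top of this; the parameter values below are typical illustrative magnitudes, not estimates from the seven indices.

```python
from statistics import NormalDist

def garch11_var(returns, omega=1e-6, alpha=0.05, beta=0.90, level=0.01):
    """One-day-ahead VaR of a long position under GARCH(1,1) with normal
    innovations: sigma2_{t+1} = omega + alpha*r_t**2 + beta*sigma2_t,
    and VaR = -q_level * sigma_{t+1}."""
    sigma2 = sum(r * r for r in returns) / len(returns)  # start at sample variance
    for r in returns:
        sigma2 = omega + alpha * r * r + beta * sigma2
    q = NormalDist().inv_cdf(level)   # about -2.33 at the 1% level
    return -q * sigma2 ** 0.5
```

The distributional choice matters exactly as the abstract argues: replacing the normal quantile with a (skewed) Student-t one fattens the tail and raises the 1% VaR.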

14.
EVENT STUDY METHODS AND EVIDENCE ON THEIR PERFORMANCE
The paper outlines widely used methods of estimating abnormal returns and testing their significance, highlights respects in which they differ conceptually, and reviews research comparing the results they produce in various empirical contexts. Direct evidence on the performance of different methods is available from simulation experiments in which known levels of abnormal return are added. The market model is most commonly used to generate expected returns, and no better alternative has yet been found despite the weak relationship between beta and actual returns. The choice of procedure for significance testing depends on the characteristics of the data. The evidence indicates that in many cases the best procedure is to standardise market model abnormal returns by their time series standard errors of regression and use the t-test. Alternatively, a rank test appears to be at least as powerful. If errors are cross-correlated or increase in variance during the test period, other methods discussed should be used.
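The recommended baseline procedure is compact enough to sketch: estimate the market model by OLS over an estimation window, compute event-window abnormal returns, and standardise them by the regression standard error before a t-test. All numbers in the test data are made up for illustration.

```python
def market_model_z(stock_est, mkt_est, stock_evt, mkt_evt):
    """Market-model event study: OLS alpha/beta on the estimation window,
    abnormal returns AR_t = r_t - (a + b*r_mkt_t) in the event window,
    standardised by the estimation-window regression standard error."""
    n = len(mkt_est)
    xm = sum(mkt_est) / n
    ym = sum(stock_est) / n
    b = sum((x - xm) * (y - ym) for x, y in zip(mkt_est, stock_est)) \
        / sum((x - xm) ** 2 for x in mkt_est)
    a = ym - b * xm
    resid = [y - a - b * x for x, y in zip(mkt_est, stock_est)]
    s = (sum(e * e for e in resid) / (n - 2)) ** 0.5   # regression std. error
    return [(y - a - b * x) / s for x, y in zip(mkt_evt, stock_evt)]
```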

15.
A key requirement of repeated surveys conducted by national statistical institutes is the comparability of estimates over time, resulting in uninterrupted time series describing the evolution of finite population parameters. This is often an argument to keep survey processes unchanged as long as possible. It is nevertheless inevitable that a survey process will need to be redesigned from time to time, for example, to improve or update methods or implement more cost-effective data collection procedures. It is important to quantify the systematic effects or discontinuities of a new survey process on the estimates of a repeated survey to avoid a disturbance in the comparability of estimates over time. This paper reviews different statistical methods that can be used to measure discontinuities and manage the risk due to a survey process redesign.

16.
We develop a Bayesian median autoregressive (BayesMAR) model for time series forecasting. The proposed method utilizes time-varying quantile regression at the median, favorably inheriting the robustness of median regression in contrast to the widely used mean-based methods. Motivated by a working Laplace likelihood approach in Bayesian quantile regression, BayesMAR adopts a parametric model bearing the same structure as autoregressive models by altering the Gaussian error to Laplace, leading to a simple, robust, and interpretable modeling strategy for time series forecasting. We estimate model parameters by Markov chain Monte Carlo. Bayesian model averaging is used to account for model uncertainty, including the uncertainty in the autoregressive order, in addition to a Bayesian model selection approach. The proposed methods are illustrated using simulations and real data applications. An application to U.S. macroeconomic data forecasting shows that BayesMAR leads to favorable and often superior predictive performance compared to the selected mean-based alternatives under various loss functions that encompass both point and probabilistic forecasts. The proposed methods are generic and can be used to complement a rich class of methods that build on autoregressive models.

17.
This paper considers Bayesian regression with normal and double-exponential priors as forecasting methods based on large panels of time series. We show that, empirically, these forecasts are highly correlated with principal component forecasts and that they perform equally well for a wide range of prior choices. Moreover, we study conditions for consistency of the forecast based on Bayesian regression as the cross-section and the sample size become large. This analysis serves as a guide to establish a criterion for setting the amount of shrinkage in a large cross-section.

18.
Before testing for a time trend, one must first determine whether the time series contains a unit root; only after the unit-root hypothesis has been rejected can a trend-stationary process be used. Unit root tests have attracted such wide interest because many economic time series, once transformed to logarithms, contain a unit root.
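A sketch of the standard first step, a Dickey-Fuller regression dy_t = c + gamma*y_{t-1} + e_t: a strongly negative t-statistic on gamma rejects the unit root (against tabulated DF critical values, about -2.86 at the 5% level with a constant), after which a trend-stationary specification may be entertained. The simulated series below are illustrative only.

```python
import random

def df_tstat(y):
    """t-statistic on gamma in the Dickey-Fuller regression
    dy_t = c + gamma*y_{t-1} + e_t (compare with DF critical values,
    not standard normal ones)."""
    x = y[:-1]
    dy = [y[t + 1] - y[t] for t in range(len(y) - 1)]
    n = len(x)
    xm = sum(x) / n
    dm = sum(dy) / n
    sxx = sum((v - xm) ** 2 for v in x)
    g = sum((v - xm) * (d - dm) for v, d in zip(x, dy)) / sxx
    c = dm - g * xm
    ssr = sum((d - c - g * v) ** 2 for v, d in zip(x, dy))
    se = (ssr / (n - 2) / sxx) ** 0.5
    return g / se

rng = random.Random(1)
ar = [0.0]          # stationary AR(1): should reject the unit root
for _ in range(500):
    ar.append(0.2 * ar[-1] + rng.gauss(0, 1))
rw = [0.0]          # random walk: should not reject
for _ in range(500):
    rw.append(rw[-1] + rng.gauss(0, 1))
t_ar = df_tstat(ar)
t_rw = df_tstat(rw)
```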

19.
We propose an automated method for obtaining weighted forecast combinations using time series features. The proposed approach involves two phases. First, we use a collection of time series to train a meta-model for assigning weights to various possible forecasting methods with the goal of minimizing the average forecasting loss obtained from a weighted forecast combination. The inputs to the meta-model are features that are extracted from each series. Then, in the second phase, we forecast new series using a weighted forecast combination, where the weights are obtained from our previously trained meta-model. Our method outperforms a simple forecast combination, as well as all of the most popular individual methods in the time series forecasting literature. The approach achieved second position in the M4 competition.
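The two-phase structure can be sketched end-to-end if the trained meta-model is stubbed out. Below, the features and the learned meta-model are replaced by a softmax over each method's validation loss, which is an illustrative stand-in, not the authors' feature-based learner; only the weighted-combination mechanics of the second phase are faithful to the abstract.

```python
import math

def softmax_weights(losses):
    """Stand-in for a trained meta-model: turn per-method validation
    losses into combination weights (lower loss -> higher weight)."""
    z = [math.exp(-l) for l in losses.values()]
    s = sum(z)
    return {m: v / s for m, v in zip(losses, z)}

def combine_forecasts(method_forecasts, weights):
    """Phase two: weighted combination of the individual methods'
    h-step forecast paths."""
    names = list(method_forecasts)
    h = len(method_forecasts[names[0]])
    return [sum(weights[m] * method_forecasts[m][k] for m in names)
            for k in range(h)]
```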

20.
This paper evaluates the performances of prediction intervals generated from alternative time series models, in the context of tourism forecasting. The forecasting methods considered include the autoregressive (AR) model, the AR model using the bias-corrected bootstrap, seasonal ARIMA models, innovations state space models for exponential smoothing, and Harvey’s structural time series models. We use thirteen monthly time series for the number of tourist arrivals to Hong Kong and Australia. The mean coverage rates and widths of the alternative prediction intervals are evaluated in an empirical setting. It is found that all models produce satisfactory prediction intervals, except for the autoregressive model. In particular, those based on the bias-corrected bootstrap perform best in general, providing tight intervals with accurate coverage rates, especially when the forecast horizon is long.
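A sketch of the bootstrap ingredient: residual-bootstrap prediction intervals for an OLS-fitted AR(1). This is the plain bootstrap; the variant favoured in the paper additionally bias-corrects the AR coefficient estimate, which this sketch omits.

```python
import random

def ar1_bootstrap_interval(y, h, level=0.95, n_boot=500, rng=None):
    """h-step-ahead prediction interval for an AR(1) fitted by OLS:
    resample fitted residuals and simulate n_boot future paths, then
    take the empirical quantiles of the simulated endpoints."""
    rng = rng or random.Random(0)
    x, z = y[:-1], y[1:]
    n = len(x)
    xm, zm = sum(x) / n, sum(z) / n
    phi = sum((a - xm) * (b - zm) for a, b in zip(x, z)) \
        / sum((a - xm) ** 2 for a in x)
    c = zm - phi * xm
    resid = [b - c - phi * a for a, b in zip(x, z)]
    sims = []
    for _ in range(n_boot):
        v = y[-1]
        for _ in range(h):
            v = c + phi * v + rng.choice(resid)  # residual bootstrap step
        sims.append(v)
    sims.sort()
    lo_i = int((1 - level) / 2 * n_boot)
    return sims[lo_i], sims[-lo_i - 1]
```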


Copyright©北京勤云科技发展有限公司  京ICP备09084417号