12 similar documents found (search time: 0 ms)
1.
Volatility forecasting: GARCH models versus implied volatility  Cited by: 5 (self-citations: 0, others: 5)
For forecasting future volatility, which is more efficient: time-series models based on historical data, or implied volatility extracted from option prices? Examining the information content of the Hong Kong Hang Seng Index options market, we find that over a short horizon (one week) the GARCH(1,1) model contains more information and has the strongest predictive power, while over a longer horizon (one month) implied volatility contains more information and predicts better. Furthermore, the more actively the options market is traded, the more complete the information it reflects, and the stronger the predictive power of implied volatility.
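A minimal sketch of the comparison described above, using the Python `arch` package; the returns series and the implied-volatility quote are hypothetical placeholders, not the paper's Hang Seng data:

```python
# Sketch: fit GARCH(1,1) to (placeholder) daily returns and compare a
# one-week-ahead forecast with a hypothetical implied-volatility quote.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = rng.standard_normal(1000)          # placeholder daily returns, in percent

res = arch_model(returns, vol="GARCH", p=1, q=1, mean="Constant").fit(disp="off")

fcast = res.forecast(horizon=5)              # 5 trading days = one week
weekly_var = fcast.variance.iloc[-1].sum()   # cumulative 5-day variance
garch_vol = np.sqrt(weekly_var * 252 / 5)    # annualized volatility, percent

implied_vol = 18.0                           # hypothetical annualized IV quote, percent
print(f"GARCH(1,1): {garch_vol:.2f}%  vs implied: {implied_vol:.2f}%")
```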
2.
《International Journal of Forecasting》2020,36(2):723-737
Volatility forecasting is crucial for portfolio management, risk management, and the pricing of derivative securities. Still, little is known about the accuracy of volatility forecasts and the horizon of volatility predictability. This paper aims to fill these gaps in the literature. We begin by introducing the notions of spot and forward predicted volatilities and propose describing the term structure of volatility predictability by spot and forward forecast accuracy curves. We then perform a comprehensive study of the term structure of volatility predictability in stock and foreign exchange markets. Our results quantify volatility forecast accuracy across horizons in two major markets and suggest that the horizon of volatility predictability is significantly longer than that reported in earlier studies. Nevertheless, this horizon is still much shorter than the longest maturity of traded derivative contracts.
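Under my reading of the abstract (the paper's formal definitions may differ), spot and forward predicted volatilities can be sketched from a path of per-period variance forecasts as follows:

```python
# Sketch: spot vs forward predicted volatility from a path of per-period
# variance forecasts (my reconstruction of the abstract's notions).
import numpy as np

def spot_vol(var_path, h):
    """Predicted volatility over the next h periods, [t, t+h]."""
    return np.sqrt(np.mean(var_path[:h]))

def forward_vol(var_path, h1, h2):
    """Predicted volatility over the deferred window (t+h1, t+h2]."""
    return np.sqrt(np.mean(var_path[h1:h2]))

var_path = np.linspace(1.2, 0.9, 60)         # hypothetical 60-step variance forecasts
print(spot_vol(var_path, 20), forward_vol(var_path, 20, 40))
```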
3.
Xiaodong Yan Hongni Wang Wei Wang Jinhan Xie Yanyan Ren Xinjun Wang 《International Journal of Forecasting》2021,37(3):1147-1155
This article considers ultrahigh-dimensional forecasting problems with survival response variables. We propose a two-step model averaging procedure for improving the forecasting accuracy of the true conditional mean of a survival response variable. The first step is to construct a class of candidate models, each with low-dimensional covariates. For this, a feature screening procedure is developed to separate the active and inactive predictors through a marginal Buckley–James index, and to group covariates with a similar index size together to form regression models with survival response variables. The proposed screening method can select active predictors under covariate-dependent censoring, and enjoys sure screening consistency under mild regularity conditions. The second step is to find the optimal model weights for averaging by adapting a delete-one cross-validation criterion, without the standard constraint that the weights sum to one. The theoretical results show that the delete-one cross-validation criterion achieves the lowest possible forecasting loss asymptotically. Numerical studies demonstrate the superior performance of the proposed variable screening and model averaging procedures over existing methods.
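A sketch of the second (weighting) step only, assuming the screening step has already produced leave-one-out candidate predictions; per the abstract the weights are not constrained to sum to one, and I additionally impose nonnegativity so a standard NNLS solver applies:

```python
# Sketch: delete-one CV model-averaging weights, no sum-to-one constraint.
# mu_loo holds leave-one-out predictions (n observations x M candidate models);
# both mu_loo and y are synthetic placeholders.
import numpy as np
from scipy.optimize import nnls

def cv_model_weights(mu_loo, y):
    """Weights minimizing the CV loss ||y - mu_loo @ w||^2 subject to w >= 0."""
    w, _ = nnls(mu_loo, y)
    return w

rng = np.random.default_rng(1)
y = rng.normal(size=200)
mu_loo = y[:, None] + rng.normal(scale=[0.5, 1.0, 2.0], size=(200, 3))
w = cv_model_weights(mu_loo, y)
print("weights:", w)                         # need not sum to one
print("averaged forecast:", (mu_loo @ w)[:5])
```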
4.
Georgios Chortareas Ying Jiang John C. Nankervis 《International Journal of Forecasting》2011,27(4):1089
We assess the performances of alternative procedures for forecasting the daily volatility of the euro's bilateral exchange rates using 15-minute data. We use realized volatility and traditional time series volatility models. Our results indicate that using high-frequency data and accounting for its long-memory dimension enhances the performance of volatility forecasts significantly. We find that the intraday FIGARCH model and the ARFIMA model outperform the other traditional models for all exchange rate series.
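The realized-volatility measurement step is standard: daily realized variance is the sum of squared intraday returns. A sketch with a hypothetical price series:

```python
# Sketch: daily realized volatility from 15-minute bars of a hypothetical
# exchange-rate series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
idx = pd.date_range("2024-01-01", periods=4 * 96, freq="15min")   # 4 days, 96 bars/day
prices = pd.Series(1.10 * np.exp(np.cumsum(rng.normal(0, 1e-3, len(idx)))), index=idx)

r = np.log(prices).diff().dropna()           # 15-minute log returns
rv = (r ** 2).groupby(r.index.date).sum()    # daily realized variance
print(np.sqrt(rv))                           # daily realized volatility
```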
5.
We review the results of six forecasting competitions based on the online data science platform Kaggle, which have been largely overlooked by the forecasting community. In contrast to the M competitions, the competitions reviewed in this study feature daily and weekly time series with exogenous variables, business hierarchy information, or both. Furthermore, the Kaggle data sets all exhibit higher entropy than those of the M3 and M4 competitions, and they are intermittent. In this review, we confirm the conclusion of the M4 competition that ensemble models using cross-learning tend to outperform local time series models, and that gradient boosted decision trees and neural networks are strong forecasting methods. Moreover, we present insights regarding the use of external information and validation strategies, and discuss the impact of data characteristics on the choice of statistical or machine learning methods. Based on these insights, we construct nine ex-ante hypotheses for the outcome of the M5 competition to allow empirical validation of our findings.
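A toy illustration of cross-learning as described above: one gradient-boosted model trained over a pool of series, with lag features and a series identifier as inputs. The data and feature set are made up and far simpler than the actual Kaggle solutions:

```python
# Sketch: a single "global" gradient-boosted model pooled across 20 synthetic
# series, with lag features and a series identifier as inputs.
import numpy as np
import pandas as pd
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(3)
frames = []
for sid in range(20):
    df = pd.DataFrame({"series": sid, "y": rng.poisson(5, 200).astype(float)})
    for lag in (1, 7):                       # short and weekly lags
        df[f"lag{lag}"] = df["y"].shift(lag)
    frames.append(df)
data = pd.concat(frames).dropna()

X, y = data[["series", "lag1", "lag7"]], data["y"]
model = HistGradientBoostingRegressor().fit(X, y)   # one model, all series
print("in-sample R^2:", round(model.score(X, y), 3))
```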
6.
We construct a DSGE-VAR model to compete head-to-head with the long history of published forecasts of the Reserve Bank of New Zealand. We also construct a Bayesian VAR model with a Minnesota prior for forecast comparison. The DSGE-VAR model combines a structural DSGE model with a statistical VAR model based on the in-sample fit over the majority of New Zealand's inflation-targeting period. We evaluate the real-time out-of-sample forecasting performance of the DSGE-VAR model and show that its forecasts are competitive with the Reserve Bank of New Zealand's published, judgmentally adjusted forecasts. The Bayesian VAR model with a Minnesota prior also delivers competitive forecasts and generally, with a few exceptions, outperforms both the DSGE-VAR and the Reserve Bank's own forecasts.
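A stylized sketch of the Minnesota-prior idea used in the comparison model (not the paper's DSGE-VAR): each VAR equation gets a Gaussian prior shrinking lag-l coefficients toward zero, own first lags toward one, with prior standard deviations decaying in l. Hyperparameters and data are illustrative:

```python
# Sketch: per-equation posterior means of a VAR(p) under a stylized Minnesota
# prior. lam controls overall tightness, cross the extra shrinkage on
# cross-variable lags; both values are illustrative.
import numpy as np

def minnesota_bvar(Y, p=2, lam=0.2, cross=0.5):
    T, n = Y.shape
    X = np.hstack([Y[p - l - 1:T - l - 1] for l in range(p)])   # lags 1..p
    X = np.hstack([np.ones((T - p, 1)), X])
    yreg = Y[p:]
    sig = np.std(np.diff(Y, axis=0), axis=0)                    # rough per-series scale
    B = np.zeros((1 + n * p, n))
    for i in range(n):
        sd = [10.0 * sig[i]]                                    # loose intercept prior
        for l in range(1, p + 1):
            for j in range(n):
                sd.append(lam / l if j == i else lam * cross / l * sig[i] / sig[j])
        Vinv = np.diag(1.0 / np.array(sd) ** 2)
        m = np.zeros(1 + n * p)
        m[1 + i] = 1.0                                          # own first lag toward 1
        B[:, i] = np.linalg.solve(X.T @ X / sig[i] ** 2 + Vinv,
                                  X.T @ yreg[:, i] / sig[i] ** 2 + Vinv @ m)
    return B

Y = np.cumsum(np.random.default_rng(4).normal(size=(200, 3)), axis=0)
print(minnesota_bvar(Y).shape)               # (1 + n*p) coefficients per equation
```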
7.
The paper examines volatility activity and its asymmetry and undertakes further specification analysis of volatility models based on it. We develop new nonparametric statistics using high-frequency option-based VIX data to test for asymmetry in volatility jumps. We also develop methods for estimating and evaluating, using price data alone, a general encompassing model for volatility dynamics where volatility activity is unrestricted. The nonparametric application to VIX data, along with model estimation for S&P index returns, suggests that volatility moves are best captured by an infinite variation pure-jump martingale with a symmetric jump compensator around zero. The latter provides a parsimonious generalization of the jump-diffusions commonly used for volatility modeling.
8.
9.
This paper compares the mixed-data sampling (MIDAS) and mixed-frequency VAR (MF-VAR) approaches to model specification in the presence of mixed-frequency data, e.g. monthly and quarterly series. MIDAS leads to parsimonious models which are based on exponential lag polynomials for the coefficients, whereas MF-VAR does not restrict the dynamics and can therefore suffer from the curse of dimensionality. However, if the restrictions imposed by MIDAS are too stringent, the MF-VAR can perform better. Hence, it is difficult to rank MIDAS and MF-VAR a priori, and their relative rankings are better evaluated empirically. In this paper, we compare their performances in a case which is relevant for policy making, namely nowcasting and forecasting quarterly GDP growth in the euro area on a monthly basis, using a set of about 20 monthly indicators. It turns out that the two approaches are more complements than substitutes, since MIDAS tends to perform better for horizons up to four to five months, whereas MF-VAR performs better for longer horizons, up to nine months.
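One concrete piece of the MIDAS machinery named above is the exponential Almon lag polynomial, which collapses many high-frequency lags into a single parsimonious regressor; a sketch with illustrative parameters:

```python
# Sketch: exponential Almon MIDAS weights and the resulting single regressor
# built from the K most recent monthly observations; theta values illustrative.
import numpy as np

def exp_almon_weights(theta1, theta2, K):
    k = np.arange(1, K + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

def midas_regressor(x_monthly, theta1, theta2, K=12):
    """Weighted sum of the last K observations; w[0] weights the most recent."""
    w = exp_almon_weights(theta1, theta2, K)
    return w @ x_monthly[-K:][::-1]

x = np.random.default_rng(5).normal(size=60) # hypothetical monthly indicator
print(midas_regressor(x, 0.05, -0.01))
```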
10.
My comments on the keynote paper by Michael Keane.
11.
Improving productive efficiency is an increasingly important determinant of the future of the swine industry in Hawaii. This paper examines the productive efficiency of a sample of swine producers in Hawaii by estimating a stochastic frontier production function and the constant returns to scale (CRS) and variable returns to scale (VRS) output-oriented DEA models. The technical efficiency estimates obtained from the two frontier techniques are compared. The scale properties are also examined under the two approaches. The industry's potential for increasing production through improved efficiency is also discussed.
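A sketch of the output-oriented CRS DEA score as a linear program, using made-up data; per the abstract, the VRS variant adds the constraint that the intensity weights sum to one:

```python
# Sketch: output-oriented CRS DEA score via scipy's linprog; X is inputs x
# producers, Y outputs x producers, both made up. phi = 1 means efficient.
import numpy as np
from scipy.optimize import linprog

def dea_output_crs(X, Y, o):
    m, J = X.shape
    s = Y.shape[0]
    c = np.r_[-1.0, np.zeros(J)]                     # maximize phi
    A_in = np.hstack([np.zeros((m, 1)), X])          # sum_j lam_j x_j <= x_o
    A_out = np.hstack([Y[:, [o]], -Y])               # phi*y_o <= sum_j lam_j y_j
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[X[:, o], np.zeros(s)])  # default bounds: vars >= 0
    return res.x[0]

X = np.array([[2.0, 3.0, 5.0, 4.0]])                 # one input, four producers
Y = np.array([[1.0, 2.0, 3.0, 1.5]])                 # one output
print([round(dea_output_crs(X, Y, o), 3) for o in range(4)])
```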
12.
Akaike-type criteria and the reliability of inference: Model selection versus statistical model specification  Cited by: 1 (self-citations: 0, others: 1)
Since the 1990s, the Akaike Information Criterion (AIC) and its various modifications/extensions, including BIC, have found wide applicability in econometrics as objective procedures that can be used to select parsimonious statistical models. The aim of this paper is to argue that these model selection procedures invariably give rise to unreliable inferences, primarily because their choice within a prespecified family of models (a) assumes away the problem of model validation, and (b) ignores the relevant error probabilities. This paper argues for a return to the original statistical model specification problem, as envisaged by Fisher (1922), where the task is understood as one of selecting a statistical model in such a way as to render the particular data a truly typical realization of the stochastic process specified by the model in question. The key to addressing this problem is to replace trading goodness-of-fit against parsimony with statistical adequacy as the sole criterion for when a fitted model accounts for the regularities in the data.
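A toy illustration of the selection-versus-adequacy contrast the paper draws: choose an AR order by AIC/BIC, then check whether the chosen model's residuals actually resemble white noise (a crude stand-in for statistical adequacy):

```python
# Sketch: AIC/BIC order selection for an AR(p), followed by a crude residual
# whiteness check -- the "statistical adequacy" question the paper raises.
import numpy as np

def fit_ar(y, p):
    """OLS AR(p); returns residuals, Gaussian log-likelihood, parameter count."""
    X = np.column_stack([np.ones(len(y) - p)] +
                        [y[p - l:len(y) - l] for l in range(1, p + 1)])
    e = y[p:] - X @ np.linalg.lstsq(X, y[p:], rcond=None)[0]
    n = len(e)
    ll = -0.5 * n * (np.log(2 * np.pi * (e ** 2).mean()) + 1)
    return e, ll, p + 1

rng = np.random.default_rng(6)
y = np.zeros(400)
for t in range(2, 400):                              # simulate a true AR(2)
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

for p in range(1, 5):
    e, ll, k = fit_ar(y, p)
    aic, bic = 2 * k - 2 * ll, k * np.log(len(e)) - 2 * ll
    r1 = np.corrcoef(e[1:], e[:-1])[0, 1]            # residual lag-1 autocorrelation
    print(f"AR({p}): AIC={aic:.1f} BIC={bic:.1f} resid lag-1 corr={r1:+.3f}")
```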