Similar Articles
 20 similar articles found (search time: 15 ms)
1.
We employ four GARCH-type models, incorporating skewed generalized t (SGT) errors to capture the fat tails, leptokurtosis and skewness of return innovations, to forecast both volatility and value-at-risk (VaR) for Standard & Poor's Depositary Receipts (SPDRs) from 2002 to 2008. Empirical results indicate that the asymmetric EGARCH model is preferred according to purely statistical loss functions. However, the mean mixed error criterion suggests that the EGARCH model helps option buyers improve their trading position performance, while option sellers tend to favor the IGARCH model at shorter trading horizons and the EGARCH model at longer ones. For VaR calculations, although these GARCH-type models are likely to over-predict SPDRs' volatility, they are nevertheless capable of providing adequate VaR forecasts. Thus, a GARCH genre of model with SGT errors remains a useful technique for measuring and managing potential losses on SPDRs in a turbulent market scenario.
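As a rough illustration of the mechanics involved, the following stdlib-only Python sketch filters a GARCH(1,1) conditional variance through a return sample and converts it into a one-day VaR; the parameter values are made up, and a Gaussian quantile stands in for the paper's SGT errors:

```python
from statistics import NormalDist

def garch11_var(returns, omega, alpha, beta, level=0.01):
    """Filter a GARCH(1,1) conditional variance through the sample and
    return the one-day-ahead value-at-risk at the given tail level.
    A Gaussian quantile is used for simplicity; the paper's models use
    skewed generalized t (SGT) errors instead."""
    var_t = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    for r in returns:
        var_t = omega + alpha * r * r + beta * var_t  # variance recursion
    z = NormalDist().inv_cdf(level)        # left-tail quantile (negative)
    return -z * var_t ** 0.5               # VaR reported as a positive loss

# Hypothetical return sample and parameter values, for illustration only.
returns = [0.01, -0.02, 0.015, -0.03, 0.005]
print(garch11_var(returns, omega=1e-6, alpha=0.05, beta=0.90))
```

Replacing the Gaussian quantile with an SGT quantile, and the recursion with an EGARCH or IGARCH one, recovers the kinds of models compared in the paper.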

3.
This study evaluates the sector risk of the Qatar Stock Exchange (QSE), a recently upgraded emerging stock market, using value-at-risk models for the 7 January 2007–18 October 2015 period. After providing evidence for true long memory in volatility using the log-likelihood profile test of Qu and the sample-splitting and dth-differencing tests of Shimotsu, we compare the FIGARCH, HYGARCH and FIAPARCH models under normal, Student-t and skewed-t innovation distributions based on in-sample and out-of-sample VaR forecasts. The empirical results show that the skewed Student-t FIGARCH model generates the most accurate one-day-ahead VaR forecasts. The policy implications for portfolio managers are also discussed.
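VaR forecast accuracy of the kind assessed here is commonly judged with Kupiec's unconditional coverage test on the count of VaR violations; a stdlib-only sketch (the violation counts below are illustrative, not the paper's):

```python
from math import log

def kupiec_lr(violations, total, p):
    """Kupiec unconditional-coverage likelihood ratio.  Under the null
    that the VaR level p is correct, LR is asymptotically chi-square(1),
    so LR > 3.84 rejects at the 5% significance level."""
    x, t = violations, total
    if x == 0:
        return -2.0 * (t * log(1 - p))     # degenerate case: no violations
    phat = x / t
    ll_null = (t - x) * log(1 - p) + x * log(p)
    ll_alt = (t - x) * log(1 - phat) + x * log(phat)
    return -2.0 * (ll_null - ll_alt)

# 1% VaR over 1000 days: 10 violations are exactly as expected, so LR = 0.
assert kupiec_lr(10, 1000, 0.01) < 1e-9
print(kupiec_lr(25, 1000, 0.01))   # far too many violations: large statistic
```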

4.
We introduce new Markov-switching (MS) dynamic conditional score (DCS) exponential generalized autoregressive conditional heteroscedasticity (EGARCH) models, to be used by practitioners for forecasting value-at-risk (VaR) and expected shortfall (ES) in systematic risk analysis. We use daily log-return data from the Standard & Poor’s 500 (S&P 500) index for the period 1950–2016. The analysis of the S&P 500 is useful, for example, for investors of (i) well-diversified US equity portfolios; (ii) S&P 500 futures and options traded at Chicago Mercantile Exchange Globex; (iii) exchange traded funds (ETFs) related to the S&P 500. The new MS DCS-EGARCH models are alternatives to the recent MS Beta-t-EGARCH model that uses the symmetric Student’s t distribution for the error term. For the new models, we use more flexible asymmetric probability distributions for the error term: Skew-Gen-t (skewed generalized t), EGB2 (exponential generalized beta of the second kind) and NIG (normal-inverse Gaussian) distributions. For all MS DCS-EGARCH models, we identify high- and low-volatility periods for the S&P 500. We find that the statistical performance of the new MS DCS-EGARCH models is superior to that of the MS Beta-t-EGARCH model. As a practical application, we perform systematic risk analysis by forecasting VaR and ES.

Abbreviations: single regime (SR); Markov-switching (MS); dynamic conditional score (DCS); exponential generalized autoregressive conditional heteroscedasticity (EGARCH); value-at-risk (VaR); expected shortfall (ES); Standard & Poor's 500 (S&P 500); exchange traded funds (ETFs); Skew-Gen-t (skewed generalized t); EGB2 (exponential generalized beta of the second kind); NIG (normal-inverse Gaussian); log-likelihood (LL); standard deviation (SD); partial autocorrelation function (PACF); likelihood-ratio (LR); ordinary least squares (OLS); heteroscedasticity and autocorrelation consistent (HAC); Akaike information criterion (AIC); Bayesian information criterion (BIC); Hannan-Quinn criterion (HQC).
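The VaR and ES targets of the forecasting exercise can be illustrated with a simple empirical (historical) estimator; this stdlib-only sketch on made-up returns is not the paper's MS DCS-EGARCH forecast, just a reminder of what is being forecast:

```python
def empirical_var_es(returns, level=0.05):
    """Empirical value-at-risk and expected shortfall at a tail level.
    VaR is the loss at the level-quantile of the return distribution;
    ES is the average loss on the days at least as bad as the VaR."""
    losses = sorted(-r for r in returns)          # losses, ascending
    k = max(1, int(round(level * len(losses))))   # number of tail days
    tail = losses[-k:]                            # the k worst days
    return tail[0], sum(tail) / k                 # (VaR, ES)

# Illustrative daily returns; with level=0.20 the two worst days form the tail.
returns = [0.01, -0.02, 0.03, -0.05, 0.002, -0.01, 0.015, -0.04, 0.0, 0.02]
var_20, es_20 = empirical_var_es(returns, level=0.20)
print(var_20, es_20)
```

By construction ES is never smaller than VaR, which is why regulators treat it as the more conservative of the two measures.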


5.
This paper outlines a theory of what might be going wrong in the monetary SVAR (structural vector autoregression) literature and provides supporting empirical evidence. The theory is that macroeconomists may be attempting to identify structural forms that do not exist, given the true distribution of the innovations in the reduced-form VAR. This paper shows that this problem occurs whenever (1) some innovation in the VAR has an infinite-variance distribution and (2) the matrix of coefficients on the contemporaneous terms in the VAR's structural form is nonsingular. Since (2) is almost always required for SVAR analysis, it is germane to test hypothesis (1). Hence, in this paper, we fit α-stable distributions to the residuals from 3-lag and 12-lag monetary VARs and, using a parametric-bootstrap method, reject the null hypothesis of finite variance (equivalently, α = 2) for all 12 error terms in the two VARs. These results are mostly robust to a sample break at the February 1984 observation. Moreover, ARCH tests suggest that the shocks from the subperiod VARs are homoskedastic in seven of 24 instances. Next, we compare the fits of the α-stable distributions with those of t distributions and a GARCH(1,1) shock model. This analysis suggests that the time-invariant α-stable distributions provide the best fits for two of six shocks in the VAR(12) specification and three of six shocks in the VAR(3). Finally, we use the GARCH model as a filter to obtain homoskedastic shocks, which also prove to have α < 2, according to ML estimates.
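Hypothesis (1) concerns α-stable innovations, which can be simulated with the Chambers-Mallows-Stuck transform; a stdlib-only sketch for the symmetric case (β = 0, α ≠ 1), with α = 1.5 chosen arbitrarily for illustration:

```python
import math
import random

def stable_symmetric(alpha, n, seed=0):
    """Simulate n draws from a symmetric alpha-stable law via the
    Chambers-Mallows-Stuck transform (beta = 0, alpha != 1).
    For alpha < 2 the variance of the law is infinite."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = rng.uniform(-math.pi / 2, math.pi / 2)   # uniform angle
        w = rng.expovariate(1.0)                     # unit exponential
        x = (math.sin(alpha * u) / math.cos(u) ** (1 / alpha)
             * (math.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))
        out.append(x)
    return out

xs = stable_symmetric(1.5, 10000)
# The sample variance of an infinite-variance law grows without bound as the
# sample grows, which is the paper's point (1) about structural identification.
print(max(abs(x) for x in xs))
```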

6.
This article proposes a threshold stochastic volatility model that generates volatility forecasts specifically designed for value at risk (VaR) estimation. The method incorporates extreme downside shocks by modelling left-tail returns separately from other returns. Left-tail returns are generated with a t-distributional process based on the historically observed conditional excess kurtosis. This specification allows VaR estimates to be generated with extreme downside impacts, yet remains empirically widely applicable. This article applies the model to daily returns of seven major stock indices over a 22-year period and compares its forecasts to those of several other forecasting methods. Based on back-testing outcomes and likelihood ratio tests, the new model provides reliable estimates and outperforms others.

7.
This paper investigates stock–bond portfolios' tail risks, such as value-at-risk (VaR) and expected shortfall (ES), and the way in which these measures have been affected by the global financial crisis. Semiparametric t-copulas adequately model the joint distributions of stock–bond returns for the G7 countries and Australia. Empirical results show that the (negative) weak stock–bond return dependence has increased significantly after the crisis for seven of the eight countries, Italy being the exception. However, both VaR and ES have increased for all eight countries. Before the crisis, the minimum portfolio VaR and ES were achieved at an interior solution only for the US, the UK, Australia, Canada and Italy. After the crisis, a corner solution was found for all eight countries. Evidence of “flight to quality” and “safety first” investor behaviour was strong after the global financial crisis. The semiparametric t-copula adequately forecasts out-of-sample VaR. These findings have implications for global financial regulators and the Basel Committee, whose central focus is currently on increasing capital requirements as a consequence of the recent global financial crisis.
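Copula-based dependence of this kind can be sketched in stdlib-only Python; for simplicity the example draws from a Gaussian copula rather than the paper's semiparametric t-copula (a t-copula would additionally divide both normals by a common sqrt(χ²ν/ν) factor before applying the t cdf), and the correlation of −0.3 is illustrative:

```python
import math
import random
from statistics import NormalDist

def gaussian_copula_pairs(rho, n, seed=0):
    """Draw n dependent uniform pairs from a Gaussian copula with
    correlation rho, by correlating two standard normals and mapping
    each through the normal cdf."""
    rng = random.Random(seed)
    nd = NormalDist()
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1 - rho * rho) * rng.gauss(0.0, 1.0)
        pairs.append((nd.cdf(z1), nd.cdf(z2)))   # map to uniform marginals
    return pairs

# Negative dependence, in the spirit of post-crisis stock-bond returns.
pairs = gaussian_copula_pairs(-0.3, 5000)
print(pairs[0])
```

Feeding the uniform pair through the inverse cdfs of the fitted marginals would give a joint return draw, which is how copula-based portfolio VaR is simulated.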

8.
This article extends the quasi-autoregressive (QAR) plus Beta-t-EGARCH (exponential generalized autoregressive conditional heteroscedasticity) dynamic conditional score (DCS) model. In the new DCS model, the degrees-of-freedom parameter is time varying and the tail thickness of the error term is updated by the conditional score. We compare the performance of QAR plus Beta-t-EGARCH with constant degrees of freedom (benchmark model) and QAR plus Beta-t-EGARCH with time-varying degrees of freedom (extended model). We use data from the Standard and Poor’s 500 (S&P 500) index and a random sample of 150 of its components from different industries of the United States (US) economy. For the S&P 500, all likelihood-based model selection criteria support the extended model, which identifies extreme events with significant impact on the US stock market. We find that for 59% of the 150 firms the extended model has superior statistical performance. The results suggest that the extended model is superior for industries producing products that people are usually unwilling to cut from their budgets, regardless of their financial situation. Finally, we apply both DCS models to density forecasting and to Monte Carlo value-at-risk.

9.
This article considers modelling nonnormality in returns with stable Paretian (SP) innovations in generalized autoregressive conditional heteroskedasticity (GARCH), exponential generalized autoregressive conditional heteroskedasticity (EGARCH) and Glosten-Jagannathan-Runkle generalized autoregressive conditional heteroskedasticity (GJR-GARCH) volatility dynamics. The forecasted volatilities from these dynamics are used as a proxy for the volatility parameter of the Black–Scholes (BS) model. The performance of these proxy-BS models is compared with that of the BS model with constant volatility. Using a cross section of S&P 500 options data, we find that the EGARCH volatility forecast with SP innovations is an excellent proxy for BS constant volatility in terms of pricing. We find improved hedging performance for an illustrative option portfolio. We also find that the spectral risk measure (SRM) performs better than value-at-risk (VaR) and expected shortfall (ES) in estimating option portfolio risk in the case of the proxy-BS models under SP innovations.

Abbreviations: generalized autoregressive conditional heteroskedasticity (GARCH), exponential generalized autoregressive conditional heteroskedasticity (EGARCH), Glosten-Jagannathan-Runkle generalized autoregressive conditional heteroskedasticity (GJR-GARCH)


10.
This paper is concerned with linear portfolio value-at-risk (VaR) and expected shortfall (ES) computation when the portfolio risk factors are leptokurtic, imprecise and/or vague. Following Yoshida (2009), the risk factors are modeled as fuzzy random variables in order to handle both their random variability and their vagueness. We discuss and extend the Yoshida model to some non-Gaussian distributions and provide the associated ES. Next, assuming that the risk factors' degree of imprecision changes over time, original fuzzy portfolio VaR and ES models are introduced. For a given subjectivity level fixed by the investor, these models allow the computation of pessimistic and optimistic estimates of the value-at-risk and of the expected shortfall. Finally, empirical examples carried out on three portfolios of selected French stocks show the effectiveness of the proposed methods.
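The pessimistic/optimistic pair of estimates can be caricatured with a toy interval VaR in which each observed return is imprecise within a relative band; this stdlib-only sketch is not the Yoshida fuzzy-random-variable model itself, and all values are illustrative:

```python
def interval_var(returns, level=0.05, imprecision=0.10):
    """Toy interval VaR under imprecise returns: each observation may
    vary by +/- `imprecision` (relative), and an optimistic and a
    pessimistic historical VaR are reported.  Only a caricature of the
    paper's fuzzy machinery, not the Yoshida (2009) model."""
    def hist_var(rs):
        losses = sorted(-r for r in rs)
        k = max(1, int(round(level * len(losses))))
        return losses[-k]                 # loss at the level-quantile
    shrink = [r * (1 - imprecision) if r < 0 else r * (1 + imprecision) for r in returns]
    stretch = [r * (1 + imprecision) if r < 0 else r * (1 - imprecision) for r in returns]
    return hist_var(shrink), hist_var(stretch)   # (optimistic, pessimistic)

# Illustrative returns; a 10% imprecision band widens the VaR into an interval.
returns = [0.01, -0.02, 0.03, -0.05, 0.002, -0.01, 0.015, -0.04, 0.0, 0.02]
print(interval_var(returns, level=0.10, imprecision=0.10))
```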

11.
Volatility and VaR forecasting in the Madrid Stock Exchange   (Total citations: 1; self-citations: 0; citations by others: 1)
This paper provides an empirical study assessing the forecasting performance of a wide range of models for predicting volatility and VaR in the Madrid Stock Exchange. The models' performance was measured using different loss functions and criteria. The results show that FIAPARCH processes capture and forecast the dynamics of IBEX-35 return volatility most accurately. It is also observed that assuming a heavy-tailed distribution does not improve the models' ability to predict volatility. However, when the aim is forecasting VaR, we find evidence that the Student’s t FIAPARCH outperforms the models it nests, increasingly so at lower target quantiles.
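Loss functions of the kind used to rank volatility forecasts include MSE and QLIKE against a squared-return proxy; a stdlib-only sketch with made-up numbers:

```python
import math

def mse_loss(r2, f):
    """Mean squared error between realized squared returns r2 and
    variance forecasts f."""
    return sum((a - b) ** 2 for a, b in zip(r2, f)) / len(f)

def qlike_loss(r2, f):
    """QLIKE loss log(f) + r2/f, minimized when f equals the true
    variance and robust to noise in the volatility proxy."""
    return sum(math.log(b) + a / b for a, b in zip(r2, f)) / len(f)

r2 = [1.0, 2.0, 0.5]            # squared-return volatility proxies (illustrative)
perfect = [1.0, 2.0, 0.5]       # forecasts equal to the proxy
biased = [2.0, 4.0, 1.0]        # forecasts twice too large
assert mse_loss(r2, perfect) < mse_loss(r2, biased)
assert qlike_loss(r2, perfect) < qlike_loss(r2, biased)
```

QLIKE penalizes under-prediction of variance more heavily than over-prediction, which is why it is often preferred to MSE for ranking risk models.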

12.
In this paper, we attempt to find the most important factor causing the differences in the performance of Value-at-Risk (VaR) estimation by comparing the performances of conditional and unconditional approaches. For each approach, we use various methods and models with different degrees of flexibility in their distributions, including the SU-normal distribution, one of the most flexible distribution functions. Our empirical results underscore the importance of the flexibility of the distribution function in VaR estimation models. Although it is unclear whether the conditional or the unconditional approach is better, it is clear that the more flexible the distribution used, the better the performance, regardless of the approach.

13.
Over the past few decades, much progress has been made in semiparametric modelling and estimation methods for econometric analysis. This paper is concerned with inference (i.e. confidence intervals and hypothesis testing) in semiparametric models. In contrast to the conventional approach based on t-ratios, we advocate likelihood-based inference. In particular, we study two widely applied semiparametric problems, weighted average derivatives and treatment effects, and propose semiparametric empirical likelihood and jackknife empirical likelihood methods. We derive the limiting behaviour of these empirical likelihood statistics and investigate their finite sample performance through Monte Carlo simulation. Furthermore, we extend the (delete-1) jackknife empirical likelihood toward the delete-d version with growing d and establish general asymptotic theory. This extension is crucial to deal with non-smooth objects, such as quantiles and quantile average derivatives or treatment effects, due to the well-known inconsistency phenomena of the jackknife under non-smoothness.
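The delete-1 jackknife underlying the jackknife empirical likelihood can be sketched in a few lines; the example applies it to the plug-in variance, for which the bias correction is exact (the estimator and data below are illustrative, not the paper's weighted average derivatives or treatment effects):

```python
def jackknife(estimator, data):
    """Delete-1 jackknife: recompute the estimator on each leave-one-out
    sample and return (bias-corrected estimate, jackknife variance)."""
    n = len(data)
    theta = estimator(data)
    loo = [estimator(data[:i] + data[i + 1:]) for i in range(n)]
    theta_bar = sum(loo) / n
    bias_corrected = n * theta - (n - 1) * theta_bar
    var = (n - 1) / n * sum((t - theta_bar) ** 2 for t in loo)
    return bias_corrected, var

def plug_in_variance(xs):
    """Biased (divide-by-n) plug-in variance estimator."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# The jackknife bias correction turns the plug-in variance into exactly
# the unbiased (divide-by-n-1) sample variance.
data = [1.0, 3.0, 2.0, 5.0, 4.0]
print(jackknife(plug_in_variance, data))
```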

14.

This paper investigates whether and how diversified firms in the Information and Communication Technology (ICT) sector innovate in green technologies and assesses the potential impact of these innovations on firm performance. The analysis relies on a balanced panel dataset of European ICT firms over the period 2009–2013. The results suggest an inverted U-shaped relationship between the extent of technological diversification and the likelihood of developing green technologies. Technological diversification generally increases the likelihood of green innovation, but too high a dispersion of resources across a large variety of different technologies decreases the intensity of green innovation. The results also show that the development of green technologies is positively associated with firm performance: ICT firms involved in green patenting activity perform better than ICT firms with no green patents.

15.
In this paper we explore important implications of capturing the volatility risk premium (VRP) within a parametric GARCH setting. We study the transmission mechanism of shocks from returns to risk-neutral volatility by examining the news-impact curves and impulse-response functions of risk-neutral volatility, in order to better understand how option prices respond to return innovations. We report a value of −3% for the magnitude of the average VRP and we recover the empirical densities under the physical and risk-neutral measures. Allowing for the VRP is crucial for adding flexibility to the shapes of the two distributions. In our estimation procedure, we adopt an MLE approach that incorporates both physical return and risk-neutral VIX dynamics. By introducing volatility (rather than variance) innovations in the joint likelihood function and by allowing for contemporaneous correlation between innovations in returns and the VIX, we show that we can substantially reduce the bias and improve the efficiency of the joint maximum likelihood estimator, especially for the parameters of the volatility process. Modeling returns and the VIX as a bivariate normal permits identification of a contemporaneous correlation coefficient of approximately −30% between returns and risk-neutral volatility.

16.
We suggest a Markov regime-switching (MS) Beta-t-EGARCH (exponential generalized autoregressive conditional heteroscedasticity) model for U.S. stock returns. We compare the in-sample statistical performance of the MS Beta-t-EGARCH model with that of the single-regime Beta-t-EGARCH model. For both models we consider leverage effects for conditional volatility. We use data from the Standard & Poor’s 500 (S&P 500) index and also a random sample that includes 50 components of the S&P 500. We study the outlier-discounting property of the single-regime Beta-t-EGARCH and MS Beta-t-EGARCH models. For the S&P 500, we show that for the MS Beta-t-EGARCH model extreme observations are discounted more for the low-volatility regime than for the high-volatility regime. The conditions of consistency and asymptotic normality of the maximum likelihood estimator are satisfied for both the single-regime and MS Beta-t-EGARCH models. All likelihood-based in-sample statistical performance metrics suggest that the MS Beta-t-EGARCH model is superior to the single-regime Beta-t-EGARCH model. We present an application to the out-of-sample density forecast performance of both models. The results show that the density forecast performance of the MS Beta-t-EGARCH model is superior to that of the single-regime Beta-t-EGARCH model.

17.
This study reviews estimation methods for the infinite horizon discrete choice dynamic programming models and conducts Monte Carlo experiments. We consider: the maximum likelihood estimator (MLE), the two-step conditional choice probabilities estimator, sequential estimators based on policy iterations mapping under finite dependence, and sequential estimators based on value iteration mappings. Our simulation result shows that the estimation performance of the sequential estimators based on policy iterations and value iteration mappings is largely comparable to the MLE, while they achieve substantial computation gains over the MLE by a factor of 100 for a model with a moderately large state space.
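The value iteration mapping referred to above can be sketched on a toy two-state problem; this stdlib-only example is illustrative and far smaller than the state spaces used in the experiments:

```python
def value_iteration(n_states, reward, transition, beta=0.95, tol=1e-10):
    """Solve a tiny infinite-horizon DP by value-function iteration:
    V(s) = max_a [ r(s,a) + beta * sum_s' P(s'|s,a) V(s') ]."""
    v = [0.0] * n_states
    while True:
        new_v = []
        for s in range(n_states):
            best = max(
                reward[s][a] + beta * sum(p * v[sp] for sp, p in enumerate(transition[s][a]))
                for a in range(len(reward[s]))
            )
            new_v.append(best)
        if max(abs(a - b) for a, b in zip(new_v, v)) < tol:
            return new_v
        v = new_v

# Toy problem: state 1 pays 2 per period if you stay; state 0 pays 1 if you
# stay, or you can switch to state 1 for free (a=0 stay, a=1 switch).
reward = [[1.0, 0.0], [2.0, 0.0]]
transition = [[[1.0, 0.0], [0.0, 1.0]],
              [[0.0, 1.0], [1.0, 0.0]]]
print(value_iteration(2, reward, transition, beta=0.95))
```

Because the Bellman operator is a contraction with modulus beta, the loop converges geometrically, which is the source of the computational gains the paper measures.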

18.
Nonparametric VaR Techniques: Myths and Realities   (Total citations: 2; self-citations: 0; citations by others: 2)
VaR (value-at-risk) estimates are currently based on two main techniques: the variance-covariance approach or simulation. Statistical and computational problems affect the reliability of these techniques. We illustrate a new technique – filtered historical simulation (FHS) – designed to remedy some of the shortcomings of the simulation approach. We compare the estimates it produces with traditional bootstrapping estimates.
(JEL: G19).
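A minimal stdlib-only sketch of filtered historical simulation: standardize returns by model volatilities, bootstrap the standardized residuals, and rescale by the current volatility (the input series and volatilities below are made up; in practice the volatility series comes from a fitted GARCH-type model):

```python
import random

def fhs_var(returns, vols, current_vol, level=0.01, n_sims=20000, seed=0):
    """Filtered historical simulation VaR: standardize past returns by
    their (model-implied) volatilities, bootstrap the standardized
    residuals, rescale by the current volatility, and read off the
    empirical quantile as a positive loss."""
    z = [r / s for r, s in zip(returns, vols)]     # standardized residuals
    rng = random.Random(seed)
    sims = sorted(current_vol * rng.choice(z) for _ in range(n_sims))
    k = max(1, int(round(level * n_sims)))
    return -sims[k - 1]                            # loss at the level-quantile

# Illustrative inputs: six returns with a flat 1% model volatility.
rets = [0.01, -0.02, 0.015, -0.03, 0.005, -0.01]
vols = [0.01] * 6
print(fhs_var(rets, vols, current_vol=0.012))
```

The filtering step is what distinguishes FHS from plain historical simulation: the bootstrap draws are i.i.d. residuals rather than raw returns, so the VaR adapts to the current volatility level.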

19.
We examine and compare a large number of generalized autoregressive conditional heteroskedastic (GARCH) and stochastic volatility (SV) models using series of Bitcoin and Litecoin price returns to assess the model fit for dynamics of these cryptocurrency price returns series. The various models examined include the standard GARCH(1,1) and SV with an AR(1) log-volatility process, as well as more flexible models with jumps, volatility in mean, leverage effects, t-distributed and moving average innovations. We report that the best model for Bitcoin is SV-t while it is GARCH-t for Litecoin. Overall, the t-class of models performs better than other classes for both cryptocurrencies. For Bitcoin, the SV models consistently outperform the GARCH models and the same holds true for Litecoin in most cases. Finally, the comparison of GARCH models with GARCH-GJR models reveals that the leverage effect is not significant for cryptocurrencies, suggesting that these do not behave like stock prices.

20.
We introduce a new type of heavy-tailed distribution, the normal reciprocal inverse Gaussian distribution (NRIG), to the GARCH and Glosten-Jagannathan-Runkle (1993) GARCH models, and compare its empirical performance with two other popular types of heavy-tailed distribution, the Student's t distribution and the normal inverse Gaussian distribution (NIG), using a variety of asset return series. Our results illustrate that there is no overwhelmingly dominant distribution in fitting the data under the GARCH framework, although the NRIG distribution performs slightly better than the other two types of distribution. For market index series, it is important to introduce both GJR terms and the NRIG distribution to improve the models' performance, but the evidence is ambiguous for individual stock price series. Our results also show that the GJR-GARCH NRIG model has practical advantages in quantitative risk management. Finally, the convergence of numerical solutions in maximum-likelihood estimation of GARCH and GJR-GARCH models with the three types of heavy-tailed distribution is investigated.
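The GJR terms mentioned above add a leverage loading on negative shocks; a stdlib-only sketch of the GJR-GARCH(1,1) variance recursion with illustrative parameter values:

```python
def gjr_garch_variance(returns, omega, alpha, gamma, beta):
    """GJR-GARCH(1,1) conditional-variance path: negative shocks receive
    an extra loading gamma, producing the leverage effect the paper adds
    through its GJR terms.  Parameter values here are illustrative."""
    var_t = omega / (1.0 - alpha - 0.5 * gamma - beta)  # unconditional start
    path = []
    for r in returns:
        leverage = gamma * r * r if r < 0 else 0.0      # asymmetry term
        var_t = omega + alpha * r * r + leverage + beta * var_t
        path.append(var_t)
    return path

# A negative shock raises next-period variance more than an equal positive one.
print(gjr_garch_variance([0.02], 1e-6, 0.05, 0.10, 0.85))
print(gjr_garch_variance([-0.02], 1e-6, 0.05, 0.10, 0.85))
```

Replacing the Gaussian likelihood usually paired with this recursion by an NRIG, NIG or Student's t density gives the distributional comparison the paper carries out.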


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号