21.
Paralleling regulatory developments, we devise value-at-risk and expected shortfall type risk measures for the potential losses arising from using misspecified models when pricing and hedging contingent claims. Essentially, P&L from model risk corresponds to P&L realized on a perfectly hedged position. Model uncertainty is expressed by a set of pricing models, each of which represents asset price dynamics alternative to the model used for pricing. P&L from model risk is determined relative to each of these models. Using market data, a unified loss distribution is obtained by weighting models according to a likelihood criterion involving both calibration quality and model parsimony. Examples demonstrate the magnitude of model risk and the corresponding capital buffers necessary to protect trading book positions against unexpected losses from model risk. A further application of the framework demonstrates the calculation of the gap risk of a barrier option under a semi-static hedging strategy.
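The abstract does not spell out the weighting scheme; one standard way to combine a likelihood criterion with a parsimony penalty is Akaike weights, and the sketch below uses them to pool per-model hedging-error losses into a unified loss distribution. All names, AIC values, and the simulated losses are illustrative assumptions, not the paper's method.

```python
import math
import random

def akaike_weights(aics):
    """Akaike weights: exp(-delta_AIC / 2), normalized to sum to one.
    AIC already trades off fit (likelihood) against parsimony."""
    best = min(aics)
    raw = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(raw)
    return [r / total for r in raw]

def mixture_var(losses_per_model, weights, alpha=0.99):
    """Empirical VaR of the weight-mixed loss distribution: each model's
    simulated losses enter the pooled sample with its model weight."""
    pooled = []
    for losses, w in zip(losses_per_model, weights):
        per_obs = w / len(losses)
        pooled.extend((loss, per_obs) for loss in losses)
    pooled.sort(key=lambda t: t[0])
    cum = 0.0
    for loss, w in pooled:
        cum += w
        if cum >= alpha:
            return loss
    return pooled[-1][0]

random.seed(7)
# Hypothetical hedging-error losses under three alternative pricing models.
losses = [[random.gauss(0, s) for _ in range(10000)] for s in (1.0, 1.5, 2.0)]
w = akaike_weights([100.0, 103.0, 101.0])  # illustrative AIC values
var99_mix = mixture_var(losses, w, 0.99)   # capital-buffer-style figure
```

The mixture quantile always lies between the best-case and worst-case per-model quantiles, which is what makes it usable as a model-risk buffer.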
22.
Chee Yeow Lim Patricia Mui-Siang Tan 《Review of Quantitative Finance and Accounting》2007,29(4):353-370
The SEC issued FRR No. 48 in 1997 to enhance public disclosure of firms' exposures to market risk. We examine whether the quantitative value-at-risk (VaR) estimates disclosed by 81 non-financial firms during the period 1997–2002 are value-relevant using the earnings-returns relation. The empirical results indicate that high VaR is associated with a weaker earnings-returns relation. Further analysis shows that VaR is positively and significantly associated with future stock return volatility. Our evidence suggests that investors perceive the earnings of firms with substantial market risk exposure to be less persistent, and adjust the future abnormal earnings for the higher risk exposure, resulting in a lower expected rate of return.
23.
Using data on Asian equity markets, we observe that during periods of financial turmoil, deviations from the mean-variance framework become more severe, resulting in periods with additional downside risk to investors. Current risk management techniques that fail to take this additional downside risk into account will underestimate the true Value-at-Risk, with greater severity during periods of financial turmoil. We provide a conditional approach to the Value-at-Risk methodology, known as conditional VaR-x, which allows for additional tail fatness in the distribution of expected returns in order to capture the time variation of non-normalities. These conditional VaR-x estimates are then compared to those based on J.P. Morgan's RiskMetrics™ methodology, and we find that our model provides improved forecasts of the Value-at-Risk. We are therefore able to show that our conditional VaR-x estimates better capture the nature of downside risk, which is particularly crucial in times of financial crises.
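The core idea of a VaR-x style estimate is to replace the normal quantile in a RiskMetrics-type calculation with a fatter-tailed Student-t quantile. The sketch below is a stdlib-only illustration of that idea, not the paper's estimator: the EWMA recursion, the choice of four degrees of freedom, and the Monte Carlo quantile are all assumptions made to keep the example self-contained.

```python
import math
import random
from statistics import NormalDist

def ewma_vol(returns, lam=0.94):
    """RiskMetrics-style EWMA conditional volatility forecast."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return math.sqrt(var)

def student_t_quantile(alpha, df, n=100000, seed=1):
    """Monte Carlo quantile of a unit-variance Student-t.
    df must be a positive integer (chi-square built as a sum of squares)."""
    rng = random.Random(seed)
    scale = math.sqrt((df - 2) / df)  # rescale raw t to unit variance
    draws = []
    for _ in range(n):
        z = rng.gauss(0, 1)
        chi2 = sum(rng.gauss(0, 1) ** 2 for _ in range(df))
        draws.append(scale * z / math.sqrt(chi2 / df))
    draws.sort()
    return draws[int(alpha * n)]

random.seed(0)
returns = [random.gauss(0, 0.01) for _ in range(500)]  # toy return series
sigma = ewma_vol(returns)
var_normal = -NormalDist().inv_cdf(0.01) * sigma        # RiskMetrics-style
var_t = -student_t_quantile(0.01, df=4) * sigma          # fat-tailed variant
```

With four degrees of freedom the standardized t 1% quantile is roughly −2.65 versus −2.33 for the normal, so the fat-tailed VaR is systematically larger; that gap is exactly the extra downside cushion the abstract argues for during turmoil.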
24.
Ibrahim Ergen 《Quantitative Finance》2015,15(6):1013-1030
This paper proposes a two-step methodology for Value-at-Risk prediction. The first step involves estimation of a GARCH model using quasi-maximum likelihood estimation, and the second step fits the model-filtered returns with the skewed t distribution of Azzalini and Capitanio [J. R. Stat. Soc. B, 2003, 65, 367–389]. The predictive performance of this method is compared to the single-step joint estimation of the same data-generating process, to the well-known GARCH-EVT model, and to a comprehensive set of other market risk models. Backtesting results show that the proposed two-step method outperforms most benchmarks, including the classical joint estimation method of the same data-generating process, and performs competitively with the GARCH-EVT model. This paper recommends two robust models to risk managers of emerging market stock portfolios, both estimated in two steps: the GJR-GARCH-EVT model and the two-step GARCH-St model proposed in this study.
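The two-step structure can be sketched in a few dozen lines: step one filters returns through a GARCH(1,1) recursion fitted by Gaussian quasi-maximum likelihood, step two takes a quantile of the standardized residuals. This is a simplified illustration under stated assumptions: the coarse grid search stands in for a numerical optimizer, and an empirical quantile stands in for the paper's skewed-t fit.

```python
import math
import random

def garch_filter(returns, omega, alpha, beta):
    """Conditional variances from a GARCH(1,1) recursion."""
    var = [sum(r * r for r in returns) / len(returns)]  # start at sample var
    for r in returns[:-1]:
        var.append(omega + alpha * r * r + beta * var[-1])
    return var

def qml_loglik(returns, omega, alpha, beta):
    """Gaussian quasi-likelihood: the step-one criterion."""
    var = garch_filter(returns, omega, alpha, beta)
    return -0.5 * sum(math.log(v) + r * r / v for r, v in zip(returns, var))

def fit_garch_grid(returns):
    """Crude grid-search QML fit (a real implementation would optimize)."""
    best = None
    for alpha in (0.05, 0.10, 0.15):
        for beta in (0.80, 0.85, 0.90):
            if alpha + beta >= 1:
                continue
            omega = (1 - alpha - beta) * 1e-4  # targets unconditional var
            ll = qml_loglik(returns, omega, alpha, beta)
            if best is None or ll > best[0]:
                best = (ll, omega, alpha, beta)
    return best[1:]

random.seed(2)
returns = [random.gauss(0, 0.01) for _ in range(1000)]  # toy data
omega, alpha, beta = fit_garch_grid(returns)
var = garch_filter(returns, omega, alpha, beta)
# Step two: tail quantile of standardized residuals (the paper fits a
# skewed t here; the empirical quantile keeps the sketch self-contained).
z = sorted(r / math.sqrt(v) for r, v in zip(returns, var))
q01 = z[int(0.01 * len(z))]
next_vol = math.sqrt(omega + alpha * returns[-1] ** 2 + beta * var[-1])
var_forecast = -q01 * next_vol  # one-day 99% VaR forecast
```

Separating the variance fit from the tail fit is what makes the second step cheap: any tail model can be swapped in without re-estimating the GARCH parameters.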
25.
In this study, eight generalized autoregressive conditional heteroskedasticity (GARCH) types of variance specifications and two return distribution settings, the normal and the skewed generalized Student's t (SGT) of Theodossiou (1998), totaling nine GARCH-based models, are utilized to forecast the volatility of six stock indices, and then both the out-of-sample-period value-at-risk (VaR) and the expected shortfall (ES) are estimated following the rolling window approach. Moreover, the in-sample VaR is estimated for both the global financial crisis (GFC) period and the non-GFC period. Subsequently, through several accuracy measures, the nine models are evaluated in order to explore the influence of long memory, leverage, and distribution effects on the performance of VaR and ES forecasts. As shown by the empirical results of the nine models, the long memory, leverage, and distribution effects are present in the stock markets. Moreover, regarding the out-of-sample VaR forecasts, long memory is the most important effect, followed by the leverage effect, for the low level, whereas the distribution effect is crucial for the high level. As for the three VaR approaches, weighted historical simulation achieves the best VaR forecasting performance, followed by filtered historical simulation, whereas the parametric approach has the worst VaR forecasting performance for all the levels. Furthermore, the VaR models underestimate the true risk, whereas the ES models overestimate it, indicating that the ES risk measure is more conservative than the VaR risk measure. Additionally, based on back-testing, the VaR provides a better risk forecast than the ES, since the ES highly overestimates the true risk. Notably, long memory is important for the ES estimate, whereas both the long memory and the leverage effect are crucial for the VaR estimate.
Finally, via the in-sample VaR forecasts at the low level, it is found that long memory is important for the non-GFC period, whereas the distribution effect is crucial for the GFC period. On the other hand, at the high level, the distribution effect is crucial for both the non-GFC and the GFC periods. These results appear consistent with those found in the out-of-sample VaR forecasts. In accordance with these results, several important policy implications are proposed in this study.
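Weighted historical simulation, the best-performing approach above, replaces the equal observation weights of plain historical simulation with geometrically declining weights so that recent observations dominate. A minimal sketch of that scheme (in the spirit of Boudoukh, Richardson and Whitelaw; the decay factor is an illustrative choice):

```python
def whs_var(returns, alpha=0.99, lam=0.98):
    """Weighted historical simulation VaR: the i-th most recent return
    gets probability weight proportional to lam**i, normalized to one."""
    n = len(returns)
    weights = [(1 - lam) * lam ** i / (1 - lam ** n) for i in range(n)]
    # pair each return with its recency weight (index 0 = most recent),
    # then sort by return so the worst losses come first
    paired = sorted(zip(reversed(returns), weights))
    cum = 0.0
    for r, w in paired:
        cum += w
        if cum >= 1 - alpha:
            return -r  # VaR reported as a positive loss
    return -paired[-1][0]

# A 5% loss that happened yesterday drives the 99% VaR...
recent_crash = [0.0] * 99 + [-0.05]
# ...but the same loss 100 days ago has decayed below the 1% tail weight.
old_crash = [-0.05] + [0.0] * 99
```

This recency weighting is exactly why the approach adapts faster after a volatility spike than plain historical simulation, at the cost of effectively shrinking the sample.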
26.
Wolfgang Aussenegg Tatiana Miazhynskaia 《Financial Markets and Portfolio Management》2006,20(3):243-264
This study evaluates a set of parametric and non-parametric value-at-risk (VaR) models that quantify the uncertainty in VaR estimates in the form of a VaR distribution. We propose a new VaR approach based on Bayesian statistics in a GARCH volatility modeling environment. This Bayesian approach is compared with other parametric VaR methods (quasi-maximum likelihood and bootstrap resampling on the basis of GARCH models) as well as with non-parametric historical simulation approaches (classical and volatility-adjusted). All these methods are evaluated based on the frequency of failures and the uncertainty in VaR estimates. Within the parametric methods, the Bayesian approach is better able to produce adequate VaR estimates, and mostly results in smaller VaR variability. The non-parametric methods imply more uncertain 99%-VaR estimates, but show good performance with respect to 95%-VaRs.
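The "frequency of failures" criterion used above is commonly formalized as Kupiec's proportion-of-failures (POF) likelihood-ratio test: under correct coverage, the number of VaR exceedances is binomial, and the LR statistic is asymptotically chi-squared with one degree of freedom. A self-contained sketch:

```python
import math

def kupiec_pof(n, x, p):
    """Kupiec POF likelihood-ratio statistic for a VaR backtest:
    n observations, x exceedances, target exceedance probability p."""
    if x == 0:
        return -2 * n * math.log(1 - p)
    if x == n:
        return -2 * n * math.log(p)
    pi = x / n  # observed exceedance frequency
    ll_null = (n - x) * math.log(1 - p) + x * math.log(p)
    ll_alt = (n - x) * math.log(1 - pi) + x * math.log(pi)
    return -2 * (ll_null - ll_alt)

CHI2_95 = 3.841  # 5% critical value, chi-squared with 1 df
# 2 exceedances in 250 days at the 99% level: close to the expected
# 2.5, so the model is not rejected.
print(kupiec_pof(250, 2, 0.01) < CHI2_95)   # True
# 10 exceedances: far too many, the model is rejected.
print(kupiec_pof(250, 10, 0.01) > CHI2_95)  # True
```

Note the test only checks unconditional coverage; clustering of exceedances needs a separate independence test.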
27.
Value-at-risk (VaR) has been playing the role of a standard risk measure since its introduction. In practice, the delta-normal approach is usually adopted to approximate the VaR of portfolios with option positions. Its effectiveness, however, substantially diminishes when the portfolios concerned involve a high dimension of derivative positions with nonlinear payoffs; the lack of closed-form pricing solutions for these potentially highly correlated, American-style derivatives further complicates the problem. This paper proposes a generic simulation-based algorithm for VaR estimation that can easily be applied to any existing procedure. Our proposal leverages cross-sectional information and applies variable selection techniques to simplify the existing simulation framework. Asymptotic properties of the new approach demonstrate faster convergence due to the additional model selection component introduced. We also present numerical results that verify the effectiveness of our approach in comparison with some existing strategies.
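For reference, the delta-normal approximation the abstract starts from linearizes portfolio P&L as the dollar-delta vector times the (jointly normal) return vector, so VaR reduces to a quadratic form. The position sizes and covariance numbers below are illustrative:

```python
from statistics import NormalDist

def delta_normal_var(dollar_deltas, cov, alpha=0.99):
    """First-order (delta-normal) portfolio VaR: P&L is approximated as
    delta' * returns with returns jointly normal, so
    VaR = z_alpha * sqrt(delta' * Sigma * delta)."""
    n = len(dollar_deltas)
    variance = sum(dollar_deltas[i] * cov[i][j] * dollar_deltas[j]
                   for i in range(n) for j in range(n))
    return NormalDist().inv_cdf(alpha) * variance ** 0.5

# Two option positions with dollar deltas 100 and 50; daily return
# covariance matrix of the underlyings (illustrative numbers).
cov = [[0.0004, 0.0001],
       [0.0001, 0.0009]]
var99 = delta_normal_var([100.0, 50.0], cov)
```

The weakness the paper targets is visible in the formula: gamma and higher-order payoff curvature never enter, which is precisely what breaks down for large books of nonlinear, American-style positions.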
28.
Determining risk contributions of unit exposures to portfolio-wide economic capital is an important task in financial risk management. Computing risk contributions involves difficulties caused by rare-event simulation. In this study, we address the problem of estimating risk contributions when the total risk is measured by value-at-risk (VaR). Our proposed estimator of VaR contributions is based on the Metropolis-Hastings (MH) algorithm, one of the most prevalent Markov chain Monte Carlo (MCMC) methods. Unlike existing estimators, our MH-based estimator consists of samples from the conditional loss distribution given a rare event of interest. This feature enhances sample efficiency compared with the crude Monte Carlo method. Moreover, our method is consistent and asymptotically normal, and is widely applicable to various risk models having a joint loss density. Our numerical experiments based on simulated and real-world data demonstrate that in various risk models, even those having high-dimensional (≈500) inhomogeneous margins, our MH estimator has smaller bias and mean squared error than existing estimators.
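The key trick, sampling from the conditional loss distribution given the rare event rather than waiting for crude Monte Carlo to hit it, can be shown on a toy model. The sketch below is an assumption-laden illustration, not the paper's estimator: two independent standard-normal loss factors, a random-walk MH chain confined to the band where the total loss is near the VaR level, and the chain average as the factor-1 VaR contribution.

```python
import math
import random

def mh_var_contribution(v, eps=0.1, n_iter=50000, burn=5000, seed=3):
    """Random-walk Metropolis-Hastings sampler targeting the loss vector
    conditional on {|X1 + X2 - v| < eps}, i.e. total loss near the VaR
    level v.  Toy model: two independent standard-normal loss factors."""
    rng = random.Random(seed)
    log_density = lambda x1, x2: -0.5 * (x1 * x1 + x2 * x2)
    x1, x2 = v / 2, v / 2  # start inside the rare-event band
    total1, kept = 0.0, 0
    for i in range(n_iter):
        y1 = x1 + rng.gauss(0, 0.3)
        y2 = x2 + rng.gauss(0, 0.3)
        # target density is zero outside the band, so such proposals
        # are rejected outright; inside, use the usual MH ratio
        if abs(y1 + y2 - v) < eps:
            if math.log(rng.random()) < log_density(y1, y2) - log_density(x1, x2):
                x1, x2 = y1, y2
        if i >= burn:
            total1 += x1
            kept += 1
    return total1 / kept  # estimated VaR contribution of factor 1

contrib = mh_var_contribution(v=3.3)
```

Because the two factors are exchangeable, the true contribution is v/2, which gives a built-in correctness check; crude Monte Carlo would need on the order of 1/P(rare event) draws per conditional sample to do the same.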
29.
This paper proposes a novel extension of log and exponential GARCH models in which time-varying parameters are approximated by orthogonal polynomial systems. These expansions enable us to add and study the effects of market-wide and external international shocks on the volatility forecasts, and provide a flexible mechanism for capturing various dynamics of the parameters. We examine the performance of the new model in both theoretical and empirical analyses. We investigate the asymptotic properties of the quasi-maximum likelihood estimators under mild conditions. The small-sample behavior of the estimators is studied via Monte Carlo simulation. The performance of the proposed models, in terms of the accuracy of both volatility estimation and Value-at-Risk forecasts, is assessed in an empirical study of a set of major stock market indices. The results support the proposed specifications relative to the corresponding constant-parameter models and to other time-varying parameter models.
30.
Hung-Hsi Huang 《The GENEVA Risk and Insurance Review》2006,31(2):91-110
This study endogenously develops an optimal insurance contract under a value-at-risk (VaR) constraint. Although Wang et al. [2005] examined this problem, their assumption implied that the insured is risk neutral. Consequently, this study extends Wang et al. [2005] by considering the more realistic situation in which the insured is risk averse. The study derives the optimal insurance contract as single-deductible insurance when the VaR constraint is redundant, or as double-deductible insurance when the VaR constraint is binding. Finally, this study discusses the optimal coverage level for common forms of insurance, including deductible insurance, upper-limit insurance, and proportional coinsurance.
JEL Classification G22