Similar Articles (20 results)
1.
Capturing downside risk in financial markets: the case of the Asian Crisis (total citations: 1; self-citations: 0; other: 1)
Using data on Asian equity markets, we observe that during periods of financial turmoil, deviations from the mean-variance framework become more severe, resulting in periods of additional downside risk to investors. Risk management techniques that fail to take this additional downside risk into account will underestimate the true Value-at-Risk, with greater severity during periods of financial turmoil. We provide a conditional approach to the Value-at-Risk methodology, known as conditional VaR-x, which allows for additional tail fatness in the distribution of expected returns in order to capture the time variation of non-normalities. These conditional VaR-x estimates are then compared to those based on J.P. Morgan's RiskMetrics™ methodology, and we find that our model provides improved forecasts of the Value-at-Risk. We are therefore able to show that conditional VaR-x estimates better capture the nature of downside risk, which is particularly crucial in times of financial crisis.

2.
In this article, we document empirical stylized facts of eight emerging stock markets and estimate one-day- and one-week-ahead Value-at-Risk (VaR) for both short- and long-trading positions. We model the emerging equity market returns via APARCH, FIGARCH, and FIAPARCH models under Student-t and skewed Student-t innovations. The FIAPARCH models under the skewed Student-t distribution provide the best fit for all the equity market returns. Furthermore, we model the one-day- and one-week-ahead market risks with the conditional volatilities generated from the FIAPARCH models and document that the skewed Student-t distribution yields the best results in predicting one-day-ahead VaR forecasts for all the stock markets. The results also reveal that the predictive power of the models deteriorates for longer forecasting horizons.

3.
The standard “delta-normal” Value-at-Risk methodology requires that the returns-generating distribution for the security in question be normally distributed, with moments that can be estimated from historical data and are time-invariant. However, the stylized fact that returns are fat-tailed is likely to lead to under-prediction of both the size of extreme market movements and the frequency with which they occur. In this paper, we use extreme value theory to analyze four emerging markets belonging to the MENA region (Egypt, Jordan, Morocco, and Turkey). We focus on the tails of the unconditional distribution of returns in each market and provide estimates of their tail index behavior. We find that the returns have significantly fatter tails than the normal distribution, which motivates the extreme value approach. We then estimate the maximum daily loss by computing the Value-at-Risk (VaR) in each market. Consistent with results from other developing countries [see Gencay, R. and Selcuk, F. (2004). Extreme value theory and Value-at-Risk: relative performance in emerging markets. International Journal of Forecasting, 20, 287–303; Mendes, B. (2000). Computing robust risk measures in emerging equity markets using extreme value theory. Emerging Markets Quarterly, 4, 25–41; Silva, A. and Mendes, B. (2003). Value-at-Risk and extreme returns in Asian stock markets. International Journal of Business, 8, 17–40], we generally find that VaR estimates based on the tail index are higher than those based on a normal distribution for all markets. A proper risk assessment should therefore not neglect tail behavior in these markets, since doing so may lead to an improper evaluation of market risk. Our results should be useful to investors, bankers, and fund managers whose success depends on the ability to forecast stock price movements in these markets and to build their portfolios based on these forecasts.
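The contrast the abstract draws between tail-index and normal VaR can be sketched numerically. The following toy illustration (not the paper's code; the simulated Pareto losses, the choice k = 100, and all names are my assumptions) pairs a Hill tail-index estimate with the Weissman quantile formula and compares it against the Gaussian VaR a "delta-normal" method would report:

```python
# Toy illustration: Hill tail-index estimation plus the Weissman quantile
# formula for a fat-tailed VaR, versus the Gaussian ("delta-normal") VaR.
import math
import random
from statistics import NormalDist, mean, stdev

random.seed(42)
# Simulated fat-tailed daily losses: Pareto with tail index alpha = 3
losses = [random.paretovariate(3.0) for _ in range(2000)]

def hill_var(losses, k, p):
    """Weissman VaR at exceedance probability p from the top-k Hill estimate."""
    x = sorted(losses)
    n = len(x)
    threshold = x[n - k - 1]                     # (k+1)-th largest observation
    gamma = mean(math.log(x[n - i] / threshold)  # Hill estimate of 1/alpha
                 for i in range(1, k + 1))
    return threshold * (k / (n * p)) ** gamma

p = 0.01
var_evt = hill_var(losses, k=100, p=p)
var_normal = mean(losses) + stdev(losses) * NormalDist().inv_cdf(1 - p)
print(f"EVT VaR(99%): {var_evt:.2f}   normal VaR(99%): {var_normal:.2f}")
```

For Pareto(3) losses the true 1% quantile is 100^(1/3) ≈ 4.64; the tail-index VaR typically lands near it while the normal approximation understates it, in line with the abstract's finding.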

4.
The Value at Risk (VaR) is a risk measure widely used by financial institutions in allocating risk. VaR forecasting involves the conditional evaluation of quantiles based on currently available information. Recent advances in VaR evaluation incorporate conditional variance into the quantile estimation, yielding the Conditional Autoregressive VaR (CAViaR) models. However, the large number of alternative CAViaR models raises the issue of identifying the optimal quantile predictor. To resolve this uncertainty, we propose a Bayesian encompassing test that evaluates the predictions of various CAViaR models against a combined CAViaR model, based on the encompassing principle. This test provides a basis for forecasting combined conditional VaR estimates when there is evidence against the encompassing principle. We illustrate the test using simulated and financial daily return data. The results demonstrate that there is evidence for using combined conditional VaR estimates when forecasting quantile risk.
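As a concrete example of the model class the abstract refers to, the Symmetric Absolute Value specification of Engle and Manganelli's CAViaR updates the quantile directly from its own lag and the last absolute return. The sketch below uses toy parameters of my own choosing, not estimates from the article:

```python
# Minimal sketch (toy parameters, not estimated from data) of the Symmetric
# Absolute Value CAViaR recursion: VaR_t = b0 + b1*VaR_{t-1} + b2*|r_{t-1}|.
import random

random.seed(11)
returns = [random.gauss(0, 0.01) for _ in range(300)]

def caviar_sav(returns, b0=0.002, b1=0.7, b2=0.3, var0=0.02):
    """Generate the conditional VaR path implied by the SAV-CAViaR recursion."""
    path = [var0]
    for r in returns[:-1]:
        path.append(b0 + b1 * path[-1] + b2 * abs(r))
    return path

path = caviar_sav(returns)
print(f"final one-day VaR level: {path[-1]:.4f}")
```

In practice the parameters are fitted by minimising the quantile (pinball) loss rather than chosen by hand; the recursion above only shows the model's autoregressive structure.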

5.
We propose a multivariate nonparametric technique for generating reliable short-term historical yield curve scenarios and confidence intervals. The approach is based on a Functional Gradient Descent (FGD) estimation of the conditional mean vector and covariance matrix of a multivariate interest rate series. It is computationally feasible in large dimensions and it can account for nonlinearities in the dependence of interest rates at all available maturities. Based on FGD we apply filtered historical simulation to compute reliable out-of-sample yield curve scenarios and confidence intervals. We back-test our methodology on daily USD bond data for forecasting horizons from 1 to 10 days. Based on several statistical performance measures we find significant evidence of a higher predictive power of our method when compared to scenario-generating techniques based on (i) factor analysis, (ii) a multivariate CCC-GARCH model, or (iii) an exponential smoothing covariance estimator as in the RiskMetrics™ approach.

6.
We propose to forecast the Value-at-Risk of bivariate portfolios using copulas that are calibrated on the basis of nonparametric sample estimates of the coefficient of lower tail dependence. We compare our proposed method to a conventional copula-GARCH model in which the parameter of a Clayton copula is estimated via Canonical Maximum Likelihood. The superiority of our proposed model is exemplified by analyzing a data sample of nine different bivariate portfolios and one nine-dimensional financial portfolio. A comparison of the out-of-sample forecasting accuracy of both models confirms that our model yields economically significantly better Value-at-Risk forecasts than the competing parametric calibration strategy.
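The calibration idea rests on a closed-form link between the Clayton copula parameter and lower tail dependence, lambda_L = 2^(-1/theta), so a nonparametric estimate of lambda_L pins down theta by inversion. A hedged sketch (the value 0.3 below is a placeholder, not a number from the paper):

```python
# Inverting the Clayton lower-tail-dependence relation lambda_L = 2**(-1/theta)
# to calibrate theta from a nonparametric tail-dependence estimate.
import math

def clayton_theta_from_ltd(lambda_l):
    """Clayton copula parameter implied by a lower-tail-dependence coefficient
    lambda_l in (0, 1)."""
    return math.log(2) / (-math.log(lambda_l))

theta = clayton_theta_from_ltd(0.3)
print(f"implied Clayton theta: {theta:.3f}")
```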

7.
The study compares the predictive ability of various models in estimating intraday Value-at-Risk (VaR) and Expected Shortfall (ES) using high-frequency share price index data from sixteen countries across the world over a period of seven and a half months, from September 20, 2013 to May 07, 2014. The main emphasis of the study is on Extreme Value Theory (EVT) and on evaluating how well the Conditional EVT model performs in modeling the tails of distributions and in estimating and forecasting intraday VaR and ES measures. We follow McNeil and Frey's (2000) two-stage approach, called Conditional EVT, to estimate dynamic intraday VaR and ES, and compare its accuracy with that of other competing models. Conditional EVT is found to be the best-performing model for estimating both quantiles over the entire sample. The study is useful for market participants (such as intraday traders and market makers) involved in frequent intraday trading in such equity markets.

8.
This paper draws on six waves of Japanese household longitudinal data (Keio Household Panel Survey, KHPS) and estimates a conditional fixed effects logit model to investigate the effects of housing equity constraints and income shocks on own-to-own residential moves in Japan. By looking at contemporaneous extended Loan-to-Value (ELTV) and extended Debt-to-Income (EDTI) ratios under the recourse loan system, we examine whether housing equity constraints and negative income shocks have any impact on own-to-own residential moves. Taking account of the specific nature of the recourse loan system in Japan, we further investigate whether these effects are different between positive and negative equity households. The estimation results show that housing equity constraints and negative income shocks significantly deter own-to-own residential moves for positive equity households.

9.
In this paper we study volatility functions. Our main assumption is that the volatility is a function of time and is either deterministic, or stochastic but driven by a Brownian motion independent of the stock. Our approach is based on estimating an unknown function observed in the presence of additive noise. The setup is that prices are observed over a time interval [0, t], with no observations over (t, T); however, a value for the volatility at T is available. This value may be inferred from options or provided by expert opinion. We propose a forecasting/interpolating method for such a situation. One of the main technical assumptions is that the volatility is a continuous function with a derivative satisfying some smoothness conditions. Depending on the degree of smoothness there are two estimates, called filters: the first tracks the unknown volatility function, and the second tracks the volatility function and its derivative. Further, in the proposed model the price of an option is given by the Black–Scholes formula with the averaged future volatility. This enables us to compare the implied volatility with the averaged estimated historical volatility. This comparison is carried out for three companies and shows that the two estimates of volatility have a weak statistical relation.

10.
11.
In this study, we compare the out-of-sample forecasting performance of several modern Value-at-Risk (VaR) estimators derived from extreme value theory (EVT). Specifically, in a multi-asset study covering 30 years of stock, bond, commodity and currency market data, we analyse the accuracy of the classic generalised Pareto peaks-over-threshold approach and three recently proposed methods based on the Box–Cox transformation, L-moment estimation and the Johnson system of distributions. We find that, in their unconditional form, some of the estimators may be acceptable under current regulatory assessment rules but none of them can continuously pass more advanced tests of forecasting accuracy. In their conditional forms, forecasting power is significantly increased and the Box–Cox method proves to be the most promising estimator. However, it is also important to stress that the traditional historical simulation approach, currently the most frequently used VaR estimator in commercial banks, can not only keep up with the EVT-based methods but occasionally even outperform them (depending on the setting: unconditional versus conditional). Thus, recent claims that this simple method should generally be replaced by theoretically more advanced EVT-based methods may be premature.
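The historical-simulation benchmark the abstract highlights is simply the empirical quantile of a rolling window of past returns. A minimal sketch (window length and simulated data are illustrative assumptions, not the study's 30-year sample):

```python
# Historical-simulation VaR: the empirical alpha-quantile of the most recent
# `window` returns, reported as a positive loss number.
import random

random.seed(7)
returns = [random.gauss(0, 0.01) for _ in range(1500)]  # toy daily returns

def hs_var(returns, window=250, alpha=0.01):
    """One-day VaR at level alpha from the last `window` observations."""
    tail = sorted(returns[-window:])
    idx = max(0, int(alpha * window) - 1)
    return -tail[idx]

print(f"99% historical-simulation VaR: {hs_var(returns):.4f}")
```

The conditional variants discussed in the study would additionally rescale the window by a current volatility estimate before taking the quantile.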

12.
The potential of economic variables for financial risk measurement is an open field for research. This article studies the role of market capitalization in the estimation of Value-at-Risk (VaR). We test the performance of different VaR methodologies for portfolios with different market capitalizations, considering financial crisis periods and non-crisis periods separately. We find that VaR methods perform differently for portfolios with different market capitalizations: for portfolios containing stocks of different sizes, we obtain better VaR estimates when taking market capitalization into account. We also find that it is important to consider crisis and non-crisis periods separately when estimating VaR across different sizes. This study provides evidence that market fundamentals are relevant for risk measurement.

13.
Jun Qi & Yiyun Chen, Quantitative Finance, 2013, 13(12), 2085–2099
This paper develops a new multiple-time-scale empirical framework for market risk estimates and forecasts. Ultra-high-frequency data are used in the empirical analysis to estimate the parameters of empirical scaling laws, which gives a better understanding of the dynamic nature of the market. A comparison of the new approach with the popular Value-at-Risk and expected tail loss measures, with respect to their risk forecasts during the crisis period in 2008, is presented. The empirical results show the outperformance of the new scaling law method, which turns out to be more accurate and flexible due to scale invariance. The scaling law method promotes the use of massive real data in developing risk measurement and forecasting models.

14.
Equity release products are sorely needed in an aging population with high levels of home ownership. There has been a growing literature analyzing risk components and capital adequacy of reverse mortgages in recent years. However, little research has been done on the risk analysis of other equity release products, such as home reversion contracts. This is partly due to the dominance of reverse mortgage products in equity release markets worldwide. In this article we compare cash flows and risk profiles from the provider's perspective for reverse mortgage and home reversion contracts. An at-home/in long-term care split termination model is employed to calculate termination rates, and a vector autoregressive (VAR) model is used to depict the joint dynamics of economic variables including interest rates, house prices, and rental yields. We derive stochastic discount factors from the no arbitrage condition and price the no negative equity guarantee in reverse mortgages and the lease for life agreement in the home reversion plan accordingly. We compare expected payoffs and assess riskiness of these two equity release products via commonly used risk measures: Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR).

15.
We jointly test the effects of two types of investor uncertainty, one related to future firm performance and unrelated to accruals (cash flow uncertainty) and one directly related to accrual estimation errors (accounting quality uncertainty). Distinct from prior studies, our uncertainty estimates are based on a matched‐firm design that minimizes the mechanical relationship between the two uncertainty variables. We find a strong negative relationship between cash flow uncertainty and multiple estimates of the cost of equity capital. With respect to accounting quality uncertainty, we find a strong positive association with both expected stock returns and implied costs of equity, but only in settings that control for cash flow uncertainty. Collectively, our results suggest the need to consider different types of investor uncertainty when examining how investor uncertainty affects the cost of equity capital.

16.
Academic research has highlighted the inherent flaws within the RiskMetrics model and demonstrated the superiority of the GARCH approach in-sample. However, these results do not necessarily extend to forecasting performance. This paper seeks to answer the question of whether RiskMetrics volatility forecasts are adequate compared to those obtained from GARCH models. To answer it, stock index data are taken from 31 international markets and subjected to two exercises: a straightforward volatility forecasting exercise and a Value-at-Risk exceptions forecasting competition. Our results provide some simple answers. When forecasting the volatility of the G7 stock markets, the APARCH model in particular provides superior forecasts that differ significantly from the RiskMetrics model's in over half the cases. This result also extends to the European markets, with the APARCH model typically preferred. For the Asian markets the RiskMetrics model performs well and is significantly dominated by the GARCH models for only one market, although there is evidence that the APARCH model provides a better forecast for the larger Asian markets. Regarding the Value-at-Risk exercise, when forecasting the 1% VaR the RiskMetrics model does a poor job and is typically the worst-performing model; again the APARCH model does well. However, when forecasting the 5% VaR, the RiskMetrics model does provide an adequate performance. In short, the RiskMetrics model performs well only in forecasting the volatility of small emerging markets and for broader VaR measures.
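The RiskMetrics forecast being benchmarked is an exponentially weighted moving average of squared returns with the standard daily decay factor lambda = 0.94. A hedged sketch (the simulated returns stand in for the study's 31 market indices):

```python
# RiskMetrics-style EWMA volatility recursion:
#   sigma2_{t+1} = lam * sigma2_t + (1 - lam) * r_t**2
import math
import random

random.seed(1)
returns = [random.gauss(0, 0.012) for _ in range(500)]  # toy daily returns

def ewma_vol_forecast(returns, lam=0.94):
    """One-step-ahead volatility forecast from the EWMA recursion."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return math.sqrt(var)

sigma = ewma_vol_forecast(returns)
var_1pct = 2.326 * sigma  # 1% one-sided normal quantile, as RiskMetrics assumes
print(f"EWMA sigma: {sigma:.4f} -> 1% VaR: {var_1pct:.4f}")
```

A GARCH(1,1) forecast differs by adding an estimated constant and freely estimated persistence parameters instead of the fixed lambda.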

17.
Determining the contributions of sub-portfolios or single exposures to portfolio-wide economic capital for credit risk is an important risk measurement task. Often, economic capital is measured as the Value-at-Risk (VaR) of the portfolio loss distribution. For many of the credit portfolio risk models used in practice, the VaR contributions then have to be estimated from Monte Carlo samples. In the context of a partly continuous loss distribution (i.e. continuous except for a positive point mass on zero), we investigate how to combine kernel estimation methods with importance sampling to achieve more efficient (i.e. less volatile) estimation of VaR contributions.

18.
In this paper, we develop modeling tools to forecast Value-at-Risk and volatility at investment horizons of less than one day. We quantify market risk at a 30-min time horizon using modified GARCH models. The evaluation of intraday market risk can be useful to market participants (day traders and market makers) involved in frequent trading. As expected, volatility features a significant intraday seasonality, which motivates us to include intraday seasonal indexes in the GARCH models. We also incorporate realized variance (RV) and time-varying degrees of freedom in the GARCH models to capture more intraday information on the volatile market. An intrinsic tail risk index is introduced to assist with understanding the inherent risk level in each trading time interval. The proposed models are evaluated based on their forecasting performance for one-period-ahead volatility and Intraday Value-at-Risk (IVaR), with application to the 30 constituent stocks. We find that models with seasonal indexes generally outperform those without; RV can improve the out-of-sample forecasts of IVaR; Student-t GARCH models with time-varying degrees of freedom perform best at 0.5 and 1% IVaR, while normal GARCH models excel at 2.5 and 5% IVaR. The results show that RV and seasonal indexes are useful for forecasting intraday volatility and intraday VaR.

19.
In this paper we propose a novel Bayesian methodology for Value-at-Risk computation based on parametric Product Partition Models. Value-at-Risk is a standard tool for measuring and controlling the market risk of an asset or portfolio, and is also required for regulatory purposes. Its popularity is partly due to the fact that it is an easily understood measure of risk. The use of Product Partition Models allows us to remain in a Normal setting even in the presence of outlying points, and to obtain a closed-form expression for Value-at-Risk computation. We present and compare two different scenarios: a product partition structure on the vector of means and a product partition structure on the vector of variances. We apply our methodology to an Italian stock market data set from Mib30. The numerical results clearly show that Product Partition Models can be successfully exploited in order to quantify market risk exposure. The obtained Value-at-Risk estimates are in full agreement with Maximum Likelihood approaches, but our methodology provides richer information about the clustering structure of the data and the presence of outlying points.

20.
In this paper we analyse recovery rates on defaulted bonds using the Standard & Poor's/PMD database for the years 1981–1999. Due to the specific nature of the data (observations lie between 0 and 1), we must rely on nonstandard econometric techniques. The recovery rate density is estimated nonparametrically using a beta kernel method. This method is free of boundary bias, and a Monte Carlo comparison with competing nonparametric estimators shows that the beta kernel density estimator is particularly well suited for density estimation on the unit interval. We challenge the usual market practice of modelling recovery rates parametrically with a beta distribution calibrated on the empirical mean and variance; this assumption is unable to replicate multimodal distributions or the concentration of data at total recovery and total loss. We evaluate the impact of choosing the beta distribution on the estimation of credit Value-at-Risk.
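The boundary-bias-free estimator works by averaging beta densities whose shape parameters slide with the evaluation point, so no probability mass leaks outside [0, 1]. An illustrative sketch (my own bandwidth and simulated data, not the S&P/PMD sample):

```python
# Beta kernel density estimator on the unit interval: at point x, average the
# beta(x/bw + 1, (1-x)/bw + 1) density evaluated at each observation.
import math
import random

random.seed(3)
recoveries = [random.betavariate(2, 5) for _ in range(400)]  # toy recovery rates

def beta_pdf(y, a, b):
    """Beta(a, b) density at y in (0, 1), computed via log-gamma for stability."""
    log_b = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(y) + (b - 1) * math.log(1 - y) - log_b)

def beta_kernel_density(x, data, bw=0.05):
    """Density estimate at x in [0, 1] as an average of sliding beta kernels."""
    a, b = x / bw + 1, (1 - x) / bw + 1
    return sum(beta_pdf(y, a, b) for y in data) / len(data)

print(f"estimated density at 0.25: {beta_kernel_density(0.25, recoveries):.3f}")
```

Unlike a symmetric kernel, which would smear mass below 0 near the total-loss boundary, each beta kernel is itself supported on [0, 1].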
