Similar Literature: 14 results found (search time: 0 ms)
1.
The loss distribution approach is one of the three advanced measurement approaches to the Pillar I modeling proposed by Basel II in 2001. In this paper, one possible approximation of the aggregate and maximum loss distribution in the extremely low frequency/high severity case is given, i.e. the case of infinite mean of the loss sizes and loss inter-arrival times. In this study, independent but not identically distributed losses are considered, and the minimum loss amount is assumed to increase over time. A Monte Carlo simulation algorithm is presented and several quantiles are estimated. The same approximation is used for modeling the maximum and aggregate worldwide economy losses caused by very rare and very extreme events such as 9/11, the Russian rouble crisis, and the U.S. subprime mortgage crisis. The model parameters are fitted to a data sample of operational losses, and the respective aggregate and extremal loss quantiles are calculated.
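The infinite-mean regime described above can be illustrated with a crude Monte Carlo sketch. The following is a minimal illustration, not the paper's algorithm: it draws Pareto losses with tail index alpha < 1 (so the loss-size mean is infinite) and estimates empirical quantiles of the aggregate and maximum loss. All parameter values (alpha = 0.8, 50 losses per period) are arbitrary choices for demonstration.

```python
import random

def pareto_sample(alpha, xmin, rng):
    # Inverse-CDF draw from a Pareto(alpha) distribution with scale xmin;
    # alpha < 1 yields an infinite-mean ("extremely high severity") loss size.
    return xmin / rng.random() ** (1.0 / alpha)

def simulate_period_losses(n_sims, losses_per_period, alpha, xmin, rng):
    # Crude Monte Carlo for the aggregate and maximum loss per period.
    aggregates, maxima = [], []
    for _ in range(n_sims):
        draws = [pareto_sample(alpha, xmin, rng) for _ in range(losses_per_period)]
        aggregates.append(sum(draws))
        maxima.append(max(draws))
    return aggregates, maxima

def empirical_quantile(xs, q):
    # Simple order-statistic quantile estimate.
    xs = sorted(xs)
    return xs[min(len(xs) - 1, int(q * len(xs)))]

rng = random.Random(42)
agg, mx = simulate_period_losses(10_000, 50, alpha=0.8, xmin=1.0, rng=rng)
q95 = empirical_quantile(agg, 0.95)
q99 = empirical_quantile(agg, 0.99)
```

In the infinite-mean case the aggregate is typically dominated by the single largest loss, which one can observe here by comparing `agg` and `mx` pathwise.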

2.
Realized measures employing intra-day sources of data have proven effective for dynamic volatility and tail-risk estimation and forecasting. Expected shortfall (ES) is a tail risk measure, now recommended by the Basel Committee, involving a conditional expectation that can be semi-parametrically estimated via an asymmetric sum of squares function. The conditional autoregressive expectile class of model, used to implicitly model ES, has been extended to allow the intra-day range, not just the daily return, as an input. This model class is here further extended to incorporate information on realized measures of volatility, including realized variance and realized range (RR), as well as scaled and smoothed versions of these. An asymmetric Gaussian density error formulation allows a likelihood that leads to direct estimation and one-step-ahead forecasts of quantiles and expectiles, and subsequently of ES. A Bayesian adaptive Markov chain Monte Carlo method is developed and employed for estimation and forecasting. In an empirical study forecasting daily tail risk measures in six financial market return series, over a seven-year period, models employing the RR generate the most accurate tail risk forecasts, compared to models employing other realized measures as well as to a range of well-known competitors.
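The asymmetric sum-of-squares idea behind expectile estimation can be sketched without any of the model's dynamics. The function below computes a static tau-expectile by the standard weighted-mean fixed-point iteration; it is a toy illustration of the loss function only, not the paper's conditional autoregressive model, and the sample returns are made up.

```python
def expectile(xs, tau, tol=1e-9, max_iter=500):
    # tau-expectile: minimizer of the asymmetric sum of squares
    #   sum_i |tau - 1{x_i <= e}| * (x_i - e)^2,
    # computed by the standard weighted-mean fixed-point iteration.
    e = sum(xs) / len(xs)            # the 0.5-expectile is the mean
    for _ in range(max_iter):
        w = [tau if x > e else 1.0 - tau for x in xs]
        new_e = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
        if abs(new_e - e) < tol:
            break
        e = new_e
    return e

returns = [-3.1, -0.4, 0.2, 0.5, -1.8, 2.2, 0.1, -0.9]
e50 = expectile(returns, 0.50)
e05 = expectile(returns, 0.05)    # deep-left-tail expectile, a tail-risk proxy
```

A low-tau expectile sits below the mean and, in the semi-parametric approach the abstract mentions, serves as the intermediate quantity from which ES is backed out.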

3.
Jump Spillover in International Equity Markets (total citations: 1; self-citations: 0; citations by others: 1)
In this article we study jump spillover effects between a number of country equity indexes. In order to identify the latent historical jumps of each index, we use a Bayesian approach to estimate a jump-diffusion model on each index. We look at the simultaneous jump intensities of pairs of countries and the probabilities that jumps in large countries cause jumps or unusually large returns in other countries. In all cases, we find significant evidence of jump spillover. In addition, we find that jump spillover seems to be particularly large between countries that belong to the same regions and have similar industry structures, whereas, interestingly, the sample correlations between the countries have difficulties in capturing the jump spillover effects.
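The simultaneous-jump-intensity idea can be sketched with a simulation, assuming a Bernoulli approximation to the Poisson jump counter and made-up parameter values. This is not the article's Bayesian estimator; it only shows how a co-jump frequency between two simulated indexes could be tabulated once jump days have been identified.

```python
import random

def jump_diffusion_returns(n_days, mu, sigma, lam, jmu, jsigma, rng):
    # Daily log-returns from a discretized jump-diffusion: Gaussian diffusion
    # plus a Bernoulli(lam)-approximated Poisson jump component.
    rets, jump_days = [], []
    for _ in range(n_days):
        r = mu + sigma * rng.gauss(0.0, 1.0)
        jumped = rng.random() < lam          # at most one jump per day
        if jumped:
            r += rng.gauss(jmu, jsigma)
        rets.append(r)
        jump_days.append(jumped)
    return rets, jump_days

def cojump_frequency(jumps_a, jumps_b):
    # Fraction of days on which both indexes jump: a crude spillover proxy.
    both = sum(1 for a, b in zip(jumps_a, jumps_b) if a and b)
    return both / len(jumps_a)

rng = random.Random(7)
_, jumps_a = jump_diffusion_returns(2500, 0.0003, 0.010, 0.02, -0.03, 0.02, rng)
_, jumps_b = jump_diffusion_returns(2500, 0.0002, 0.012, 0.02, -0.03, 0.02, rng)
freq = cojump_frequency(jumps_a, jumps_b)
```

Because the two simulated indexes here are independent, the co-jump frequency stays near the product of the individual intensities; spillover in the article's sense would show up as a frequency well above that benchmark.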

4.
Determining risk contributions of unit exposures to portfolio-wide economic capital is an important task in financial risk management. Computing risk contributions involves difficulties caused by rare-event simulations. In this study, we address the problem of estimating risk contributions when the total risk is measured by value-at-risk (VaR). Our proposed estimator of VaR contributions is based on the Metropolis-Hastings (MH) algorithm, which is one of the most prevalent Markov chain Monte Carlo (MCMC) methods. Unlike existing estimators, our MH-based estimator consists of samples from the conditional loss distribution given a rare event of interest. This feature enhances sample efficiency compared with the crude Monte Carlo method. Moreover, our estimator is consistent and asymptotically normal, and is widely applicable to various risk models having a joint loss density. Our numerical experiments based on simulation and real-world data demonstrate that in various risk models, even those having high-dimensional (≈500) inhomogeneous margins, our MH estimator has smaller bias and mean squared error when compared with existing estimators.
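The core idea of sampling from the conditional loss distribution given a rare event can be sketched in a few lines. The example below is a deliberate simplification with two independent standard-normal losses and an assumed threshold of 4.0: a Metropolis-Hastings chain is restricted to the exceedance region and the first loss is averaged as a crude contribution estimate. Every retained state lies inside the rare event, unlike crude Monte Carlo, where almost all draws would be wasted.

```python
import math
import random

def mh_conditional_losses(threshold, n_steps, rng, step=0.5):
    # Metropolis-Hastings chain targeting the joint density of two independent
    # standard-normal losses restricted to the rare event L1 + L2 >= threshold.
    def log_density(l1, l2):
        return -0.5 * (l1 * l1 + l2 * l2)

    x = (threshold / 2.0, threshold / 2.0)   # start on the event boundary
    samples = []
    for _ in range(n_steps):
        cand = (x[0] + step * rng.gauss(0, 1), x[1] + step * rng.gauss(0, 1))
        if cand[0] + cand[1] >= threshold:   # reject moves leaving the event
            if math.log(max(rng.random(), 1e-300)) < log_density(*cand) - log_density(*x):
                x = cand
        samples.append(x)
    return samples

rng = random.Random(3)
samples = mh_conditional_losses(4.0, 20_000, rng)
contrib_1 = sum(l1 for l1, _ in samples) / len(samples)  # contribution of L1
```

By symmetry both losses contribute about half of the conditional total here; the paper's estimator targets the conditional distribution in general risk models, which this toy does not attempt to reproduce.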

5.
We formulate a mean-variance portfolio selection problem that accommodates qualitative input about expected returns and provide an algorithm that solves the problem. This model and algorithm can be used, for example, when a portfolio manager determines that one industry will benefit more from a regulatory change than another but is unable to quantify the degree of difference. Qualitative views are expressed in terms of linear inequalities among expected returns. Our formulation builds on the Black-Litterman model for portfolio selection. The algorithm makes use of an adaptation of the hit-and-run method for Markov chain Monte Carlo simulation. We also present computational results that illustrate advantages of our approach over alternative heuristic methods for incorporating qualitative input.
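The hit-and-run step used for sampling under such linear-inequality views can be sketched directly: pick a random direction, intersect the line with the polytope of constraints, and jump to a uniform point on the resulting chord. The example below samples from an invented two-asset view "mu2 <= mu1" with box bounds; it is a generic hit-and-run sketch, not the authors' adapted algorithm.

```python
import random

def hit_and_run(constraints, x0, n_steps, rng):
    # Hit-and-run over the polytope {x : a.x <= b for each (a, b) in constraints}.
    # Each step draws a random direction and moves to a uniform point on the
    # feasible chord through the current point.
    x = list(x0)
    dim = len(x0)
    samples = []
    for _ in range(n_steps):
        d = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        lo, hi = -1e9, 1e9                     # feasible t-interval for x + t*d
        for a, b in constraints:
            ad = sum(ai * di for ai, di in zip(a, d))
            slack = b - sum(ai * xi for ai, xi in zip(a, x))
            if abs(ad) < 1e-12:
                continue
            t = slack / ad
            if ad > 0:
                hi = min(hi, t)
            else:
                lo = max(lo, t)
        t = lo + rng.random() * (hi - lo)
        x = [xi + t * di for xi, di in zip(x, d)]
        samples.append(tuple(x))
    return samples

# Qualitative view "mu2 <= mu1" plus box bounds, encoded as a.x <= b rows.
cons = [([-1, 1], 0.0),
        ([1, 0], 0.2), ([-1, 0], 0.2),
        ([0, 1], 0.2), ([0, -1], 0.2)]
rng = random.Random(1)
pts = hit_and_run(cons, [0.05, 0.0], 2000, rng)
```

Every sampled expected-return vector honors the ordering view exactly, which is the property the formulation exploits in place of a point estimate of the view's magnitude.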

6.
Bond rating Transition Probability Matrices (TPMs) are built over a one-year time-frame, and for many practical purposes, like the assessment of risk in portfolios or the computation of banking Capital Requirements (e.g. the new IFRS 9 regulation), one needs to compute the TPM and probabilities of default over a smaller time interval. In the context of continuous time Markov chains (CTMC) several deterministic and statistical algorithms have been proposed to estimate the generator matrix. We focus on the Expectation-Maximization (EM) algorithm by Bladt and Sorensen [J. R. Stat. Soc. Ser. B (Stat. Method.), 2005, 67, 395–410] for a CTMC with an absorbing state for such estimation. This work's contribution is threefold. Firstly, we provide directly computable closed-form expressions for quantities appearing in the EM algorithm and the associated information matrix, allowing easy approximation of confidence intervals; previously, these quantities had to be estimated numerically, and the closed forms yield considerable computational speedups. Secondly, we prove convergence to a single set of parameters under very weak conditions (for the TPM problem). Finally, we provide a numerical benchmark of our results against other known algorithms, in particular on several problems related to credit risk. The EM algorithm we propose, equipped with the new formulas (and error criteria), outperforms other known algorithms in several metrics, in particular with much less overestimation of probabilities of default in higher ratings than other statistical algorithms.
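Once a generator matrix Q has been estimated, the TPM over any sub-annual horizon t is exp(tQ). A minimal sketch, assuming a made-up three-state rating chain whose last state ("default") is absorbing, using a plain Taylor-series matrix exponential (adequate for small, well-scaled generators; not production linear algebra):

```python
def mat_mul(A, B):
    # Dense square matrix product.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def expm(Q, t, terms=30):
    # Truncated Taylor series for P(t) = exp(tQ) = sum_k (tQ)^k / k!.
    n = len(Q)
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in P]
    tQ = [[t * q for q in row] for row in Q]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in mat_mul(term, tQ)]
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

# Illustrative generator for a toy 3-state rating chain; rows sum to zero and
# state 2 ("default") is absorbing, so its row is all zeros.
Q = [[-0.11,  0.10, 0.01],
     [ 0.05, -0.15, 0.10],
     [ 0.00,  0.00, 0.00]]
P_quarter = expm(Q, 0.25)   # quarterly transition probability matrix
```

Because each row of Q sums to zero, every row of exp(tQ) sums to one, and the absorbing row stays exactly (0, 0, 1) at every horizon, which is the property that makes sub-annual default probabilities directly readable from the last column.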

7.
This paper provides the explicit solution to the three-factor diffusion model recently proposed by the Danish Society of Actuaries to the Danish industry of life insurance and pensions. The solution is obtained by use of the known general solution to multidimensional linear stochastic differential equation systems. Starting from the explicit solution, we establish the conditional distribution of the future state variables, which allows for exact simulation. Using exact simulation, we illustrate how simulation of the system can be improved compared to a standard Euler scheme. In order to analyze the effect of choosing the exact simulation scheme over the traditional Euler approximation scheme frequently applied by practitioners, we carry out a simulation study. We show that due to its recursive nature, the Euler scheme becomes computationally expensive as it requires a small step size in order to minimize discretization errors. Using our exact simulation scheme, one is able to cut these computational costs significantly and obtain even better forecasts. As probability density tail behavior is key to expected investment portfolio performance, we further conduct a risk analysis in which we compare well-known risk measures under both schemes. Finally, we conduct a sensitivity analysis and find that the relative performance of the two schemes depends on the chosen model parameter estimates.
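The exact-versus-Euler comparison can be illustrated on a one-factor Ornstein-Uhlenbeck process, a one-dimensional stand-in for the three-factor linear system (all parameter values below are invented). The exact scheme draws from the known Gaussian conditional transition and is therefore free of discretization bias at any step size, while the Euler scheme incurs error that only vanishes as the step shrinks.

```python
import math
import random

def simulate_ou(x0, kappa, theta, sigma, T, n_steps, rng, exact=True):
    # One-factor Ornstein-Uhlenbeck path: dX = kappa*(theta - X) dt + sigma dW.
    # exact=True uses the Gaussian conditional transition (no discretization
    # bias); exact=False uses the standard Euler approximation.
    dt = T / n_steps
    x, path = x0, [x0]
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)
        if exact:
            m = theta + (x - theta) * math.exp(-kappa * dt)
            s = sigma * math.sqrt((1.0 - math.exp(-2.0 * kappa * dt)) / (2.0 * kappa))
            x = m + s * z
        else:
            x = x + kappa * (theta - x) * dt + sigma * math.sqrt(dt) * z
        path.append(x)
    return path

rng = random.Random(0)
exact_path = simulate_ou(0.02, 1.5, 0.03, 0.01, 1.0, 252, rng, exact=True)
```

With a coarse grid the difference is stark: setting sigma = 0 makes both schemes deterministic, and the exact scheme reproduces the ODE solution at any step size while one-step Euler overshoots straight to the mean-reversion level.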

8.
This article presents joint econometric analysis of interest rate risk, issuer-specific risk (credit risk) and bond-specific risk (liquidity risk) in a reduced-form framework. We estimate issuer-specific and bond-specific risk from corporate bond data in the German market. We find that bond-specific risk plays a crucial role in the pricing of corporate bonds. We observe substantial differences between different bonds with respect to the relative influence of issuer-specific vs. bond-specific spread on the level and the volatility of the total spread. Issuer-specific risk exhibits strong autocorrelation and a strong impact of weekday effects, the level of the risk-free term structure and the debt to value ratio. Moreover, we can observe some impact of the stock market volatility, the respective stock's return and the distance to default. For the bond-specific risk we find strong autocorrelation, some impact of the stock market index, the stock market volatility, weekday effects and monthly effects as well as a very weak impact of the risk-free term structure and the specific stock's return. Altogether, the determinants of the spread components vary strongly between different bonds/issuers.

9.
This paper investigates the time-varying behavior of systematic risk for 18 pan-European sectors. Using weekly data over the period 1987–2005, six different modeling techniques in addition to the standard constant coefficient model are employed: a bivariate t-GARCH(1,1) model, two Kalman filter (KF)-based approaches, a bivariate stochastic volatility model estimated via the efficient Monte Carlo likelihood technique as well as two Markov switching models. A comparison of ex-ante forecast performances of the different models indicates that the random walk process in connection with the KF is the preferred model to describe and forecast the time-varying behavior of sector betas in a European context.

10.
This paper investigates the role of stochastic volatility and return jumps in reproducing the volatility dynamics and the shape characteristics of the Korean Composite Stock Price Index (KOSPI) 200 returns distribution. Using efficient method of moments and reprojection analysis, we find that stochastic volatility models, both with and without return jumps, capture return dynamics surprisingly well. The stochastic volatility model without return jumps, however, cannot fully reproduce the conditional kurtosis implied by the data. Return jumps successfully complement this gap. We also find that return jumps are essential in capturing the volatility smirk effects observed in short-term options.
Sol Kim

11.
Loss Reserving is a major topic of actuarial sciences with a long tradition and well-established methods, both in science and in practice. With the implementation of Solvency II, stochastic methods and modelling the stochastic behaviour of individual claim portfolios will receive additional attention. The author has recently proposed a three-dimensional (3D) stochastic model of claim development. It models a reasonable claim process from first principles by integrating realistic processes of claim occurrence, claim reporting and claim settlement. This paper investigates the ability of the Chain Ladder (CL) method to adequately forecast outstanding claims within the framework of the 3D model. This allows one to find conditions under which the CL method is adequate for outstanding claim prediction, and others in which it fails. Monte Carlo (MC) simulations are performed, lending support to the theoretical results. The analysis leads to additional suggestions concerning the use of the CL method.
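For reference, the Chain Ladder forecast itself is a short computation: volume-weighted development factors are estimated from a cumulative run-off triangle and applied to the latest diagonal. The triangle below (cumulative paid claims by origin year and development year) is invented for illustration.

```python
def chain_ladder(triangle):
    # Classical chain-ladder on a cumulative run-off triangle: row i has
    # n - i known development entries; factors are volume-weighted ratios
    # of successive development columns.
    n = len(triangle)
    factors = []
    for j in range(n - 1):
        num = sum(triangle[i][j + 1] for i in range(n - 1 - j))
        den = sum(triangle[i][j] for i in range(n - 1 - j))
        factors.append(num / den)
    ultimates = []
    for i, row in enumerate(triangle):
        value = row[n - 1 - i]              # latest known diagonal entry
        for f in factors[n - 1 - i:]:       # roll forward to ultimate
            value *= f
        ultimates.append(value)
    return factors, ultimates

triangle = [
    [100, 150, 170, 180],
    [110, 168, 188],
    [120, 180],
    [130],
]
factors, ultimates = chain_ladder(triangle)
```

The outstanding-claims forecast per origin year is the ultimate minus the latest diagonal entry; the paper's question is when this deterministic roll-forward is adequate under the richer 3D claim process.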

12.
13.
Central banks react even to intraday changes in the exchange rate; however, in most cases, intervention data are available only at a daily frequency. This temporal aggregation makes it difficult to identify the effects of interventions on the exchange rate. We apply the Bayesian Markov-chain Monte Carlo (MCMC) approach to this endogeneity problem. We use "data augmentation" to obtain intraday intervention amounts and estimate the efficacy of interventions using the augmented data. Applying this new method to Japanese data, we find that an intervention of 1 trillion yen moves the yen/dollar rate by 1.8%, which is more than twice as much as the magnitude reported in previous studies applying ordinary least squares to daily observations. This shows the quantitative importance of the endogeneity problem due to temporal aggregation.

14.

Copyright © Beijing Qinyun Science and Technology Development Co., Ltd. 京ICP备09084417号