Similar Literature (20 records found)
1.
Data insufficiency and reporting thresholds are two main issues in operational risk modelling. When these conditions are present, maximum likelihood estimation (MLE) may produce very poor parameter estimates. In this study, we first investigate four methods for estimating the parameters of truncated distributions from small samples: MLE, the expectation-maximization algorithm, penalized likelihood estimators, and Bayesian methods. In the absence of proper prior information, Jeffreys' prior for truncated distributions is used. Based on a simulation study for the log-normal distribution, we find that the Bayesian method gives much more credible and reliable estimates than MLE. Finally, the methods are applied to operational loss severity estimation on real data using the truncated log-normal and log-gamma distributions. With the Bayesian method, the loss distribution parameters and value-at-risk measure for every cell with loss data can be estimated separately for internal and external data. Moreover, confidence intervals for the Bayesian estimates are obtained via a bootstrap method.
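A minimal sketch of the truncation problem the abstract describes: maximum likelihood for a lognormal severity observed only above a known reporting threshold. The data, threshold, and starting values are invented; the paper's EM, penalized, and Bayesian (Jeffreys' prior) estimators are not shown.

```python
# Sketch: MLE for a lognormal severity left-truncated at a known reporting
# threshold H. Illustrative only; synthetic data, hypothetical parameters.
import numpy as np
from scipy import stats, optimize

def truncated_lognormal_nll(params, losses, threshold):
    """Negative log-likelihood of lognormal losses observed above `threshold`."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    # Density of each observed loss, renormalized by the survival
    # probability of the threshold: f(x) / (1 - F(H)).
    log_f = stats.lognorm.logpdf(losses, s=sigma, scale=np.exp(mu))
    log_tail = stats.lognorm.logsf(threshold, s=sigma, scale=np.exp(mu))
    return -(log_f - log_tail).sum()

rng = np.random.default_rng(0)
threshold = 10_000.0
sample = stats.lognorm.rvs(s=2.0, scale=np.exp(9.0), size=5_000, random_state=rng)
observed = sample[sample > threshold]   # reporting threshold censors small losses

fit = optimize.minimize(truncated_lognormal_nll,
                        x0=[np.log(observed).mean(), 1.0],
                        args=(observed, threshold), method="Nelder-Mead")
print("MLE (mu, sigma):", fit.x)
```

With small samples this likelihood surface is flat near the optimum, which is exactly the instability that motivates the Bayesian alternative in the paper.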

2.
Operational risk is an increasingly important area of risk management. Scenarios are an important modelling tool in operational risk management because viable alternative methods may not exist, owing to challenging modelling, data, and implementation issues; moreover, other methods fail to take expert information into account. The use of scenarios has been recommended by regulators; however, scenarios can be unreliable and unrealistic and can fail to take quantitative data into account. These problems have also been identified by regulators such as the Basel Committee, and at present little literature exists on generating scenarios for operational risk. In this paper we propose a method for generating operational risk scenarios. We employ cluster analysis to generate scenarios that combine expert opinion scenarios with quantitative operational risk data. We show that this scenario generation method leads to significantly improved scenarios and significant advantages for operational risk applications. In particular, for operational risk modelling our method resolves the key problem of combining two sources of information without eliminating the information content gained from expert opinions, offers tractable computational implementation, improves stress testing and what-if analyses, and applies to a wide range of quantitative operational risk data (including multivariate distributions). We conduct numerical experiments to demonstrate and validate the method's performance and compare it against scenarios generated from statistical property matching. Copyright © 2013 John Wiley & Sons, Ltd.
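One plausible shape of such a method, sketched below: cluster historical loss data, then anchor each expert scenario to its nearest cluster. The blending rule (a simple 50/50 average) and all data are illustrative inventions, not the paper's procedure.

```python
# Sketch: combining expert-opinion scenarios with historical loss data via
# cluster analysis. Synthetic data; the blending rule is an assumption.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Hypothetical historical data: (frequency, log severity) per period.
historical = np.column_stack([rng.poisson(12, 300),
                              rng.normal(10.0, 1.5, 300)])

kmeans = KMeans(n_clusters=4, n_init=10, random_state=1).fit(historical)

# Hypothetical expert scenarios expressed in the same feature space.
expert = np.array([[30.0, np.log(5e5)],    # "severe fraud year"
                   [8.0,  np.log(2e4)]])   # "benign year"

# Anchor each expert scenario to its nearest data cluster and average the
# two, so the generated scenario keeps expert content but respects the data.
nearest = kmeans.predict(expert)
generated = 0.5 * expert + 0.5 * kmeans.cluster_centers_[nearest]
print(generated)
```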

3.
In this paper we demonstrate that robust estimators improve the reliability of estimates of beta coefficients on small, thinly traded stock markets. We outline several types of robust and bounded-influence regression estimators and assess them using a jackknife methodology on data from the Johannesburg Stock Exchange. The empirical evidence confirms the hypothesis that robust estimators are more efficient than least squares estimators and indicates that least squares estimators may overestimate systematic risk in some cases.
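A minimal sketch of the comparison, assuming a Huber M-estimator as the robust alternative (the paper surveys several estimators) and synthetic returns rather than JSE data:

```python
# Sketch: OLS beta versus a robust (Huber M-estimator) beta on a series with
# a few leverage points, i.e., extreme index days with exaggerated responses.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
market = rng.normal(0.0, 0.01, 250)
stock = 0.8 * market + rng.normal(0.0, 0.02, 250)
market[:3] = np.array([0.08, -0.09, 0.07])   # extreme index days
stock[:3] = 3.0 * market[:3]                 # stock overshoots on those days

X = sm.add_constant(market)
beta_ols = sm.OLS(stock, X).fit().params[1]
beta_huber = sm.RLM(stock, X, M=sm.robust.norms.HuberT()).fit().params[1]
print(f"OLS beta:   {beta_ols:.3f}")    # dragged up by the leverage points
print(f"Huber beta: {beta_huber:.3f}")  # stays near the true 0.8
```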

4.
Estimating Beta
This paper presents evidence that Ordinary Least Squares estimators of the beta coefficients of major firms and portfolios are highly sensitive to observations of extremes in market index returns. This sensitivity is rooted in the inconsistency of the quadratic loss function with financial theory. By introducing considerations of risk aversion into the estimation procedure, using alternative estimators derived from Gini measures of variability, one can overcome this lack of robustness and improve the reliability of the results.
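For intuition, a sketch of a plain Gini-covariance beta, which replaces the quadratic loss with ranks of the market return. This is the simplest member of the family; the paper's extended-Gini estimators add a risk-aversion parameter not shown here.

```python
# Sketch: Gini regression coefficient cov(r_i, F(r_m)) / cov(r_m, F(r_m)),
# with F the empirical CDF (ranks) of the market return. Synthetic data.
import numpy as np
from scipy.stats import rankdata

def gini_beta(stock_returns, market_returns):
    """Rank-based beta; extremes in r_m enter only through their rank."""
    f_m = rankdata(market_returns) / len(market_returns)
    num = np.cov(stock_returns, f_m)[0, 1]
    den = np.cov(market_returns, f_m)[0, 1]
    return num / den

rng = np.random.default_rng(3)
m = rng.standard_t(df=3, size=500) * 0.01     # fat-tailed market returns
s = 1.2 * m + rng.normal(0, 0.01, 500)
print(f"Gini beta: {gini_beta(s, m):.3f}")    # near the true 1.2
```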

5.
This paper applies the extreme-value (EV) generalised Pareto distribution to the extreme tails of the return distributions for the S&P500, FT100, DAX, Hang Seng, and Nikkei225 futures contracts. It then uses tail estimators from these contracts to estimate spectral risk measures, which are coherent risk measures that reflect a user's risk-aversion function. It compares these to VaR and expected shortfall (ES) risk measures and compares the precision of their estimators. It also discusses the usefulness of these risk measures for clearinghouses setting initial margin requirements, and compares them to the SPAN measures typically used.
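The peaks-over-threshold machinery behind this kind of study, in a short sketch with synthetic returns standing in for the futures data:

```python
# Sketch: fit a generalized Pareto distribution to losses above a high
# threshold and read off VaR and ES from the standard POT formulas.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
returns = stats.t.rvs(df=4, size=10_000, random_state=rng) * 0.01
losses = -returns

u = np.quantile(losses, 0.95)                  # tail threshold
exceedances = losses[losses > u] - u
xi, _, beta = stats.genpareto.fit(exceedances, floc=0.0)

n, n_u = len(losses), len(exceedances)
alpha = 0.99
# Peaks-over-threshold estimators of VaR and ES at level alpha.
var_a = u + (beta / xi) * ((n / n_u * (1 - alpha)) ** (-xi) - 1)
es_a = (var_a + beta - xi * u) / (1 - xi)
print(f"VaR(99%): {var_a:.4f}  ES(99%): {es_a:.4f}")
```

A spectral risk measure then weights the whole quantile function by a risk-aversion spectrum rather than reading off a single quantile (see the sketch under item 14 below).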

6.
Motivated by the modelling of liquidity risk in fund management in a dynamic setting, we propose and investigate a class of time series models with generalized Pareto marginals: the autoregressive generalized Pareto process (ARGP), a modified ARGP, and a thresholded ARGP. These models capture key features apparent in fund liquidity data and reflect the underlying phenomena via easily interpreted, low-dimensional model parameters. We establish stationarity and ergodicity, provide a link to the class of shot-noise processes, and determine the associated interarrival distributions for exceedances. Moreover, we provide estimators for all relevant model parameters and establish consistency and asymptotic normality for all of them (except the threshold parameter, which is estimated in advance). Finally, we illustrate our approach on real-world fund redemption data and discuss the goodness of fit of the estimated models.
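To see what "a dependent series with generalized Pareto marginals" means in practice, here is a generic stand-in built from a Gaussian-copula AR(1); this is emphatically not the paper's ARGP construction (which links to shot-noise processes), just an illustration of the marginal/dependence split.

```python
# Sketch: serially dependent series with exact GPD marginals via the
# probability integral transform of a latent Gaussian AR(1). Stand-in only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
phi, xi, beta = 0.7, 0.3, 1.0          # assumed persistence and GPD shape/scale
n = 2_000

z = np.empty(n)
z[0] = rng.standard_normal()
for t in range(1, n):                  # latent stationary Gaussian AR(1)
    z[t] = phi * z[t - 1] + np.sqrt(1 - phi**2) * rng.standard_normal()

u = stats.norm.cdf(z)                            # dependent uniforms
x = stats.genpareto.ppf(u, c=xi, scale=beta)     # GPD marginals via inverse CDF
print("sample mean:", x.mean(), " theoretical mean:", beta / (1 - xi))
```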

7.
We document a robust negative relation between operational risk exposure and bank capital levels for a sample of large U.S. banks under the Basel I Capital Accords. The results are consistent with the notion that capital-constrained banks increased operational risk exposure at a time when Basel I regulations did not require an explicit capital charge for operational risk. More broadly, our results reveal a new channel by which financial regulation incentivizes banks to shift their risk taking to less regulated risk areas. We focus on operational risk because it went from a largely unregulated risk type to a major risk that accounts for about 25% of large U.S. banks' risk-weighted assets.

8.
With the regulatory requirements for risk management, Value at Risk (VaR) has become an essential tool in determining the capital reserves needed to protect against adverse market movements. The fact that VaR is not coherent has motivated the industry to explore alternative risk measures such as expected shortfall. The first objective of this paper is to propose statistical methods for estimating multiple-period expected shortfall under GARCH models. In addition to expected shortfall, we investigate a new tool, median shortfall, for measuring risk. The second objective is to develop backtesting methods for assessing the performance of expected shortfall and median shortfall estimators from statistical and financial perspectives. By applying our expected shortfall estimators and other existing approaches to seven international markets, we demonstrate the superiority of our methods in both statistical and practical evaluations. Our expected shortfall estimators likely provide an unbiased reference for setting the minimum capital required to safeguard against expected loss.
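A simulation-based sketch of the quantities involved: h-day losses from a GARCH(1,1) with assumed (as-if fitted) parameters, with expected shortfall as the mean and median shortfall as the median of losses beyond VaR. The paper's estimators are more refined; this only shows the basic objects.

```python
# Sketch: multiple-period ES and median shortfall under a GARCH(1,1) with
# hypothetical parameters, by Monte Carlo over the horizon.
import numpy as np

rng = np.random.default_rng(6)
omega, alpha, beta = 1e-6, 0.08, 0.90    # assumed GARCH(1,1) parameters
h, n_paths, level = 10, 50_000, 0.99

sigma2 = np.full(n_paths, omega / (1 - alpha - beta))  # long-run variance start
cum_ret = np.zeros(n_paths)
for _ in range(h):
    eps = rng.standard_normal(n_paths) * np.sqrt(sigma2)
    cum_ret += eps
    sigma2 = omega + alpha * eps**2 + beta * sigma2    # variance recursion

loss = -cum_ret
var_h = np.quantile(loss, level)
tail = loss[loss > var_h]
print(f"{h}-day VaR:               {var_h:.4f}")
print(f"{h}-day expected shortfall: {tail.mean():.4f}")
print(f"{h}-day median shortfall:   {np.median(tail):.4f}")
```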

9.
This paper investigates the reliability of the two-pass (TP) estimators of factor risk prices when betas (multifactor loadings) have high levels of cross-sectional correlation (multicollinearity) and/or when some of them have small cross-sectional variation (near-invariance). Our simulation results show the following. First, the TP estimators can have biases larger than 100% of true risk prices when data are generated by betas with levels of multicollinearity and invariance that can be observed in actual data. Second, the t-tests for hypotheses related to risk prices and pricing intercepts have only limited power. The levels of multicollinearity and invariance of betas can vary depending on the assets and sample periods used in estimation. Thus, we propose the use of two pre-diagnostic statistics to measure these levels. Many previous studies have investigated the finite-sample properties of the TP estimators using data generated with betas estimated from actual data. Our results indicate that simulation outcomes can lead to quite different conclusions depending on the levels of multicollinearity and invariance of the betas used to generate the data.
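The two-pass procedure itself, in a compact sketch on simulated data, with two illustrative pre-diagnostics in the spirit of the paper (the specific statistics here are my own simple choices, not the paper's):

```python
# Sketch: two-pass estimation of factor risk prices, plus diagnostics for
# beta multicollinearity and near-invariance. All data are simulated.
import numpy as np

rng = np.random.default_rng(7)
T, N, K = 600, 25, 2
factors = rng.normal(0, 0.02, (T, K))
true_betas = rng.normal(1.0, 0.3, (N, K))
true_prices = np.array([0.005, 0.003])
returns = (true_betas @ (true_prices[:, None] + factors.T)).T \
          + rng.normal(0, 0.02, (T, N))

# Pass 1: time-series regressions of each asset on the factors.
X = np.column_stack([np.ones(T), factors])
betas_hat = np.linalg.lstsq(X, returns, rcond=None)[0][1:].T   # N x K

# Pass 2: cross-sectional regression of mean returns on estimated betas.
Z = np.column_stack([np.ones(N), betas_hat])
prices_hat = np.linalg.lstsq(Z, returns.mean(axis=0), rcond=None)[0][1:]
print("estimated risk prices:", prices_hat)

# Pre-diagnostics: near-collinear or near-invariant betas make pass 2 fragile.
print("beta cross-correlation:   ", np.corrcoef(betas_hat.T)[0, 1])
print("beta cross-sectional stds:", betas_hat.std(axis=0))
```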

10.
Financial time series have two features which, in many cases, prevent the use of conventional estimators of volatilities and correlations: leptokurtotic distributions and contamination of data with outliers. Other techniques are required to achieve stable and accurate results. In this paper, we review robust estimators for volatilities and correlations and identify those best suited for use in risk management. The selection criteria were that the estimator should be stable both to fractionally small departures for all data points (fat tails) and to fractionally large departures for a small number of data points (outliers). Since risk management typically deals with thousands of time series at once, another major requirement was independence of the approach from any manual correction or data pre-processing. We recommend volatility t-estimators, for which we derive the estimation error formula for the case when the exact shape of the data distribution is unknown. A convenient robust estimator for correlations is Kendall's tau, whose drawback is that it does not guarantee positivity of the correlation matrix. We use geometric optimization, which overcomes this problem by finding the correlation matrix closest to a given matrix in the Hadamard norm. We propose weights for the norm and demonstrate the efficiency of the algorithm on large-scale problems.
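A sketch of the Kendall's-tau pipeline, with a crude eigenvalue-clipping repair standing in for the paper's Hadamard-norm geometric optimization:

```python
# Sketch: Kendall's-tau correlation matrix via the sin transform, then a
# simple PSD repair by clipping negative eigenvalues. Synthetic fat-tailed
# data; the paper's geometric-optimization repair is not reproduced here.
import numpy as np
from scipy.stats import kendalltau

def kendall_corr(data):
    """Pairwise tau, mapped to linear-correlation scale via sin(pi*tau/2)."""
    n = data.shape[1]
    c = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            tau, _ = kendalltau(data[:, i], data[:, j])
            c[i, j] = c[j, i] = np.sin(0.5 * np.pi * tau)
    return c

def nearest_psd(c, eps=1e-8):
    """Clip negative eigenvalues and rescale the diagonal back to 1."""
    w, v = np.linalg.eigh(c)
    c2 = v @ np.diag(np.maximum(w, eps)) @ v.T
    d = np.sqrt(np.diag(c2))
    return c2 / np.outer(d, d)

rng = np.random.default_rng(8)
data = rng.standard_t(df=3, size=(200, 5))   # fat-tailed synthetic series
corr = nearest_psd(kendall_corr(data))
print("PSD:", np.linalg.eigvalsh(corr).min() >= 0)
```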

11.
Even operational risk events with small direct losses can trigger substantial reputational risk and induce systemic financial risk. This paper investigates the reputational risk effects of operational risk events at Chinese commercial banks and their determinants. Using a sample of 201 operational risk events publicly disclosed by Chinese listed banks between January 2001 and March 2019, we conduct an event study analysis. The results show that, owing to the risk-sharing mechanism of national reputation, operational risk events at large state-owned banks trigger lower reputational risk and recover faster than those at joint-stock banks, while events involving senior executives cause greater reputational losses. A further regression analysis of the determinants of reputational risk shows that, although large state-owned banks are more susceptible to reputational damage than small and medium-sized banks, national reputation significantly suppresses the reputational risk effect of their operational risk events, and this suppression effect forms a "substitution effect" with banks' tangible capital. The policy implication is that, in a financial system dominated by large state-owned banks, supervision of reputational risk arising from operational risk events should focus on small and medium-sized banks and on senior executives, and the suppression of reputational risk should be strengthened by improving and enforcing regulatory regimes for tangible capital.
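The core of the event-study method the paper uses, in a short sketch: market-model abnormal returns cumulated over an event window. Data, window lengths, and the announcement effect are all invented for illustration.

```python
# Sketch: cumulative abnormal return (CAR) around a hypothetical
# operational-risk announcement, using the market model.
import numpy as np

rng = np.random.default_rng(9)
market = rng.normal(0.0003, 0.01, 300)
bank = 0.9 * market + rng.normal(0, 0.012, 300)
bank[260:266] -= 0.01                  # hypothetical post-announcement drift

est, event = slice(0, 250), slice(258, 266)   # estimation and event windows
# Market-model parameters fitted on the estimation window.
beta, alpha = np.polyfit(market[est], bank[est], 1)
abnormal = bank[event] - (alpha + beta * market[event])
print(f"CAR over the event window: {abnormal.sum():.4f}")
```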

12.
In the probabilistic risk aversion approach, risks are presumed to be random variables with known probability distributions. However, in some practical cases, for example because of the absence of historical data, the inherently uncertain character of the risks, or differing subjective judgements of the decision-makers, risks may be hard, or inappropriate, to estimate with probability distributions. The traditional probabilistic risk aversion theory is then ineffective. To deal with these cases, we suggest measuring such risks as fuzzy variables and accordingly present an alternative risk-aversion approach based on credibility theory. In this paper, first, the definition of the credibilistic risk premium proposed by Georgescu and Kinnunen [Fuzzy Inf. Eng., 2013, 5, 399–416] is revised by taking the initial wealth into consideration, and a general method for computing the credibilistic risk premium is provided. Second, for risks represented by the commonly used LR fuzzy intervals, a simple calculation formula for the local credibilistic risk premium is put forward. Finally, in a global sense, several equivalent propositions for comparative risk aversion under the credibility measure are provided. Illustrative examples are presented to show the applicability of the theoretical findings.
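The building block of this approach is the credibilistic expectation. A sketch for a triangular fuzzy variable (a, b, c), computing Liu's credibility measure numerically and checking against the closed form (a + 2b + c)/4; the paper's risk-premium definitions build on this expectation. Values are hypothetical.

```python
# Sketch: credibilistic expected value of a triangular fuzzy variable,
# E[xi] = integral of Cr{xi >= r} dr for a nonnegative xi.
import numpy as np

def membership(x, a, b, c):
    """Triangular membership function."""
    return np.where(x <= b,
                    np.clip((x - a) / (b - a), 0, 1),
                    np.clip((c - x) / (c - b), 0, 1))

def cr_geq(r, a, b, c, grid):
    """Credibility Cr{xi >= r} = (sup_{y>=r} mu + 1 - sup_{y<r} mu) / 2."""
    mu = membership(grid, a, b, c)
    above = mu[grid >= r].max(initial=0.0)
    below = mu[grid < r].max(initial=0.0)
    return 0.5 * (above + 1.0 - below)

a, b, c = 1.0, 3.0, 8.0                  # a hypothetical fuzzy loss
grid = np.linspace(a - 1, c + 1, 20_001)
rs = np.linspace(0, c + 1, 5_001)        # xi >= 0, so one integral suffices
expected = np.trapz([cr_geq(r, a, b, c, grid) for r in rs], rs)
print(expected, (a + 2 * b + c) / 4)     # both approximately 3.75
```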

13.
Operational risk quantification serves the reduction and control of risk, and using Bayesian network analysis to quantify operational risk and improve controls is an effective way to raise the operational risk management capability of commercial banks. Taking forward foreign-exchange settlement and sales, a typical business of Chinese banks, as an example, this paper studies concrete methods for building and applying a Bayesian network to estimate the frequency of operational risk events in practice. Based on process analysis and mapping, it describes the implementation steps of risk mapping and node identification, network structure construction, node description, and node value assignment, and explains how to perform causal and diagnostic inference with the Bayesian network.

14.
The quantification of operational risk has become an important issue as a result of the new capital charges required by the Basel Capital Accord (Basel II) to cover the potential losses of this type of risk. In this paper, we investigate second-order approximation of operational risk quantified with spectral risk measures (OpSRMs) within the theory of second-order regular variation (2RV) and second-order subexponentiality. The result shows that, asymptotically, two cases (fast convergence and slow convergence) arise, depending on the range of the second-order parameter. We also show that the second-order approximation under 2RV is asymptotically equivalent to the slow convergence case. A number of Monte Carlo simulations for a range of empirically relevant frequency and severity distributions are employed to illustrate the performance of our second-order results. The simulation results indicate that our second-order approximations tend to reduce the estimation errors to a great degree, especially in the fast convergence case, and are able to capture the sub-extremal behavior of OpSRMs better than the first-order approximation. Our asymptotic results have implications for the regulation of financial institutions and may provide further insights into the measurement and management of operational risk.
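For reference, a Monte Carlo sketch of the object being approximated: a spectral risk measure with the exponential risk-aversion spectrum phi(p) = k e^{-k(1-p)} / (1 - e^{-k}), applied to a compound Poisson-lognormal annual loss. The paper studies analytic second-order approximations of such measures; the distributional choices here are illustrative.

```python
# Sketch: Monte Carlo spectral risk measure SRM = integral of phi(p) VaR_p dp
# for an operational-risk aggregate loss. Hypothetical parameters.
import numpy as np

rng = np.random.default_rng(10)
k, n_sims = 20.0, 50_000

# Annual aggregate loss: Poisson frequency, lognormal severities.
counts = rng.poisson(25, n_sims)
agg = np.array([rng.lognormal(8, 2, n).sum() for n in counts])

p = (np.arange(n_sims) + 0.5) / n_sims       # quantile levels
var_p = np.sort(agg)                         # empirical quantile function
phi = k * np.exp(-k * (1 - p)) / (1 - np.exp(-k))   # integrates to 1
srm = np.trapz(phi * var_p, p)
print(f"spectral risk measure: {srm:,.0f}")
```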

15.
Operational Risk Management Systems: Framework, Models, and Construction
In theory, a complete operational risk management framework should comprise at least four components: risk management strategy, risk management processes, infrastructure, and the risk management environment. In practice, three fairly typical and distinctive models of operational risk management systems have emerged internationally, and they offer useful lessons for the construction of operational risk management systems at Chinese commercial banks.

16.
Characteristics, Causes, and Control of Operational Risk in Commercial Banks
The main form of operational risk in commercial banks is fraud, which is characterized by resistance to controls and by being easily concealed under the guise of the work of capable staff, marketing activity, or target-driven performance. The causes of operational risk include internal control processes that exist only on paper, insufficient awareness of operational risk, traditional management attitudes and habits that weaken organizational execution, a lack of historical data and accumulated experience for managing and controlling operational risk, misconceptions about responsibility for risk management and control, and the increased probability of operational risk brought about by business innovation. To improve their operational risk control systems, commercial banks must strengthen the construction of internal control systems, enhance the supervisory role of internal control departments in preventing operational risk, improve corporate governance structures, and build an integrated framework for the development of internal control systems.

17.
In the context of high-risk industries, risk assessment takes place not only through standardized methods for risk analysis but also through negotiation and discussion as an integral part of operational decision-making. This is not least the case in operational planning: frequent changes in operations require ongoing assessment of risk as tasks are rescheduled and resources reallocated. The current study explores how professionals account for the presence or absence of risk in a setting in which risk analysis is not the primary objective. With data from the offshore petroleum industry, the rhetorical aspects of risk assessments are examined. A series of interprofessional planning meetings were video recorded, transcribed, and analyzed using a rhetorical discourse analytic framework. The data are analyzed at a micro-interactional level in order to study how accounts of risk are presented and negotiated in this particular setting. The meaning and consequences of operational plan changes, and their implications for safety, are seen as negotiated discursively through interprofessional meeting talk. The analysis shows that accounts of risk are characterized by shifting rhetorical strategies that echo established risk discourses often referred to as 'technico-scientific' and 'contextualized' conceptions of risk. Rhetorical devices are used interchangeably and strategically by the participants as they account for risk from their respective institutional positions and their specific areas of expertise and responsibility. The accounts become increasingly persuasive and rhetorical in style as disagreements over risk and prioritizations surface. Accounts of risk, then, are not simply objective presentations of probability and consequence, but rather powerful tools for achieving specific professional outcomes. The study contributes to the understanding of risk assessment at its most concrete and practical level, as it takes place through professional interaction in an operational setting.

18.
Analytical procedures are evaluations of account and transaction flow information made by studying plausible relationships between accounting and non-accounting data. This study investigates the performance of Tweedie distributions (which include Gaussian distributions as members) in improving the fit of zero-inflated, non-negative, kurtotic, and multimodal analytical review data. The study found that account valuations are more informative than marginal data in analytical review, that mixture Poisson–Gamma distributions offer better fit than Gaussian distributions, even under assumptions of central limit theorem convergence, and that mixture Poisson–Gamma distributions provide better predictions of future account and transaction volumes and values. Model performance improvement with price versus returns data in this empirical study was substantial: from less than one-quarter of variance explained to almost two-thirds. Tweedie generalized linear model risk assessments were found to be an order of magnitude smaller than traditional risk assessments, lending support to market inefficiency and increased risk from idiosyncratic factors. An example with several differing distributions shows that using mixture distributions instead of point estimation can reduce sample size while retaining the power of the audit tests. The results of this study are increasingly important as accounting datasets grow exponentially larger over time, requiring well-defined roles for models, algorithms, data, and narrative, which can only be achieved with statistical protocols and algorithmic languages.
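A sketch of the central modelling idea: a Tweedie GLM with variance power between 1 and 2 is exactly a zero-inflated Poisson–Gamma mixture, fitted here on synthetic stand-ins for audit data alongside a Gaussian fit for contrast.

```python
# Sketch: Tweedie GLM (compound Poisson-Gamma) versus Gaussian GLM on
# zero-inflated, non-negative data. Hypothetical variables throughout.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 1_000
x = rng.normal(size=n)
mu = np.exp(0.5 + 0.8 * x)                   # mean account value
# Compound Poisson-Gamma response: exactly zero with positive probability.
counts = rng.poisson(mu / 2)
y = np.array([rng.gamma(2.0, m / 2, c).sum() if c else 0.0
              for c, m in zip(counts, mu)])

X = sm.add_constant(x)
tweedie = sm.GLM(y, X, family=sm.families.Tweedie(var_power=1.5)).fit()
gauss = sm.GLM(y, X, family=sm.families.Gaussian()).fit()
print("Tweedie params:", tweedie.params)     # recovers the log-link slope
print("Gaussian params:", gauss.params)      # linear fit strains on the zeros
```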

19.
Modeling Operational Risk With Bayesian Networks
Bayesian networks are an emerging tool for a wide range of risk management applications, one of which is the modeling of operational risk. This comes at a time when changes in the supervision of financial institutions have resulted in increased scrutiny of the risk management of banks and insurance companies, giving the industry an impetus to measure and manage operational risk. The more established methods for risk quantification are linear models such as time series models, econometric models, empirical actuarial models, and extreme value theory. Owing to data limitations and the complex interaction between operational risk variables, various nonlinear methods have been proposed, one of which is the focus of this article: Bayesian networks. Using an idealized example of a fictitious online business, we construct a Bayesian network that models various risk factors and their combination into an overall loss distribution. Using this model, we show how established Bayesian network methodology can be applied to: (1) form posterior marginal distributions of variables based on evidence, (2) simulate scenarios, (3) update the parameters of the model using data, and (4) quantify in real time how well the model predictions compare to actual data. A specific application of Bayesian networks to operational risk in an insurance setting is then suggested.
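Point (1), posterior marginals given evidence, in the smallest possible form: a three-node chain with invented structure and probabilities, solved by brute-force enumeration rather than a BN library.

```python
# Sketch: a toy Bayesian network (control quality -> failure -> large loss)
# for operational risk. Structure and probabilities are illustrative.
import itertools

p_control = {True: 0.8, False: 0.2}        # P(controls effective)
p_fail = {True: 0.02, False: 0.15}         # P(failure | controls effective?)
p_large_loss = {True: 0.30, False: 0.05}   # P(large loss | failure?)

def joint(control, fail, large):
    pc = p_control[control]
    pf = p_fail[control] if fail else 1 - p_fail[control]
    pl = p_large_loss[fail] if large else 1 - p_large_loss[fail]
    return pc * pf * pl

# Posterior P(controls effective | large loss observed), by enumeration.
num = sum(joint(c, f, True)
          for c, f in itertools.product([True, False], repeat=2) if c)
den = sum(joint(c, f, True)
          for c, f in itertools.product([True, False], repeat=2))
print(f"P(controls effective | large loss) = {num / den:.3f}")  # ~0.72 < 0.8
```

Observing a large loss shifts belief away from effective controls, the diagnostic direction of inference that the article exploits.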

20.
Modeling multivariate time-series aggregate losses is an important actuarial topic that is very challenging because losses can be serially dependent, with heterogeneous dependence structures across loss types and business lines. In this paper, we investigate a flexible class of multivariate Cox hidden Markov models for the joint arrival process of loss events. Some of the nice properties possessed by this class of models, such as closed-form expressions, thinning properties, and model versatility, are discussed in detail. We provide the expectation-maximization (EM) algorithm for efficient model calibration. Applying the proposed model to an operational risk dataset, we demonstrate that it offers sufficient flexibility to capture most characteristics of the observed loss frequencies. By modeling the log-transformed loss severities through a mixture of Erlang distributions, we can model the aggregate losses. Finally, out-of-sample testing shows that the proposed model adequately predicts short-term future operational risk losses.
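A simplified univariate stand-in for the arrival-process component: loss counts whose Poisson intensity is driven by a two-state hidden Markov chain, with the likelihood evaluated by the forward algorithm (the quantity an EM calibration would maximize). All parameters are invented.

```python
# Sketch: forward-algorithm log-likelihood for a two-state Poisson HMM,
# a simplified stand-in for the paper's multivariate Cox HMM.
import numpy as np
from scipy.stats import poisson

def hmm_loglik(counts, trans, lambdas, init):
    """Forward algorithm with per-step normalization for stability."""
    alpha = init * poisson.pmf(counts[0], lambdas)
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for y in counts[1:]:
        alpha = (alpha @ trans) * poisson.pmf(y, lambdas)
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

trans = np.array([[0.95, 0.05],      # calm state is persistent
                  [0.20, 0.80]])     # stressed state less so
lambdas = np.array([2.0, 9.0])       # mean loss counts per state
init = np.array([0.5, 0.5])

rng = np.random.default_rng(12)
states = [0]
for _ in range(119):                 # simulate 120 months of regimes
    states.append(rng.choice(2, p=trans[states[-1]]))
counts = rng.poisson(lambdas[states])
print("log-likelihood:", hmm_loglik(counts, trans, lambdas, init))
```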
