Similar Articles
 20 similar articles found (search time: 46 ms)
1.
We study the market perception of sovereign credit risk in the euro area during the financial crisis. In our analysis we use a parsimonious CDS pricing model to estimate the probability of default (PD) and the loss given default (LGD) as perceived by financial markets. In our empirical results, the estimated LGDs perceived by financial markets stay comfortably below 40% in most of the samples. Global financial indicators are positively and strongly correlated with the market perception of sovereign credit risk, whereas macroeconomic and institutional developments are at best only weakly correlated with it.
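The intuition behind parsimonious CDS pricing of this kind is the so-called credit triangle, which links the spread to PD and LGD. A minimal sketch with illustrative figures (this is not the paper's actual model; the function name and numbers are ours):

```python
def cds_spread(default_intensity, lgd):
    """Credit triangle approximation: the fair CDS spread is roughly the
    annual default intensity times the loss given default."""
    return default_intensity * lgd

# A sovereign with a 2% annual default intensity and a 40% perceived LGD
# would trade near an 80 basis point spread:
spread = cds_spread(0.02, 0.40)  # 0.008, i.e. 80 bps
```

The same identity is what lets the paper back out perceived PD and LGD from observed spreads, up to an identification assumption.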

2.
Estimating the recovery rate and recovery amount has become important in consumer credit due to the new Basel Accord regulation and the increase in the number of defaulters as a result of the recession. We compare linear regression and survival analysis models for modelling recovery rates and recovery amounts, in order to predict the loss given default (LGD) for unsecured consumer loans or credit cards. We also look at the advantages and disadvantages of using single and mixture distribution models for estimating these quantities.
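The quantities being modelled relate to each other in a simple way (a minimal sketch with illustrative names and figures, not the authors' regression or survival specification):

```python
def recovery_rate(amount_recovered, exposure_at_default):
    """Fraction of the defaulted exposure that is eventually collected."""
    return amount_recovered / exposure_at_default

def loss_given_default(amount_recovered, exposure_at_default):
    """LGD is the complement of the recovery rate."""
    return 1.0 - recovery_rate(amount_recovered, exposure_at_default)

# Recovering 3,000 on a 10,000 defaulted card balance gives LGD = 0.7:
lgd = loss_given_default(3_000, 10_000)
```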

3.
陈秀花 (Chen Xiuhua), 《价值工程》 (Value Engineering), 2007, 26(7): 158-161
The New Basel Capital Accord emphasizes the important role of the internal ratings-based (IRB) approach in risk management and capital regulation. The key to the IRB approach lies in measuring default rates and their related factors, with the probability of default (PD) and the loss given default (LGD) as its core variables. This paper summarizes and analyzes the international literature on the behavior of LGD and its determinants, with particular attention to the relationship between LGD and PD.

4.
Since the introduction of the Basel II Accord, and given its huge implications for credit risk management, the modeling and prediction of the loss given default (LGD) have become increasingly important tasks. Institutions which use their own LGD estimates can build either simpler or more complex methods. Simpler methods are easier to implement and more interpretable, but more complex methods promise higher prediction accuracies. Using a proprietary data set of 1,184 defaulted corporate leases in Germany, this study explores different parametric, semi-parametric and non-parametric approaches that attempt to predict the LGD. By conducting the analyses for different information sets, we study how the prediction accuracy changes depending on the set of information that is available. Furthermore, we use a variable importance measure to identify the input variables that have the greatest effects on the LGD prediction accuracy for each method. In this regard, we provide new insights on the characteristics of leasing LGDs. We find that (1) more sophisticated methods, especially the random forest, lead to remarkable increases in the prediction accuracy; (2) updating information improves the prediction accuracy considerably; and (3) the outstanding exposure at default, an internal rating, asset types and lessor industries turn out to be important drivers of accurate LGD predictions.

5.
With the implementation of the Basel II regulatory framework, it became increasingly important for financial institutions to develop accurate loss models. This work investigates the loss given default (LGD) of mortgage loans using a large set of recovery data of residential mortgage defaults from a major UK bank. A Probability of Repossession Model and a Haircut Model are developed and then combined to give an expected loss percentage. We find that the Probability of Repossession Model should consist of more than just the commonly used loan-to-value ratio, and that the estimation of LGD benefits from the Haircut Model, which predicts the discount which the sale price of a repossessed property may undergo. This two-stage LGD model is shown to perform better than a single-stage LGD model (which models LGD directly from loan and collateral characteristics), as it achieves a better R2 value and matches the distribution of the observed LGD more accurately.
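The two-stage structure can be sketched as follows: a repossession probability is combined with the loss implied by a haircut-adjusted sale price. This is our own minimal illustration, with hypothetical names and numbers, not the bank's models:

```python
def two_stage_lgd(p_repossession, loan_balance, valuation, haircut):
    """Expected loss percentage: probability of repossession times the loss
    when the property sells at a discounted (haircut) price.
    The loss on non-repossessed cases is taken as zero in this sketch."""
    sale_price = valuation * (1.0 - haircut)
    loss_if_repossessed = max(0.0, loan_balance - sale_price) / loan_balance
    return p_repossession * loss_if_repossessed

# 30% repossession probability, 200k balance, 220k valuation, 20% haircut:
ell = two_stage_lgd(0.3, 200_000, 220_000, 0.2)  # 0.3 * (24k / 200k) = 0.036
```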

6.
Daily and weekly seasonalities are always taken into account in day-ahead electricity price forecasting, but the long-term seasonal component has long been believed to add unnecessary complexity, and hence, most studies have ignored it. The recent introduction of the Seasonal Component AutoRegressive (SCAR) modeling framework has changed this viewpoint. However, this framework is based on linear models estimated using ordinary least squares. This paper shows that considering non-linear autoregressive (NARX) neural network-type models with the same inputs as the corresponding SCAR-type models can lead to yet better performances. While individual Seasonal Component Artificial Neural Network (SCANN) models are generally worse than the corresponding SCAR-type structures, we provide empirical evidence that committee machines of SCANN networks can outperform the latter significantly.
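A committee machine in this sense is simply an average over the forecasts of several independently trained networks. A minimal sketch (the SCANN members themselves are not reproduced here; the price figures are illustrative):

```python
def committee_forecast(member_forecasts):
    """Average the hour-by-hour point forecasts of the ensemble members."""
    n_members = len(member_forecasts)
    horizon = len(member_forecasts[0])
    return [sum(f[h] for f in member_forecasts) / n_members
            for h in range(horizon)]

# Three members forecasting two hours of day-ahead prices (EUR/MWh):
combined = committee_forecast([[42.0, 45.0], [40.0, 47.0], [41.0, 46.0]])
# combined == [41.0, 46.0]
```

Averaging smooths out the run-to-run variability of individually trained networks, which is why the committee can beat members that are individually worse than SCAR.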

7.
This study evaluates a wide range of machine learning techniques, such as deep learning, boosting, and support vector regression, to predict the collection rate of more than 65,000 defaulted consumer credits from the telecommunications sector that were bought by a German third-party company. Weighted performance measures were defined based on the value of the exposure at default for comparing collection rate models. The approach proposed in this paper is useful for a third-party company in managing the risk of a portfolio of defaulted credit that it purchases. The main finding is that one of the machine learning models we investigate, the deep learning model, performs significantly better out-of-sample than all the other methods available to an acquirer of defaulted credits, based on weighted performance measures. When unweighted performance measures are used, deep learning and boosting perform similarly. Moreover, we find that using a training set with a larger proportion of the dataset does not improve prediction accuracy significantly when deep learning is used. The general conclusion is that deep learning is a potentially performance-enhancing tool for credit risk management.
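An exposure-weighted error measure of the kind described can be sketched as follows (an illustrative weighted mean absolute error on collection rates; the paper's exact weighting scheme may differ):

```python
def ead_weighted_mae(actual_rates, predicted_rates, exposures):
    """Mean absolute error on collection rates, weighted by exposure at
    default, so that errors on large defaulted balances count for more."""
    total_exposure = sum(exposures)
    weighted_errors = sum(e * abs(a - p)
                          for a, p, e in zip(actual_rates, predicted_rates,
                                             exposures))
    return weighted_errors / total_exposure

# Two credits: a large one predicted well, a small one predicted badly:
err = ead_weighted_mae([0.30, 0.10], [0.28, 0.40], [9_000, 1_000])
# (9000 * 0.02 + 1000 * 0.30) / 10000 = 0.048
```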

8.
9.
A proper credit scoring technique is vital to the long-term success of all kinds of financial institutions, including peer-to-peer (P2P) lending platforms. The main contribution of our paper is a robust ranking of 10 different classification techniques based on a real-world P2P lending data set. Our data set comes from the Lending Club, covers the 2009–2013 period, and contains 212,252 records and 23 different variables. Unlike other researchers, we use a data sample which contains the final loan resolution for all loans. We base our research on a 5-fold cross-validation method and six different classification performance measures. Our results show that logistic regression, artificial neural networks, and linear discriminant analysis are the three best algorithms based on the Lending Club data. Conversely, we identify k-nearest neighbors and the classification and regression tree as the two worst classification methods.
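The 5-fold cross-validation scheme can be sketched as a generic index-splitting helper (our own illustration, not the authors' code): each fold serves once as the test set while the remaining folds form the training set.

```python
def k_fold_splits(n_records, k=5):
    """Partition record indices 0..n-1 into k folds and yield
    (train_indices, test_indices) pairs, one per fold."""
    indices = list(range(n_records))
    folds = [indices[i::k] for i in range(k)]  # k disjoint interleaved folds
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

# With the 212,252 Lending Club records, each test fold holds ~20% of loans:
splits = list(k_fold_splits(212_252, k=5))
```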

10.
周颖, 宁学敏 (Zhou Ying, Ning Xuemin), 《价值工程》 (Value Engineering), 2009, 28(8): 147-151
The New Basel Capital Accord advocates the quantification of commercial banks' risk and emphasizes the importance of internal ratings management; internal ratings management based on PD and LGD has become an important way for commercial banks to raise the standard of their internal ratings. This paper summarizes measurement methods for PD and LGD, as well as internal ratings management methods based on PD and LGD, discusses the problems Chinese commercial banks face in internal ratings management, and offers suggestions for establishing internal rating systems in Chinese commercial banks.

11.
A Study of the Structural Characteristics of the Loss Given Default of Banks' Non-Performing Loans
This paper studies the loss given default (LGD) of the credit risk faced by China's banking industry, using non-performing loan data from a commercial bank in Wenzhou as the sample. Through descriptive statistics, it analyzes in detail the structural characteristics of LGD: the scale of the credit risk exposure, loan maturity, region, and type of guarantee. The results show that LGD is negatively correlated with the scale of the risk exposure and positively correlated with loan maturity, and that the LGD of defaulted loans differs significantly across regions and guarantee types. These conclusions can provide practical support for commercial banks' credit risk management, credit allocation decisions, and credit risk supervision.

12.
The Basel II and III Accords propose estimating the credit conversion factor (CCF) to model exposure at default (EAD) for credit cards and other forms of revolving credit. Alternatively, recent work has suggested it may be beneficial to predict the EAD directly, i.e., modelling the balance as a function of a series of risk drivers. In this paper, we propose a novel approach combining two ideas proposed in the literature and test its effectiveness using a large dataset of credit card defaults not previously used in the EAD literature. We predict EAD by fitting a regression model using the generalised additive model for location, scale, and shape (GAMLSS) framework. We conjecture that the EAD level and the risk drivers of its mean and dispersion parameters could substantially differ between the debtors who hit the credit limit (i.e., “maxed out” their cards) prior to default and those who did not, and thus implement a mixture model conditioning on these two respective scenarios. In addition to identifying the most significant explanatory variables for each model component, our analysis suggests that predictive accuracy is improved, both by using GAMLSS (and its ability to incorporate non-linear effects) and by introducing the mixture component.
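The CCF-based alternative that the direct-EAD approach is contrasted with uses the standard conversion identity: the exposure at default is the drawn balance plus a fraction of the undrawn limit. A sketch with illustrative figures:

```python
def ead_from_ccf(drawn_balance, credit_limit, ccf):
    """Basel credit conversion factor: EAD is the current drawn balance plus
    a fraction (CCF) of the undrawn limit assumed to be drawn by default."""
    return drawn_balance + ccf * (credit_limit - drawn_balance)

# A card with 2,000 drawn on a 10,000 limit and an estimated CCF of 0.6:
ead = ead_from_ccf(2_000, 10_000, 0.6)  # 2000 + 0.6 * 8000 = 6800.0
```

Debtors who maxed out their cards have no undrawn limit left, which is one reason the paper treats the two groups as separate mixture components.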

13.
The loss given default (LGD) distribution is known to have a complex structure. Consequently, the parametric approach of predicting it by fitting a density function may suffer a loss of predictive power. To overcome this potential drawback, we use the cumulative probability model (CPM) to predict the LGD distribution. The CPM models the LGD distribution through a transformed variable with a semiparametric structure: the predictor effects are modeled parametrically, while the functional form of the transformation is left unspecified. Thus, the CPM provides more flexibility and simplicity in modeling the LGD distribution. To implement the CPM, we collect a sample of defaulted debts from Moody's Default and Recovery Database. Given this sample, we use an expanding rolling window approach to investigate the out-of-time performance of the CPM and its alternatives. Our results confirm that the CPM is better than its alternatives, in the sense of yielding more accurate LGD distribution predictions.

14.
Over the last four decades, a large number of structural models have been developed to estimate and price credit risk. The focus of this paper is on a neglected issue pertaining to fundamental shifts in the structural parameters governing default. We propose formal quality control procedures that allow risk managers to monitor fundamental shifts in the structural parameters of credit risk models. The procedures are sequential, and hence apply in real time. The basic ingredients are the key processes used in credit risk analysis, most prominently the Merton distance-to-default process, as well as financial returns. Moreover, while we propose different monitoring processes, we also show that one particular process is optimal in terms of the minimal detection time of a break in the drift process, and relates to the Radon–Nikodym derivative for a change of measure.

15.
While encouraging banks to adopt the internal ratings-based approach to assess credit risk and set aside capital reserves, the New Basel Capital Accord also strengthens national regulators' requirements for validating and reviewing the performance of internal rating models. CreditMetrics and CreditRisk+ are the benchmark models for credit risk assessment in the banking industry. In terms of modelling methodology, CreditRisk+ is based on default events, whereas CreditMetrics evaluates rating migrations. Using statistical data from the Jiangsu Office of the China Banking Regulatory Commission, this paper examines the parameter properties and tests the performance of these credit risk assessment models. The results show that both commonly used models can reliably achieve the goal of allocating internal capital according to the actual risk profile of the credit portfolio in the business practice of Jiangsu's commercial banks.

16.
The biggest difference between credit risk pricing models based on incomplete information and the traditional structural and reduced-form models is that the former introduces incomplete information into the structural framework, which assumes complete information, while also retaining the advantage of the default intensity used in reduced-form models and incorporating the measurement of short-term credit risk. This makes it the credit risk pricing model that currently best fits reality. This paper argues that applying the incomplete-information credit risk pricing model to measure credit risk is of great practical significance.

17.
Since the advent of the horseshoe prior for regularisation, global–local shrinkage methods have proved to be a fertile ground for the development of Bayesian methodology in machine learning, specifically for high-dimensional regression and classification problems. They have achieved remarkable success in computation and enjoy strong theoretical support. Most of the existing literature has focused on the linear Gaussian case, for which systematic surveys are available. The purpose of the current article is to demonstrate that horseshoe regularisation is useful far more broadly, by reviewing both methodological and computational developments in complex models that are more relevant to machine learning applications. Specifically, we focus on methodological challenges of horseshoe regularisation in non-linear and non-Gaussian models, multivariate models, and deep neural networks. We also outline the recent computational developments in horseshoe shrinkage for complex models, along with a list of available software implementations that allows one to venture out beyond the comfort zone of the canonical linear regression problems.
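The horseshoe prior itself is simple to state: each coefficient receives a Gaussian prior whose scale is the product of a global parameter tau and a coefficient-specific half-Cauchy local scale. A minimal stdlib-only sampling sketch (the seed and dimensions are arbitrary, and real inference would of course condition on data rather than just draw from the prior):

```python
import math
import random

def sample_horseshoe_prior(p, tau=1.0, seed=42):
    """Draw p coefficients from the horseshoe prior:
    beta_j ~ N(0, lambda_j^2 * tau^2), with lambda_j ~ half-Cauchy(0, 1)."""
    rng = random.Random(seed)
    draws = []
    for _ in range(p):
        # A standard Cauchy is tan(pi * (U - 1/2)); take |.| for half-Cauchy.
        lam = abs(math.tan(math.pi * (rng.random() - 0.5)))
        draws.append(rng.gauss(0.0, lam * tau))
    return draws

beta = sample_horseshoe_prior(1_000)
```

The heavy Cauchy tail lets a few coefficients escape shrinkage while the spike near zero shrinks the rest aggressively, the "global–local" behaviour the survey builds on.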

18.
Recent electricity price forecasting studies have shown that decomposing a series of spot prices into a long-term trend-seasonal and a stochastic component, modeling them independently and then combining their forecasts, can yield more accurate point predictions than an approach in which the same regression or neural network model is calibrated to the prices themselves. Here, considering two novel extensions of this concept to probabilistic forecasting, we find that (i) efficiently calibrated non-linear autoregressive with exogenous variables (NARX) networks can outperform their autoregressive counterparts, even without combining forecasts from many runs, and that (ii) in terms of accuracy it is better to construct probabilistic forecasts directly from point predictions. However, if speed is a critical issue, running quantile regression on combined point forecasts (i.e., committee machines) may be an option worth considering. Finally, we confirm an earlier observation that averaging probabilities outperforms averaging quantiles when combining predictive distributions in electricity price forecasting.
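The two combination schemes in the final observation differ in what is averaged: averaging probabilities pools the component CDFs at a fixed price, whereas averaging quantiles pools the component quantiles at a fixed probability level. A minimal sketch with two illustrative normal predictive distributions:

```python
from statistics import NormalDist

def average_probabilities(x, dists):
    """Linear pool: the combined CDF at x is the mean of the component CDFs."""
    return sum(d.cdf(x) for d in dists) / len(dists)

def average_quantiles(p, dists):
    """Vincentization: the combined p-quantile is the mean of the
    component p-quantiles."""
    return sum(d.inv_cdf(p) for d in dists) / len(dists)

dists = [NormalDist(40.0, 5.0), NormalDist(50.0, 10.0)]
median_q = average_quantiles(0.5, dists)  # 45.0, the mean of the two medians
```

The two combined distributions generally differ: the linear pool is a mixture and can be bimodal, while quantile averaging always yields a unimodal compromise.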

19.
Recently, the literature has measured economic policy uncertainty using news references, resulting in the frequently mentioned Economic Policy Uncertainty (EPU) index. In the original setup, a news article is assumed to address policy uncertainty if it contains certain predefined keywords. We argue that the original setup is prone to measurement error, and propose an alternative methodology using text mining techniques. We compare the original method to modality annotation and support vector machine (SVM) classification in order to create an EPU index for Belgium. Validation on an out-of-sample test set speaks in favour of using an SVM classification model for constructing a news-based policy uncertainty indicator. The indicators are then used to forecast 10 macroeconomic and financial variables. The original method of measuring EPU does not have predictive power for any of these 10 variables. The SVM indicator has higher predictive power and, notably, changes in the level of policy uncertainty during tumultuous periods of high uncertainty and risk can predict changes in the sovereign bond yield and spread, the credit default swap spread, and consumer confidence.
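The original keyword-based setup can be sketched as follows: an article counts toward the index only if it contains at least one term from each of three categories (economy, policy, uncertainty). The term sets below are illustrative English stand-ins, not the actual keyword lists used for the Belgian index:

```python
def is_epu_article(text):
    """Flag an article as policy-uncertainty related if it contains at least
    one term from each of three keyword categories."""
    words = set(text.lower().split())
    economy = {"economy", "economic"}
    policy = {"policy", "regulation", "legislation", "deficit"}
    uncertainty = {"uncertain", "uncertainty"}
    return (bool(words & economy)
            and bool(words & policy)
            and bool(words & uncertainty))

flagged = is_epu_article("economic outlook uncertain amid new policy debate")
# flagged is True; an article missing any of the categories is not counted
```

The measurement-error argument is visible even in this sketch: bag-of-words matching ignores negation and modality, which is exactly what the annotation and SVM alternatives address.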

20.
We develop a dynamic model to illustrate the credit risk contagion mechanism caused by interaction between firms. Specifically, we formulate the sources of risk into idiosyncratic risk and contagion risk, and introduce recovery ability to model the scenario of a firm changing from default into normal status. Our result shows that there always exists a steady state in a network under some trivial conditions. For quasi-regular networks and bipartite networks, the expected aggregate loss remains unchanged as long as the product of the contagion probability and the partner number is fixed.
