Similar Articles
20 similar articles retrieved.
1.
Using account-level UK data for major retail credit cards, we build several models of Loss Given Default, including a Tobit model, a decision tree model, and Beta and fractional logit transformations. We find that Ordinary Least Squares models with macroeconomic variables perform best for forecasting Loss Given Default at both the account and portfolio levels on independent hold-out data sets. The inclusion of macroeconomic conditions in the model is important, since it provides a means of modelling Loss Given Default in downturn conditions, as required by Basel II, and enables stress testing. We find that bank interest rates and the unemployment level significantly affect LGD.
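A minimal sketch of the preferred specification, OLS on account-level drivers plus macroeconomic covariates, using statsmodels. The file names and columns (`ltv`, `months_on_book`, `interest_rate`, `unemployment`) are hypothetical placeholders, not variables taken from the paper:

```python
# Sketch: account-level LGD regression with macroeconomic covariates (OLS).
# File and column names are illustrative placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("lgd_accounts.csv")  # hypothetical account-level data set

# Macroeconomic terms (bank interest rate, unemployment) enter alongside
# loan-level characteristics, enabling downturn-LGD and stress-test scenarios.
model = smf.ols("lgd ~ ltv + months_on_book + interest_rate + unemployment", data=df)
result = model.fit()
print(result.summary())

# Portfolio-level forecast on a hold-out set: average the account-level predictions.
holdout = pd.read_csv("lgd_holdout.csv")
print("portfolio LGD forecast:", result.predict(holdout).mean())
```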

2.
Since the introduction of the Basel II Accord, and given its huge implications for credit risk management, the modeling and prediction of loss given default (LGD) have become increasingly important tasks. Institutions that use their own LGD estimates can build either simpler or more complex methods: simpler methods are easier to implement and more interpretable, while more complex methods promise higher prediction accuracy. Using a proprietary data set of 1,184 defaulted corporate leases in Germany, this study explores different parametric, semi-parametric and non-parametric approaches to predicting LGD. By conducting the analyses for different information sets, we study how the prediction accuracy changes depending on the information available. Furthermore, we use a variable importance measure to identify the input variables that have the greatest effect on LGD prediction accuracy for each method. In this regard, we provide new insights into the characteristics of leasing LGDs. We find that (1) more sophisticated methods, especially the random forest, lead to remarkable increases in prediction accuracy; (2) updating information improves prediction accuracy considerably; and (3) the outstanding exposure at default, an internal rating, asset types and lessor industries turn out to be important drivers of accurate LGD predictions.
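A hedged sketch of the headline finding, an LGD random forest with variable importances, using scikit-learn. The file and feature names are hypothetical (the study's leasing data are proprietary), and scikit-learn's impurity-based importances stand in for the paper's importance measure:

```python
# Sketch: non-parametric LGD prediction with a random forest, plus variable
# importances used to rank the inputs. Categorical features are assumed to be
# already numerically encoded.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("lease_defaults.csv")  # hypothetical stand-in
X = df[["exposure_at_default", "internal_rating", "asset_type", "lessor_industry"]]
y = df["lgd"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_train, y_train)

print("hold-out R^2:", rf.score(X_test, y_test))
for name, imp in sorted(zip(X.columns, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: importance {imp:.3f}")
```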

3.
陈秀花. 《价值工程》 (Value Engineering), 2007, 26(7): 158-161
The New Basel Capital Accord emphasizes the important role of the internal ratings-based (IRB) approach in risk management and capital regulation. The key to the IRB approach lies in measuring default rates and their related factors, among which the probability of default (PD) and the loss given default (LGD) are the core variables. This paper summarizes and analyzes the international discussion of the behavior of LGD and its determinants, with particular emphasis on the relationship between LGD and PD.
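The reason PD and LGD sit at the core of the IRB approach is the standard Basel II expected-loss decomposition (a textbook relation the review builds on, with EAD the exposure at default):

$$
\mathrm{EL} = \mathrm{PD} \times \mathrm{LGD} \times \mathrm{EAD}
$$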

4.
This paper uses three classes of univariate time series techniques (ARIMA-type models, switching regression models, and state-space/structural time series models) to forecast, on an ex post basis, the downturn in U.S. housing prices starting around 2006. The performance of the techniques is compared within each class and across classes by out-of-sample forecasts for a number of different forecast points prior to and during the downturn. Most forecasting models are able to predict a downturn in future home prices by mid-2006. Some state-space models can predict an impending downturn as early as June 2005. State-space/structural time series models tend to produce the most accurate forecasts, although they are not necessarily the models with the best in-sample fit.
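A minimal sketch of the class that works best in the paper, a structural (unobserved-components) time series model forecast from a pre-downturn origin, using statsmodels. The data file and the June 2005 origin are illustrative, not the paper's data:

```python
# Sketch: ex post out-of-sample forecast from a structural time series
# (unobserved-components) model with a local linear trend.
import pandas as pd
from statsmodels.tsa.statespace.structural import UnobservedComponents

prices = pd.read_csv("home_price_index.csv", index_col=0, parse_dates=True)["index"]

origin = "2005-06"                      # forecast point prior to the downturn
train = prices[:origin]
fit = UnobservedComponents(train, level="local linear trend").fit(disp=False)

# A falling forecast path here is the "impending downturn" signal.
print(fit.forecast(steps=24))
```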

5.
The loss given default (LGD) distribution is known to have a complex structure, so the parametric approach of fitting a density function may suffer a loss of predictive power. To overcome this potential drawback, we use the cumulative probability model (CPM) to predict the LGD distribution. The CPM models the LGD distribution through a transformed variable with a semiparametric structure: the predictor effects are modeled parametrically, while the functional form of the transformation is left unspecified. The CPM thus provides both flexibility and simplicity in modeling the LGD distribution. To implement the CPM, we collect a sample of defaulted debts from Moody's Default and Recovery Database. Given this sample, we use an expanding rolling window approach to investigate the out-of-time performance of the CPM and its alternatives. Our results confirm that the CPM is better than its alternatives, in the sense of yielding more accurate LGD distribution predictions.
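The semiparametric CPM itself is not in the standard Python libraries; a simple discrete approximation is an ordered logit over a fine LGD grid, which likewise returns a full predicted LGD distribution per debt. A sketch under that simplification (file and covariate names are hypothetical):

```python
# Sketch: discrete approximation to the cumulative probability model (CPM).
# LGD values are binned into ordered categories; the ordered-logit CDF then
# gives a predicted LGD distribution for each defaulted debt.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("moodys_defaults.csv")  # hypothetical stand-in for the DRD sample
bins = np.linspace(0.0, 1.0, 21)         # fine grid over [0, 1]
df["lgd_bin"] = pd.cut(df["lgd"], bins, include_lowest=True)

model = OrderedModel(df["lgd_bin"].cat.codes, df[["seniority", "coupon"]], distr="logit")
fit = model.fit(method="bfgs", disp=False)

# Predicted probabilities over the LGD grid for the first debt:
print(fit.predict(df[["seniority", "coupon"]].iloc[[0]]))
```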

6.
7.
We study the market perception of sovereign credit risk in the euro area during the financial crisis. In our analysis we use a parsimonious CDS pricing model to estimate the probability of default (PD) and the loss given default (LGD) as perceived by financial markets. In our empirical results the estimated LGDs perceived by financial markets stay comfortably below 40% in most of the samples. Global financial indicators are positively and strongly correlated with the market perception of sovereign credit risk, while macroeconomic and institutional developments are at best only weakly correlated with it.
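A common reduced-form approximation behind parsimonious CDS pricing is the "credit triangle", spread ≈ intensity × LGD. The sketch below backs out a default intensity from a spread under an assumed LGD (a stylized illustration, not the paper's estimator; all numbers are made up):

```python
# Sketch: the "credit triangle" approximation s ~ lambda * LGD used in
# reduced-form CDS pricing. Given a market spread and an assumed LGD, back
# out a flat default intensity and a 5-year default probability.
import math

spread = 0.012          # 120 bp sovereign CDS spread (illustrative)
lgd = 0.40              # upper end of the market-perceived LGDs in the paper

hazard = spread / lgd                   # flat default intensity
pd_5y = 1.0 - math.exp(-hazard * 5.0)   # cumulative 5-year default probability
print(f"hazard = {hazard:.4f}, 5y PD = {pd_5y:.2%}")
```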

8.
Previous work on characterising the distribution of forecast errors in time series models by statistics such as the asymptotic mean square error has assumed that the observations used in estimating the parameters are statistically independent of those used to construct the forecasts themselves. This assumption is quite unrealistic in practice, and the present paper tackles the question of how the statistical dependence between the parameter estimates and the final-period observations used to generate forecasts affects the sampling distribution of the forecast errors. We concentrate on the first-order autoregression and, for this model, show that the conditional distribution of forecast errors given the final-period observation is skewed towards the origin, and that this skewness is accentuated in the majority of cases by the statistical dependence between the parameter estimates and the final-period observation.
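The conditional skewness is easy to see by Monte Carlo: estimate the AR(1) by OLS on the same sample whose final observation conditions the forecast, then inspect the forecast errors given a large final observation. A minimal sketch (parameter values are illustrative):

```python
# Sketch: dependence between the OLS AR(1) estimate and the final observation
# used to generate the forecast, via the conditional forecast-error moments.
import numpy as np

rng = np.random.default_rng(0)
phi, T, reps = 0.8, 50, 20_000
errors = []
for _ in range(reps):
    y = np.zeros(T + 1)
    for t in range(1, T + 1):
        y[t] = phi * y[t - 1] + rng.standard_normal()
    phi_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])  # OLS, same sample as y[T]
    y_next = phi * y[T] + rng.standard_normal()
    if y[T] > 1.0:                                  # condition on the final observation
        errors.append(y_next - phi_hat * y[T])

e = np.array(errors)
skew = ((e - e.mean()) ** 3).mean() / e.std() ** 3
print(f"conditional mean error = {e.mean():.3f}, skewness = {skew:.3f}")
```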

9.
In this paper we test whether the key metals prices of gold and platinum significantly improve inflation forecasts for the South African economy. We also test whether controlling for conditional correlations in a dynamic setup, using bivariate Bayesian Dynamic Conditional Correlation (B-DCC) models, improves inflation forecasts. To this end we compare out-of-sample forecasts from the B-DCC model against random walk, autoregressive and Bayesian VAR models. We find that for both the BVAR and B-DCC models, improving on the point forecasts of the autoregressive model of inflation remains elusive. This, we argue, matters less than the more informative density forecasts, for which we find improvements from the B-DCC models at all forecast horizons tested. We therefore conclude that including metals price series as inputs to inflation models leads to improved density forecasts, and that controlling for the dynamic relationship between the included price series and inflation likewise leads to significantly improved density forecasts.
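Density forecasts of this kind are usually compared by average predictive log score. A minimal sketch of that comparison for two of the benchmarks (AR(1) vs. random walk, Gaussian predictive densities); the B-DCC model itself is beyond a short sketch, and the data file is hypothetical:

```python
# Sketch: scoring density forecasts by average predictive log score with an
# expanding estimation window, AR(1) vs. random walk benchmarks.
import numpy as np
import pandas as pd
from scipy.stats import norm
from statsmodels.tsa.ar_model import AutoReg

infl = pd.read_csv("inflation.csv")["cpi_yoy"].to_numpy()  # hypothetical series

scores_ar, scores_rw = [], []
for t in range(120, len(infl)):
    train = infl[:t]
    fit = AutoReg(train, lags=1).fit()
    mu = fit.forecast(1)[0]                        # one-step-ahead mean
    scores_ar.append(norm.logpdf(infl[t], mu, np.sqrt(fit.sigma2)))
    scores_rw.append(norm.logpdf(infl[t], train[-1], np.diff(train).std()))

print("AR(1) log score:", np.mean(scores_ar))
print("random walk log score:", np.mean(scores_rw))
```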

10.
With the implementation of the Basel II regulatory framework, it became increasingly important for financial institutions to develop accurate loss models. This work investigates the loss given default (LGD) of mortgage loans using a large set of recovery data on residential mortgage defaults from a major UK bank. A Probability of Repossession Model and a Haircut Model are developed and then combined to give an expected loss percentage. We find that the Probability of Repossession Model should consist of more than just the commonly used loan-to-value ratio, and that the estimation of LGD benefits from the Haircut Model, which predicts the discount that the sale price of a repossessed property may undergo. This two-stage LGD model is shown to perform better than a single-stage LGD model (which models LGD directly from loan and collateral characteristics), as it achieves a better R2 value and matches the distribution of the observed LGD more accurately.
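A minimal sketch of the two-stage structure, a logit Probability of Repossession Model combined with an OLS Haircut Model. The file and column names are hypothetical and the loss accounting is simplified:

```python
# Sketch: two-stage mortgage LGD = P(repossession) x loss given repossession,
# where the loss applies the predicted haircut to the property valuation.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mortgage_defaults.csv")  # hypothetical stand-in

# Stage 1: probability of repossession (more than just loan-to-value).
repo = smf.logit("repossessed ~ ltv + arrears_months + prior_defaults", data=df).fit()

# Stage 2: haircut (sale-price discount), fitted on repossessed cases only.
haircut = smf.ols("haircut ~ ltv + region_price_change", data=df[df.repossessed == 1]).fit()

p_repo = repo.predict(df)
sale_price = (1 - haircut.predict(df)) * df["valuation"]
loss_given_repo = (df["balance"] - sale_price).clip(lower=0) / df["balance"]
df["expected_lgd"] = p_repo * loss_given_repo
print(df["expected_lgd"].describe())
```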

11.
A probabilistic forecast is the estimated probability with which a future event will occur. One interesting feature of such forecasts is their calibration, or the match between the predicted probabilities and the actual outcome probabilities. Calibration has been evaluated in the past by grouping probability forecasts into discrete categories. We show here that this can be done without discrete groupings; the kernel estimators we use produce efficiency gains and smooth estimated curves relating the predicted and actual probabilities. We use such estimates to evaluate the empirical evidence on the calibration error in a number of economic applications, including the prediction of recessions and inflation, using both forecasts made and stored in real time and pseudo-forecasts made using the data vintage available at the forecast date. The outcomes are evaluated using both first-release outcome measures and subsequent revised data. We find substantial evidence of incorrect calibration in professional forecasts of recessions and inflation from the SPF, as well as in real-time inflation forecasts from a variety of output gap models.
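A minimal sketch of the kernel idea, a Nadaraya-Watson regression of binary outcomes on forecast probabilities, evaluated on simulated miscalibrated forecasts. The kernel and bandwidth choices are illustrative, not the paper's:

```python
# Sketch: smooth kernel estimate of the calibration curve, i.e., a regression
# of binary outcomes on forecast probabilities without discrete groupings.
import numpy as np

def kernel_calibration(p_forecast, outcomes, grid, bandwidth=0.05):
    """Smooth estimate of P(event | forecast probability = p) on a grid."""
    p_forecast, outcomes = np.asarray(p_forecast), np.asarray(outcomes)
    curve = []
    for p in grid:
        w = np.exp(-0.5 * ((p_forecast - p) / bandwidth) ** 2)  # Gaussian kernel
        curve.append(np.sum(w * outcomes) / np.sum(w))
    return np.array(curve)

rng = np.random.default_rng(1)
p = rng.uniform(0, 1, 2000)
y = rng.uniform(0, 1, 2000) < p ** 1.3          # simulated miscalibrated forecasts
grid = np.linspace(0.05, 0.95, 19)
cal = kernel_calibration(p, y, grid)
print(np.column_stack([grid, cal]))             # gap between columns = calibration error
```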

12.
The accuracy of population forecasts depends in part upon the method chosen for forecasting the vital rates of fertility, mortality, and migration. Stochastic propagation-of-error calculations in demographic forecasting are hard to do precisely, and this paper discusses that obstacle in stochastic cohort-component population forecasts. The uncertainty of the forecasts is due to uncertain estimates of the jump-off population and to errors in the forecasts of the vital rates. Empirically based estimates of each error source are presented and propagated through a simplified analytical model of population growth that allows assessment of the role of each component in the total error. Numerical estimates are then derived from the errors of an actual vector ARIMA forecast of the US female population; these results broadly agree with those of the analytical model. In particular, the uncertainty in the fertility forecasts is found to be so much higher than that in the other sources that the latter can be ignored in the propagation-of-error calculations for those cohorts born after the jump-off year of the forecast. A methodology is therefore presented which greatly simplifies the propagation-of-error calculations. It is noted, however, that the uncertainty of the jump-off population, migration, and mortality must still be considered in the propagation of error for those alive at the jump-off time of the forecast.
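A toy Monte Carlo sketch of the propagation-of-error logic: independent error sources add on the log scale of a simplified growth model, so their variance shares can be compared directly. The error magnitudes below are made up for illustration; the paper's point is that the fertility share dominates for cohorts born after the jump-off year:

```python
# Sketch: variance shares of independent error sources in log population,
# a simplified stand-in for cohort-component propagation of error.
import numpy as np

rng = np.random.default_rng(2)
reps = 100_000
sd = {"jump_off": 0.005, "fertility": 0.030, "mortality": 0.005, "migration": 0.008}

draws = {k: rng.normal(0.0, s, reps) for k, s in sd.items()}
log_pop_error = sum(draws.values())             # additive on the log scale

total_var = log_pop_error.var()
for k, v in draws.items():
    print(f"{k}: share of total variance = {v.var() / total_var:.1%}")
```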

13.
14.
周颖, 宁学敏. 《价值工程》 (Value Engineering), 2009, 28(8): 147-151
The New Basel Capital Accord advocates quantifying commercial bank risk and stresses the importance of internal ratings management; internal ratings management based on PD and LGD has become an important way for commercial banks to raise the standard of their internal ratings. This paper summarizes measurement methods for PD and LGD, as well as internal ratings management methods based on PD and LGD, discusses the problems Chinese commercial banks face in internal ratings management, and offers suggestions for Chinese commercial banks on building internal rating systems.

15.
In practice, inventory decisions depend heavily on demand forecasts, but the literature typically assumes that demand distributions are known. In practice, estimates are then substituted directly for the unknown parameters, leading to insufficient safety stocks, stock-outs, low service levels, and high costs. We propose a framework for addressing this estimation uncertainty that is applicable to any inventory model, demand distribution, and parameter estimator. The estimation errors are modeled and a predictive lead time demand distribution obtained, which is then substituted into the inventory model. We illustrate this framework for several different demand models. When the estimates are based on ten observations, the relative savings are typically between 10% and 30% for mean-stationary demand. The savings are larger when the estimates are based on fewer observations, when backorders are costlier, or when the lead time is longer. In the presence of a trend, the savings are between 50% and 80% for several scenarios.
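For i.i.d. normal demand the framework has a closed form: the predictive lead-time demand is Student-t with an inflated scale, which raises the order-up-to level relative to plugging the estimates in directly. A minimal sketch (all numbers are illustrative):

```python
# Sketch: order-up-to levels from plug-in vs. predictive lead-time demand.
# With n i.i.d. normal demand observations, lead-time demand over L periods
# has predictive variance sigma^2 * L * (1 + L/n) with t_{n-1} quantiles.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
demand = rng.normal(100, 20, size=10)   # only ten observations, as in the paper
n, lead_time, service = len(demand), 4, 0.95

mu, s = demand.mean(), demand.std(ddof=1)

# Plug-in level: treats the estimated parameters as the true ones.
plug_in = lead_time * mu + stats.norm.ppf(service) * s * np.sqrt(lead_time)

# Predictive level: t-quantile and inflated scale account for estimation error.
scale = s * np.sqrt(lead_time * (1 + lead_time / n))
predictive = lead_time * mu + stats.t.ppf(service, df=n - 1) * scale
print(f"plug-in = {plug_in:.1f}, predictive = {predictive:.1f}")
```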

16.
Restricted maximum likelihood (REML) estimation has recently been shown to provide less biased estimates in autoregressive series. A simple weighted least squares approximate REML procedure has been developed that is particularly useful for vector autoregressive processes. Here, we compare the forecasts of such processes using both the standard ordinary least squares (OLS) estimates and the new approximate REML estimates. Forecasts based on the approximate REML estimates are found to provide a significant improvement over those obtained using the standard OLS estimates.
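The small-sample bias that REML-type estimators reduce is easy to demonstrate. The sketch below simulates an AR(1), shows the downward bias of OLS, and applies Kendall's classical correction E[phi_hat] ~ phi - (1 + 3*phi)/T as a simple stand-in; it is not the paper's weighted least squares approximate REML procedure:

```python
# Sketch: small-sample bias of OLS in an AR(1) and a simple bias correction.
import numpy as np

rng = np.random.default_rng(4)
phi, T, reps = 0.9, 30, 10_000
ols, corrected = [], []
for _ in range(reps):
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = phi * y[t - 1] + rng.standard_normal()
    p = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
    ols.append(p)
    corrected.append(p + (1 + 3 * p) / T)       # Kendall's bias approximation

print(f"true phi = {phi}, mean OLS = {np.mean(ols):.3f}, "
      f"mean corrected = {np.mean(corrected):.3f}")
```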

17.
This paper presents a methodology for estimating an index of technological change using firm-level data in a stochastic frontier production function model that takes into account time-varying technical inefficiency. In contrast to the Solow divisia index approach, econometric estimation of the index with panel data allows the researcher to separate technical progress from stochastic measurement error. Applying the econometric methodology to a panel of 908 publicly traded U.S. firms from the COMPUSTAT database, we find evidence of a significant downturn in general technological change over the period 1970–1989, whereas the divisia index methodology applied to the same data shows stagnation. When the sample is divided into Manufacturing, Services, and Miscellaneous categories, we find that the estimates of technological change for the three groups display markedly different stochastic behavior, and that the Services group is the source of the downturn.
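A minimal normal/half-normal stochastic frontier with a time trend indexing technological change can be fitted by maximum likelihood with scipy. This sketch omits the paper's panel structure and time-varying inefficiency, and the data file and columns are hypothetical:

```python
# Sketch: Cobb-Douglas stochastic production frontier, normal/half-normal,
# with a time trend whose coefficient indexes technological change.
# eps = v - u;  f(eps) = (2/sigma) * phi(eps/sigma) * Phi(-eps*lam/sigma).
import numpy as np
import pandas as pd
from scipy import optimize, stats

df = pd.read_csv("compustat_panel.csv")          # hypothetical stand-in
X = np.column_stack([np.ones(len(df)), np.log(df["capital"]),
                     np.log(df["labor"]), df["year"] - df["year"].min()])
y = np.log(df["output"]).to_numpy()

def neg_loglik(theta):
    beta, log_sigma, log_lam = theta[:4], theta[4], theta[5]
    sigma, lam = np.exp(log_sigma), np.exp(log_lam)
    eps = y - X @ beta
    ll = (np.log(2) - np.log(sigma) + stats.norm.logpdf(eps / sigma)
          + stats.norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

start = np.r_[np.linalg.lstsq(X, y, rcond=None)[0], 0.0, 0.0]
res = optimize.minimize(neg_loglik, start, method="BFGS")
print("technical change per year:", res.x[3])    # trend coefficient
```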

18.
The introduction of the Basel II Accord has had a huge impact on financial institutions, allowing them to build credit risk models for three key risk parameters: PD (probability of default), LGD (loss given default) and EAD (exposure at default). Until recently, credit risk research has focused largely on the estimation and validation of the PD parameter, and much less on LGD modeling. In this first large-scale LGD benchmarking study, various regression techniques for modeling and predicting LGD are investigated. These include one-stage models, such as those built by ordinary least squares regression, beta regression, robust regression, ridge regression, regression splines, neural networks, support vector machines and regression trees, as well as two-stage models which combine multiple techniques. A total of 24 techniques are compared using six real-life loss datasets from major international banks. It is found that much of the variance in LGD remains unexplained, as the average prediction performance of the models in terms of R2 ranges from 4% to 43%. Nonetheless, there is a clear trend that non-linear techniques, and in particular support vector machines and neural networks, perform significantly better than more traditional linear techniques. Also, two-stage models built by a combination of linear and non-linear techniques are shown to have a similarly good predictive power, with the added advantage of having a comprehensible linear model component.
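A sketch of the benchmarking design on a smaller scale: one linear technique (OLS) against two non-linear ones (SVM, neural network), compared by out-of-sample R2 with scikit-learn. The loss data sets in the study are proprietary, so the file here is a hypothetical stand-in:

```python
# Sketch: linear vs. non-linear LGD regression benchmark by hold-out R^2.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

df = pd.read_csv("bank_loss_data.csv")           # hypothetical stand-in
X, y = df.drop(columns="lgd"), df["lgd"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "OLS": LinearRegression(),
    "SVM": make_pipeline(StandardScaler(), SVR()),
    "NN": make_pipeline(StandardScaler(), MLPRegressor(max_iter=2000, random_state=0)),
}
for name, m in models.items():
    print(name, "out-of-sample R^2:", m.fit(X_tr, y_tr).score(X_te, y_te))
```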

19.
Traditional econometric models of economic contractions typically perform poorly in forecasting exercises. This criticism is also frequently levelled at professional forecast probabilities of contractions. This paper addresses the problem of incorporating the entire distribution of professional forecasts into an econometric model for forecasting contractions and expansions. A new augmented probit approach is proposed, involving the transformation of the distribution of professional forecasts into a 'professional forecast' prior for the economic data underlying the probit model. Since the object of interest is the relationship between the distribution of professional forecasts and the probit model's economic-data-dependent parameters, the solution avoids the criticisms levelled at the accuracy of point estimates of contractions based on professional forecasts. An application to US real GDP data shows that the model yields significant forecast improvements relative to alternative approaches.
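A simplified stand-in for the idea (not the paper's augmented-prior estimator): summarize the cross-sectional distribution of professional forecasts by a few moments and use them as probit regressors. The file and column names are hypothetical:

```python
# Sketch: probit forecast of contractions using moments of the distribution
# of professional forecasts as regressors. The paper instead maps that
# distribution into a prior for the probit's parameters.
import pandas as pd
import statsmodels.api as sm

spf = pd.read_csv("spf_gdp_forecasts.csv")       # hypothetical: one row per quarter
X = sm.add_constant(spf[["forecast_mean", "forecast_std", "share_negative"]])
y = spf["contraction_next_quarter"]              # 0/1 contraction indicator

probit = sm.Probit(y, X).fit(disp=False)
print(probit.summary())
print("P(contraction):", probit.predict(X.iloc[[-1]]))
```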

20.
This paper constructs a stochastic frontier gravity model to estimate China's frontier export level and export potential. The factors influencing China's exports are decomposed into natural determinants and man-made determinants, and the impact of each on exports is estimated separately, confirming the demand-driven character of China's exports. At the same time, China's exports operate at low efficiency, indicating that they face substantial man-made trade resistance; as the world trade environment improves, China still has considerable export potential. Given currently weakening external demand, short-term policies that support exporters' capacity have important value for crisis response. In the long run, out of concern for national economic security, developing the domestic market, stimulating domestic consumption, and appropriately reducing the external dependence of China's economic growth remain important directions for economic policy.
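A hedged sketch of the gravity-model ingredient: a log-linear gravity equation by OLS, with actual-to-fitted exports as a crude efficiency measure. The paper's stochastic frontier gravity model is estimated by maximum likelihood with a one-sided inefficiency term, which this OLS sketch omits; the data columns are hypothetical:

```python
# Sketch: log-linear gravity equation for exports; the ratio of actual to
# fitted exports proxies export efficiency, and the shortfall proxies potential.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

trade = pd.read_csv("china_exports.csv")         # hypothetical bilateral data
fit = smf.ols("np.log(exports) ~ np.log(gdp_partner) + np.log(distance) + contiguity",
              data=trade).fit()

trade["efficiency"] = trade["exports"] / np.exp(fit.fittedvalues)
trade["potential"] = np.exp(fit.fittedvalues) - trade["exports"]  # untapped exports
print(trade[["efficiency", "potential"]].describe())
```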
