Similar articles (20 results)
1.
The New Basel Accord allows internationally active banking organizations to calculate their credit risk capital requirements using an internal ratings based approach, subject to supervisory review. One of the modeling components is the loss given default (LGD): it represents the credit loss for a bank when extreme events occur that impair the obligor's ability to repay its debts to the bank. Among researchers and practitioners it is common to compute LGDs as forecasts of historical losses using statistical models such as linear regression, Tobit or decision trees. However, these statistical techniques do not seem to provide robust estimates and show low predictive performance. These results could be driven by factors that create differences in LGD, such as the presence and quality of collateral, the timing of the business cycle, workout process management and M&A activity among banks. This paper evaluates an alternative method of modeling LGD using a technique based on advanced credibility theory, typically used in actuarial modeling. This technique adds a statistical component to the credit and workout experts' opinion embedded in the collateral and workout management process and improves the predictive power of the forecasts. The model has been applied to an Italian bank's retail portfolio of overdrafts; the application of credibility theory provides higher predictive power for LGD estimation, and out-of-time backtesting shows stable estimation accuracy relative to the traditional LGD model.
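The paper's credibility approach is not spelled out in the abstract, but the general idea of blending observed pool-level LGDs with an expert prior can be illustrated with a Bühlmann-style credibility weight. The pool figures, prior LGD and credibility constant `k` below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def credibility_lgd(observed_lgd_mean, n_obs, prior_lgd, k):
    """Blend a pool's observed mean LGD with an expert/portfolio prior.

    Z = n / (n + k) is a Buhlmann-style credibility weight: pools with many
    workout observations lean on their own history, small pools lean on the
    prior (e.g. expert opinion on collateral and workout quality)."""
    z = n_obs / (n_obs + k)
    return z * observed_lgd_mean + (1.0 - z) * prior_lgd

# Illustrative overdraft pools: (mean realized LGD, number of closed workouts)
pools = [(0.45, 12), (0.62, 350), (0.30, 40)]
prior_lgd = 0.55   # hypothetical expert/portfolio-level LGD
k = 100.0          # hypothetical credibility constant

for lgd, n in pools:
    blended = credibility_lgd(lgd, n, prior_lgd, k)
    print(f"n={n:4d}  raw LGD={lgd:.2f}  credibility LGD={blended:.3f}")
```

The small pool (n=12) is pulled strongly toward the prior, while the large pool keeps most of its own experience, which is the stabilizing effect the abstract attributes to the credibility adjustment.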

2.
Forecasting credit default risk has been an important research field for several decades. Traditionally, logistic regression has been widely recognized as a solution because of its accuracy and interpretability. Although complex machine learning models may improve accuracy over simple logistic regressions, their limited interpretability has prevented their use in credit risk assessment. We introduce a neural network with a selective option that increases interpretability by distinguishing whether a linear model can explain the data. Our methods are tested on two datasets: 25,000 samples from the Taiwan payment system collected in October 2005 and 250,000 samples from the 2011 Kaggle competition. We find that, for most samples, logistic regression is sufficient, with reasonable accuracy; meanwhile, for some specific portions of the data, a shallow neural network model leads to much better accuracy without significantly sacrificing interpretability.
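The paper's selective architecture is more involved than the abstract reveals; a minimal sketch of the underlying idea is to score everything with logistic regression and route only the cases the linear model is unsure about to a shallow neural network. The synthetic data, uncertainty band and network size are assumptions for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for an imbalanced credit default dataset.
X, y = make_classification(n_samples=20000, n_features=20, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X_tr, y_tr)

# Route samples the linear model is unsure about (prob near 0.5) to the shallow NN.
p_logit = logit.predict_proba(X_te)[:, 1]
hard = np.abs(p_logit - 0.5) < 0.15          # assumed uncertainty band
pred = np.where(hard, nn.predict(X_te), (p_logit > 0.5).astype(int))

print(f"{hard.mean():.1%} of cases routed to the shallow NN, "
      f"accuracy={np.mean(pred == y_te):.4f}")
```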

3.
Previous research on credit scoring that used statistical and intelligent methods was mostly focused on commercial and consumer lending. The main purpose of this paper is to extract important features for credit scoring in small-business lending under specific transitional economic conditions, using a relatively small dataset. To do this, we compare the accuracy of the best models extracted by different methodologies, such as logistic regression, neural networks (NNs), and CART decision trees. Four different NN algorithms are tested (backpropagation, radial basis function, probabilistic, and learning vector quantization networks), using a forward nonlinear variable selection strategy. Although the test of differences in proportions and McNemar's test do not show a statistically significant difference among the models tested, the probabilistic NN model produces the highest hit rate and the lowest type I error. According to the measures of association, the best NN model also shows the highest degree of association with the data, and it yields the lowest total relative cost of misclassification for all scenarios examined. The best model extracts a set of important features for small-business credit scoring for the observed sample, emphasizing credit programme characteristics, as well as the entrepreneur's personal and business characteristics, as the most important ones. Copyright © 2005 John Wiley & Sons, Ltd.
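As a hedged sketch of one of the comparisons the abstract mentions, the snippet below fits a logistic regression and a small neural network on synthetic data and applies McNemar's test to their classification decisions. The data, network size and splits are placeholders, not the paper's small-business sample.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from statsmodels.stats.contingency_tables import mcnemar

X, y = make_classification(n_samples=1000, n_features=15, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

logit_ok = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict(X_te) == y_te
nn_ok = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000,
                      random_state=1).fit(X_tr, y_tr).predict(X_te) == y_te

# 2x2 table of correct/incorrect decisions by the two scorecards.
table = [[np.sum(logit_ok & nn_ok),  np.sum(logit_ok & ~nn_ok)],
         [np.sum(~logit_ok & nn_ok), np.sum(~logit_ok & ~nn_ok)]]
print("McNemar p-value:", mcnemar(table, exact=True).pvalue)
```

A large p-value corresponds to the abstract's finding that the classifiers do not differ significantly, even when one of them has the better hit rate.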

4.
We analyse whether the use of neural networks can improve ‘traditional’ volatility forecasts from time-series models, as well as implied volatilities obtained from options on futures on the Spanish stock market index, the IBEX-35. One of our main contributions is to explore the predictive ability of neural networks that incorporate both implied volatility information and historical time-series information. Our results show that the general regression neural network forecasts improve the information content of implied volatilities and enhance the predictive ability of the models. Our analysis is also consistent with the results from prior research studies showing that implied volatility is an unbiased forecast of future volatility and that time-series models have lower explanatory power than implied volatility. Copyright © 2008 John Wiley & Sons, Ltd.
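A general regression neural network (GRNN) is essentially a Gaussian-kernel weighted average of training targets (a Nadaraya-Watson estimator), which makes the abstract's idea of combining implied and historical volatility easy to sketch. The inputs, toy target and bandwidth below are assumptions, not the paper's IBEX-35 data.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_new, sigma=0.05):
    """General regression NN: Gaussian-kernel weighted average of targets."""
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(0)
# Columns: implied volatility from index options, lagged realized volatility.
X = rng.uniform(0.10, 0.40, size=(500, 2))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.02, 500)  # toy future volatility

X_new = np.array([[0.25, 0.20], [0.35, 0.15]])
print(grnn_predict(X, y, X_new))
```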

5.
This paper investigates relationships between the spread component costs (adverse selection, order processing and inventory costs) and stock trading characteristics in the Spanish Stock Exchange (SSE), taking into account the random nature of these costs. First, we analyse the statistical properties of estimated spread components in the market, which are obtained by using two statistical models to decompose the bid–ask spread. We then propose a fractional response regression model based on two flexible cross-sectional probability density functions with covariates which accommodate certain aspects of the empirical estimates, such as skewness and bounded distribution. Our model has two main advantages: (i) it can be implemented easily in a maximum likelihood framework; (ii) in contrast to linear regression models, it provides a useful estimate of the statistical significance of the parameters, and predicts costs not only at the conditional mean but also by using quantiles of the estimated conditional distribution. The empirical results corroborate the presence of statistically significant large order processing costs and smaller adverse selection and inventory costs in the SSE. These spread components have a skewed empirical distribution and the proposed fractional regression models represent the behaviour of these costs reasonably well, surpassing the linear regression model in various specification tests.
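The abstract does not name the two densities it uses, so as a hedged illustration of a fractional response model fitted by maximum likelihood, the sketch below estimates a beta regression for a cost share bounded in (0, 1) with a single synthetic covariate. The data-generating process, link and precision parameterization are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import beta as beta_dist

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)                       # e.g. a log trading-activity measure
mu_true = expit(-0.5 + 0.8 * x)              # mean cost share in (0, 1)
phi_true = 20.0                              # precision
y = rng.beta(mu_true * phi_true, (1 - mu_true) * phi_true)

def negloglik(params):
    b0, b1, log_phi = params
    mu = expit(b0 + b1 * x)
    phi = np.exp(log_phi)
    return -beta_dist.logpdf(y, mu * phi, (1 - mu) * phi).sum()

fit = minimize(negloglik, x0=[0.0, 0.0, 1.0], method="BFGS")
print("beta0, beta1, phi:", fit.x[0], fit.x[1], np.exp(fit.x[2]))
```

Because the full conditional distribution is estimated, quantile predictions follow directly from the fitted beta parameters, which is the advantage the abstract highlights over linear regression.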

6.
The accurate prediction of long-term care insurance (LTCI) mortality, lapse, and claim rates is essential when making informed pricing and risk management decisions. Unfortunately, academic literature on the subject is sparse and industry practice is limited by software and time constraints. In this article, we review current LTCI industry modeling methodology, which is typically Poisson regression with covariate banding/modification and stepwise variable selection. We test the claim that covariate banding improves predictive accuracy, examine the potential pitfalls of stepwise selection, and contend that the assumptions required for Poisson regression are not appropriate for LTCI data. We propose several alternative models specifically tailored toward count responses with an excess of zeros and overdispersion. Using data from a large LTCI provider, we evaluate the predictive capacity of random forests and generalized linear and additive models with zero-inflated Poisson, negative binomial, and Tweedie errors. These alternatives are compared to previously developed Poisson regression models.

Our study confirms that variable modification is unnecessary at best and automatic stepwise model selection is dangerous. After demonstrating severe overprediction of LTCI mortality and lapse rates under the Poisson assumption, we show that a Tweedie GLM enables much more accurate predictions. Our Tweedie regression models improve average predictive accuracy (measured by several prediction error statistics) over Poisson regression models by as much as four times for mortality rates and 17 times for lapse rates.
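A minimal sketch of the model family comparison described above, fitting a Poisson GLM and a Tweedie GLM to a synthetic zero-heavy outcome with statsmodels. The covariate, variance power and data-generating process are illustrative assumptions, not the provider's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
age = rng.uniform(60, 95, n)
X = sm.add_constant((age - 75) / 10)

# Zero-heavy synthetic "lapse count" outcome: many structural zeros plus counts.
lam = np.exp(-2.0 + 0.6 * X[:, 1])
y = np.where(rng.uniform(size=n) < 0.6, 0, rng.poisson(lam))

poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
tweedie_fit = sm.GLM(y, X, family=sm.families.Tweedie(var_power=1.5)).fit()

for name, fit in [("Poisson", poisson_fit), ("Tweedie", tweedie_fit)]:
    print(name, "coefficients:", np.round(fit.params, 3))
```

The Tweedie family with variance power between 1 and 2 corresponds to a compound Poisson-gamma distribution, which places positive mass at zero and so accommodates the excess zeros that break the Poisson variance assumption.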


7.
We explore a large sample of analysts' estimates of the cost of equity capital (CoE) to evaluate their usefulness as expected return proxies (ERP). We find that the CoE estimates are significantly related to a firm's beta, size, book-to-market ratio, leverage, and idiosyncratic volatility but not other risk proxies. Even after controlling for the popular return predictors, the CoE estimates incrementally predict future stock returns. This predictive ability is better explained as the CoE estimates containing ERP information rather than reflecting stock mispricing. When evaluated against traditional ERPs, including the implied costs of capital, the CoE estimates are found to be the least noisy. Finally, we document CoE responses around earnings announcements, demonstrating their usefulness to study discount-rate reactions of market participants. We conclude that analysts' CoE estimates are meaningful ERPs that can be fruitfully employed in a variety of asset pricing contexts.

8.
We model the conditional risk premium by combining principal component analysis and a statistical learning technique known as boosted regression trees. The method is validated through various out-of-sample tests. We apply the estimates to test the positivity restriction on the risk premium and find evidence that the risk premium is negative in periods of low corporate and government bond returns, high inflation and a downward-sloping term structure. These periods are linked with changes in business cycles, the states in which theory predicts a negative risk premium. Based on this evidence, we reject the conditional capital asset pricing model and question the practice of imposing a positive risk premium constraint in predictive models.
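The two-step pipeline the abstract describes, compressing a panel of predictors with PCA and then fitting boosted regression trees to excess returns, can be sketched as follows. The predictor panel, split point and tuning parameters are illustrative assumptions rather than the paper's setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
T, K = 600, 40                        # months, candidate predictors
Z = rng.normal(size=(T, K))           # macro/financial predictor panel
excess_ret = 0.3 * Z[:, 0] - 0.2 * Z[:, 1] + rng.normal(0, 1.0, T)

# Simple out-of-sample scheme: fit PCA and trees on the first part of the sample.
split = 400
pca = PCA(n_components=5).fit(Z[:split])
F_tr, F_te = pca.transform(Z[:split]), pca.transform(Z[split:])

gbt = GradientBoostingRegressor(n_estimators=300, max_depth=2,
                                learning_rate=0.05).fit(F_tr, excess_ret[:split])
premium = gbt.predict(F_te)           # fitted conditional risk premium
print("share of months with negative fitted premium:", (premium < 0).mean())
```

The final line is the kind of quantity one would inspect when testing the positivity restriction; in the paper this is done with a proper recursive out-of-sample design rather than a single split.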

9.
Missing data is a problem that may be faced by actuaries when analysing mortality data. In this paper we deal with pension scheme data, where the future lifetime of each member is modelled by means of parametric survival models incorporating covariates, which may be missing for some individuals. Parameters are estimated by likelihood-based techniques. We analyse statistical issues, such as parameter identifiability, and propose an algorithm to handle the estimation task. Finally, we analyse the financial impact of including covariates maximally, compared with excluding parts of the mortality experience where data are missing; in particular we consider annuity factors and mis-estimation risk capital requirements.

10.
Bankruptcy prediction has received growing interest in corporate finance and risk management in recent years. Although numerous studies in the literature have dealt with various statistical and artificial intelligence classifiers, their performance in credit risk forecasting needs further scrutiny relative to other methods. In the spirit of Chen, Härdle and Moro (2011, Quantitative Finance), we design an empirical study to assess the effectiveness of various machine learning topologies trained with big data approaches and qualitative, rather than quantitative, information as input variables. The experimental results from a ten-fold cross-validation methodology demonstrate that a generalized regression neural network yields an accuracy of 99.96%, a sensitivity of 99.91% and a specificity of 100%. Indeed, this specific model outperformed multi-layer back-propagation networks, probabilistic neural networks, radial basis functions and regression trees, as well as other advanced classifiers. The utilization of advanced nonlinear classifiers based on big data methodologies and machine learning training generates results that outperform traditional methods for bankruptcy forecasting and risk measurement.

11.
In credit default prediction models, the need to deal with time-varying covariates often arises. For instance, in the context of corporate default prediction a typical approach is to estimate a hazard model by regressing the hazard rate on time-varying covariates like balance sheet or stock market variables. If the prediction horizon covers multiple periods, this leads to the problem that the future evolution of these covariates is unknown. Consequently, some authors have proposed a framework that augments the prediction problem with covariate forecasting models. In this paper, we present simple alternatives for multi-period prediction that avoid the burden of specifying and estimating a model for the covariate processes. In an application to North American public firms, we show that the proposed models deliver high out-of-sample predictive accuracy.
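One simple alternative in the spirit of the abstract is to fit a separate classifier for each horizon, labelling a firm by whether it defaults within h periods and using only today's covariates, so no covariate forecasts are needed. The sketch below uses synthetic data and horizon-specific logistic regressions; the specific models in the paper may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10000
X = rng.normal(size=(n, 5))                     # today's balance-sheet / market ratios
# Synthetic time-to-default in years, driven by the first covariate.
hazard = np.exp(-3.0 + 0.8 * X[:, 0])
time_to_default = rng.exponential(1.0 / hazard)

for h in (1, 3, 5):                              # prediction horizons in years
    y_h = (time_to_default <= h).astype(int)     # cumulative default within h periods
    model = LogisticRegression(max_iter=1000).fit(X, y_h)
    print(f"horizon {h}y: observed default rate {y_h.mean():.3f}, "
          f"mean predicted PD {model.predict_proba(X)[:, 1].mean():.3f}")
```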

12.
This study employs a dataset from three German leasing companies with 14,322 defaulted leasing contracts to analyze different approaches to estimating the loss given default (LGD). Using the historical average LGD and a simple OLS regression as benchmarks, we compare hybrid finite mixture models (FMMs), model trees and regression trees, and we calculate the mean absolute error, the root mean squared error, and the Theil inequality coefficient. The relative estimation accuracy of the methods depends, among other things, on the number of observations and on whether in-sample or out-of-sample estimates are considered. The latter is decisive for proper risk management and is required for regulatory purposes. FMMs aim to reproduce the distribution of realized LGDs and therefore perform best with respect to in-sample estimation, but they show poor out-of-sample performance. Model trees, by contrast, are more robust and outperform all other methods if the sample size is sufficiently large.
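The three error statistics named in the abstract are straightforward to compute. The sketch below uses one common normalization of the Theil inequality coefficient (RMSE divided by the sum of the root mean squares of predictions and actuals); several variants exist, so treat this form as an assumption, and the LGD values are made up.

```python
import numpy as np

def lgd_error_stats(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    err = predicted - actual
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    # Theil inequality coefficient (U), one common normalization.
    theil_u = rmse / (np.sqrt(np.mean(predicted ** 2)) + np.sqrt(np.mean(actual ** 2)))
    return {"MAE": mae, "RMSE": rmse, "Theil U": theil_u}

realized = [0.10, 0.00, 0.85, 0.40, 0.65]        # realized LGDs per contract
estimated = [0.20, 0.05, 0.70, 0.35, 0.60]       # model estimates
print(lgd_error_stats(realized, estimated))
```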

13.
This study provides evidence on market implied future earnings based on the residual income valuation (RIV) framework and compares these earnings with analyst earnings forecasts for accuracy (absolute forecast error) and bias (signed forecast error). Prior research shows that current stock price reflects future earnings and that analyst forecasts are biased. Thus, how price-based imputed forecasts compare with analyst forecasts is interesting. Using different cost of capital estimates, we use the price-earnings relation and impute firms’ future annual earnings from three residual income (RI) models for up to 5 years. Relative to I/B/E/S analyst forecasts, imputed forecasts from the RI models are less or no more biased when cost of capital is low (equal to a risk-free rate or slightly higher). Analysts slightly outperform these RI models in terms of accuracy for immediate future (1 or 2) years in the forecast horizon but the opposite is true for more distant future years when cost of capital is low. A regression analysis shows that, in explaining future earnings changes, analyst forecasts relative to imputed forecasts do not impound a significant amount of earnings information embedded in current price. In additional tests, we impute future long-term earnings growth rates and find that they are more accurate and less biased than I/B/E/S analyst long-term earnings growth forecasts. Together, the results suggest that the RIV framework can be used to impute a firm’s future earnings that are high in accuracy and low in bias, especially for distant future years.  相似文献   
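The abstract's imputation step can be illustrated under strong simplifying assumptions: full payout (constant book value), a flat ROE over a finite horizon, and zero residual income afterwards. Given an observed price, book value and cost of capital, one can then back out the implied ROE and hence implied annual earnings. The figures and functional form below are hypothetical, not the paper's three RI models.

```python
from scipy.optimize import brentq

def riv_price(roe, book0, r, horizon=5):
    """Residual income value: P = B0 + sum of discounted residual income,
    assuming constant book value, flat ROE, and no terminal residual income."""
    ri = (roe - r) * book0
    return book0 + sum(ri / (1 + r) ** t for t in range(1, horizon + 1))

price, book0, r = 48.0, 30.0, 0.04          # observed price, book value, cost of capital
implied_roe = brentq(lambda roe: riv_price(roe, book0, r) - price, -1.0, 2.0)
implied_earnings = implied_roe * book0      # imputed annual earnings over the horizon
print(f"implied ROE = {implied_roe:.3f}, implied annual earnings = {implied_earnings:.2f}")
```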

14.
15.
This article develops two models for predicting the default of Russian Small and Medium-sized Enterprises (SMEs). The most general questions that the article attempts to answer are ‘Can the default risk of Russian SMEs be assessed with a statistical model?’ and ‘Would such a model demonstrate sufficiently high predictive accuracy?’ The article uses a relatively large dataset of financial statements and employs discriminant analysis as the statistical methodology. Default is defined as legal bankruptcy. The basic model contains only financial ratios; it is extended by adding size and age variables. Liquidity and profitability turn out to be the key factors in predicting default. The resulting models have high predictive accuracy and the potential to be of practical use in Russian SME lending.
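A minimal sketch of an extended discriminant-analysis default model of the kind the abstract describes, with ratio, size and age variables on synthetic data. The variables, their distributions and the default-generating rule are assumptions, not the paper's Russian SME sample.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 3000
liquidity = rng.normal(1.5, 0.6, n)        # e.g. current ratio
profitability = rng.normal(0.05, 0.10, n)  # e.g. return on assets
size = rng.normal(15, 2, n)                # log assets
age = rng.integers(1, 25, n)               # firm age in years
X = np.column_stack([liquidity, profitability, size, age])

# Synthetic bankruptcy flag driven mainly by liquidity and profitability.
logit = -1.0 - 1.2 * (liquidity - 1.5) - 8.0 * profitability
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

lda = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(lda, X, y, cv=5).mean())
```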

16.
We use time-driven activity-based costing (TDABC) to estimate the cost of radiation treatments at the national level. Although TDABC has mostly been applied at the hospital level, we demonstrate its potential to estimate costs at the national level, which can provide health policy recommendations. Contrary to work on reimbursement or charges representing the health care system perspective, we focus on resource costs from the perspective of health care service providers. Using the example of Belgian inputs and results, we discuss development of a TDABC model. We also present insights into the challenges that arose during model design and implementation. Finally, we discuss recent examples of policy implications in Belgium as well as some caveats that should be considered when developing resource allocation models at the national level.
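The core TDABC calculation has two steps: derive a capacity cost rate (cost of capacity supplied divided by practical capacity in minutes) for each resource pool, then cost a treatment as the minutes it consumes times those rates. All figures below are hypothetical illustrations, not Belgian inputs from the paper.

```python
# Step 1: capacity cost rate per minute for each resource pool.
resources = {
    # resource pool: (annual cost of capacity supplied, practical capacity in minutes)
    "linac":           (1_200_000, 90_000),
    "rad_oncologist":  (300_000, 60_000),
    "rt_technologist": (180_000, 84_000),
}
cost_per_min = {k: cost / minutes for k, (cost, minutes) in resources.items()}

# Step 2: minutes consumed per fraction of a hypothetical treatment protocol.
treatment_minutes = {"linac": 15, "rad_oncologist": 5, "rt_technologist": 20}
fractions = 25

cost_per_fraction = sum(treatment_minutes[r] * cost_per_min[r] for r in treatment_minutes)
print(f"cost per fraction: {cost_per_fraction:.2f}, "
      f"full course ({fractions} fractions): {fractions * cost_per_fraction:.2f}")
```

Scaling to the national level then amounts to multiplying such per-course costs by national treatment volumes per protocol, which is where the policy-level resource allocation questions enter.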

17.
Financial institutions, by and large, rely on machine learning techniques to improve classic credit risk assessment models in order to reduce costs, deliver faster decisions, secure credit collections, and mitigate risks. As such, several data mining and machine learning approaches have been developed for computing credit scores over the last few decades. However, existing rule-based classification algorithms tend to generate many rules with a large number of conditions in the antecedent part, and they fail to achieve high predictive accuracy while balancing coverage and simplicity. Generating an optimal rule set with high predictive accuracy therefore remains a challenging task. In this paper, we present an effective rule-based classification technique for the prediction of credit risk using a novel Biogeography Based Optimization (BBO) method. In the context of rule mining, the novel BBO is named the locally and globally tuned biogeography-based rule miner (LGBBO-RuleMiner). It is applied to discover an optimal rule set with high predictive accuracy from data containing both categorical and continuous attributes. The performance of the proposed algorithm is compared against a variety of rule miners, such as OneR (1R), PART, JRip, Decision Table, Conjunctive Rule, J48, and Random Tree, along with some meta-heuristic rule mining techniques, on two credit risk datasets obtained from the University of California, Irvine (UCI) repository. The comparative study, based on ten independent runs of ten-fold cross-validation, shows that the proposed rule miner outperforms all of the aforementioned algorithms in terms of predictive accuracy, coverage, and simplicity.
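The BBO search itself is beyond a short sketch, but the kind of objective such a rule miner optimizes, rewarding a candidate rule's accuracy, coverage, and simplicity, can be shown directly. The rule encoding, weights, and toy data below are assumptions for illustration, not the LGBBO-RuleMiner's actual fitness function.

```python
import numpy as np

def rule_fitness(rule, X, y, target_class, w_acc=0.6, w_cov=0.3, w_simp=0.1):
    """Score a candidate rule given as a list of (feature_index, low, high) conditions.

    Fitness rewards accuracy on covered rows, coverage of the dataset, and
    simplicity (few antecedent conditions). Weights are illustrative."""
    covered = np.ones(len(y), dtype=bool)
    for j, lo, hi in rule:
        covered &= (X[:, j] >= lo) & (X[:, j] <= hi)
    coverage = covered.mean()
    accuracy = (y[covered] == target_class).mean() if covered.any() else 0.0
    simplicity = 1.0 / len(rule)
    return w_acc * accuracy + w_cov * coverage + w_simp * simplicity

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 4))
y = (X[:, 0] > 0.5).astype(int)                  # toy "good credit" label
rule = [(0, 0.5, 1.0), (2, 0.0, 0.9)]            # candidate antecedent
print("fitness:", rule_fitness(rule, X, y, target_class=1))
```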

18.
The positive correlation (PC) test is the standard procedure used in the empirical literature to detect the existence of asymmetric information in insurance markets. This article describes a new tool to implement an extension of the PC test based on a new family of regression models, the multivariate ordered logit, designed to study how the joint distribution of two or more ordered response variables depends on exogenous covariates. We present an application of our proposed extension of the PC test to the Medigap health insurance market in the United States. Results reveal that the risk–coverage association is not homogeneous across coverage and risk categories, and depends on individual socioeconomic and risk preference characteristics.
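The paper's multivariate ordered logit models two or more ordered outcomes jointly; as a hedged building block, the sketch below fits a standard univariate ordered logit of coverage level on a risk measure and a control with statsmodels. The data are synthetic and the model is only the univariate ingredient, not the joint specification.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 2000
risk_score = rng.normal(size=n)              # e.g. self-assessed health / risk class
income = rng.normal(size=n)

# Latent propensity to buy more coverage, loosely tied to risk (adverse selection).
latent = 0.5 * risk_score + 0.3 * income + rng.logistic(size=n)
coverage = pd.cut(latent, bins=[-np.inf, -1, 1, np.inf], labels=False)  # 0/1/2 plan levels

X = pd.DataFrame({"risk_score": risk_score, "income": income})
fit = OrderedModel(coverage, X, distr="logit").fit(method="bfgs", disp=False)
print(fit.summary())   # a positive risk_score coefficient is the basic PC-test signal
```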

19.
An important debate in the contemporary accounting literature relates to the relative merits of activity-based versus volume-based product costing methodologies. Traditional volume-based costing systems are said to be flawed and may seriously mislead strategic decision making. Such arguments assume that decision makers use such information in an unproblematic way. This article reports on an experiment designed to investigate whether decision makers are able to overcome data fixation in a setting involving the use of product cost information. In response to criticisms of previous accounting studies of data fixation, subjects received some feedback after each decision, and were rewarded based on performance. The experiment involved subjects making a series of production output decisions based on detailed case information of a hypothetical firm facing different market conditions for each decision. A between-subjects design was utilized with two cost system treatments: activity-based costing (ABC) and traditional costing (TC). It was hypothesized that the group provided with ABC cost data would make 'optimal' decisions and the group provided with TC cost data would overcome fixation. The results of the experiment indicated that there was, in general, evidence of data fixation among TC subjects, but a small number of subjects did adjust to ABC costs. These results are discussed in the light of previous research and some future directions are outlined.
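To make concrete why the two treatments can send subjects very different cost signals, the sketch below contrasts overhead per unit allocated by activity drivers (ABC) with overhead allocated by a single direct-labour-hour base (traditional volume-based costing). The products, pools and drivers are hypothetical, not the experiment's case materials.

```python
# Two products; overhead pools with activity drivers (ABC) vs a single
# direct-labour-hour base (traditional volume-based costing, TC).
products = {"A": {"units": 10_000, "dlh": 2, "setups": 10, "inspections": 20},
            "B": {"units": 1_000,  "dlh": 2, "setups": 40, "inspections": 80}}
overhead = {"setups": 120_000, "inspections": 80_000}
drivers = {"setups": sum(p["setups"] for p in products.values()),
           "inspections": sum(p["inspections"] for p in products.values())}

total_dlh = sum(p["units"] * p["dlh"] for p in products.values())
tc_rate = sum(overhead.values()) / total_dlh            # overhead per labour hour

for name, p in products.items():
    abc = sum(overhead[a] / drivers[a] * p[a] for a in overhead) / p["units"]
    tc = tc_rate * p["dlh"]
    print(f"product {name}: ABC overhead/unit = {abc:.2f}, TC overhead/unit = {tc:.2f}")
```

The low-volume, activity-intensive product B is heavily undercosted under TC, which is the distortion that makes the ABC versus TC treatment a meaningful test of data fixation.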

20.
Elevated total cholesterol is well established as a risk factor for coronary artery disease and cardiovascular mortality. However, less attention is paid to the association between low cholesterol levels and mortality: the low-cholesterol paradox. In this paper, restricted cubic splines (RCS) and complex survey methodology are used to show that the low-cholesterol paradox is present in the laboratory, examination, and mortality follow-up data from the Third National Health and Nutrition Examination Survey (NHANES III). A series of Cox proportional hazards models demonstrates that RCS are necessary to incorporate the desired covariates while avoiding the use of categorical variables. Valid concerns regarding the accuracy of such predictive models are discussed. The one certain conclusion is that low cholesterol levels are markers for excess mortality, just as high levels are. Restricted cubic splines provide the flexibility needed to demonstrate the U-shaped relationship between cholesterol and mortality without resorting to binning. Cox PH models perform well at identifying associations between risk factors and outcomes of interest such as mortality. However, the predictions from such a model may not be as accurate as common statistics suggest, and predictive models should be used with caution.
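A hedged sketch of a Cox model with a restricted (natural) cubic spline for total cholesterol, built from patsy's cr() basis and lifelines. The data are synthetic with an assumed U-shaped hazard, not NHANES III, and the spline degrees of freedom and censoring scheme are illustrative choices; the survey weighting the paper uses is omitted.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from patsy import dmatrix

rng = np.random.default_rng(0)
n = 3000
chol = rng.normal(210, 45, n)                       # total cholesterol (mg/dL)
age = rng.uniform(30, 80, n)
# Synthetic U-shaped log-hazard in cholesterol plus an age effect.
log_h = 0.00015 * (chol - 210) ** 2 + 0.04 * (age - 55)
time = rng.exponential(10 * np.exp(-log_h))
event = (time < 15).astype(int)                     # administrative censoring at 15 years
time = np.minimum(time, 15)

# Restricted cubic spline basis for cholesterol via patsy's natural spline cr().
spline = dmatrix("cr(chol, df=4) - 1", {"chol": chol}, return_type="dataframe")
df = pd.concat([spline, pd.DataFrame({"age": age, "time": time, "event": event})], axis=1)

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()
```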
