Similar Documents
20 similar documents found.
1.
The introduction of the Basel II Accord has had a huge impact on financial institutions, allowing them to build credit risk models for three key risk parameters: PD (probability of default), LGD (loss given default) and EAD (exposure at default). Until recently, credit risk research has focused largely on the estimation and validation of the PD parameter, and much less on LGD modeling. In this first large-scale LGD benchmarking study, various regression techniques for modeling and predicting LGD are investigated. These include one-stage models, such as those built by ordinary least squares regression, beta regression, robust regression, ridge regression, regression splines, neural networks, support vector machines and regression trees, as well as two-stage models which combine multiple techniques. A total of 24 techniques are compared using six real-life loss datasets from major international banks. It is found that much of the variance in LGD remains unexplained, as the average prediction performance of the models in terms of R² ranges from 4% to 43%. Nonetheless, there is a clear trend that non-linear techniques, and in particular support vector machines and neural networks, perform significantly better than more traditional linear techniques. Also, two-stage models built by a combination of linear and non-linear techniques are shown to have a similarly good predictive power, with the added advantage of having a comprehensible linear model component.
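A minimal sketch (not from the paper, and using synthetic data) of the kind of benchmark described above: a linear model and two non-linear regressors are scored by out-of-sample R² on an LGD-like target bounded in [0, 1]. The data-generating process and covariates are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))                      # stand-ins for loan/collateral covariates
signal = 0.3 * X[:, 0] + 0.4 * np.tanh(2 * X[:, 1]) + 0.2 * X[:, 2] * X[:, 3]
lgd = np.clip(0.4 + 0.25 * signal + 0.25 * rng.normal(size=n), 0.0, 1.0)  # bounded in [0, 1]

X_tr, X_te, y_tr, y_te = train_test_split(X, lgd, test_size=0.3, random_state=0)
models = {
    "OLS": LinearRegression(),
    "SVR (RBF)": SVR(C=1.0, gamma="scale"),
    "Regression tree": DecisionTreeRegressor(max_depth=5, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name:15s} out-of-sample R2 = {r2_score(y_te, model.predict(X_te)):.3f}")
```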

2.
The loss given default (LGD) distribution is known to have a complex structure. Consequently, the parametric approach for its prediction by fitting a density function may suffer a loss of predictive power. To overcome this potential drawback, we use the cumulative probability model (CPM) to predict the LGD distribution. The CPM applies a transformed variable to model the LGD distribution. This transformed variable has a semiparametric structure. It models the predictor effects parametrically. The functional form of the transformation is unspecified. Thus, CPM provides more flexibility and simplicity in modeling the LGD distribution. To implement CPM, we collect a sample of defaulted debts from Moody’s Default and Recovery Database. Given this sample, we use an expanding rolling window approach to investigate the out-of-time performance of CPM and its alternatives. Our results confirm that CPM is better than its alternatives, in the sense of yielding more accurate LGD distribution predictions.

3.
We study the optimal stopping problems embedded in a typical mortgage. Despite the possibly non-rational behaviour of the typical mortgage borrower, such problems are worth solving: the lender needs to hedge against prepayment risk, and many mortgage-backed securities pricing models incorporate this suboptimality via a so-called prepayment function, which can depend, at time t, on whether prepayment is optimal or not. We state the prepayment problem in the context of optimal stopping theory and present an algorithm that solves the problem via weak convergence of computationally simple trees. Numerical results for the Vasicek model and the CIR model are also presented. The procedure is extended to the case where both prepayment and default are possible: here, we present a new method for building two-dimensional computationally simple trees and apply it to the optimal stopping problem.
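A minimal numerical sketch (not the authors' algorithm) of the backward-induction idea: on a toy recombining binomial short-rate tree, the borrower of a fully amortizing fixed-rate loan prepays whenever the outstanding balance is below the continuation value of the remaining payments. The contract terms, the multiplicative rate tree and the risk-neutral probability of 1/2 are illustrative assumptions; the paper instead builds computationally simple trees that converge weakly to the Vasicek and CIR dynamics.

```python
import numpy as np

# Toy contract: a fully amortizing fixed-rate loan
principal, rc, N = 100.0, 0.06, 10            # contract rate per period, N payment dates
m = principal * rc / (1 - (1 + rc) ** -N)     # level payment
balance = np.empty(N + 1)
balance[0] = principal
for t in range(1, N + 1):
    balance[t] = balance[t - 1] * (1 + rc) - m    # outstanding balance right after payment t

# Toy recombining binomial short-rate tree (NOT the Vasicek/CIR trees of the paper)
r0, up, down = 0.05, 1.25, 0.80
rate = lambda t, j: r0 * up ** j * down ** (t - j)    # j = number of up moves by time t

# Backward induction for the optimal stopping (prepayment) problem: V[j] holds the value
# of all remaining payments just before the scheduled payment at the current step; the
# borrower prepays the balance whenever it is cheaper than continuing.
V = np.full(N + 1, m)                          # at t = N only the final payment remains
prepay_nodes = []
for t in range(N - 1, 0, -1):
    newV = np.empty(t + 1)
    for j in range(t + 1):
        cont = 0.5 * (V[j] + V[j + 1]) / (1 + rate(t, j))
        if balance[t] < cont:
            prepay_nodes.append((t, j))
        newV[j] = m + min(balance[t], cont)
    V = newV
value_0 = 0.5 * (V[0] + V[1]) / (1 + r0)       # no payment at t = 0
print(f"mortgage value to the lender under optimal prepayment: {value_0:.2f}")
print("nodes (t, up-moves) where prepayment is optimal:", sorted(prepay_nodes))
```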

4.
This paper studies the role of non-pervasive shocks when forecasting with factor models. To this end, we first introduce a new model that incorporates the effects of non-pervasive shocks: an Approximate Dynamic Factor Model with a sparse model for the idiosyncratic component. We then test the forecasting performance of this model both in simulations and on a large panel of US quarterly data. We find that, when the goal is to forecast a disaggregated variable, which is usually affected by regional or sectoral shocks, it is useful to capture the dynamics generated by non-pervasive shocks; however, when the goal is to forecast an aggregate variable, which responds primarily to macroeconomic (i.e. pervasive) shocks, accounting for non-pervasive shocks is not useful.
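An illustrative simulation, not the authors' Approximate Dynamic Factor Model: a principal-component factor stands in for the pervasive shock, and a lasso regression on the lagged idiosyncratic panel stands in for the sparse idiosyncratic model when forecasting a disaggregated series. All settings are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression, Lasso

rng = np.random.default_rng(1)
T, N = 300, 40
f = np.zeros(T)                                  # pervasive (common) factor, AR(1)
g = np.zeros(T)                                  # non-pervasive ("regional") shock, AR(1)
for t in range(1, T):
    f[t] = 0.7 * f[t - 1] + rng.normal()
    g[t] = 0.6 * g[t - 1] + rng.normal()
load_f = rng.normal(size=N)
load_g = np.zeros(N)
load_g[:5] = 1.0                                 # only five series load on the regional shock
X = np.outer(f, load_f) + np.outer(g, load_g) + rng.normal(scale=0.5, size=(T, N))

target = 0                                       # a disaggregated series hit by the regional shock
F_hat = PCA(n_components=1).fit_transform(X)     # estimated pervasive factor
idio = X - LinearRegression().fit(F_hat, X).predict(F_hat)   # idiosyncratic components

y_next = X[1:, target]                           # one-step-ahead target
Z_factor = F_hat[:-1]                            # factor-only predictors
Z_sparse = np.hstack([F_hat[:-1], idio[:-1]])    # factor plus lagged idiosyncratic panel

cut = 250
m1 = LinearRegression().fit(Z_factor[:cut], y_next[:cut])
m2 = Lasso(alpha=0.05).fit(Z_sparse[:cut], y_next[:cut])
mse = lambda model, Z: np.mean((y_next[cut:] - model.predict(Z[cut:])) ** 2)
print("factor-only forecast MSE:         ", round(mse(m1, Z_factor), 3))
print("factor + sparse idiosyncratic MSE:", round(mse(m2, Z_sparse), 3))
```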

5.
Some studies have shown that body mass index (BMI), weight (kg)/height (m)², has a negative (or no) effect on wage. But BMI as a measure of obesity is a tightly specified function of weight and height, and there is room for weight given height (i.e. obesity given height) to explain wage better once this tight specification is relaxed. In this paper, we address the question of the weight effect on wage given height, employing two-wave panel data for white females and adopting a semi-linear model consisting of a nonparametric function of weight and height and a linear function of the other regressors. We find that there is no weight effect on wage up to the average weight, beyond which a large negative effect kicks in. Linear BMI models give the incorrect impression that there is a ‘wage gain’ from becoming slimmer than the average, and they understate the ‘wage loss’ incurred above the average.

6.
We develop and apply a Bayesian model for the loss rates given default (LGDs) of European sovereigns. Financial institutions need LGD forecasts under Pillar II of the regulatory Basel Accord and downturn LGD forecasts under Pillar I. Both are challenging for portfolios with a small number of observations, such as sovereigns. Our approach incorporates parameter risk and generates LGD forecasts under both regular and downturn conditions. Using sovereign-specific rating information, we find that average LGD estimates vary between 0.46 and 0.64, while downturn estimates lie between 0.50 and 0.86.
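A toy grid-posterior illustration of parameter risk in a small LGD sample, not the paper's model: observations are treated as Beta-distributed, a flat prior is placed on the Beta parameters, and a "downturn" figure is read off as an upper quantile of the posterior for the mean LGD. The sample, grid and 95% quantile choice are all assumptions.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(2)
obs = np.clip(rng.beta(2.0, 1.5, size=12), 1e-3, 1 - 1e-3)   # small sovereign-LGD-like sample

# Grid posterior over Beta(a, b) parameters with a flat prior
a_grid = np.linspace(0.2, 8, 120)
b_grid = np.linspace(0.2, 8, 120)
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")
loglik = sum(beta.logpdf(x, A, B) for x in obs)
post = np.exp(loglik - loglik.max())
post /= post.sum()

# Parameter risk: posterior distribution of the mean LGD a / (a + b)
mean_lgd = A / (A + B)
order = np.argsort(mean_lgd.ravel())
cdf = np.cumsum(post.ravel()[order])
expected = float((post * mean_lgd).sum())
downturn = float(mean_lgd.ravel()[order][np.searchsorted(cdf, 0.95)])
print(f"posterior expected LGD: {expected:.3f}")
print(f"'downturn' LGD (95th percentile of the posterior mean): {downturn:.3f}")
```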

7.
The future revision of capital requirements and the market-consistent valuation of non-hedgeable liabilities are drawing increasing attention to forecasting longevity trends. In this field, many methodologies focus on either modeling mortality or pricing mortality-linked securities (such as longevity bonds). Following the Lee–Carter method (proposed in 1992), the actuarial literature has provided several extensions designed to capture different trends observed in European data sets (e.g., the cohort effect). The purpose of the paper is to compare the features of the main mortality models proposed over the years. Model selection has indeed become a primary task, with the aim of identifying the “best” model. What is meant by best is controversial, but good selection techniques are usually based on a good balance between goodness of fit and simplicity. In this regard, different criteria, mainly based on residual and projected-rate analysis, are used here. For the sake of comparison, the main forecasting methods have been applied to deaths and exposures to risk of the male Italian population. Weaknesses and strengths are emphasized, underlining how the various models provide a different goodness of fit for different data sets. At the same time, the quality and variability of the forecasted rates are compared by evaluating their effect on annuity values. The results confirm that some models perform better than others, but no single model can be identified as the best method.
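A minimal sketch of the baseline Lee–Carter fit via singular value decomposition, with the period index forecast as a random walk with drift; the data here are synthetic stand-ins, not the Italian male rates used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
ages, years = 40, 30
# Synthetic log death rates with a downward time trend (stand-in for real mortality data)
a_true = np.linspace(-7, -2, ages)
k_true = np.cumsum(-0.5 + 0.2 * rng.normal(size=years))
b_true = np.full(ages, 1.0 / ages)
log_m = a_true[:, None] + np.outer(b_true, k_true) + 0.02 * rng.normal(size=(ages, years))

# Lee-Carter fit via SVD: log m_{x,t} = a_x + b_x * k_t
a_x = log_m.mean(axis=1)
U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
b_x = U[:, 0] / U[:, 0].sum()                 # usual normalisation: sum_x b_x = 1
k_t = s[0] * Vt[0] * U[:, 0].sum()            # rescaled so that b_x * k_t is unchanged

# Forecast k_t as a random walk with drift and project rates 10 years ahead
drift = np.diff(k_t).mean()
k_fut = k_t[-1] + drift * np.arange(1, 11)
log_m_fut = a_x[:, None] + np.outer(b_x, k_fut)
print("estimated drift in k_t:", round(drift, 3))
print("projected death rate at the oldest age, 10 years ahead:",
      round(float(np.exp(log_m_fut[-1, -1])), 5))
```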

8.
Spearman's and Thomson's mathematical controversy over factor theory was forgotten when it became evident that empirical tetrad differences were bounded away from zero (and when empirical evidence argued the need for extracting more factors from a matrix). In fact, both of their models lead to zero tetrad differences. Being more interested in the psychological than in the mathematical aspects of Spearman's model, Thomson remained indifferent to the mathematical aspects of multiple factor analysis when Thurstone theorized it. Thus, he did not perceive that his counter-example also negated the assumption about the rank of the matrix that Thurstone shared. The idea that the number of components to be extracted must equal the rank of the matrix is not assumed in Hotelling's component model; this is a first epistemological reason for preferring component analysis to factor analysis. A second epistemological reason concerns the central theorem of Thurstone's multiple-factor model, which can be criticized because it rests on the assumption that, the rank of the complete matrix being n, it becomes k when communalities are placed in the principal diagonal. This assumption goes against common sense, a fact demonstrated by comparing the residuals after k components have been extracted with those after k principal factors have been extracted.

9.
Burnout is a consequence of unobservable predictive variables. This paper describes a methodology for estimating mortgage prepayment models which corrects for burnout. The paper generalizes the approach of Deng, Quigley, and Van Order (Econometrica 68, 275–307, 1998) and Stanton (Rev. Finan. Stud. 8, 677–708, 1995) in modeling the impact of unobservable variables as a probability distribution. The estimator is applied to a sample of loan histories and the results compared to a conventional logit analysis of the data. Predictions and simulations from both models are compared to illustrate the properties of the new estimator.
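A toy illustration (not the paper's estimator or data) of treating the unobservable variable as a probability distribution: prepayment histories are simulated with a latent fast/slow borrower type, and a two-point mixture likelihood over the unobserved intercept is maximized. All parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(4)
M, T = 3000, 8
beta_true, a_fast_true, a_slow_true, p_fast_true = 1.5, -1.0, -3.0, 0.4
is_fast = rng.random(M) < p_fast_true
incentive = rng.normal(size=(M, T))              # e.g. rate incentive, by loan and period

# Simulate histories: a loan exits once it prepays, so survivors are increasingly
# "slow" types -- the burnout effect described in the abstract.
alive = np.ones((M, T), dtype=bool)
prepay = np.zeros((M, T), dtype=bool)
for t in range(T):
    h = expit(np.where(is_fast, a_fast_true, a_slow_true) + beta_true * incentive[:, t])
    prepay[:, t] = alive[:, t] & (rng.random(M) < h)
    if t + 1 < T:
        alive[:, t + 1] = alive[:, t] & ~prepay[:, t]

def neg_loglik(theta):
    """Two-point mixture over the unobservable intercept (the heterogeneity distribution)."""
    a_fast, a_slow, beta, logit_p = theta
    p = expit(logit_p)
    lik = np.zeros(M)
    for a, w in ((a_fast, p), (a_slow, 1 - p)):
        h = expit(a + beta * incentive)          # (M, T) hazards for this latent type
        per = np.where(prepay, h, 1 - h)         # per-period likelihood contribution
        per = np.where(alive, per, 1.0)          # periods after exit contribute nothing
        lik += w * per.prod(axis=1)
    return -np.log(lik + 1e-300).sum()

fit = minimize(neg_loglik, x0=np.array([-1.5, -2.5, 1.0, 0.0]), method="Nelder-Mead",
               options={"maxiter": 5000, "fatol": 1e-6, "xatol": 1e-6})
a_f, a_s, b_hat, lp = fit.x
print("estimated intercepts (fast, slow):", round(a_f, 2), round(a_s, 2))
print("estimated beta:", round(b_hat, 2), "  estimated fast-type share:", round(float(expit(lp)), 2))
```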

10.
Predicting the risk of mortgage prepayment has been the focus of many studies over the past three decades. Most of these works have used single prediction models, such as logistic regressions and survival models, to identify the key influencing factors. From the point of view of customer relationship management (CRM), this research proposes a two-stage model (i.e., a segmentation and prediction model) for analyzing the risk of mortgage prepayment. In the first stage, random forests are used to segment mortgagors into different groups; in the second stage, a proportional hazards model is constructed to predict the prepayment time of the mortgagors. The results indicate that the two-stage model predicts mortgage prepayment more accurately than the single-stage (non-segmentation) model.
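A sketch of one way to implement the two-stage idea on synthetic data. Segmenting mortgagors by terciles of a random-forest score is an assumption about the first stage, not necessarily the authors' exact procedure, and the second stage uses the lifelines implementation of the Cox proportional hazards model; the covariates are illustrative.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 3000
df = pd.DataFrame({
    "ltv": rng.uniform(0.4, 1.0, n),             # illustrative borrower covariates
    "rate_spread": rng.normal(0, 1, n),
    "income": rng.lognormal(0, 0.3, n),
})
risk = 0.02 * np.exp(1.2 * df["rate_spread"] - 1.0 * (df["ltv"] - 0.7))
time = rng.exponential(1 / risk.to_numpy())      # latent prepayment time
cens = rng.uniform(5, 15, n)                     # observation window
df["duration"] = np.minimum(time, cens)
df["prepaid"] = (time <= cens).astype(int)

# Stage 1: a random forest scores the prepayment event; score terciles define segments
X = df[["ltv", "rate_spread", "income"]]
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, df["prepaid"])
score = rf.predict_proba(X)[:, 1]
df["segment"] = pd.qcut(score, 3, labels=["low", "mid", "high"])

# Stage 2: one proportional hazards model per segment
for seg, sub in df.groupby("segment", observed=True):
    cph = CoxPHFitter().fit(sub[["ltv", "rate_spread", "income", "duration", "prepaid"]],
                            duration_col="duration", event_col="prepaid")
    print(f"segment {seg}: n={len(sub)}, concordance={cph.concordance_index_:.3f}")
```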

11.
The flow-refueling location problem for alternative-fuel vehicles
Beginning with Hodgson (Geogr. Anal. 22 (1990) 270), several researchers have been developing a new kind of location-allocation model for “flow capturing.” Instead of locating central facilities to serve demand at fixed points in space, these models aim to serve demand consisting of origin–destination flows along their shortest paths. This paper extends flow-capturing models to the optimal location of refueling facilities for alternative-fuel (alt-fuel) vehicles, such as hydrogen fuel-cell or natural gas vehicles. Existing flow-capturing models assume that a flow is covered if it passes just one facility along its path. This assumption does not carry over to vehicle refueling because of the limited range of the vehicles: depending on the vehicle range, the path length, and the node spacing, it may be necessary to stop at more than one facility in order to refuel the entire path. The Flow Refueling Location Model (FRLM) optimally locates p refueling stations on a network so as to maximize the total flow volume refueled. This paper presents a mixed-integer programming formulation for the nodes-only version of the problem, as well as an algorithm for determining all combinations of nodes that can refuel a given path. A greedy-adding approach is demonstrated to be suboptimal, and the tradeoff curve between the number of facilities and the flow volume refueled is shown to be nonconvex.
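A small sketch of the path-refueling logic behind the node-combination idea, under simplified assumptions (one-way trip, full tank at the origin, refuel to full at every open station) that differ from the FRLM's exact conventions; node positions and range are illustrative.

```python
from itertools import combinations

def can_refuel_path(node_dist, open_stations, vehicle_range):
    """Check whether a one-way trip along a path can be completed.

    node_dist: cumulative distance of each path node from the origin (origin first,
    destination last); open_stations: indices of nodes holding a station.
    Simplifying assumptions (not the FRLM's exact conventions): the vehicle starts
    full at the origin and refuels to full at every open station it passes.
    """
    stops = [node_dist[0]] + sorted(node_dist[i] for i in open_stations) + [node_dist[-1]]
    return all(b - a <= vehicle_range for a, b in zip(stops, stops[1:]))

def combos_that_refuel(node_dist, vehicle_range, k):
    """All k-subsets of interior nodes whose stations make the path refuelable."""
    interior = range(1, len(node_dist) - 1)
    return [c for c in combinations(interior, k)
            if can_refuel_path(node_dist, c, vehicle_range)]

# Example: 5 nodes along a 220-unit path, vehicle range 100
path = [0, 60, 110, 160, 220]
print(can_refuel_path(path, open_stations=[2], vehicle_range=100))     # one stop: False (110 -> 220 gap)
print(can_refuel_path(path, open_stations=[1, 3], vehicle_range=100))  # two stops: True
print(combos_that_refuel(path, vehicle_range=100, k=2))                # all feasible 2-node combinations
```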

12.
This paper deals with models for the duration of an event that are misspecified by the neglect of random multiplicative heterogeneity in the hazard function. This type of misspecification has been widely discussed in the literature [e.g., Heckman and Singer (1982), Lancaster and Nickell (1980)], but no study of its effect on maximum likelihood estimators has been given. This paper aims to provide such a study with particular reference to the Weibull regression model which is by far the most frequently used parametric model [e.g., Heckman and Borjas (1980), Lancaster (1979)]. In this paper we define generalised errors and residuals in the sense of Cox and Snell (1968, 1971) and show how their use materially simplifies the analysis of both true and misspecified duration models. We show that multiplicative heterogeneity in the hazard of the Weibull model has two errors-in-variables interpretations. We give the exact asymptotic inconsistency of M.L. estimation in the Weibull model and give a general expression for the inconsistency of M.L. estimators due to neglected heterogeneity for any duration model to O(σ²), where σ² is the variance of the error term. We also discuss the information matrix test for neglected heterogeneity in duration models and consider its behaviour when σ² > 0.
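A simulation sketch of the misspecification studied above (not the paper's analytical results): Weibull durations are generated with multiplicative gamma heterogeneity, and a Weibull regression that ignores the heterogeneity is fitted by maximum likelihood, illustrating the resulting attenuation. The parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n, alpha_true, beta_true, sigma2 = 5000, 1.5, 0.8, 0.5
x = rng.normal(size=n)
v = rng.gamma(shape=1 / sigma2, scale=sigma2, size=n)    # multiplicative heterogeneity, mean 1
lam = np.exp(beta_true * x) * v                          # scale of the conditional hazard
t = (rng.exponential(size=n) / lam) ** (1 / alpha_true)  # Weibull durations given v

def neg_loglik_weibull(theta):
    """Weibull regression ignoring heterogeneity: h(t|x) = exp(b0 + b1*x) * a * t**(a-1)."""
    log_a, b0, b1 = theta
    a = np.exp(log_a)
    eta = b0 + b1 * x
    return -np.sum(log_a + (a - 1) * np.log(t) + eta - np.exp(eta) * t ** a)

fit = minimize(neg_loglik_weibull, x0=np.array([0.0, 0.0, 0.5]), method="BFGS")
a_hat, b1_hat = np.exp(fit.x[0]), fit.x[2]
print(f"true alpha = {alpha_true}, MLE ignoring heterogeneity = {a_hat:.3f}")
print(f"true beta  = {beta_true}, MLE ignoring heterogeneity = {b1_hat:.3f}")
```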

13.
The efficiency and equity implications of mortgage interest deductibility have been studied by a number of authors using models of housing demand that do not account for barriers to residential mobility. This paper examines how one's assessment of that proposed tax reform may differ based on models that do allow for such barriers, using data from Toronto, Canada. We find that earlier works tend to overstate the deadweight loss attributable to the introduction of mortgage interest deductibility, particularly in the short run.

14.
Exact tests for rth order serial correlation in the multivariate linear regression model are devised which are based on a multivariate generalization of the F-distribution. The tests require the computation of two multivariate regressions. In the special case of a single-equation regression model the procedures reduce to simple always-conclusive F-tests. The tests are illustrated by applications to the Rotterdam Model of consumer demand.

15.
The recombining binomial tree approach, which was initiated by Cox et al. (J Financ Econ 7: 229–263, 1979) and extended to arbitrary diffusion models by Nelson and Ramaswamy (Rev Financ Stud 3(3): 393–430, 1990) and Hull and White (J Financ Quant Anal 25: 87–100, 1990a), is applied to the simultaneous evaluation of price and Greeks for the amortized fixed and variable rate mortgage prepayment option. We consider the simplified binomial tree approximation to arbitrary diffusion processes by Costabile and Massabo (J Deriv 17(3): 65–85, 2010) and analyze its numerical applicability to the mortgage valuation problem for some Vasicek and CIR-like interest rate models. For fixed rates and binomial trees with about a thousand steps, we obtain very good results. For the Vasicek model, we also compare the closed-form analytical approximation of the callable fixed rate mortgage price by Xie (IAENG Int J Appl Math 39(1): 9, 2009) with its binomial tree counterpart. Relative to the binomial tree values, the analytical approximation systematically underestimates the callable mortgage price (and correspondingly overestimates the prepayment option price). This numerical discrepancy increases at longer maturities and becomes too large for a reliable estimation of the prepayment option price.

16.
17.
Although the link between household size and consumption has strong empirical support, there is no consistent way in which demographics are dealt with in standard life-cycle models. We compare the predictions of the Single Agent model (the standard in the literature) with those of a simple model extension (the Demographics model) in which deterministic changes in household size and composition affect optimal consumption decisions. We show theoretically that the Demographics model is conceptually preferable to the Single Agent model, as it captures economic mechanisms ignored by the latter. However, our quantitative analysis demonstrates that differences in predicted consumption across the two models are negligible when standard calibration strategies are used. This suggests that it is largely irrelevant which model specification is used.

18.
This paper models the housing sector, mortgages and endogenous default in a DSGE setting with nominal and real rigidities. We use data for the period 1981–2006 to estimate our model using Bayesian techniques. We analyze how an increase in risk in the mortgage market raises the default rate and spreads to the rest of the economy, creating a recession. In our model two shocks are well suited to replicate the subprime crisis and the Great Recession: the mortgage risk shock and the housing demand shock. Next we use our estimated model to evaluate a policy that reduces the principal of underwater mortgages. This policy is successful in stabilizing the mortgage market and makes all agents better off.

19.
This paper proposes exact distribution-free permutation tests for the specification of a non-linear regression model against one or more possibly non-nested alternatives. The new tests may be validly applied to a wide class of models, including models with endogenous regressors and lag structures. These tests build on the well-known J test developed by Davidson and MacKinnon [1981. Several tests for model specification in the presence of alternative hypotheses. Econometrica 49, 781–793] and their exactness holds under broader assumptions than those underlying the conventional J test. The J-type test statistics are used with a randomization or Monte Carlo resampling technique which yields an exact and computationally inexpensive inference procedure. A simulation experiment confirms the theoretical results and also shows the performance of the new procedure under violations of the maintained assumptions. The test procedure developed is illustrated by an application to inflation dynamics.
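A minimal sketch of the resampled J-test idea for two linear non-nested models. The paper's procedure covers non-linear models, endogenous regressors and lag structures, which this toy omits, and the residual-permutation scheme used here is an illustrative assumption rather than the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)
n, B = 200, 499
x = rng.uniform(1, 10, n)
y = 1.0 + 0.5 * np.log(x) + 0.3 * rng.normal(size=n)     # data actually follow the alternative

def ols(X, y):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b, X @ b

def j_stat(y, X0, X1):
    """t-statistic on the H1 fitted values added to the H0 regression (Davidson-MacKinnon J)."""
    _, fit1 = ols(X1, y)
    Xa = np.column_stack([X0, fit1])
    b, fit = ols(Xa, y)
    resid = y - fit
    s2 = resid @ resid / (len(y) - Xa.shape[1])
    cov = s2 * np.linalg.inv(Xa.T @ Xa)
    return b[-1] / np.sqrt(cov[-1, -1])

X0 = np.column_stack([np.ones(n), x])                    # H0: linear in x
X1 = np.column_stack([np.ones(n), np.log(x)])            # H1: linear in log(x)
t_obs = j_stat(y, X0, X1)

# Resampling: regenerate y under the estimated H0 by permuting its residuals,
# recompute the J statistic each time, and read off a p-value from the ranks.
_, fit0 = ols(X0, y)
e0 = y - fit0
t_sim = np.array([j_stat(fit0 + rng.permutation(e0), X0, X1) for _ in range(B)])
p_value = (1 + np.sum(np.abs(t_sim) >= abs(t_obs))) / (B + 1)
print(f"observed J statistic = {t_obs:.2f}, resampling p-value = {p_value:.3f}")
```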

20.
By aggregating simple, possibly dependent, dynamic micro-relationships, it is shown that the aggregate series may have univariate long-memory models and obey integrated, or infinite-length, transfer function relationships. A long-memory time series model is one having a spectrum of order ω^(−2d) for small frequencies ω, d > 0. These models have infinite variance for d ≥ 1/2 but finite variance for d < 1/2. For d = 1, series that need to be differenced to achieve stationarity occur, but this case is not found to arise from aggregation. It is suggested that if series obeying such models occur in practice, from aggregation, then the techniques currently being used for their analysis are not appropriate.
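A short simulation in the spirit of the aggregation argument: AR(1) micro-series with Beta-distributed coefficients are averaged, and the aggregate's autocorrelations decay much more slowly than the geometric decay of a single component. The Beta parameters and sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
N, T, burn = 2000, 4000, 500
# Micro-relationships: independent AR(1) processes whose coefficients are drawn from a
# Beta distribution on (0, 1); mass near 1 is what generates long memory in the aggregate.
phi = rng.beta(2.0, 1.0, size=N)
x = np.zeros((T, N))
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal(size=N)
agg = x[burn:].mean(axis=1)                     # the aggregate series
single = x[burn:, 0]                            # one micro series, for comparison

def acf(series, lags):
    s = series - series.mean()
    var = s @ s
    return np.array([s[:-k] @ s[k:] / var for k in lags])

lags = [1, 5, 10, 25, 50, 100, 200]
print("lags:                        ", lags)
print("aggregate ACF:               ", np.round(acf(agg, lags), 3))
print(f"single AR(1) ACF (phi={phi[0]:.2f}):", np.round(acf(single, lags), 3))
# The aggregate's autocorrelations die out far more slowly than the geometric decay of a
# typical individual AR(1) component -- the long-memory behaviour described above.
```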
