Similar Documents
20 similar documents found (search time: 297 ms)
1.
The accurate prediction of long-term care insurance (LTCI) mortality, lapse, and claim rates is essential when making informed pricing and risk management decisions. Unfortunately, academic literature on the subject is sparse and industry practice is limited by software and time constraints. In this article, we review current LTCI industry modeling methodology, which is typically Poisson regression with covariate banding/modification and stepwise variable selection. We test the claim that covariate banding improves predictive accuracy, examine the potential pitfalls of stepwise selection, and contend that the assumptions required for Poisson regression are not appropriate for LTCI data. We propose several alternative models specifically tailored toward count responses with an excess of zeros and overdispersion. Using data from a large LTCI provider, we evaluate the predictive capacity of random forests and generalized linear and additive models with zero-inflated Poisson, negative binomial, and Tweedie errors. These alternatives are compared to previously developed Poisson regression models.

Our study confirms that variable modification is unnecessary at best and automatic stepwise model selection is dangerous. After demonstrating severe overprediction of LTCI mortality and lapse rates under the Poisson assumption, we show that a Tweedie GLM enables much more accurate predictions. Our Tweedie regression models improve average predictive accuracy (measured by several prediction error statistics) over Poisson regression models by as much as four times for mortality rates and 17 times for lapse rates.
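As a quick illustration of why the Poisson assumption fails for such data, the sketch below generates synthetic zero-inflated counts (not the study's data; the mixing parameters are made up) and compares sample mean and variance. A Poisson model forces the two to be equal, while zero inflation produces marked overdispersion.

```python
import math
import random
import statistics

random.seed(42)

def zip_draw(p_zero=0.7, lam=4.0):
    """One draw from a zero-inflated Poisson: a structural zero with
    probability p_zero, otherwise a Poisson(lam) count (Knuth's algorithm)."""
    if random.random() < p_zero:
        return 0
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= limit:
            return k
        k += 1

counts = [zip_draw() for _ in range(10_000)]
mean = statistics.mean(counts)
var = statistics.pvariance(counts)

# Under a Poisson model the dispersion ratio would be close to 1;
# here it is far above 1, signalling overdispersion.
print(f"mean={mean:.3f}  variance={var:.3f}  dispersion ratio={var / mean:.2f}")
```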


2.
This paper employs a multinomial logit model in an attempt to better understand the motives behind takeovers. The results from the multinomial logit models show that the characteristics of hostile and friendly targets differ significantly and that these differences also vary depending on the time period under investigation. The results give some support to the disciplining role of the hostile takeover. Furthermore, conclusions based on a simple binomial logit model are likely to be misleading and result in incorrect inferences regarding the characteristics of firms subject to takeover.
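A minimal sketch of how a multinomial logit turns linear utilities into choice probabilities over more than two outcomes (which a binomial logit cannot distinguish); the outcome states, covariates and coefficients below are hypothetical, purely for illustration.

```python
import math

def mnl_probs(utilities):
    """Multinomial-logit choice probabilities via a numerically stable softmax."""
    m = max(utilities)
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical linear utilities for one firm's takeover outcome:
# no takeover (base), friendly takeover, hostile takeover.
x = {"size": 1.2, "leverage": 0.4}
utilities = [
    0.0,                                            # base outcome: no takeover
    -1.0 + 0.5 * x["size"] - 0.3 * x["leverage"],   # friendly target
    -1.5 + 0.2 * x["size"] + 0.9 * x["leverage"],   # hostile target
]
p_none, p_friendly, p_hostile = mnl_probs(utilities)
```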

3.
4.
Portfolio credit risk models as well as models for operational risk can often be treated analogously to the collective risk model coming from insurance. Applying the classical Panjer recursion in the collective risk model can lead to numerical instabilities, for instance if the claim number distribution is extended negative binomial or extended logarithmic. We present a generalization of Panjer’s recursion that leads to numerically stable algorithms. The algorithm can be applied to the collective risk model, where the claim number follows, for example, a Poisson distribution mixed over a generalized tempered stable distribution with exponent in (0,1). De Pril’s recursion can be generalized in the same vein. We also present an analogue of our method for the collective model with a severity distribution having mixed support.
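For context, the classical Panjer recursion that the paper generalizes can be sketched for Poisson claim numbers (the numerically unstable extended negative binomial and extended logarithmic cases the authors address are not shown here):

```python
import math

def panjer_poisson(lam, severity, n_max):
    """Classical Panjer recursion for a compound Poisson distribution.

    severity: dict mapping integer claim sizes to probabilities.
    For Poisson claim numbers (Panjer class with a = 0, b = lam):
        g_0 = exp(-lam * (1 - f_0)),
        g_n = (lam / n) * sum_{k=1..n} k * f_k * g_{n-k}.
    Returns [P(S = 0), ..., P(S = n_max)].
    """
    f0 = severity.get(0, 0.0)
    g = [math.exp(-lam * (1.0 - f0))]
    for n in range(1, n_max + 1):
        total = sum(k * severity.get(k, 0.0) * g[n - k] for k in range(1, n + 1))
        g.append(lam / n * total)
    return g

# Aggregate loss distribution for a toy severity law
g = panjer_poisson(2.0, {1: 0.4, 2: 0.35, 3: 0.25}, 12)
```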

5.
Recent studies have extensively used the logit or probit models for classification problems in accounting and finance. More than 289 articles in prestigious journals have used these or similar methods from 1989 through 1996. This paper reviews several categorical techniques and compares the performance of logit or probit with alternative procedures. Intuitive and mathematical explanations of how the models examined differ in terms of underlying assumptions and other attributes are provided. The alternative techniques are applied to two substantive research questions: predicting bankruptcy and auditors' consistency judgements. Four empirical criteria provide some evidence that the exponential generalized beta of the second kind (EGB2), lomit, and burrit (all new to the accounting and finance literature) improve the log-likelihood functions and the explanatory power compared with logit and other models. EGB2, lomit and burrit also provide significantly better classifications and predictions than logit and other techniques.

6.
This paper estimates an early warning system (EWS) for predicting systemic banking crises in a sample of low income countries in Sub-Saharan Africa. Since the average duration of crises in this sample of countries is longer than one year, the predictive performance of standard binomial logit models is likely to be hampered by the so-called crisis duration bias. The bias arises from the decision to either treat crisis years after the onset of a crisis as non-crisis years or remove them altogether from the model. To overcome this potential drawback, we propose a multinomial logit approach, which is shown to improve the predictive power of our EWS compared to the binomial logit model. Our results suggest that crisis events in low income countries are associated with low economic growth, drying up of banking system liquidity and widening of foreign exchange net open positions.
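One simple way to operationalize the multinomial recoding that avoids the crisis duration bias (the paper's exact state definitions may differ) is to map a binary crisis indicator into tranquil / onset / ongoing states, so post-onset years are neither dropped nor mislabelled as tranquil:

```python
def recode_states(crisis_flags):
    """Recode a binary crisis indicator into three multinomial states.

    States: 0 = tranquil year, 1 = crisis onset (first crisis year),
    2 = ongoing crisis year after the onset.
    """
    states, prev = [], 0
    for flag in crisis_flags:
        if flag and not prev:
            states.append(1)   # onset year
        elif flag:
            states.append(2)   # continuing crisis year
        else:
            states.append(0)   # tranquil year
        prev = flag
    return states
```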

7.
This paper argues that continuous-measure predictors are inappropriate for both the logit and MDA models when dealing with the measurement errors present in much of the private company data used to model financial distress in that sector. Also, it is argued that the step function logit model that we get as a consequence of the necessity to categorise the predictors may be more appropriate in explaining underlying nonlinear behaviour of firms at risk than the usual continuous response linear function. Within this context, the two models are compared using data from 140 private Australian companies. A logit model based on only three discrete-valued ratios gave an overall accuracy rate comparable to that of an existing continuous-valued multiple discriminant analysis (MDA) model based on six ratios. Of interest is the very different order of significance of the predictor ratios in the two models, although neither model remains trustworthy for predictive purposes.

8.
The third cumulant for the aggregated multivariate claims is considered. A formula is presented for the general case when the aggregating variable is independent of the multivariate claims. Two important special cases are considered. In the first one, multivariate skewed normal claims are considered and aggregated by a Poisson variable. The second case deals with multivariate asymmetric generalized Laplace claims, aggregated by a negative binomial variable. Due to the invariance property the latter case can be derived directly, leading to the identity involving the cumulant of the claims and the aggregated claims. There is a well-established relation between asymmetric Laplace motion and the negative binomial process that corresponds to the invariance principle of the aggregating claims for the generalized asymmetric Laplace distribution. We explore this relation and provide a multivariate continuous-time version of the results. We also discuss how these results, which deal only with dependence in the claim sizes, can be used to obtain a formula for the third cumulant for more complex aggregate models of multivariate claims in which the dependence also enters through the aggregating variables.
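A univariate analogue of the third-cumulant formula (the paper treats the multivariate case) follows from composing cumulant generating functions, K_S(t) = K_N(K_X(t)), for S = X_1 + ... + X_N with N independent of the iid X_i:

```python
def compound_third_cumulant(kn, kx):
    """Third cumulant of a compound sum S = X_1 + ... + X_N.

    kn, kx: the first three cumulants (mean, variance, third cumulant)
    of the claim count N and the claim size X, respectively.

        kappa_3(S) = k1(N) k3(X) + 3 k2(N) k1(X) k2(X) + k3(N) k1(X)^3
    """
    n1, n2, n3 = kn
    x1, x2, x3 = kx
    return n1 * x3 + 3.0 * n2 * x1 * x2 + n3 * x1 ** 3
```

For a Poisson count all cumulants equal the rate, and the formula collapses to the familiar kappa_3(S) = lam * E[X^3].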

9.
Abstract

Dufresne et al. (1991) introduced a general risk model defined as the limit of compound Poisson processes. Such a model is either a compound Poisson process itself or a process with an infinite number of small jumps. Later, in a series of now classical papers, the joint distribution of the time of ruin, the surplus before ruin, and the deficit at ruin was studied (Gerber and Shiu 1997, 1998a, 1998b; Gerber and Landry 1998). These works use the classical and the perturbed risk models and hint that the results can be extended to gamma and inverse Gaussian risk processes.

In this paper we work out this extension to a generalized risk model driven by a nondecreasing Lévy process. Unlike the classical case that models the individual claim size distribution and obtains from it the aggregate claims distribution, here the aggregate claims distribution is known in closed form. It is simply the one-dimensional distribution of a subordinator. Embedded in this wide family of risk models we find the gamma, inverse Gaussian, and generalized inverse Gaussian processes. Expressions for the Gerber-Shiu function are given in some of these special cases, and numerical illustrations are provided.

10.
We investigate developments in Danish mortality based on data from 1974–1998, working in a two-dimensional model with chronological time and age as the two dimensions. The analyses are done with non-parametric kernel hazard estimation techniques. The only assumption is that the mortality surface is smooth. Cross-validation is applied for optimal bandwidth selection to ensure the proper amount of smoothing and to help distinguish between random and systematic variation in the data. A bootstrap technique is used to construct pointwise confidence bounds. We study the mortality profiles by slicing up the two-dimensional mortality surface. Furthermore, we look at aggregated synthetic population metrics such as ‘population life expectancy’ and ‘population survival probability’. For Danish women these metrics indicate decreasing mortality with respect to chronological time. The metrics cannot be used directly for prediction purposes. However, we suggest that life insurance companies use the estimation technique and the cross-validation for bandwidth selection when analyzing their portfolio mortality. The non-parametric approach may give valuable information prior to developing more sophisticated prediction models for analysis of economic implications arising from mortality changes.
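A one-dimensional toy version of the kernel smoothing idea (the paper works on a two-dimensional time-age surface with cross-validated bandwidths; the rates and bandwidth below are invented for illustration):

```python
import math

def kernel_smooth(ages, raw_rates, bandwidth):
    """Gaussian-kernel (Nadaraya-Watson) smoothing of raw rates over age."""
    def smooth_at(a):
        weights = [math.exp(-0.5 * ((a - x) / bandwidth) ** 2) for x in ages]
        total = sum(weights)
        return sum(w * r for w, r in zip(weights, raw_rates)) / total
    return [smooth_at(a) for a in ages]

# Invented raw death rates with sampling noise around a rising trend
ages = list(range(60, 71))
raw = [0.010, 0.012, 0.011, 0.015, 0.014, 0.018,
       0.017, 0.022, 0.021, 0.027, 0.026]
smoothed = kernel_smooth(ages, raw, bandwidth=2.0)
```

A larger bandwidth gives a smoother curve at the cost of more bias; cross-validation, as in the paper, is one principled way to pick it.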

11.
Option pricing models based on an underlying lognormal distribution typically exhibit volatility smiles or smirks where the implied volatility varies by strike price. To adequately model the underlying distribution, a less restrictive model is needed. A relaxed binomial model is developed here that can account for the skewness of the underlying distribution and a relaxed trinomial model is developed that can account for the skewness and kurtosis of the underlying distribution. The new model incorporates the usual binomial and trinomial tree models as restricted special cases. Unlike previous flexible tree models, the size and probability of jumps are held constant at each node so only minor modifications in existing code for lattice models are needed to implement the new approach. Also, the new approach allows calculating implied skewness and implied kurtosis. Numerical results show that the relaxed binomial and trinomial tree models developed in this study are at least as accurate as tree models based on lognormality when the true underlying distribution is lognormal and substantially more accurate when the underlying distribution is not lognormal.
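The restricted special case that the relaxed models generalize, a standard Cox-Ross-Rubinstein binomial tree for a European call, can be sketched as follows (this is the textbook lognormal-limit tree, not the paper's relaxed parameterization):

```python
import math

def crr_call(S0, K, r, sigma, T, n):
    """European call price from a Cox-Ross-Rubinstein binomial tree."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1.0 / u                           # down factor
    q = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    price = 0.0
    for j in range(n + 1):
        prob = math.comb(n, j) * q ** j * (1.0 - q) ** (n - j)
        payoff = max(S0 * u ** j * d ** (n - j) - K, 0.0)
        price += prob * payoff
    return math.exp(-r * T) * price
```

For large n this converges to the Black-Scholes price; the relaxed trees instead free up parameters so the limit can carry skewness and kurtosis.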

12.

This paper derives two-sided bounds for tails of compound negative binomial distributions, both in the exponential and heavy-tailed cases. Two approaches are employed to derive the two-sided bounds in the case of exponential tails. One is the convolution technique, as in Willmot & Lin (1997). The other is based on an identity of compound negative binomial distributions; they can be represented as a compound Poisson distribution with a compound logarithmic distribution as the underlying claims distribution. This connection between the compound negative binomial, Poisson and logarithmic distributions results in two-sided bounds for the tails of the compound negative binomial distribution, which also generalize and improve a result of Willmot & Lin (1997). For the heavy-tailed case, we use the method developed by Cai & Garrido (1999b). In addition, we give two-sided bounds for stop-loss premiums of compound negative binomial distributions. Furthermore, we derive bounds for the stop-loss premiums of general compound distributions among the classes of HNBUE and HNWUE.
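The identity underlying the second approach can be checked numerically via probability generating functions: with success probability p, shape r and q = 1 - p, the negative binomial pgf coincides with that of a Poisson sum of logarithmic(q) variables once the Poisson rate is chosen as lam = -r ln p (this parameterization is an assumption of the sketch, not taken from the paper):

```python
import math

def nb_pgf(r, p, s):
    """PGF of a negative binomial claim count, pgf(s) = (p / (1 - q s))^r."""
    q = 1.0 - p
    return (p / (1.0 - q * s)) ** r

def poisson_log_pgf(r, p, s):
    """PGF of a Poisson(lam) sum of logarithmic(q) variables with
    lam = -r * ln(p); analytically identical to nb_pgf."""
    q = 1.0 - p
    lam = -r * math.log(p)
    log_pgf = math.log(1.0 - q * s) / math.log(1.0 - q)  # logarithmic pgf
    return math.exp(lam * (log_pgf - 1.0))
```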

13.
Demographic projections of future mortality rates involve a high level of uncertainty and require stochastic mortality models. The current paper investigates forward mortality models driven by a (possibly infinite-dimensional) Wiener process and a compensated Poisson random measure. A major innovation of the paper is the introduction of a family of processes called forward mortality improvements which provide a flexible tool for a simple construction of stochastic forward mortality models. In practice, the notion of mortality improvements is a convenient device for the quantification of changes in mortality rates over time, and enables, for example, the detection of cohort effects. We show that the forward mortality rates satisfy Heath–Jarrow–Morton-type consistency conditions which translate to conditions on the forward mortality improvements. While the consistency conditions for the forward mortality rates are analogous to the classical conditions in the context of bond markets, the conditions for the forward mortality improvements possess a different structure. Forward mortality models include a cohort parameter besides the time horizon, and these two dimensions are coupled in the dynamics of consistent models of forward mortality improvements. In order to obtain a unified framework, we transform the systems of Itô processes which describe the forward mortality rates and improvements. In contrast to term structure models, the corresponding stochastic partial differential equations (SPDEs) describe the random dynamics of two-dimensional surfaces rather than curves.

14.
Predicting default risk is important for firms and banks to operate successfully. There are many reasons to use nonlinear techniques for predicting bankruptcy from financial ratios. Here we propose the Support Vector Machine (SVM) to predict the default risk of German firms. Our analysis is based on the Creditreform database. In all tests performed in this paper the nonlinear model classified by SVM exceeds the benchmark logit model, based on the same predictors, in terms of the performance metric AR. The empirical evidence is in favor of the SVM for classification, especially in the linearly non-separable case. The sensitivity investigation and a corresponding visualization tool reveal that the classifying ability of SVM appears to be superior over a wide range of SVM parameters. In terms of the empirical results obtained by SVM, the eight most important predictors related to bankruptcy for these German firms belong to the ratios of activity, profitability, liquidity, leverage and the percentage of incremental inventories. Some of the financial ratios selected by the SVM model are new because they have a strong nonlinear dependence on the default risk but a weak linear dependence that therefore cannot be captured by the usual linear models such as the DA and logit models.

15.
Smooth convergence in the binomial model
In this article, we consider a general class of binomial models with an additional parameter λ. We show that in the case of a European call option the binomial price converges to the Black–Scholes price at the rate 1/n and, more importantly, give a formula for the coefficient of 1/n in the expansion of the error. This enables us, by making special choices for λ, to prove that convergence is smooth in Tian’s flexible binomial model and also in a new center binomial model which we propose. Ken Palmer was supported by NSC grant 93-2118-M-002-002.

16.
Abstract: Econometric models involving a discrete outcome dependent variable abound in the finance and accounting literatures. However, much of the literature to date utilises a basic or standard logit model. Capitalising on recent developments in the discrete choice literature, we examine three advanced (or non-IID) logit models, namely: nested logit, mixed logit and latent class MNL. Using an illustration from corporate takeovers research, we compare the explanatory and predictive performance of each class of advanced model relative to the standard model. We find that in all cases the more advanced logit model structures, which correct for the highly restrictive IID and IIA conditions, provide significantly greater explanatory power than standard logit. Mixed logit and latent class MNL models exhibited the highest overall predictive accuracy on a holdout sample, while the standard logit model performed the worst. Moreover, the analysis of marginal effects of all models indicates that use of advanced models can lead to more insightful and behaviourally meaningful interpretations of the role and influence of explanatory variables and parameter estimates in model estimation. The results of this paper have implications for the use of more optimal logit structures in future research and practice.

17.
The interrelation between the drift coefficient of price processes on arbitrage-free financial markets and the corresponding transition probabilities induced by a martingale measure is analysed in a discrete setup. As a result, we obtain a flexible setting that encompasses most arbitrage-free binomial models. It is argued that knowledge of the link between drift and transition probabilities may be useful for pricing derivatives such as barrier options. The idea is illustrated in a simple example and later extended to a general numerical procedure. The results indicate that the option values in our fitted drift model converge much faster to closed-form solutions of continuous models for a wider range of contract specifications than those of conventional binomial models.

18.
In recent studies, Jones and Hensher (2004, 2005) provide an illustration of the usefulness of advanced probability modelling in the prediction of corporate bankruptcies, insolvencies and takeovers. Mixed logit (or random parameter logit) is the most general of these models and appears to have the greatest promise in terms of underlying behavioural realism, desirable econometric properties and overall predictive performance. This paper suggests a number of empirical considerations relevant to harnessing the maximum potential from this new model (as well as avoiding some of the more obvious pitfalls associated with its use). Using a three-state failure model, the unconditional triangular distribution for random parameters offers the best population-level predictive performance on a hold-out sample. Further, the optimal performance for a mixed logit model arises when a weighted exogenous sample maximum likelihood (WESML) technique is applied in model estimation. Finally, we suggest an approach for testing the stability of mixed logit models by re-estimating a selected model using varying numbers of Halton intelligent draws. Our results have broad application for users seeking more accurate and reliable forecasting methodologies to explain and predict the sources of firm financial distress.
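The Halton draws mentioned here are low-discrepancy radical-inverse sequences used in place of pseudo-random draws when simulating the mixed-logit likelihood; a minimal generator (the 1-based indexing convention is an assumption of this sketch):

```python
def halton(index, base):
    """Radical-inverse Halton number for index >= 1 in the given prime base."""
    f, result = 1.0, 0.0
    i = index
    while i > 0:
        f /= base
        result += f * (i % base)   # next digit of index in this base
        i //= base
    return result

# First eight base-2 Halton draws: 0.5, 0.25, 0.75, 0.125, ...
draws = [halton(i, 2) for i in range(1, 9)]
```

Using different prime bases per random coefficient, and varying the number of draws as the paper suggests, probes the stability of the simulated estimates.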

19.
Bivariate distributions, specified in terms of their conditional distributions, provide a powerful tool to obtain flexible distributions. These distributions play an important role in specifying the conjugate prior in certain multi-parameter Bayesian settings. In this paper, the conditional specification technique is applied to look for more flexible distributions than the traditional ones used in the actuarial literature, such as the Poisson, negative binomial and others. The new specification draws inferences about parameters of interest in problems appearing in actuarial statistics. Two unconditional (discrete) distributions obtained are studied and used in the collective risk model to compute the right-tail probability of the aggregate claim size distribution. Comparisons with the compound Poisson and compound negative binomial are made.

20.
This paper introduces a parameterization of the normal mixture diffusion (NMD) local volatility model that captures only a short-term smile effect, and then extends the model so that it also captures a long-term smile effect. We focus on the ‘binomial’ NMD parameterization, so-called because it is based on simple and intuitive assumptions that imply the mixing law for the normal mixture log price density is binomial. With more than two possible states for volatility, the general parameterization is related to the multinomial mixing law. In this parsimonious class of complete market models, option pricing and hedging is straightforward since model prices and deltas are simple weighted averages of Black–Scholes prices and deltas. But they only capture a short-term smile effect, where leptokurtosis in the log price density decreases with term, in accordance with the ‘stylised facts’ of econometric analysis on ex-post returns of different frequencies and the central limit theorem. However, the last part of the paper shows that longer term smile effects that arise from uncertainty in the local volatility surface can be modeled by a natural extension of the binomial NMD parameterization. Results are illustrated by calibrating the model to several Euro–US dollar currency option smile surfaces.
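As the abstract notes, pricing in this class reduces to a weighted average of Black–Scholes prices, one per volatility state; a sketch with made-up volatility states and mixing weights (two states, mirroring the binomial mixing law):

```python
import math

def bs_call(S0, K, r, T, sigma):
    """Black-Scholes European call price."""
    ncdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S0 * ncdf(d1) - K * math.exp(-r * T) * ncdf(d2)

def mixture_call(S0, K, r, T, sigmas, weights):
    """Normal-mixture model price: weighted average of Black-Scholes
    prices, one per volatility state."""
    return sum(w * bs_call(S0, K, r, T, s) for w, s in zip(weights, sigmas))

# Hypothetical two-state volatility mixture
price = mixture_call(100.0, 100.0, 0.05, 1.0, (0.1, 0.3), (0.5, 0.5))
```

Deltas follow the same pattern: the mixture delta is the weights-weighted average of the per-state Black–Scholes deltas.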

