Similar Articles
10 similar articles found.
1.
Insurance claims are subject to deductibles, which must be considered when pricing the insurance premium. The deductible may cause censoring and truncation of the insurance claims, and modeling the unobserved response variable by maximum likelihood in this setting can be a challenge in practice. For this reason, a practitioner may perform a regression using the observed response and calculate deductible rates from the regression coefficients. Natural questions are how well this approach performs and how it compares with the theoretically correct approach to rating deductibles; a practitioner would also be interested in a systematic review of approaches to modeling deductible rates. This article provides an overview of deductible ratemaking and compares the pros and cons of two deductible ratemaking approaches: the regression approach and the maximum likelihood approach. The regression approach turns out to have an advantage in predicting aggregate claims, whereas the maximum likelihood approach has an advantage when calculating theoretically correct relativities for deductible levels beyond those observed in the empirical data. For demonstration, loss models are fit to the Wisconsin Local Government Property Insurance Fund data, and examples are provided for the ratemaking of per-loss deductibles offered by the fund. The article shows that the regression approach is in fact a single-parameter approximation to the true relativity curve. A comparison of selected models from the generalized beta family reveals that long-tailed severity distributions may improve the deductible rating, whereas advanced frequency models such as zero-one-inflated models may offer only limited advantages because of estimation issues under censoring and truncation. In addition, models for specific peril types are combined to improve the ratemaking.
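To illustrate the maximum likelihood side of this comparison, the sketch below fits a ground-up severity distribution to losses reported above a per-loss deductible and converts the fit into deductible relativities. The gamma severity, the synthetic data, and the relativity definition E[(X − d)+]/E[X] are illustrative assumptions, not the article's actual models or fund data.

```python
# Minimal sketch (not the article's code): maximum likelihood fitting of a gamma
# ground-up severity under left truncation at a per-loss deductible, followed by
# deductible relativities E[(X - d)+] / E[X]. All values are illustrative.
import numpy as np
from scipy import stats, optimize, integrate

rng = np.random.default_rng(0)

# Synthetic ground-up losses; only losses above the deductible are observed.
true_shape, true_scale, deductible = 2.0, 5000.0, 1000.0
ground_up = rng.gamma(true_shape, true_scale, size=5000)
observed = ground_up[ground_up > deductible]          # left-truncated sample

def neg_loglik(params):
    shape, scale = np.exp(params)                     # keep parameters positive
    dist = stats.gamma(shape, scale=scale)
    # Truncated density: f(x) / S(d) for x > d.
    return -(dist.logpdf(observed) - dist.logsf(deductible)).sum()

fit = optimize.minimize(neg_loglik, x0=np.log([1.0, np.mean(observed)]),
                        method="Nelder-Mead")
shape_hat, scale_hat = np.exp(fit.x)
sev = stats.gamma(shape_hat, scale=scale_hat)

def relativity(d):
    # E[(X - d)+] / E[X]: expected payment per loss relative to full coverage.
    excess, _ = integrate.quad(lambda x: (x - d) * sev.pdf(x), d, np.inf)
    return excess / sev.mean()

for d in [500.0, 1000.0, 2500.0, 5000.0]:
    print(f"deductible {d:7.0f}: relativity {relativity(d):.3f}")
```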

2.
The CreditRisk+ model is widely used in industry for computing the loss of a credit portfolio. The standard CreditRisk+ model assumes independence among a set of common risk factors, a simplifying assumption that leads to computational ease. In this article, we propose to model the common risk factors by a class of multivariate extreme copulas as a generalization of bivariate Fréchet copulas. Further, we present a conditional compound Poisson model to approximate the credit portfolio and provide a cost-efficient recursive algorithm to calculate the loss distribution. The new model is more flexible than the standard model, with computational advantages compared to other dependence models of risk factors.
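The recursive calculation referred to here can be illustrated with a Panjer-type recursion for a compound Poisson sum; the sketch below is a generic, unconditional version on an integer severity grid and is only meant to convey the flavor of such an algorithm, not the article's conditional recursion over common risk factors.

```python
# Minimal sketch: Panjer recursion for a compound Poisson aggregate loss on an
# integer severity grid. The article's algorithm conditions on common risk factors;
# this unconditional version only illustrates the recursive idea.
import numpy as np

def compound_poisson_pmf(lam, severity_pmf, max_loss):
    """P(S = s) for S = X_1 + ... + X_N, N ~ Poisson(lam), X_i iid on {1, 2, ...}."""
    f = np.zeros(max_loss + 1)
    f[1:len(severity_pmf) + 1] = severity_pmf            # severity support starts at 1
    g = np.zeros(max_loss + 1)
    g[0] = np.exp(-lam * (1.0 - f[0]))                   # P(S = 0)
    for s in range(1, max_loss + 1):
        j = np.arange(1, s + 1)
        g[s] = (lam / s) * np.sum(j * f[j] * g[s - j])   # Panjer step for Poisson frequency
    return g

# Example: expected 3 claims, each of size 1, 2, or 3 units.
pmf = compound_poisson_pmf(lam=3.0, severity_pmf=[0.5, 0.3, 0.2], max_loss=30)
print("P(S = 0) =", round(pmf[0], 4), " P(S > 10) =", round(1 - pmf[:11].sum(), 4))
```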

3.
Abstract

This case study illustrates the analysis of two possible regression models for bivariate claims data. Estimates or forecasts of loss distributions under these two models are developed using two methods of analysis: (1) maximum likelihood estimation and (2) the Bayesian method. These methods are applied to two data sets consisting of 24 and 1,500 paired observations, respectively. The Bayesian analyses are implemented using Markov chain Monte Carlo via WinBUGS, as discussed in Scollnik (2001). A comparison of the analyses reveals that forecasted total losses can be dramatically underestimated by the maximum likelihood estimation method because it ignores the inherent parameter uncertainty.
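The underestimation point can be seen in a toy conjugate example: a plug-in maximum likelihood predictive ignores parameter uncertainty and assigns less probability to large outcomes than the Bayesian predictive. The Poisson-gamma specification below is purely illustrative and is not the case study's bivariate regression model.

```python
# Toy illustration (not the case study's models): with a Poisson likelihood and a
# gamma prior, the Bayesian predictive is negative binomial and puts more mass on
# large outcomes than the MLE plug-in Poisson, which ignores parameter uncertainty.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
claims = rng.poisson(lam=4.0, size=24)        # a small sample, like 24 paired observations

# MLE plug-in predictive: Poisson(lambda_hat).
lam_hat = claims.mean()
plug_in = stats.poisson(lam_hat)

# Bayesian predictive under a vague gamma(a0, b0) prior: negative binomial.
a0, b0 = 0.01, 0.01
a_post, b_post = a0 + claims.sum(), b0 + len(claims)
predictive = stats.nbinom(a_post, b_post / (b_post + 1.0))

threshold = 10
print("P(next count >", threshold, ") plug-in :", round(plug_in.sf(threshold), 4))
print("P(next count >", threshold, ") Bayesian:", round(predictive.sf(threshold), 4))
```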

4.
Abstract

Claims experience monitoring is the systematic comparison of the forecasts from a claims model with the claims experience that subsequently emerges. When the stochastic properties of the forecasts are known, the comparison can be represented as a collection of probabilistic statements; this is stochastic monitoring. This paper defines the process rigorously in terms of statistical hypothesis testing. If the model is a regression model (which is the case for most stochastic claims models), the natural form of hypothesis test is a set of likelihood ratio tests, one for each parameter in the valuation model. Such testing is shown to be very easily implemented by means of generalized linear modeling software. This tests the formal structure of the claims model and is referred to as microtesting. There may be other quantities (e.g., the amount of claim payments in a defined interval) that require testing for practical reasons; this sort of testing is referred to as macrotesting, and its formulation is also discussed.
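A microtest of the kind described can be sketched with standard GLM software: fit the valuation model with and without one parameter and compare log-likelihoods. The Poisson model, the single tested covariate, and the synthetic data below are illustrative assumptions, not the paper's valuation model.

```python
# Minimal sketch: a likelihood ratio "microtest" of one parameter in a GLM, using
# statsmodels. The Poisson/log-link model and the synthetic data are illustrative.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(2)
n = 500
dev_year = rng.integers(0, 10, size=n)                  # a candidate model parameter's covariate
exposure = rng.uniform(0.5, 2.0, size=n)
counts = rng.poisson(exposure * np.exp(1.0 + 0.08 * dev_year))

X_full = sm.add_constant(dev_year.astype(float))
X_reduced = np.ones((n, 1))                             # model without the tested parameter

full = sm.GLM(counts, X_full, family=sm.families.Poisson(), exposure=exposure).fit()
reduced = sm.GLM(counts, X_reduced, family=sm.families.Poisson(), exposure=exposure).fit()

lr_stat = 2.0 * (full.llf - reduced.llf)                # likelihood ratio statistic
p_value = stats.chi2.sf(lr_stat, df=1)
print(f"LR statistic = {lr_stat:.2f}, p-value = {p_value:.4f}")
```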

5.
Longitudinal modeling of insurance claim counts using jitters
Modeling insurance claim counts is a critical component of the ratemaking process for property and casualty insurance. This article explores the usefulness of copulas for modeling the number of insurance claims for an individual policyholder within a longitudinal context. To address the limitations of copulas commonly attributed to multivariate discrete data, we apply a ‘jittering’ method to the claim counts, which has the effect of continuitizing the data. Elliptical copulas are proposed to accommodate the intertemporal nature of the ‘jittered’ claim counts and the unobservable subject-specific heterogeneity in the frequency of claims. Observable subject-specific effects are accounted for in the model by using available covariate information through a regression model. The predictive distribution, together with the corresponding credibility of claim frequency, can be derived from the model for ratemaking and risk classification purposes. For empirical illustration, we analyze an unbalanced longitudinal dataset of claim counts observed from a portfolio of automobile insurance policies of a general insurer in Singapore. We further establish the validity of the calibrated copula model and demonstrate that the copula with the ‘jittering’ method outperforms standard count regression models.
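The jittering step itself is simple to sketch: add independent Uniform(0,1) noise to the integer counts so that continuous-copula machinery applies, then work with normal scores. The common-frailty simulation and the Gaussian-copula correlation estimate below are illustrative assumptions rather than the calibrated model of the article.

```python
# Minimal sketch of "jittering": add Uniform(0, 1) noise to integer claim counts so
# that continuous-copula tools can be applied. The frailty simulation and the
# Gaussian-copula correlation estimate below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_policyholders, n_years = 2000, 3

# Correlated annual counts via an unobserved subject-specific frailty.
frailty = rng.gamma(2.0, 0.5, size=(n_policyholders, 1))
counts = rng.poisson(0.3 * frailty, size=(n_policyholders, n_years))

# Jitter: N* = N + U with U ~ Uniform(0, 1), giving a continuous response.
jittered = counts + rng.uniform(0.0, 1.0, size=counts.shape)

# Map to normal scores via empirical ranks and estimate the year-to-year dependence
# that an elliptical (Gaussian) copula would use.
ranks = np.apply_along_axis(stats.rankdata, 0, jittered) / (n_policyholders + 1.0)
normal_scores = stats.norm.ppf(ranks)
print("year-to-year normal-score correlations:\n", np.corrcoef(normal_scores.T).round(3))
```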

6.
The accurate prediction of long-term care insurance (LTCI) mortality, lapse, and claim rates is essential when making informed pricing and risk management decisions. Unfortunately, academic literature on the subject is sparse and industry practice is limited by software and time constraints. In this article, we review current LTCI industry modeling methodology, which is typically Poisson regression with covariate banding/modification and stepwise variable selection. We test the claim that covariate banding improves predictive accuracy, examine the potential pitfalls of stepwise selection, and contend that the assumptions required for Poisson regression are not appropriate for LTCI data. We propose several alternative models specifically tailored toward count responses with an excess of zeros and overdispersion. Using data from a large LTCI provider, we evaluate the predictive capacity of random forests and of generalized linear and additive models with zero-inflated Poisson, negative binomial, and Tweedie errors. These alternatives are compared to previously developed Poisson regression models.

Our study confirms that variable modification is unnecessary at best and automatic stepwise model selection is dangerous. After demonstrating severe overprediction of LTCI mortality and lapse rates under the Poisson assumption, we show that a Tweedie GLM enables much more accurate predictions. Our Tweedie regression models improve average predictive accuracy (measured by several prediction error statistics) over Poisson regression models by as much as four times for mortality rates and 17 times for lapse rates.
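A minimal sketch of the Poisson-versus-Tweedie comparison is given below using statsmodels; the synthetic zero-heavy data, the variance power of 1.5, and the single covariate are illustrative assumptions, not the study's LTCI models or data.

```python
# Minimal sketch: comparing a Poisson GLM with a Tweedie GLM on zero-heavy,
# overdispersed rate data. The synthetic data, the variance power of 1.5, and the
# single covariate are illustrative assumptions, not the study's LTCI models.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 3000
age = rng.uniform(60, 90, size=n)
exposure = rng.uniform(0.25, 1.0, size=n)

# Zero-heavy, right-skewed target: a compound Poisson-gamma (Tweedie-like) response.
mu = exposure * np.exp(-6.0 + 0.06 * age)
n_events = rng.poisson(mu)
y = np.array([rng.gamma(2.0, 0.5, size=k).sum() for k in n_events])

X = sm.add_constant(age)
offset = np.log(exposure)                               # exposure enters via the log link
poisson_fit = sm.GLM(y, X, family=sm.families.Poisson(), offset=offset).fit()
tweedie_fit = sm.GLM(y, X, family=sm.families.Tweedie(var_power=1.5),
                     offset=offset).fit()

for name, fit in [("Poisson", poisson_fit), ("Tweedie", tweedie_fit)]:
    rmse = np.sqrt(np.mean((y - fit.fittedvalues) ** 2))
    print(f"{name:8s} in-sample RMSE: {rmse:.4f}")
```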


7.
We consider a risk process R_t where the claim arrival process is a superposition of a homogeneous Poisson process and a Cox process with a Poisson shot-noise intensity process, capturing the effect of sudden increases of the claim intensity due to external events. The distribution of the aggregate claim size is investigated under these assumptions. For both light-tailed and heavy-tailed claim size distributions, asymptotic estimates for infinite-time and finite-time ruin probabilities are derived. Moreover, we discuss an extension of the model to an adaptive premium rule that is dynamically adjusted according to past claims experience.
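The paper's results are analytic asymptotics; the sketch below merely simulates the model ingredients — a baseline Poisson arrival stream plus a Cox stream whose shot-noise intensity jumps at external events and decays exponentially — and estimates a finite-time ruin probability by Monte Carlo under illustrative parameter values.

```python
# Monte Carlo sketch (not the paper's asymptotic analysis): claim arrivals are a
# homogeneous Poisson stream plus a Cox stream with a shot-noise intensity that jumps
# at external events and decays exponentially. We estimate a finite-time ruin
# probability for R_t = u + c*t - S_t. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(5)
T, dt = 10.0, 0.01
u, c = 50.0, 18.0                       # initial capital and premium rate
lam0 = 1.0                              # baseline claim intensity
rho, jump_mean, decay = 0.3, 4.0, 1.5   # shot-noise: event rate, jump size, decay

def ruin_in_one_path():
    shot, surplus = 0.0, u
    for _ in np.arange(0.0, T, dt):
        shot *= np.exp(-decay * dt)                        # intensity decays between events
        if rng.random() < rho * dt:                        # an external event arrives
            shot += rng.exponential(jump_mean)
        n_claims = rng.poisson((lam0 + shot) * dt)         # arrivals on a fine time grid
        claims = rng.exponential(8.0, size=n_claims).sum() # light-tailed claim sizes
        surplus += c * dt - claims
        if surplus < 0:
            return True
    return False

n_paths = 1000
ruin_prob = np.mean([ruin_in_one_path() for _ in range(n_paths)])
print(f"estimated P(ruin before T={T}) ~ {ruin_prob:.3f}")
```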

8.
In commercial banking, various statistical models for corporate credit rating have been theoretically promoted and applied to bank-specific credit portfolios. In this paper, we empirically compare and test the performance of a wide range of parametric and nonparametric credit rating model approaches in a statistically coherent way, based on a ‘real-world’ data set. We repeatedly (k times) split a large sample of industrial firms’ default data into disjoint training and validation subsamples. For all model types, we estimate k out-of-sample discriminatory power measures, allowing us to compare the models coherently. We observe that more complex and nonparametric approaches, such as random forests, neural networks, and generalized additive models, perform best in-sample. However, comparing the k out-of-sample cross-validation results, these models overfit and lose some of their predictive power. Rather than improving discriminatory power, we perceive their major contribution to be their usefulness as diagnostic tools for the selection of rating factors and the development of simpler, parametric models.
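The validation design can be sketched as repeated train/validation splits with out-of-sample discriminatory power (here AUC) recorded for each model type. The synthetic default data and the two models shown below are illustrative stand-ins for the paper's data set and full model suite.

```python
# Minimal sketch of the validation design: repeatedly split default data into
# training and validation subsamples and compare out-of-sample discriminatory power
# (AUC) across model types. Data and models are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 5000
ratios = rng.normal(size=(n, 5))                      # stand-ins for financial ratios
logit = -2.5 + ratios @ np.array([0.8, -0.6, 0.4, 0.0, 0.0])
default = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

k = 10
aucs = {name: [] for name in models}
for split in range(k):
    X_tr, X_va, y_tr, y_va = train_test_split(
        ratios, default, test_size=0.3, random_state=split, stratify=default)
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        aucs[name].append(roc_auc_score(y_va, model.predict_proba(X_va)[:, 1]))

for name, scores in aucs.items():
    print(f"{name:20s} mean out-of-sample AUC: {np.mean(scores):.3f}")
```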

9.
We study a dynamic insurance market with asymmetric information and ex post moral hazard. In our model, the insurance buyer's risk type is unknown to the insurer; moreover, the buyer has the option of not reporting losses. The insurer sets premia according to the buyer's experience rating, computed via Bayesian estimation based on the buyer's history of reported claims. Accordingly, the buyer has a strategic incentive to withhold information about losses. We construct an insurance market information equilibrium model and show that a variety of reporting strategies are possible. The results are illustrated with explicit computations in a two-period risk-neutral case study.
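The reporting incentive can be illustrated with a toy experience-rating rule: if the insurer's premium is the Bayesian posterior mean of the claim rate given reported claims, withholding a claim lowers the next premium. The gamma-Poisson specification below is an illustrative assumption and not the paper's two-period equilibrium model.

```python
# Toy sketch of the reporting incentive (not the paper's equilibrium model): the
# insurer's experience-rated premium is the Bayesian posterior mean of the buyer's
# claim rate given *reported* claims, so withholding a claim lowers future premia.
# The gamma-Poisson specification and all numbers are illustrative assumptions.

a0, b0 = 2.0, 2.0            # gamma prior on the claim rate (prior mean: 1 claim/period)
severity_mean = 10.0         # assumed average claim cost

def premium(reported_claims, periods):
    # Posterior mean of the Poisson rate given the reported history, times mean severity.
    a_post, b_post = a0 + reported_claims, b0 + periods
    return severity_mean * a_post / b_post

# One loss occurs in period 1; compare next-period premia with and without reporting it.
print("premium if the loss is reported :", round(premium(1, 1), 2))
print("premium if the loss is withheld :", round(premium(0, 1), 2))
```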

10.
Abstract

As is well known in actuarial practice, excess claims (outliers) have a disturbing effect on the ratemaking process. To obtain better estimators of premiums based on credibility theory, Künsch as well as Gisler and Reinhard suggested using robust methods. The estimators proposed by these authors are indeed resistant to outliers and serve as an excellent example of how useful robust models can be for insurance pricing. In this article we further refine these procedures by reducing the degree of heuristic argument they involve. Specifically, we develop a class of robust estimators for the credibility premium when claims are approximately gamma-distributed and thoroughly study their robustness-efficiency trade-offs in large and small samples. Under specific data-generating scenarios, this approach yields quantitative indices of estimators' strength and weakness, and it allows the actuary (who is typically equipped with information beyond the statistical model) to choose a procedure from a full menu of possibilities. The practical performance of our methods is illustrated under several simulated scenarios and by employing expert judgment.
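The motivation for robustness can be illustrated with a toy contrast (not the authors' estimators): on a gamma claims sample contaminated with a few excess claims, the ordinary sample mean is pulled far upward, while a simple trimmed mean stays near the clean-component mean.

```python
# Toy contrast (not the authors' robust credibility estimators): on a gamma claims
# sample contaminated by a few excess claims, compare the sample mean (the classical
# premium ingredient) with a simple trimmed-mean alternative. Contamination level and
# trimming fraction are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
clean = rng.gamma(2.0, 500.0, size=200)                  # approximately gamma-distributed claims
outliers = rng.gamma(2.0, 50000.0, size=4)               # a handful of excess claims
claims = np.concatenate([clean, outliers])

classical = claims.mean()                                # heavily pulled by the outliers
robust = stats.trim_mean(claims, proportiontocut=0.02)   # resistant to the largest claims

print(f"mean of the clean component      : {2.0 * 500.0:10.1f}")
print(f"classical sample mean            : {classical:10.1f}")
print(f"robust trimmed mean (2% per tail): {robust:10.1f}")
```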
