Similar Articles
 20 similar articles found (search time: 156 ms)
1.
Insurance claims are subject to deductibles, which must be taken into account when pricing insurance premiums. The deductible may cause censoring and truncation of the insurance claims, and modeling the unobserved response variable by maximum likelihood in this setting can be challenging in practice. For this reason, a practitioner may instead run a regression on the observed response and calculate deductible rates from the regression coefficients. Natural questions are how well this approach performs and how it compares with the theoretically correct approach to rating deductibles; a practitioner would also benefit from a systematic review of approaches to modeling deductible rates. This article provides an overview of deductible ratemaking and compares the pros and cons of two approaches: the regression approach and the maximum likelihood approach. The regression approach turns out to have an advantage in predicting aggregate claims, whereas the maximum likelihood approach has an advantage when calculating theoretically correct relativities for deductible levels beyond those observed in the empirical data. For demonstration, loss models are fit to the Wisconsin Local Government Property Insurance Fund data, and examples are provided for the ratemaking of per-loss deductibles offered by the fund. The article shows that the regression approach is in fact a single-parameter approximation to the true relativity curve. A comparison of selected models from the generalized beta family shows that long-tailed severity distributions may improve deductible rating, while advanced frequency models such as zero-one inflated models may offer limited advantages due to estimation issues under censoring and truncation. In addition, models for specific peril types are combined to improve the ratemaking.
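As a rough illustration of why the maximum likelihood approach differs from a naive fit on observed claims, the sketch below fits a lognormal severity to losses reported above a per-loss deductible, accounting for the left truncation the deductible induces. This is a minimal, self-contained example on synthetic data, not the article's actual models or the Wisconsin fund data.

```python
# Sketch: lognormal severity MLE under left truncation at deductible d,
# versus a naive fit that ignores the deductible. Synthetic data.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(42)
d = 1000.0                                   # per-loss deductible
ground_up = rng.lognormal(mean=8.0, sigma=1.2, size=20_000)
obs = ground_up[ground_up > d]               # only losses above d are seen

def neg_loglik(params):
    mu, log_sigma = params                   # log-sigma keeps sigma > 0
    sigma = np.exp(log_sigma)
    logf = stats.lognorm.logpdf(obs, s=sigma, scale=np.exp(mu))
    logS = stats.lognorm.logsf(d, s=sigma, scale=np.exp(mu))
    return -(logf - logS).sum()              # truncated density f(x)/S(d)

res = optimize.minimize(neg_loglik, x0=[np.log(obs).mean(), 0.0],
                        method="Nelder-Mead")
print("truncation-aware MLE:", res.x[0], np.exp(res.x[1]))  # near (8.0, 1.2)
print("naive fit:           ", np.log(obs).mean(), np.log(obs).std())
```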

2.
Longitudinal modeling of insurance claim counts using jitters
Modeling insurance claim counts is a critical component of the ratemaking process for property and casualty insurance. This article explores the usefulness of copulas for modeling the number of insurance claims for an individual policyholder within a longitudinal context. To address the limitations commonly attributed to copulas for multivariate discrete data, we apply a ‘jittering’ method to the claim counts, which has the effect of making the data continuous. Elliptical copulas are proposed to accommodate the intertemporal nature of the ‘jittered’ claim counts and the unobservable subject-specific heterogeneity in claim frequency. Observable subject-specific effects are accounted for in the model by using available covariate information through a regression model. The predictive distribution, together with the corresponding credibility of claim frequency, can be derived from the model for ratemaking and risk classification purposes. For empirical illustration, we analyze an unbalanced longitudinal dataset of claim counts observed from a portfolio of automobile insurance policies of a general insurer in Singapore. We further establish the validity of the calibrated copula model and demonstrate that the copula with the ‘jittering’ method outperforms standard count regression models.
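A minimal sketch of the jittering step, assuming the standard construction with independent uniform jitters; the transformation is invertible, so no information in the counts is lost:

```python
# Sketch: jittering integer claim counts into continuous data.
import numpy as np

rng = np.random.default_rng(0)
counts = rng.poisson(lam=0.3, size=10)             # annual claim counts
jittered = counts + rng.uniform(size=counts.size)  # add U(0,1) noise
# The copula model is fitted to `jittered`; the step loses nothing,
# since the original counts are recovered by taking the floor:
assert np.array_equal(np.floor(jittered).astype(int), counts)
```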

3.
A three-parameter generalization of the Pareto distribution is presented, with a density function whose flexible upper tail is suited to modeling loss payment data. This generalized Pareto distribution is referred to as the Odd Pareto distribution, since it is derived by considering the distributions of the odds of the Pareto and inverse Pareto distributions. Basic properties of the Odd Pareto (OP) distribution are studied. Model parameters are estimated using both modified and regular maximum likelihood methods. Simulation studies are conducted to compare the OP with the exponentiated Pareto, Burr, and Kumaraswamy distributions using two different test statistics based on the maximum likelihood method. Furthermore, two examples from the Norwegian fire insurance claims dataset are provided to illustrate the upper tail flexibility of the distribution. Extensions of the Odd Pareto distribution are also considered to improve the fit to data.

4.
We present an application of the reversible jump Markov chain Monte Carlo (RJMCMC) method to the important problem of setting claims reserves for the outstanding loss liabilities in general insurance business. A measure of the uncertainty in these claims reserve estimates is also needed for solvency purposes. The RJMCMC method described in this paper represents an improvement over the manual processes often employed in practice. In particular, our RJMCMC method handles parameter reduction and tail factor estimation in the claims reserving process and, moreover, provides the full predictive distribution of the outstanding loss liabilities.

5.
The insurance industry is concerned with the detection of fraudulent behavior. The number of automobile claims involving some kind of suspicious circumstance is high and has become a subject of major interest for companies. This article demonstrates the performance of binary choice models for fraud detection and implements models that allow for misclassification in the response variable. A database from the Spanish insurance market that contains both honest and fraudulent claims is used. The estimation of the probability of omission provides an estimate of the percentage of fraudulent claims that are not detected by the logistic regression model.
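A minimal sketch of the binary choice (logistic) stage on synthetic data; the covariate names are illustrative, not from the Spanish database, and the misclassification correction the article implements is not shown:

```python
# Sketch: logistic model for a 0/1 fraud label on synthetic claims data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
claims = pd.DataFrame({
    "fraud": rng.integers(0, 2, 500),              # hypothetical label
    "claim_amount": rng.lognormal(7, 1, 500),      # hypothetical covariates
    "report_delay_days": rng.integers(0, 60, 500),
})
X = sm.add_constant(claims[["claim_amount", "report_delay_days"]])
fit = sm.Logit(claims["fraud"], X).fit(disp=0)
flags = fit.predict(X) > 0.5                       # threshold to flag claims
print(fit.params, flags.mean(), sep="\n")
```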

6.
In this paper, a new class of composite models is proposed for modeling actuarial claims data of mixed sizes. The model is developed using the Stoppa distribution and a mode-matching procedure. The use of the Stoppa distribution allows more flexibility in the thickness of the tail, and the mode-matching procedure yields a simple derivation of composite models with a variety of head distributions. In particular, the Weibull–Stoppa and Lognormal–Stoppa distributions are investigated. Their performance is compared with existing composite models in the context of the well-known Danish fire insurance dataset. The results suggest the composite Weibull–Stoppa model outperforms the existing composite models in all seven goodness-of-fit measures considered.
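The generic composite construction can be sketched as follows: a head density below a threshold and a tail density above, with the mixing weight pinned down by continuity at the splice. Since the Stoppa distribution is not available in scipy, a Pareto tail is used here purely as a stand-in, and the threshold is passed in directly rather than derived by mode matching:

```python
# Sketch: a generic composite severity density with a lognormal head and a
# Pareto tail (stand-in for Stoppa), spliced continuously at `theta`.
import numpy as np
from scipy import stats

def composite_pdf(x, mu, sigma, theta, alpha):
    head = stats.lognorm(s=sigma, scale=np.exp(mu))
    tail = stats.pareto(b=alpha, scale=theta)      # support (theta, inf)
    h = head.pdf(theta) / head.cdf(theta)          # head density, renormalized
    t = tail.pdf(theta)                            # tail density at theta
    r = t / (h + t)                                # continuity fixes the weight
    x = np.asarray(x, dtype=float)
    return np.where(x <= theta,
                    r * head.pdf(x) / head.cdf(theta),
                    (1 - r) * tail.pdf(x))

print(composite_pdf([5.0, 10.0, 40.0], mu=1.0, sigma=0.7, theta=10.0, alpha=1.8))
```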

7.
Credibility is a form of insurance pricing that is widely used, particularly in North America. The theory of credibility has been called a “cornerstone” of actuarial science. Students of the North American actuarial bodies also study loss distributions, the statistical inference that relates a set of data to a theoretical (loss) distribution. In this work, we develop a direct link between credibility and loss distributions through the notion of a copula, a tool for understanding relationships among multivariate outcomes.

This paper develops credibility using a longitudinal data framework, in which one encounters data from a cross section of risk classes (towns) with a history of insurance claims available for each risk class. For the marginal claims distributions, we use generalized linear models, an extension of linear regression that also encompasses Weibull and Gamma regressions. Copulas are used to model the dependencies over time; specifically, this paper is the first to propose using a t-copula in the context of generalized linear models. The t-copula is the copula associated with the multivariate t-distribution; like the univariate t-distribution, it seems especially suitable for empirical work. Moreover, we show that the t-copula gives rise to easily computable predictive distributions that we use to generate credibility predictors. Like Bayesian methods, our copula credibility prediction methods allow us to provide an entire distribution of predicted claims, not just a point prediction.

We present an illustrative example of Massachusetts automobile claims and compare our new credibility estimates with those existing in the literature.
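A minimal sketch of simulating dependent claims from a t-copula with Gamma margins, the building blocks described above; the correlation, degrees of freedom, and Gamma parameters are illustrative assumptions:

```python
# Sketch: simulate dependent claims from a t-copula with Gamma margins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
nu, rho, n = 5.0, 0.6, 10_000
R = np.array([[1.0, rho], [rho, 1.0]])             # copula correlation

z = rng.multivariate_normal(np.zeros(2), R, size=n)
w = rng.chisquare(nu, size=n) / nu
t_draws = z / np.sqrt(w)[:, None]                  # multivariate t sample
u = stats.t.cdf(t_draws, df=nu)                    # to the uniform scale

claims = stats.gamma.ppf(u, a=2.0, scale=500.0)    # Gamma marginal claims
print("rank correlation:",
      stats.spearmanr(claims[:, 0], claims[:, 1]).correlation)
```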

8.
This article describes a group project that was designed and implemented in an MBA-level corporate risk management class. The primary objective of this project is to integrate the concepts of enterprise risk management into the graduate-level risk management and insurance curriculum. This project combines both traditional and innovative risk management techniques into one semester-long group case study. The Delta Air Lines case study was divided into three segments to focus on three distinct objectives. The first component, identification of Delta's horizon risks, is designed to spur creative thinking among the groups. The second component, analysis of workers' compensation claims, is a very traditional risk management exercise in risk evaluation designed to utilize traditional statistical analysis techniques (specifically, trending). The third component is estimating both total loss distributions and layers of loss due to airline crashes for potential capital market risk financing alternatives. This component involves more innovative financial risk management techniques (i.e., distribution fitting and simulation analysis). The objective is to familiarize students with the current techniques being used to evaluate risks that are currently (or potentially) being securitized in the capital markets.

9.
Despite its wide use, the Hill estimator and its plot remain difficult to use in Extreme Value Theory (EVT) due to substantial sampling variation in extreme sample quantiles. In this paper, we propose a new plot, the eigenvalue plot, which can be seen as a generalization of the Hill plot. The theory behind the plot is based on a heavy-tailed parametric distribution class, the scaled Log phase-type (LogPH) distributions, a generalization of the ordinary LogPH distribution class previously used to model insurance claims data. We show that its tail property and moment condition are well aligned with EVT. Based on our findings, we construct the eigenvalue plot by fitting a shifted PH distribution to the excess log data with a minimal phase size. Through various numerical examples, we illustrate our method and compare it against the Hill plot.
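For reference, a minimal sketch of the classical Hill estimator underlying the Hill plot that the eigenvalue plot generalizes (the eigenvalue plot itself, which requires phase-type fitting, is not shown):

```python
# Sketch: the Hill estimator across k, the quantity behind the Hill plot.
import numpy as np

def hill(data, k):
    """Tail-index estimate 1/alpha from the k largest observations."""
    x = np.sort(np.asarray(data))[::-1]        # descending order statistics
    return np.mean(np.log(x[:k]) - np.log(x[k]))

rng = np.random.default_rng(3)
sample = rng.pareto(2.0, size=5000) + 1.0      # Pareto with alpha = 2
ks = np.arange(10, 1000, 10)
alpha_hat = [1.0 / hill(sample, k) for k in ks]  # plot these against ks
print(alpha_hat[:5])                            # roughly 2 where stable
```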

10.
Two-part models based on generalized linear models are widely used in insurance ratemaking for predicting the expected loss. This paper explores an alternative method based on quantile regression, which provides more information about the loss distribution and can also be used for insurance underwriting. Quantile regression allows estimating the aggregate claim cost quantiles of a policy given a number of covariates. To do so, a first stage is required, which involves fitting a logistic regression to estimate, for every policy, the probability of submitting at least one claim. The proposed methodology is illustrated using a portfolio of car insurance policies. This application shows that the results of the quantile regression are highly dependent on the claim probability estimates. The paper also examines an application of quantile regression to premium safety loading calculation, the so-called Quantile Premium Principle (QPP). We propose a premium calculation based on quantile regression which inherits the good properties of the quantiles. Using the same insurance portfolio dataset, we find that the QPP captures the riskiness of the policies better than the expected value premium principle.
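A minimal sketch of the two-stage idea on synthetic data: a logistic first stage for the claim probability, then quantile regression on the positive claim costs. Variable names and parameter values are illustrative assumptions:

```python
# Sketch: logistic first stage + quantile regression second stage.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n = 2000
df = pd.DataFrame({"age": rng.uniform(18, 80, n)})      # hypothetical covariate
has_claim = rng.uniform(size=n) < 0.1 + 0.002 * (80 - df["age"])
df["claim"] = has_claim.astype(int)
df["cost"] = np.where(has_claim, rng.lognormal(6, 1, n), 0.0)

stage1 = smf.logit("claim ~ age", data=df).fit(disp=0)  # P(at least 1 claim)
stage2 = smf.quantreg("cost ~ age", data=df[df["cost"] > 0]).fit(q=0.95)
print(stage1.params, stage2.params, sep="\n")
```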

11.
Point and interval estimation of future disability inception and recovery rates is predominantly carried out by combining generalized linear models with time series forecasting techniques into a two-step method involving parameter estimation from historical data and subsequent calibration of a time series model. This approach may lead to both conceptual and numerical problems since any time trend components of the model are incoherently treated as both model parameters and realizations of a stochastic process. We suggest that this general two-step approach can be improved in the following way: First, we assume a stochastic process form for the time trend component. The corresponding transition densities are then incorporated into the likelihood, and the model parameters are estimated using the Expectation-Maximization algorithm. We illustrate the modeling procedure by fitting the model to Swedish disability claims data.

12.
Accurate prediction of future claims is a fundamentally important problem in insurance. The Bayesian approach is natural in this context, as it provides a complete predictive distribution for future claims. The classical credibility theory provides a simple approximation to the mean of that predictive distribution as a point predictor, but this approach ignores other features of the predictive distribution, such as spread, that would be useful for decision making. In this article, we propose a Dirichlet process mixture of log-normals model and discuss the theoretical properties and computation of the corresponding predictive distribution. Numerical examples demonstrate the benefit of our model compared to some existing insurance loss models, and an R code implementation of the proposed method is also provided.
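A minimal sketch in the same spirit, fitting a Dirichlet-process-style mixture to log-claims with scikit-learn's truncated variational approximation; this is a stand-in for the article's full Bayesian treatment, not its implementation:

```python
# Sketch: Dirichlet-process-style mixture on log-claims (variational).
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(5)
claims = np.concatenate([rng.lognormal(6, 0.5, 700),   # synthetic mixture
                         rng.lognormal(9, 0.8, 300)])
log_claims = np.log(claims).reshape(-1, 1)

dpm = BayesianGaussianMixture(
    n_components=10,                                   # truncation level
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(log_claims)
print("component weights:", np.round(dpm.weights_, 3))  # few dominate
```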

13.
We forecast portfolio risk for managing dynamic tail risk protection strategies, based on extreme value theory, expectile regression, copula-GARCH and dynamic generalized autoregressive score models. Utilizing a loss function that overcomes the lack of elicitability for expected shortfall, we propose a novel expected shortfall (and value-at-risk) forecast combination approach, which dominates simple and sophisticated standalone models as well as a simple average combination approach in modeling the tail of the portfolio return distribution. While the associated dynamic risk targeting or portfolio insurance strategies provide effective downside protection, the latter strategies suffer less from inferior risk forecasts, given the defensive portfolio insurance mechanics.
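A minimal sketch of the baseline quantities involved: historical value-at-risk, expected shortfall, and a simple average combination of two ES forecasts (the benchmark the proposed combination is shown to dominate). The model forecast here is a placeholder assumption:

```python
# Sketch: historical VaR/ES and a simple average ES combination.
import numpy as np

rng = np.random.default_rng(9)
returns = rng.standard_t(df=4, size=1000) * 0.01   # heavy-tailed returns
alpha = 0.025                                      # tail probability level

var_hist = np.quantile(returns, alpha)             # historical VaR
es_hist = returns[returns <= var_hist].mean()      # historical ES
es_model = 1.2 * var_hist                          # placeholder model forecast
es_combo = 0.5 * (es_hist + es_model)              # simple average combination
print(var_hist, es_hist, es_combo)
```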

14.
The probabilistic behavior of the claim severity variable plays a fundamental role in calculation of deductibles, layers, loss elimination ratios, effects of inflation, and other quantities arising in insurance. Among several alternatives for modeling severity, the parametric approach continues to maintain the leading position, which is primarily due to its parsimony and flexibility. In this article, several parametric families are employed to model severity of Norwegian fire claims for the years 1981 through 1992. The probability distributions we consider include generalized Pareto, lognormal-Pareto (two versions), Weibull-Pareto (two versions), and folded-t. Except for the generalized Pareto distribution, the other five models are fairly new proposals that recently appeared in the actuarial literature. We use the maximum likelihood procedure to fit the models and assess the quality of their fits using basic graphical tools (quantile-quantile plots), two goodness-of-fit statistics (Kolmogorov-Smirnov and Anderson-Darling), and two information criteria (AIC and BIC). In addition, we estimate the tail risk of “ground up” Norwegian fire claims using the value-at-risk and tail-conditional median measures. We monitor the tail risk levels over time, for the period 1981 to 1992, and analyze predictive performances of the six probability models. In particular, we compute the next-year probability for a few upper tail events using the fitted models and compare them with the actual probabilities.
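A minimal sketch of the fitting-and-comparison workflow on synthetic losses: maximum likelihood fits for two candidate severity distributions, compared by AIC and the Kolmogorov-Smirnov statistic. The lognormal-Pareto, Weibull-Pareto, and folded-t models are not in scipy and are omitted:

```python
# Sketch: fit candidate severity models by MLE, compare AIC and KS.
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
losses = rng.lognormal(mean=9.0, sigma=1.0, size=2000)  # synthetic claims

for name, dist in [("lognormal", stats.lognorm),
                   ("gen. Pareto", stats.genpareto)]:
    params = dist.fit(losses)                           # maximum likelihood
    ll = dist.logpdf(losses, *params).sum()
    aic = 2 * len(params) - 2 * ll
    ks = stats.kstest(losses, dist.cdf, args=params).statistic
    print(f"{name:12s} AIC={aic:12.1f} KS={ks:.4f}")
```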

15.
In non-life insurance, the provision for outstanding claims (the claims reserve) should include future loss adjustment expenses, i.e. administrative expenses to settle the claims, and therefore we have to estimate the expected Unallocated Loss Adjustment Expenses (ULAE) – expenses that are not attributable to individual claims, such as salaries at the claims handling department. The ULAE reserve has received little attention from European actuaries in the literature, presumably because of the lack of detailed data for estimation and evaluation. Good estimation procedures will, however, become even more important with the introduction of the Solvency II regulations, which require unbiased estimation of future cash flows for all expenses. We present a model for ULAE at the individual claim level that includes both fixed and variable costs. This model leads to an estimate of the ULAE reserve at the aggregate (line-of-business) level, as demonstrated in a numerical example from a Swedish non-life insurer.
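A minimal sketch of a fixed-plus-variable ULAE reserve in the spirit of the model described; all figures are illustrative assumptions, not the paper's calibration:

```python
# Sketch: ULAE reserve with a fixed cost per open claim plus a cost
# proportional to remaining claim payments. Figures are hypothetical.
import numpy as np

fixed_cost_per_claim = 400.0     # expected remaining handling cost per claim
variable_rate = 0.03             # handling cost per unit of claim payment
open_claim_reserves = np.array([12_000.0, 5_500.0, 48_000.0])

ulae = (fixed_cost_per_claim * open_claim_reserves.size
        + variable_rate * open_claim_reserves.sum())
print(f"ULAE reserve: {ulae:,.0f}")
```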

16.
We study tail estimation in Pareto-like settings for datasets with a high percentage of randomly right-censored data, where some expert information on the tail index is available for the censored observations. This setting arises naturally, for instance, for liability insurance claims, where actuarial experts build reserves based on the specifics of each open claim, which can be used to improve the estimation based on the data points already available from closed claims. Through an entropy-perturbed likelihood, we derive an explicit estimator and establish a close analogy with Bayesian methods. Embedded in an extreme value approach, asymptotic normality of the estimator is shown, and when the expert is clairvoyant, a simple combination formula can be deduced, bridging the classical statistical approach with the expert information. Following this combination formula, a combination of quantile estimators can be naturally defined. In a simulation study, the estimator is shown to often outperform the Hill estimator for censored observations and recent Bayesian solutions, some of which require more information than is usually available. Finally, we perform a case study on a motor third-party liability insurance claim dataset, where Hill-type and quantile plots incorporate ultimate values into the estimation procedure in an intuitive manner.

17.
A new class of asymmetric loss functions, derived from the least absolute deviations or least squares loss with a constraint on the mean of one tail of the residual error distribution, is introduced for analyzing financial data. Motivated by risk management principles, the primary intent is to provide “cautious” forecasts under uncertainty. The net effect on fitted models is to shape the residuals so that, on average, only a prespecified proportion of predictions fall above or below a desired threshold. The loss functions are reformulated as objective functions in the context of parameter estimation for linear regression models, and it is demonstrated how the optimization can be implemented via linear programming. The method is a competitor of quantile regression, but is more flexible and broader in scope. An application to the prediction of NDX and SPX index returns is illustrated, while controlling the magnitude of a fraction of worst losses.
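Since the paper positions the method as a competitor of quantile regression and solves it by linear programming, a minimal sketch of the closely related quantile-regression linear program is given below (the paper's tail-mean-constrained loss itself is not reproduced):

```python
# Sketch: quantile regression as a linear program.
# Minimize tau*u+ + (1-tau)*u-  s.t.  X beta + u+ - u- = y,  u+, u- >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(17)
n, tau = 200, 0.9
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=3, size=n)

p = X.shape[1]
c = np.r_[np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)]
A_eq = np.hstack([X, np.eye(n), -np.eye(n)])       # [beta, u+, u-] columns
bounds = [(None, None)] * p + [(0, None)] * (2 * n)
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
print("beta at tau=0.9:", res.x[:p])
```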

18.
Pet insurance in North America continues to be a growing industry. Unlike in Europe, where some countries have as much as 50% of the pet population insured, very few pets in North America are insured. Pricing practices in the past have relied on market share objectives more than on actual experience, and pricing continues to be performed on this basis with little consideration of actuarial principles and techniques. Developing mortality and morbidity models for use in pricing and new product development is essential for pet insurance. This paper examines insurance claims as experienced in the Canadian market. The time-to-event data are investigated using the Cox proportional hazards model. The claim number follows a nonhomogeneous Poisson process with covariates, and the claim size random variable is assumed to follow a lognormal distribution. These two models work well for aggregate claims with covariates. The first three central moments of the aggregate claims, for one insured animal as well as for a block of insured animals, are derived. We illustrate the models using data collected over an eight-year period.
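A minimal sketch of the aggregate-claims moments for one insured animal, using the compound Poisson cumulant identity κ_k(S) = λ·E[X^k] with a lognormal severity; the parameter values are illustrative assumptions. For a block of independent animals, the cumulants simply add:

```python
# Sketch: first three central moments of aggregate claims S via the
# compound Poisson cumulant identity kappa_k(S) = lam * E[X^k].
import numpy as np

lam = 0.8                         # hypothetical claim rate per animal
mu, sigma = 5.0, 0.9              # hypothetical lognormal severity

def raw_moment(k):                # E[X^k] for a lognormal severity
    return np.exp(k * mu + 0.5 * k**2 * sigma**2)

mean = lam * raw_moment(1)                # kappa_1
variance = lam * raw_moment(2)            # kappa_2
third_central = lam * raw_moment(3)       # kappa_3
print(mean, variance, third_central)      # cumulants add across animals
```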

19.
In this paper, models for claim frequency and average claim size in non-life insurance are considered. Both covariates and spatial random effects are included, allowing the modelling of a spatial dependency pattern. We assume a Poisson model for the number of claims, while claim size is modelled using a Gamma distribution. However, in contrast to the usual compound Poisson model, we allow for dependencies between claim size and claim frequency. A fully Bayesian approach is followed, with parameters estimated using Markov chain Monte Carlo (MCMC). The issue of model comparison is thoroughly addressed: besides the deviance information criterion and the predictive model choice criterion, we suggest the use of proper scoring rules based on the posterior predictive distribution for comparing models. We give an application to a comprehensive data set from a German car insurance company. The inclusion of spatial effects significantly improves the models for both claim frequency and claim size, and also leads to more accurate predictions of the total claim sizes. Further, we detect significant dependencies between the number of claims and claim size. Both the spatial and the number-of-claims effects are interpreted and quantified from an actuarial point of view.

20.
The Tweedie distribution, featuring a probability mass at zero, is a convenient tool for insurance claims modeling and pure premium determination in general insurance. Motivated by the fact that an insurance policy typically provides multiple types of coverage, we propose a copula-based multivariate Tweedie regression for modeling the semi-continuous claims while accommodating the association among the different coverage types. The proposed approach also allows for dispersion modeling, resulting in a multivariate version of the double generalized linear model. We demonstrate the application in insurance ratemaking using a portfolio of automobile insurance policyholders from the state of Massachusetts in the United States.
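A minimal sketch of the univariate marginal building block, a Tweedie GLM fitted with statsmodels on synthetic semi-continuous data; the variance power 1.5 is an illustrative choice, and the copula coupling across coverage types is not shown:

```python
# Sketch: a univariate Tweedie GLM on synthetic semi-continuous losses.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(21)
n = 1000
X = sm.add_constant(rng.normal(size=(n, 1)))
# Many exact zeros (no claim), positive losses otherwise:
y = np.where(rng.uniform(size=n) < 0.7, 0.0, rng.gamma(2.0, 300.0, n))

model = sm.GLM(y, X, family=sm.families.Tweedie(var_power=1.5))
fit = model.fit()
print(fit.params)                 # log-link coefficients
print(fit.scale)                  # estimated dispersion
```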
