Similar Literature
20 similar documents found.
1.
In this paper, models for claim frequency and average claim size in non-life insurance are considered. Both covariates and spatial random effects are included, allowing the modelling of a spatial dependency pattern. We assume a Poisson model for the number of claims, while claim size is modelled using a Gamma distribution. However, in contrast to the usual compound Poisson model, we allow for dependencies between claim size and claim frequency. A fully Bayesian approach is followed; parameters are estimated using Markov Chain Monte Carlo (MCMC). The issue of model comparison is thoroughly addressed. Besides the deviance information criterion and the predictive model choice criterion, we suggest the use of proper scoring rules based on the posterior predictive distribution for comparing models. We give an application to a comprehensive data set from a German car insurance company. The inclusion of spatial effects significantly improves the models for both claim frequency and claim size, and also leads to more accurate predictions of the total claim sizes. Further, we detect significant dependencies between the number of claims and claim size. Both the spatial and the number-of-claims effects are interpreted and quantified from an actuarial point of view.
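A minimal sketch of the compound structure described above, in LaTeX notation. The dependence device shown here (letting the observed claim count enter the claim-size regression) is one common way to couple frequency and severity and is an assumption of this sketch, not necessarily the paper's exact specification:

    N_i \sim \mathrm{Poisson}(\mu_i), \qquad \log \mu_i = x_i^\top \beta + \gamma_{r(i)},
    Y_{ij} \mid N_i \sim \mathrm{Gamma}(\nu, \nu/\xi_i), \qquad \log \xi_i = z_i^\top \alpha + \delta\, N_i + \eta_{r(i)},
    S_i = \sum_{j=1}^{N_i} Y_{ij},

where \gamma and \eta are spatially structured random effects for the region r(i) of policy i, and \delta \neq 0 breaks the independence of claim frequency and claim size assumed in the usual compound Poisson model; all unknowns would be sampled jointly by MCMC.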

2.
Abstract

Pet insurance in North America continues to be a growing industry. Unlike in Europe, where some countries have as much as 50% of the pet population insured, very few pets in North America are insured. Pricing practices in the past have relied on market share objectives more so than on actual experience. Pricing still continues to be performed on this basis with little consideration for actuarial principles and techniques. Development of mortality and morbidity models to be used in the pricing model and in new product development is essential for pet insurance. This paper examines insurance claims as experienced in the Canadian market. The time-to-event data are investigated using the Cox proportional hazards model. The claim number follows a non-homogeneous Poisson process with covariates. The claim size random variable is assumed to follow a lognormal distribution. These two models work well for aggregate claims with covariates. The first three central moments of the aggregate claims for one insured animal, as well as for a block of insured animals, are derived. We illustrate the models using data collected over an eight-year period.
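For a single insured animal, the moment calculation referred to above has a short closed form under the stated assumptions (our notation; the covariate-dependent claim intensity over the exposure period is summarised by its integral \Lambda):

    S = \sum_{i=1}^{N} X_i, \qquad N \sim \mathrm{Poisson}(\Lambda), \qquad X_i \stackrel{\text{iid}}{\sim} \mathrm{LogNormal}(\mu, \sigma^2),
    \mathbb{E}[S] = \Lambda\, e^{\mu + \sigma^2/2}, \qquad \mathrm{Var}(S) = \Lambda\, \mathbb{E}[X^2] = \Lambda\, e^{2\mu + 2\sigma^2}, \qquad \mu_3(S) = \Lambda\, \mathbb{E}[X^3] = \Lambda\, e^{3\mu + 9\sigma^2/2},

using the fact that the k-th cumulant of a compound Poisson sum is \Lambda\, \mathbb{E}[X^k]; for a block of independent insured animals the three moments simply add.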

3.
We present a fully data-driven strategy to incorporate continuous risk factors and geographical information in an insurance tariff. A framework is developed that aligns flexibility with the practical requirements of an insurance company, the policyholder and the regulator. Our strategy is illustrated with an example from property and casualty (P&C) insurance, namely a motor insurance case study. We start by fitting generalized additive models (GAMs) to the number of reported claims and their corresponding severity. These models allow for flexible statistical modeling in the presence of different types of risk factors: categorical, continuous, and spatial risk factors. The goal is to bin the continuous and spatial risk factors such that categorical risk factors result which capture the effect of the covariate on the response in an accurate way, while being easy to use in a generalized linear model (GLM). This is in line with the requirement of an insurance company to construct a practical and interpretable tariff that can be explained easily to stakeholders. We propose to bin the spatial risk factor using Fisher's natural breaks algorithm and the continuous risk factors using evolutionary trees. GLMs are fitted to the claims data with the resulting categorical risk factors. We find that the resulting GLMs approximate the original GAMs closely and lead to a very similar premium structure.
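A rough Python sketch of this fit-then-bin-then-refit workflow, on hypothetical data. The column names and the simulated portfolio are invented for illustration, and one-dimensional k-means on the fitted effect is used as a crude stand-in for the Fisher's natural breaks / evolutionary-tree binning used in the paper:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf
    from sklearn.cluster import KMeans

    # hypothetical policy-level data: claim counts, exposure and one continuous risk factor
    rng = np.random.default_rng(1)
    df = pd.DataFrame({"age": rng.integers(18, 90, 5000),
                       "expo": rng.uniform(0.2, 1.0, 5000)})
    df["nclaims"] = rng.poisson(df["expo"] * 0.1 * (1 + 0.02 * np.abs(df["age"] - 45)))

    # step 1: flexible Poisson fit of claim frequency on the continuous factor
    # (a regression spline stands in for the penalized GAM used in the paper)
    flex = smf.glm("nclaims ~ bs(age, df=6)", data=df,
                   family=sm.families.Poisson(),
                   offset=np.log(df["expo"])).fit()

    # step 2: evaluate the fitted age effect on a grid and bin it into a few classes;
    # 1-D k-means on the effect is a crude stand-in for Fisher's natural breaks /
    # evolutionary trees, and the resulting bins need not be contiguous in age
    grid = pd.DataFrame({"age": np.arange(18, 90)})
    effect = np.asarray(flex.predict(grid, offset=np.zeros(len(grid))))
    grid["age_bin"] = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(
        np.log(effect).reshape(-1, 1))

    # step 3: refit an interpretable GLM with the binned (categorical) risk factor
    df = df.merge(grid, on="age", how="left")
    tariff = smf.glm("nclaims ~ C(age_bin)", data=df,
                     family=sm.families.Poisson(),
                     offset=np.log(df["expo"])).fit()
    print(tariff.summary())

In the paper the binned factors are chosen so that the resulting GLM premiums stay close to the GAM premiums; the sketch above only illustrates the mechanics of the workflow.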

4.
Abstract

The purpose of this short note is to demonstrate the power of very straightforward branching process methods outside their traditional realm of application. We shall consider an insurance scheme where claims do not necessarily arise as a stationary process. Indeed, the number of policy-holders is changing, so that each of them generates a random number of new insurants. Each of these makes claims of random size at random instants, independently but with the same distribution for different individuals. Premiums are assumed equal for all policy-holders. It is proved that, for an expanding portfolio, there is only one premium size which is fair in the sense that if the premium is larger than that, then the insurer's profit grows without bound over time, whereas a smaller premium leads to his inevitable ruin. (Branching process models for the development of the portfolio may seem unrealistic. However, they do include the classical theory, where independent and identically distributed claims arise at the points of a renewal process.)

5.
Longitudinal modeling of insurance claim counts using jitters
Modeling insurance claim counts is a critical component in the ratemaking process for property and casualty insurance. This article explores the usefulness of copulas to model the number of insurance claims for an individual policyholder within a longitudinal context. To address the limitations of copulas commonly attributed to multivariate discrete data, we apply a 'jittering' method to the claim counts, which has the effect of making the data continuous. Elliptical copulas are proposed to accommodate the intertemporal nature of the 'jittered' claim counts and the unobservable subject-specific heterogeneity in the frequency of claims. Observable subject-specific effects are accounted for in the model by using available covariate information through a regression model. The predictive distribution, together with the corresponding credibility of claim frequency, can be derived from the model for ratemaking and risk classification purposes. For empirical illustration, we analyze an unbalanced longitudinal dataset of claim counts observed from a portfolio of automobile insurance policies of a general insurer in Singapore. We further establish the validity of the calibrated copula model, and demonstrate that the copula with 'jittering' method outperforms standard count regression models.
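A small numerical illustration of the jittering step and the normal scores to which an elliptical copula would then be fitted. The Poisson marginal and the balanced panel below are invented for illustration; the paper's regression marginals and the copula likelihood itself are not reproduced:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # hypothetical claim counts for n policyholders over T years (balanced panel)
    n, T, lam = 1000, 5, 0.3
    counts = rng.poisson(lam, size=(n, T))

    # jittering: subtract an independent Uniform(0,1) draw to make the counts continuous
    u = rng.uniform(size=counts.shape)
    jittered = counts - u

    # distribution function of the jittered count under the Poisson marginal:
    # F*(y) = F(floor(y)) + (y - floor(y)) * f(floor(y) + 1)
    k = np.floor(jittered)
    pit = stats.poisson.cdf(k, lam) + (jittered - k) * stats.poisson.pmf(k + 1, lam)

    # normal scores; a Gaussian or t copula would then be fitted to their
    # cross-period dependence to capture the intertemporal association of claim counts
    z = stats.norm.ppf(np.clip(pit, 1e-12, 1 - 1e-12))
    print(np.corrcoef(z, rowvar=False).round(3))

With independently simulated counts the estimated cross-period correlation is close to zero; on real panel data this matrix is what the elliptical copula parameterizes.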

6.

We propose a fully Bayesian approach to non-life risk premium rating, based on hierarchical models with latent variables for both claim frequency and claim size. Inference is based on the joint posterior distribution and is performed by Markov Chain Monte Carlo. Rather than relying on plug-in point estimates of all unknown parameters, we take all sources of uncertainty into account simultaneously when the model is used to predict claims and estimate risk premiums. Several models are fitted to both a simulated dataset and a small portfolio regarding theft from cars. We show that interaction among latent variables can improve predictions significantly. We also investigate when interaction is not necessary. We compare our results with those obtained under a standard generalized linear model and show through numerical simulation that geographically located and spatially interacting latent variables can successfully compensate for missing covariates. However, when applied to the real portfolio data, the proposed models are not better than standard models due to the lack of spatial structure in the data.

7.
Abstract

In this paper we present a rating model for loss of profits insurance for a production system consisting of n production units. Explicit expressions for the company's long run expected average claims expenditures are derived. A numerical example is given.

8.
Abstract

This paper deals with the prediction of the amount of outstanding automobile claims that an insurance company will pay in the near future. We consider various competing models using Bayesian theory and Markov chain Monte Carlo methods. Claim counts are used to add a further hierarchical stage to the model with log-normally distributed claim amounts and to its corresponding state-space version. In this way, we incorporate information from both the outstanding claim amounts and the counts data, resulting in new model formulations. Implementation details and illustrations with real insurance data are provided.

9.
Insurance claims data usually contain a large number of zeros and exhibit fat-tail behavior. Misestimation of one end of the tail impacts the other end of the tail of the claims distribution and can affect both the adequacy of premiums and the reserves that need to be held. In addition, insured policyholders in a portfolio are naturally non-homogeneous. It is an ongoing challenge for actuaries to build a predictive model that simultaneously captures these peculiar characteristics of claims data and policyholder heterogeneity. Such models can help make improved predictions and thereby ease the decision-making process. This article proposes the use of spliced regression models for fitting insurance loss data. A primary advantage of spliced distributions is their flexibility to accommodate modeling different segments of the claims distribution with different parametric models. The threshold that separates the segments is treated as a parameter, which presents an additional challenge in the estimation. Our simulation study demonstrates the effectiveness of using multistage optimization for likelihood inference and, at the same time, the repercussions of model misspecification. For purposes of illustration, we consider three-component spliced regression models: the first component contains the zeros, the second component models the middle segment of the loss data, and the third component models the tail segment of the loss data. We calibrate these proposed models and evaluate their performance using a Singapore auto insurance claims dataset. The estimation results show that the spliced regression model performs better than the Tweedie regression model in terms of tail fitting and prediction accuracy.
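A hedged sketch, in our own notation, of the kind of three-component spliced density described above; the specific component families and regression links used in the article may differ:

    f(y) = p_0\,\mathbf{1}\{y = 0\} \;+\; p_1\,\frac{f_b(y)}{F_b(\tau)}\,\mathbf{1}\{0 < y \le \tau\} \;+\; p_2\,\frac{f_t(y)}{1 - F_t(\tau)}\,\mathbf{1}\{y > \tau\}, \qquad p_0 + p_1 + p_2 = 1,

where f_b is a body distribution on (0, \infty) (e.g., gamma or lognormal), f_t is a heavy-tailed distribution (e.g., Pareto or generalized Pareto), the weights and component parameters are linked to covariates by regression, and the splicing threshold \tau is itself a parameter, which is what makes a multistage optimization of the likelihood attractive.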

10.
Abstract

This paper develops a Pareto scale-inflated outlier model. This model is intended for use when data from some standard Pareto distribution of interest are suspected to have been contaminated with a relatively small number of outliers from a Pareto distribution with the same shape parameter but with an inflated scale parameter. The Bayesian analysis of this Pareto scale-inflated outlier model is considered and its implementation using the Gibbs sampler is discussed. The paper contains three worked illustrative examples, two of which feature actual insurance claims data.
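In symbols (our notation), the contamination structure being described is a two-component Pareto mixture with a common shape parameter but an inflated scale for the outliers:

    f(x) = (1 - \varepsilon)\,\frac{\alpha\,\sigma^{\alpha}}{x^{\alpha+1}}\,\mathbf{1}\{x > \sigma\} \;+\; \varepsilon\,\frac{\alpha\,(k\sigma)^{\alpha}}{x^{\alpha+1}}\,\mathbf{1}\{x > k\sigma\}, \qquad k > 1,\ \varepsilon \text{ small},

and in the usual data-augmentation formulation the Gibbs sampler alternates between the latent outlier indicators and the remaining parameters given those indicators.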

11.
Abstract

This is the first of two papers which report on a solvency study. The study is based on statistical analyses of policy and claims data of a portfolio of single-family houses and dwellings. This paper deals mainly with analyses of fire, windstorm, and glass liabilities. Claim frequencies and claim size distributions are estimated, and the results are used to derive moments of the annual claim amounts and to provide examples of solvency margin requirements for different classes of business. The second paper is devoted to a broader discussion of solvency margin requirements in non-life insurance.

12.
Internal risk models are very important for major general insurance companies, both for individual corporate management and for regulatory purposes, for instance within the EU's Solvency II regime. Of crucial importance in building these internal models is the stochastic modelling of large claims and, in particular, the selection and parameterization of suitable probability distributions for the amount and number of claims based on empirical data. To this end, in practice, a visually-based methodology is more appropriate and workable than strict mathematical approaches. Based on a practical case study, this paper provides some insight into a visually-based methodology for internal risk models.

13.
In this paper we treat an individual's health as a continuous variable, in contrast to the traditional literature on income insurance, where it is assumed that the individual is either able or unable to work. A continuous treatment of an individual's health sheds new light on the role of income insurance and makes it possible to capture a number of real-world phenomena that are not easily captured in the traditional, dichotomous models. In particular, we show that moral hazard is not necessarily outright fraud, but a gradual adjustment of the willingness to work, depending on preferences and the conditions stated in the insurance contract. Further, the model can easily encompass phenomena such as administrative rejection of claims, and it clarifies the conditions for the desirability of insurance in the first place.

14.
Abstract

Credibility is a form of insurance pricing that is widely used, particularly in North America. The theory of credibility has been called a "cornerstone" in the field of actuarial science. Students of the North American actuarial bodies also study loss distributions, the process of statistical inference that relates a set of data to a theoretical (loss) distribution. In this work, we develop a direct link between credibility and loss distributions through the notion of a copula, a tool for understanding relationships among multivariate outcomes.

This paper develops credibility using a longitudinal data framework. In a longitudinal data framework, one might encounter data from a cross section of risk classes (towns), with a history of insurance claims available for each risk class. For the marginal claims distributions, we use generalized linear models, an extension of linear regression that also encompasses Weibull and Gamma regressions. Copulas are used to model the dependencies over time; specifically, this paper is the first to propose using a t-copula in the context of generalized linear models. The t-copula is the copula associated with the multivariate t-distribution; like the univariate t-distributions, it seems especially suitable for empirical work. Moreover, we show that the t-copula gives rise to easily computable predictive distributions that we use to generate credibility predictors. Like Bayesian methods, our copula credibility prediction methods allow us to provide an entire distribution of predicted claims, not just a point prediction.
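A small sketch of how dependent claims with gamma marginals can be generated through a t-copula. The parameters and the simulation itself are illustrative assumptions; the paper fits the copula to GLM marginals and derives the predictive distributions analytically rather than by simulation:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    T, nu = 5, 6                                   # years per risk class, copula degrees of freedom
    R = 0.5 * np.ones((T, T)) + 0.5 * np.eye(T)    # exchangeable correlation matrix (hypothetical)
    shape, scale = 2.0, 500.0                      # gamma marginal for annual claims (hypothetical)

    def rtcopula_gamma(n):
        """Draw n vectors of length T with gamma marginals and t-copula dependence."""
        z = rng.multivariate_normal(np.zeros(T), R, size=n)   # correlated normals
        w = rng.chisquare(nu, size=(n, 1)) / nu               # shared chi-square mixing variable
        t = z / np.sqrt(w)                                    # multivariate t
        u = stats.t.cdf(t, df=nu)                             # t-copula uniforms
        return stats.gamma.ppf(u, a=shape, scale=scale)       # gamma marginals

    claims = rtcopula_gamma(10_000)
    print(np.corrcoef(claims, rowvar=False).round(2))         # induced dependence over time

The shared chi-square mixing variable is what distinguishes the t-copula from the Gaussian copula: it induces tail dependence across the repeated observations of a risk class.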

We present an illustrative example of Massachusetts automobile claims, and compare our new credibility estimates with those currently existing in the literature.

15.
Abstract

Statistical extreme value theory provides a flexible and theoretically well motivated approach to the study of large losses in insurance. We give a brief review of the modern version of this theory and a "step by step" example of how to use it in large claims insurance. The discussion is based on a detailed investigation of a wind storm insurance problem. New results include a simulation study of estimators in the peaks over thresholds method with Generalised Pareto excesses, a discussion of Pareto and lognormal modelling and of methods to detect trends. Further results concern the use of meteorological information in the wind storm insurance and, of course, the results of the study of the wind storm claims.
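A bare-bones peaks-over-thresholds fit in the spirit of the "step by step" example mentioned above. The simulated losses stand in for the storm claims; threshold choice, trend detection, and diagnostics are the substantive part of such an analysis and are not shown here:

    import numpy as np
    from scipy import stats

    # simulated heavy-tailed losses standing in for wind-storm claims
    losses = stats.lomax.rvs(c=2.0, scale=1e6, size=2000, random_state=7)

    # peaks over thresholds: keep exceedances above a high threshold u
    u = np.quantile(losses, 0.95)
    excesses = losses[losses > u] - u

    # fit a Generalised Pareto distribution to the excesses (location fixed at 0)
    xi, loc, beta = stats.genpareto.fit(excesses, floc=0)

    # tail quantile estimate (e.g., the once-in-200 loss level) from the fitted GPD
    p_exceed = np.mean(losses > u)
    q = 1 - 1 / 200
    x_q = u + stats.genpareto.ppf(1 - (1 - q) / p_exceed, c=xi, loc=0, scale=beta)
    print(f"shape={xi:.2f}, scale={beta:,.0f}, 1-in-200 level approx. {x_q:,.0f}")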

16.
A model for the statistical analysis of the total amount of insurance paid out on a policy is developed and applied. The model simultaneously deals with the number of claims (zero or more) and the amount of each claim. The number of claims is from a Poisson-based discrete distribution. Individual claim sizes are from a continuous right-skewed distribution. The resulting distribution of total claim size is a mixed discrete-continuous model, with positive probability of a zero claim. The means and dispersions of the claim frequency and claim size distributions are modeled in terms of risk factors. The model is applied to a car insurance data set.

17.
We investigate the predictive power of covariates extracted from telematics car driving data using the speed-acceleration heatmaps of Gao, G. & Wüthrich, M. V. [(2017). Feature extraction from telematics car driving heatmaps. SSRN ID: 3070069]. These telematics covariates include K-means classification, principal components, and bottleneck activations from a bottleneck neural network. In the case study conducted, it turns out that the first principal component and the bottleneck activations give better out-of-sample predictions of claims frequencies than traditional pricing factors such as driver's age. Based on these numerical examples, we recommend the use of these telematics covariates for car insurance pricing.
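A schematic version of the feature-extraction step in Python. Random matrices stand in for the speed-acceleration heatmaps, and the bottleneck neural-network activations used in the paper are not reproduced:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(3)

    # hypothetical stand-in: one 20x20 speed-acceleration heatmap per driver,
    # flattened to a 400-dimensional vector and normalized to sum to one
    n_drivers = 800
    heatmaps = rng.gamma(shape=0.5, size=(n_drivers, 20 * 20))
    heatmaps /= heatmaps.sum(axis=1, keepdims=True)

    # covariate 1: K-means class label of the driving style
    km_label = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(heatmaps)

    # covariate 2: first principal component score of the heatmap
    pc1 = PCA(n_components=1).fit_transform(heatmaps)[:, 0]

    # these telematics covariates would then enter a claims-frequency GLM/GAM
    # alongside (or instead of) traditional pricing factors such as driver's age
    print(km_label[:10], pc1[:5].round(3))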

18.
Within a company's customer relationship management strategy, finding the customers most likely to leave is a central aspect. We present a dynamic modelling approach for predicting individual customers' risk of leaving an insurance company. A logistic longitudinal regression model that incorporates time-dynamic explanatory variables and interactions is fitted to the data. As an intermediate step in the modelling procedure, we apply generalised additive models to identify non-linear relationships between the logit and the explanatory variables. Both out-of-sample and out-of-time prediction indicate that the model performs well in terms of identifying customers likely to leave the company each month. Our approach is general and may be applied to other industries as well.

19.
This article presents the reference mortality model K2004 approved by the Actuarial Society of Finland and the technique used in developing it. Initially, I present the historical development of individual mortality rates in Finland. Then, the requirements posed for modern mortality modelling are presented. The reference mortality model K2004 is based on total-population mortality rates, adjusted to correspond to the portion of the population that holds a life insurance policy. First, the observed mortality of the total population is graduated with a Lee-Carter method, together with a forecast in which the downward trend in mortality rates is expected to continue at the rate observed since the 1960s. Then, the mortality rates are adjusted, age by age, to life insurance mortality so that they correspond to the differences observed between the total population and the insured portion of the population during 1991–2001. Finally, a cohort- and gender-specific functional graduation is fitted to the resulting data.
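For reference, the Lee-Carter structure referred to above, in its standard form (the insured-population adjustment described in the article is applied on top of this):

    \log m_{x,t} = a_x + b_x\, k_t + \varepsilon_{x,t}, \qquad \sum_x b_x = 1, \quad \sum_t k_t = 0,

with the period index k_t forecast as a random walk with drift, k_t = k_{t-1} + d + e_t, which carries forward the downward mortality trend observed since the 1960s.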

20.
Abstract

The use of clinical literature to set risk classification standards for life insurance underwriting stems from the need to set the most accurate standards using the best available information. A necessary hurdle in this process is converting any excess mortality observed in a clinical study to the appropriate rating for use in underwriting. A widely accepted model in the insurance industry, the Excess Death Rate model, treats the excess as additive to the conditional probability of death for an insurance company’s unimpaired class.
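In symbols (our notation), the Excess Death Rate model adds the clinically observed excess to the insurer's unimpaired conditional mortality rate at each age x:

    q_x^{\text{impaired}} = q_x^{\text{unimpaired}} + \mathrm{EDR}_x,

in contrast to, for example, a multiplicative rating of the form q_x^{\text{impaired}} = (1 + r)\, q_x^{\text{unimpaired}}, one common alternative of the kind the paper compares against.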

In this paper we test the validity of that model versus other common predictive models of excess mortality in an insured population. Applying these models to National Health and Nutrition Examination Survey (NHANES) data, we derive estimates for excess mortality from three commonly seen underwriting impairments in what could be considered a clinical population. These estimates are added to an estimate of an insurance company’s unimpaired mortality class and then used to predict deaths in an “insurable” subset of that clinical population.

The Excess Death Rate model performed the best of all models, having the smallest cumulative difference between actual and predicted deaths. The use of publicly available data, such as that in NHANES, could help bridge the gap between clinical literature and its application in insurance underwriting if insurable cohorts can be reliably identified from these generally healthy, ambulatory groups.
